Enhancing Requirements Engineering Practices Using Large Language Models
Date
2024
Abstract
Background: Large Language Models (LLMs) offer users natural language
interaction, technical insights, and task automation capabilities. However,
the systematic integration of LLMs within Requirements Engineering (RE)
processes presents unique challenges and limitations. The current
state-of-the-art literature indicates a need to develop and understand
mechanisms for integrating LLMs in an efficient and responsible manner.
This requires scientific inquiry into three key LLM aspects: 1. their technical
proficiency in assisting with RE tasks and processes, 2. the ethical and
regulatory constraints on their usage, and 3. the artefacts and processes that
enable their efficient, responsible, and systematic integration within organisations.
Objective: This thesis investigates the technical capabilities and the ethical
and regulatory constraints to address aspects 1 and 2, and then collects
preliminary evidence that motivates further research on aspect 3, enabling the
systematic integration of LLMs to assist users within RE processes.
Method: A multi-methodology approach combining quasi-experiments, interviews,
a survey, and a case study was employed to gather empirical data
on LLMs' technical abilities and on user experiences of LLM assistance within
RE tasks and processes. A tertiary review followed by a meta-analysis of the
literature on ethical AI guidelines and frameworks was conducted to identify
and understand the constraints involved in using LLMs for RE tasks and
processes in practice.
Findings: The results of the empirical experiments revealed that LLMs are
capable of performing technical tasks such as generating requirements, evaluating
the quality of user stories, performing binary requirements classification, and
tracing interdependent requirements, with varying levels of performance. The
comparative analysis of ethical AI guidelines and frameworks revealed the
constraints and requirements involved in the use of LLMs in practice. The
industrial case study and the survey resulted in ten recommendations for the
responsible and systematic integration of LLMs for RE in practice.
Conclusion: The findings reveal a need for a human-LLM task delegation
framework, highlight the importance of validating LLMs in real-world-like
environments, and bring the under-emphasised human and organisational
aspects to the forefront. Future work needs to delve deeper into identifying
and mitigating the challenges associated with the adoption and integration of
LLMs, including the organisational barriers, deciding factors, and artefacts
that influence and enable the responsible and systematic adoption of LLMs.
Keywords
Large Language Models, Requirements Engineering, Prompt Engineering, Trustworthy AI, Multi-methodology