Fine-Tuning Large Language Models for Practical Software Engineering: Case Studies in Automated Patch Generation

dc.contributor.author: Zhou, Jiayun
dc.contributor.department: University of Gothenburg / Department of Philosophy, Linguistics and Theory of Science (eng)
dc.contributor.department: Göteborgs universitet / Institutionen för filosofi, lingvistik och vetenskapsteori (swe)
dc.date.accessioned: 2024-10-21T07:49:32Z
dc.date.available: 2024-10-21T07:49:32Z
dc.date.issued: 2024-10-21
dc.description.abstract: In recent years, software development has become increasingly complex, posing challenges in problem-solving, code optimization, and error correction. The rise of Artificial Intelligence (AI) and Large Language Models (LLMs) has introduced new opportunities to automate these tasks, revolutionizing code generation, understanding, and maintenance. This study investigates the fine-tuning of LLMs, particularly the DeepSeek Coder 6.7b model, using real business code data from Epiroc, a leading company in the mining and infrastructure industries. The objective is to improve the model's ability to generate code patches that meet evolving business requirements. Fine-tuning strategies, including data preparation and optimization techniques, were applied to enhance the model's accuracy, reliability, and adaptability. The results demonstrate significant improvements across multiple metrics, including correctness, maintainability, and efficiency, with the fine-tuned model outperforming the baseline in patch generation tasks. Challenges related to dataset complexity, long sequence processing, and resource constraints were addressed through data preprocessing and resource-efficient training methods. This research highlights the potential of LLMs in automating patch generation and improving programming efficiency, providing valuable insights and methodologies for future projects in AI-assisted software development. The findings lay the groundwork for further advancements in intelligent programming assistants, which promise to enhance the future of software engineering.
dc.identifier.uri: https://hdl.handle.net/2077/83733
dc.language.iso: eng
dc.setspec.uppsok: HumanitiesTheology
dc.subject: Fine-Tuning; Large Language Model; Patch Generation
dc.title: Fine-Tuning Large Language Models for Practical Software Engineering: Case Studies in Automated Patch Generation
dc.title.alternative: Fine-Tuning Large Language Models for Practical Software Engineering: Case Studies in Automated Patch Generation
dc.type: Text
dc.type.degree: Student essay
dc.type.uppsok: H2

Files

Original bundle

Name: Master Thesis Fine Tuning Large Language Models for Pracital Software Engineering.pdf
Size: 917.94 KB
Format: Adobe Portable Document Format
License bundle

Name: license.txt
Size: 4.68 KB
Format: Item-specific license agreed upon to submission