Harnessing the Power of Large Language Models: Automatically Generating High-Quality, Readable, and Maintainable Unit Test Cases for Java and Python
In this talk, Lisa Waugh will discuss the challenges that manual testing poses for delivering high-quality software and the need for automated unit test generation. Although techniques and tools for automatically generating unit tests exist, the generated tests often suffer from poor readability and bear little resemblance to developer-written tests. To address these issues, Lisa's team has investigated the use of large language models (LLMs) to improve the readability and usability of automatically generated tests. They have created a pipeline that guides LLMs in generating high-coverage, usable test cases for Java and Python, with plans to expand to other languages. The generated test cases include mocking so they can run in a CI/CD pipeline prior to deployment. Lisa will present the results of their LLM-based test generation, supported by statistical analysis, and compare them with those of existing test generation methods. She will also introduce a product that enables developers to create readable, runnable tests with good coverage.
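The following is a minimal sketch (not taken from the talk) of the kind of readable, mocked unit test the abstract describes; the PaymentService class and its gateway dependency are hypothetical and serve only to illustrate how mocking an external dependency lets a generated test run in a CI/CD pipeline without network access.

    # Hypothetical example: a readable unit test that mocks an external dependency.
    from unittest import TestCase
    from unittest.mock import MagicMock


    class PaymentService:
        """Hypothetical class under test: charges a card via an external gateway."""

        def __init__(self, gateway):
            self.gateway = gateway

        def charge(self, amount_cents):
            if amount_cents <= 0:
                raise ValueError("amount must be positive")
            return self.gateway.submit(amount_cents)


    class TestPaymentService(TestCase):
        def test_charge_delegates_to_gateway(self):
            # The external gateway is mocked, so no network access is needed
            # and the test can run in a CI/CD pipeline prior to deployment.
            gateway = MagicMock()
            gateway.submit.return_value = "ok"

            service = PaymentService(gateway)
            result = service.charge(500)

            gateway.submit.assert_called_once_with(500)
            self.assertEqual(result, "ok")

        def test_charge_rejects_non_positive_amount(self):
            service = PaymentService(MagicMock())
            with self.assertRaises(ValueError):
                service.charge(0)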
Lisa Waugh is a Software Development Engineer in Test who is passionate about quality and innovation. She brings over 40 years of experience in information technology, with a concentration in DevOps, Continuous Testing, and Performance Testing. She has a diverse background in multi-platform hardware and software technology and extensive technical experience in cloud, workstation, mainframe, and client-server environments. She is primarily involved in automating continuous testing pipelines to include performance, load, and system testing; identifying potential performance issues; and performance tuning of cloud microservices. She is currently in the Developer Experience group within the CIO organization, responsible for defining and assisting in the implementation of automated testing tools.