STAREAST 2025 - AI/ML

Wednesday, April 30

Kevin Pyles
FamilySearch
W1

AI/ML SDET: A Title Change or a New Role?

Wednesday, April 30, 2025 - 11:30am to 12:30pm

Artificial Intelligence/Machine Learning Software Development Engineer in Test. What a title! But what does that person even do? Are they using AI/ML to test software? Are they testing AI or ML? This isn’t just a title change for a software tester; it is really a mindset shift and a tool change. What if you don’t even have a UI through which to test your product? What if you don’t ship a UI, or even a product? The world of software is changing with advancements in AI/ML, and it is time to make sure we level up our testing abilities to match the revolution. Join Kevin as he walks you through his...

W7

Is Your Playbook for Generative AI in Test Automation on Point?

Wednesday, April 30, 2025 - 1:30pm to 2:30pm

Generative AI allows us to create test automation scripts very quickly, but how can we be sure the outcome is truly complete? In this talk, Julio will discuss the critical process of defining your prompts and evaluating whether the results of AI-driven test automation creation cover all the necessary bases. He will bring a playbook for GenAI in test automation, built from research that gathered advice and feedback from dozens of test automation engineers. In the session, Julio will discuss practical examples of the mindset, a set of steps to be followed when using...
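
The abstract leaves the playbook itself for the session; purely as a sketch of the general idea (the prompt requirements and review questions below are illustrative assumptions, not Julio's playbook), a coverage-oriented prompt plus a post-generation checklist might look like this in Python:

# Illustrative sketch only: a coverage-oriented prompt plus a post-generation
# review checklist. The criteria below are assumptions, not the speaker's playbook.

PROMPT_TEMPLATE = """You are a test automation engineer.
Write pytest tests for the following login API.
Requirements to cover:
- happy path (valid credentials)
- invalid password, unknown user, locked account
- empty/missing fields and boundary-length inputs
- one negative test per documented error code
Return only runnable Python code.

API description:
{api_description}
"""

# A human-review checklist applied to whatever the model returns.
REVIEW_CHECKLIST = [
    "Does every requirement in the prompt map to at least one test?",
    "Are assertions specific (status codes, messages), not just 'no exception'?",
    "Are test data, selectors, and endpoints real, not hallucinated?",
    "Do the tests actually run, and fail when the behavior is broken?",
]

def build_prompt(api_description: str) -> str:
    """Fill the template with a description of the system under test."""
    return PROMPT_TEMPLATE.format(api_description=api_description)

if __name__ == "__main__":
    print(build_prompt("POST /login accepts {username, password}, returns 200 or 401."))
    print("\n".join(f"[ ] {item}" for item in REVIEW_CHECKLIST))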

Philip Daye
Insider Intelligence
W13

From Vision to Velocity: Accelerating Agile Testing with Generative AI

Wednesday, April 30, 2025 - 2:45pm to 3:45pm

In the rapidly evolving landscape of Agile and DevOps, traditional methods of testing business-facing features often struggle to keep pace with the demands for faster and more thorough testing. However, the fusion of traditional testing wisdom with cutting-edge AI presents a unique opportunity to enhance software quality and delivery speed, offering innovative solutions to longstanding challenges. Join Philip as he examines key test design techniques, such as equivalence partitioning, boundary value analysis, decision table testing, and state transition testing, and explores how...
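
As a flavor of one such technique, boundary value analysis reduces a requirement to a small, enumerable set of inputs that a GenAI assistant can be asked to generate and a reviewer can verify quickly. A minimal sketch, assuming a hypothetical eligibility rule that accepts ages 18 through 65 inclusive (the rule and values are assumptions, not material from the talk):

# Minimal boundary value analysis sketch for a hypothetical eligibility rule:
# ages 18 through 65 inclusive are accepted.
import pytest

def is_eligible(age: int) -> bool:
    """System under test: accept ages in the inclusive range [18, 65]."""
    return 18 <= age <= 65

# Boundary values: just below, on, and just above each boundary.
@pytest.mark.parametrize(
    "age, expected",
    [
        (17, False),  # below lower boundary
        (18, True),   # on lower boundary
        (19, True),   # just above lower boundary
        (64, True),   # just below upper boundary
        (65, True),   # on upper boundary
        (66, False),  # above upper boundary
    ],
)
def test_eligibility_boundaries(age, expected):
    assert is_eligible(age) is expected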

Thursday, May 1

T1

Harnessing the Power of Large Language Models: Automatically Generating High-Quality, Readable, and Maintainable Unit Test Cases for Java and Python

Thursday, May 1, 2025 - 9:45am to 10:45am

In this talk, Lisa Waugh will discuss the challenges of relying on manual testing to deliver high-quality software and the need for automated unit test generation. Despite the development of techniques and tools that automatically generate unit tests, the generated tests often suffer from poor readability and bear little resemblance to developer-written tests. To address these issues, Lisa's team has investigated the use of large language models (LLMs) to improve the readability and usability of automatically generated tests. They have created a pipeline to guide LLMs in generating high-coverage,...
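
The abstract does not detail the pipeline, but a minimal sketch of one generation step, using the OpenAI Python client as a stand-in for whichever model the team actually uses (the model name, prompt wording, and file path are assumptions), might look like:

# Illustrative sketch of a single LLM-driven unit test generation step, not the
# pipeline described in the talk. Model name, prompt, and paths are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_tests(source_path: str, language: str = "Python") -> str:
    """Ask an LLM for readable unit tests for the given source file."""
    source = Path(source_path).read_text()
    prompt = (
        f"Write {language} unit tests for the code below.\n"
        "Use descriptive test names, one behavior per test, and a clear\n"
        "arrange/act/assert structure so the tests read like developer-written ones.\n\n"
        f"{source}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # In a full pipeline the generated tests would still be compiled, run, and
    # coverage-checked before being kept; that feedback loop is the hard part.
    print(generate_tests("calculator.py"))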

Fidelity Investments
T7

Agentic Automation for Hyper Automation

Thursday, May 1, 2025 - 11:15am to 12:15pm

In web and mobile automation, each phase is time-consuming and requires significant manual intervention. For example, purchasing a book from Amazon involves scripting the entire process: opening the browser, entering application details, providing payment information, and completing the purchase. Rajkumar J. Bhojan proposes a Hyper Automation approach that uses agents and Generative AI (GenAI) to streamline these tasks. Specifically, agents create the necessary scripts to automate each phase, with each agent communicating with the others to optimize results. To achieve this, Dr....
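
One way to picture that division of labor, with the caveat that the agent roles, the phase list, and the stubbed script generation below are illustrative assumptions rather than the speaker's design:

# Minimal sketch of two cooperating "agents" passing work to each other.
# The roles, phases, and stubbed generation are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Task:
    phase: str          # e.g. "open browser", "enter payment details"
    script: str = ""    # filled in by the scripting agent

class PlannerAgent:
    """Breaks a user goal into ordered automation phases."""
    def plan(self, goal: str) -> list[Task]:
        phases = ["open browser", "search product", "add to cart",
                  "enter payment details", "confirm purchase"]
        return [Task(phase=p) for p in phases]

class ScriptingAgent:
    """Turns each phase into an executable script (stubbed here)."""
    def write_script(self, task: Task) -> Task:
        # In a real system this call would go to a GenAI model and return
        # Selenium/Appium code; here it is a placeholder string.
        task.script = f"# auto-generated step for: {task.phase}"
        return task

if __name__ == "__main__":
    planner, scripter = PlannerAgent(), ScriptingAgent()
    for task in planner.plan("buy a book on Amazon"):
        print(scripter.write_script(task).script)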

Otis Elevator Co
T13

Automating the Testing of AI/ML Models: Tools, Skills, and Best Practices

Thursday, May 1, 2025 - 1:30pm to 2:30pm

As AI and ML models become integral to software systems, ensuring their accuracy and reliability through effective testing is paramount. Traditional testing approaches often fall short in addressing the unique challenges posed by these models, such as handling large datasets, verifying model predictions, and maintaining robustness against data drift. This presentation explores Otis Elevator's journey in automating the testing of AI/ML models using cutting-edge tools and techniques. Ayisha Tabbassum's team faced significant hurdles in manually testing their AI models, from the sheer volume...
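
For a sense of what automating the testing of AI/ML models can mean in practice, here is a minimal sketch of two such checks, an accuracy gate and a distribution drift test, using pytest and SciPy; the thresholds, dummy model, and synthetic data are assumptions, not Otis Elevator's actual tooling.

# Illustrative sketch of two automated checks of the kind the abstract alludes
# to: an accuracy gate and a data drift test. Thresholds, the dummy model, and
# the synthetic data are assumptions.
import numpy as np
import pytest
from scipy.stats import ks_2samp

ACCURACY_FLOOR = 0.90   # assumed minimum acceptable validation accuracy
DRIFT_ALPHA = 0.05      # assumed significance level for drift detection

class DummyModel:
    """Stand-in for a trained classifier: predicts 1 when the feature is positive."""
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

def feature_drift_detected(train_feature, live_feature, alpha=DRIFT_ALPHA):
    """True when live data likely no longer matches the training distribution."""
    _, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

@pytest.fixture
def validation_data():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1))
    y = (X[:, 0] > 0).astype(int)  # labels follow the dummy model's rule
    return X, y

def test_model_accuracy_meets_floor(validation_data):
    """Fail the build if prediction accuracy regresses below the agreed floor."""
    X, y = validation_data
    accuracy = (DummyModel().predict(X) == y).mean()
    assert accuracy >= ACCURACY_FLOOR, f"accuracy {accuracy:.3f} is below the floor"

def test_drift_check_flags_shifted_data():
    """A clearly shifted live distribution should be reported as drift."""
    rng = np.random.default_rng(1)
    train_feature = rng.normal(loc=0.0, size=500)
    live_feature = rng.normal(loc=3.0, size=500)  # mean shifted by 3 std devs
    assert feature_drift_detected(train_feature, live_feature)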

T19

Testing Under Pressure: Leveraging AI to Satisfy the C-Suite

Thursday, May 1, 2025 - 3:00pm to 4:00pm

In an era where "doing more with less" has become a corporate mantra, executives are increasingly turning to Artificial Intelligence (AI) to boost efficiency and productivity. This shift places significant pressure on testing teams to rapidly adopt AI solutions without compromising quality. In this engaging presentation, a C-suite executive and a testing expert offer a unique "fly on the wall" perspective on the dynamic between leadership expectations and the realities faced by testers. The session will delve into the challenges testers encounter under executive mandates, explore...