AI Isn’t Replacing Testers — It’s Helping Us Do Better Work

 QonfX was my first in-person conference experience and I wasn't quite sure what to expect going into it. The theme — “The Future of Testing,” with a focus on AI in QA — was right up my alley. 

I'm always on the lookout for ways to improve and streamline processes. I figured at the very least, I could gain some insights into what other QA leaders are doing to introduce AI into their workflows, and what I can take back to my own team.

And OK, maybe I was a little starstruck to see that Michael Bolton would be there.

My favourite session was Lavanya Mohan’s experiment — building and testing an app entirely with AI tools. She used GitHub Copilot to scaffold a simple app, but most of the heavy lifting of testing came from prompting ChatGPT. The importance of getting the prompt just right was evident in a few comical ways.

When she asked ChatGPT to generate 30 unique pizza toppings, it started strong — but by topping #20, it began repeating itself. Classic AI quirk. 

Digging in, she found that the prompt had indeed asked for 30 distinct toppings - but uniqueness was only enforced on the IDs, not the names. A great reminder of how much precision in prompts really matters.
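
This wasn't part of her talk - just my own minimal sketch of the kind of check that would have caught it, assuming the toppings came back as objects with id and name fields:

```typescript
// Hypothetical shape of the generated toppings list.
interface Topping {
  id: number;
  name: string;
}

// Unique IDs can mask repeated names, so check the names directly.
function findDuplicateNames(toppings: Topping[]): string[] {
  const seen = new Set<string>();
  const dupes = new Set<string>();
  for (const t of toppings) {
    const name = t.name.trim().toLowerCase();
    if (seen.has(name)) dupes.add(name);
    seen.add(name);
  }
  return [...dupes];
}

// Example: IDs 20 and 21 differ, but the names collide.
const toppings: Topping[] = [
  { id: 1, name: "Pepperoni" },
  { id: 20, name: "Roasted Garlic" },
  { id: 21, name: "roasted garlic" },
];
console.log(findDuplicateNames(toppings)); // ["roasted garlic"]
```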

The first step in the testing was setting up the page object models, which she did by providing the HTML source - the markup Copilot had generated earlier - directly in the prompt. In reviewing the locators, she noticed they weren't following Playwright best practices.

She thought something was wrong with the prompt, but after some digging realized it was actually an issue with the app itself - she had never asked Copilot to build the app with suitable locators in the first place. That's something to be mindful of when developing with AI.
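
For anyone wondering what "best practice" means here: Playwright's docs favour user-facing locators like getByRole over structural CSS selectors, but those only work if the app's markup exposes roles, labels, or test IDs to begin with. A quick sketch of the contrast (the pizza-builder page and its elements are made up):

```typescript
import { type Locator, type Page } from "@playwright/test";

// Sketch of a page object for a hypothetical pizza-builder page.
export class PizzaBuilderPage {
  readonly page: Page;
  readonly submitBrittle: Locator;
  readonly submitPreferred: Locator;

  constructor(page: Page) {
    this.page = page;
    // Brittle: tied to DOM structure and generated class names -
    // the kind of locator you end up with when the markup offers nothing better.
    this.submitBrittle = page.locator("div.form > button.btn-primary");
    // Preferred (Playwright best practice): user-facing role + accessible name,
    // but only possible if the app was built with an accessible button.
    this.submitPreferred = page.getByRole("button", { name: "Place order" });
  }
}
```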

The work went a fair bit smoother on the API testing side. ChatGPT was able to add some methods to the POM, and the endpoint test cases were generated without much fanfare.
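
For a sense of what those generated checks look like, here's a minimal sketch using Playwright's built-in request fixture. The /api/toppings endpoint and its response shape are my own invention, and a baseURL is assumed in the Playwright config:

```typescript
import { test, expect } from "@playwright/test";

// Hypothetical endpoint check; assumes baseURL is set in playwright.config.
test("GET /api/toppings returns a non-empty list", async ({ request }) => {
  const response = await request.get("/api/toppings");
  expect(response.ok()).toBeTruthy();

  const toppingList = await response.json();
  expect(Array.isArray(toppingList)).toBe(true);
  expect(toppingList.length).toBeGreaterThan(0);
});
```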

I do wonder — with all the back-and-forth checking of AI output — did it actually save time in the end? I didn’t get to ask her, but I’m planning to run my own experiment and find out.

All the speakers tended to agree on one point: AI is not coming for our jobs, but it is changing how we work. This isn't necessarily a bad thing - quite the contrary.

But as seen above, it can get you into a lot of trouble very quickly if you don't know what you're looking for. You need to know good testing to spot bad AI.

📌 “When working with AI, humans are at the core and need to be the drivers.”

That line really stuck with me. It’s a reminder that AI isn’t the expert — we are. It can help us move faster, but it still needs our judgment to move in the right direction.

So with my team of very talented testers, I hope we can put our heads together and build a prompt that can help us run our Jira tickets through the AI to generate a list of test cases. It's a small step, but has the potential for big impact. If we can even reduce our time spent on test cases by 50%, it frees us up for more in-depth and creative exploratory testing - the kind of stuff we love.
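
I haven't built it yet, so this is nothing more than a first-draft sketch of what such a prompt might look like - the ticket fields and function are assumptions, not a finished tool:

```typescript
// First-draft prompt template for turning a Jira ticket into test cases.
// The ticket shape here is an assumption about what we'd pull from Jira.
function buildTestCasePrompt(ticket: {
  key: string;
  summary: string;
  description: string;
}): string {
  return `You are a senior software tester.
Given the Jira ticket below, list the test cases needed to verify it.
For each case give: a title, preconditions, steps, and the expected result.
Flag any ambiguity in the ticket instead of guessing.

Ticket ${ticket.key}: ${ticket.summary}
${ticket.description}`;
}
```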

All in all, my first foray into the conference world? A total success. I met so many fantastic QAs and left feeling inspired. It's quite eye-opening to realize that despite how varied our products are, we all boil down to the same fundamentals of software testing - it's mind-blowing how many ways they can be applied!

Always curious to hear how others are incorporating AI in their workflows - let's chat :)
