AI is everywhere right now, and while some of it is hype, the impact on software testing is very real.
The uncomfortable truth? Teams that don’t adapt risk falling behind. Manual-heavy processes, fragile test suites, and slow regression cycles simply can’t keep up with today’s pace of change.
The good news: AI isn’t here to replace testers — it’s here to replace the repetitive, low-value work that holds testers back. From model-driven testing that makes collaboration easier and coverage stronger, to agentic AI that can explore systems and spot issues at speed, these tools are already reshaping how testing gets done. But testers still bring what AI can’t: judgment, ethics, and the ability to know when “green” isn’t really good enough.
Join Daniel Howard (Senior Analyst, Bloor Research) and Jonathon Wright (Chief AI Officer, Keysight) for a straight-talking session on how to evolve your skills and stay indispensable in an AI-augmented world.
You’ll learn:
- How automation and model-driven testing can cut test maintenance, increase coverage, and free you from regression drudgery
- Where AI really adds measurable value — and where it’s just marketing noise
- The skills, metrics, and governance practices testers need to prove their value and stay ahead