Testtalks | Automation Awesomeness | Helping You Succeed With Test Automation

  • Author: Various
  • Narrator: Various
  • Publisher: Podcast
  • Duration: 308:20:33
  • More information

Synopsis

TestTalks is a weekly podcast, hosted by Joe Colantonio, that geeks out on all things software test automation. TestTalks covers news from the testing space, reviews books about automation, and speaks with thought leaders in the test automation field. We aim to interview some of today's most successful and inspiring software engineers and test automation thought leaders. During the interviews, the spotlighted engineer tells us about his or her testing experience, sharing successes and failures as well as which testing techniques are working for them right now. We'll all learn more about testing through these talks, hence the name TestTalks.

Episodes

  • AI Testing from Production Logs: Generate Smarter Regression Tests with Tanvi Mittal

    17/03/2026 Duration: 27min

    What if your production logs could automatically generate new test cases? In this episode, Joe Colantonio sits down with Tanvi Mittal to break down how AI-powered log mining is changing the way teams approach software testing, quality engineering, and DevOps. Most teams ignore production logs or use them only for debugging. But those logs contain real user behavior, real failures, and real edge cases—the exact scenarios your test suite is probably missing.

  • AI Testing: How to Ensure Quality in Non-Deterministic Systems with Adam Sandman

    10/03/2026 Duration: 43min

    How do you ensure software quality when the system you're testing doesn't give the same output twice? Go to https://links.testguild.com/inflectra and start your free 30-day trial, no credit card, no contract required. That's the core challenge facing every QA team building or testing AI-powered applications today, and it's breaking all the rules we've relied on for decades. In this episode of the TestGuild Automation Podcast, I sit down with Adam Sandman, co-founder of Inflectra, to get into what non-deterministic AI testing actually means in practice, why traditional pass/fail testing no longer cuts it, and what quality professionals need to do differently right now. We cover:
    – Why AI-generated code is raising the stakes for QA teams while budgets stay flat
    – The fundamental difference between deterministic and non-deterministic systems — and why it changes everything about how you test
    – How to set acceptable risk thresholds for AI systems (hint: it depends on whether you're building an e-commerce chatbot or an
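The risk-threshold idea mentioned in this episode can be sketched in a few lines: instead of a single pass/fail assertion, run the non-deterministic system many times and pass only when the observed success rate clears an agreed threshold. This is only an illustrative sketch, not Inflectra's implementation; the `flaky_answer` stub below is hypothetical and stands in for a real call to the AI system under test.

```python
import random

def flaky_answer(question: str, rng: random.Random) -> str:
    # Hypothetical stand-in for a non-deterministic AI system:
    # it gives the right answer most of the time, but not always.
    return "42" if rng.random() < 0.9 else "I am not sure"

def passes_risk_threshold(question: str, expected: str,
                          runs: int = 200, threshold: float = 0.8,
                          seed: int = 0) -> bool:
    """Pass when the success rate over many runs clears the threshold,
    rather than demanding the identical output every single time."""
    rng = random.Random(seed)
    successes = sum(flaky_answer(question, rng) == expected
                    for _ in range(runs))
    return successes / runs >= threshold

# A chatbot answering a factual question: tolerate some variance,
# but fail the build if quality drops below the agreed rate.
print(passes_risk_threshold("What is 6 * 7?", "42"))
```

The threshold is where the risk conversation happens: an e-commerce chatbot might tolerate 0.8, while a safety-critical system would demand far more.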

  • Test Automation Tools That Scale: From Zero to 1.6M Users with Sanjay Kumar

    03/03/2026 Duration: 29min

    What does it really take to build a test automation tool that millions of testers rely on, without venture capital, paid ads, or a massive team? In this episode, we explore how SelectorsHub grew into one of the most widely used productivity tools in software testing, reaching over 1.6 million testers worldwide. You'll discover:
    – How to build test automation tools that solve real QA pain
    – Why community-driven development beats chasing funding
    – How to prioritize features when you have thousands of users
    – Whether AI testing tools will replace selector-based automation
    – How to choose between Playwright vs Selenium using automation analysis
    – What founders and QA leaders can learn from scaling without VC
    If you're an automation engineer, QA lead, DevOps professional, or tool builder looking to scale smarter, this episode delivers real-world insight without hype. Whether you're building frameworks internally or launching your own automation product, you'll walk away with a clearer strategy for solving problems testers a

  • AI Test Automation: Ship Twice as Fast with 10x Coverage with Karim Jouini

    24/02/2026 Duration: 42min

    AI test automation is evolving fast — but most tools still generate brittle code that breaks with every UI change. See it for yourself now: https://links.testguild.com/Thunders In this episode of the TestGuild Podcast, Joe Colantonio sits down with Karim Jouini, founder of Thunders, to explore a radically different approach to AI testing: executing test automation in plain English without generating Selenium or Playwright code. Instead of "auto-healing selectors," Thunders interprets natural language directly — allowing teams to:
    – Ship twice as fast
    – Achieve 10x test coverage with the same resources
    – Reduce regression cycles from weeks to days
    – Eliminate massive automation maintenance overhead
    Karim shares real-world case studies, including:
    – A European bank that reduced a 3-year core banking upgrade testing effort to 4 months
    – A SaaS company that transitioned from a traditional QA team to AI-assisted product-led testing
    We also discuss:
    – Whether AI test agents replace QA roles
    – How QA managers must shift from i

  • Performance Testing with AI w/ Akash Thakur

    17/02/2026 Duration: 26min

    Is traditional performance testing becoming obsolete? In this episode, performance engineering expert Akash Thakur shares why AI is fundamentally transforming load testing, scripting, observability, and shift-left strategies. With 17 years of real-world enterprise experience, Akash explains how AI-augmented tools are already reducing scripting time by 30%, improving analysis speed, and helping teams move from reactive performance testing to predictive intelligence. You'll learn:
    – How AI is accelerating performance scripting and analysis
    – Why shift-left performance testing is finally becoming realistic
    – The role of structured data in predictive QA models
    – How to test AI applications (LLMs, GPUs, inference throughput) differently than traditional web apps
    – What the future role of performance engineers looks like — architect, not script writer
    If you're a performance tester, SRE, QA leader, or DevOps engineer wondering how AI will impact your role — this episode gives you practical, actionable insights you can appl

  • Spec2TestAI: Stop Defects Before They Reach Production with Missy Trumpler

    27/01/2026 Duration: 34min

    Most teams find defects after the damage is done — during regression, late-stage testing, or production incidents. That's expensive, stressful, and completely avoidable. Try Spec2Test AI now: https://testguild.me/spec2testdemo In this episode, Joe Colantonio sits down with Missy Trumpler, CEO of AgileAILabs, to explore how Spec2TestAI helps teams prevent defects before code ships by applying AI directly to requirements. You'll learn:
    – Why traditional test automation still misses critical risk
    – How predictive, requirements-based AI testing works in practice
    – What "shift-left" actually looks like beyond the buzzword
    – How to reduce escaped defects without writing more tests
    – Why secure, explainable AI matters for QA and enterprise teams
    This conversation is especially valuable for software testers, automation engineers, and QA leaders who want earlier visibility into risk, faster feedback, and higher-confidence releases. Don't miss Automation Guild 2026 - Register Now: https://testguild.me/podag26

  • Locust Performance Testing with AI and Observability with Lars Holmberg

    13/01/2026 Duration: 30min

    Performance testing often fails for one simple reason: teams can't see where the slowdown actually happens. In this episode, we explore Locust load testing and why Python-based performance testing is becoming the go-to choice for modern DevOps, QA, and SRE teams. You'll learn how Locust enables highly realistic user behavior, massive concurrency, and distributed load testing — without the overhead of traditional enterprise tools. We also dive into:
    – Why Python works so well for AI-assisted load testing
    – How Locust fits naturally into CI/CD and GitHub Actions
    – The real difference between load testing vs performance testing
    – How observability and end-to-end tracing eliminate guesswork
    – Common performance testing mistakes even experienced teams make
    Whether you're a software tester, automation engineer, or QA leader looking to shift-left performance testing, this conversation will help you design smarter tests and catch scalability issues before your users do.

  • Top 8 Automation Testing Trends for 2026 with Joe Colantonio

    06/01/2026 Duration: 12min

    AI testing is everywhere — but clarity isn't. In this episode, Joe Colantonio breaks down the real test automation trends for 2026, based on data from 40,000+ testers, 510 live Q&A questions, and 50+ interviews with industry leaders. This isn't vendor hype or futuristic speculation. It's what working testers are actually worried about — and what they're doing next. You'll learn:
    – Why 72.8% of testers prioritize AI, yet don't trust it alone
    – The real reason AI testing feels harder instead of easier
    – How integration chaos is blocking automation success
    – Why "AI auditor" and "quality strategist" are emerging career paths
    – What agentic AI, MCPs, and vibe testing really mean in practice
    – How compliance, accessibility, and security will redefine QA in 2026
    If you're a tester, automation engineer, or QA leader trying to stay relevant — this episode gives you the signal through the noise, and a clear path forward.

  • Automation Testing Podcast 2026: New Schedule, Events, Discounts with Joe Colantonio

    28/12/2025 Duration: 02min

    This is a special end-of-year episode of the Automation Testing Podcast. With family in town and a busy holiday season, Joe didn't want to skip a week without checking in and saying thank you to the TestGuild community. In this short episode, Joe shares:
    – A huge milestone as the podcast approaches its 13-year anniversary
    – Why the Automation Testing Podcast is moving from Sundays to Tuesdays starting in 2026
    – How loyal listeners can still get $100 off a full 5-day Automation Guild 2026 pass
    – A sneak peek at TestGuild IRL — live, in-person events coming next year
    – Gratitude for the listeners, YouTube community, and sponsors who make TestGuild possible
    If you're a software tester, automation engineer, or QA leader looking ahead to 2026, this episode lays out what's coming — and how to stay connected. Discount code: 100GUILDCOIN (https://testguild.me/podag26) Questions or ideas? Email Joe directly at joe@testguild.com As always — test everything, and keep the good.

  • AI Testing LLMs & RAG: What Testers Must Validate with Imran Ali

    21/12/2025 Duration: 32min

    AI is transforming how software is built, but testing AI systems requires an entirely new mindset. Don't miss Automation Guild 2026 - Register Now: https://testguild.me/podag26 Use code TestGuildPod20 to get 20% off your ticket. In this episode, Joe Colantonio sits down with Imran Ali to break down what AI testing really looks like when you're dealing with LLMs, RAG pipelines, and autonomous QA workflows. You'll learn:
    – Why traditional pass/fail testing breaks down with LLMs
    – How to test non-deterministic AI outputs for consistency and accuracy
    – Practical techniques for detecting hallucinations, grounding issues, and prompt injection risks
    – How RAG systems change the way testers validate AI-powered applications
    – Where AI delivers quick wins today—and where human validation still matters
    This conversation goes beyond hype and gets into real-world AI testing strategies QA teams are using right now to keep up with AI-generated code, faster release cycles, and DevOps velocity. If you're a tester, automation engineer,
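One of the grounding checks this episode alludes to can be approximated without any ML at all: measure what fraction of an answer's content words actually appear in the retrieved context, and flag low scores for human review. This is only an illustrative sketch (production graders typically use embeddings or LLM judges), and every name and string below is invented for the example.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "of", "in", "to", "and", "it"}

def content_words(text: str) -> set[str]:
    """Lowercase word tokens, minus trivial stopwords."""
    return set(re.findall(r"[a-z0-9']+", text.lower())) - STOPWORDS

def grounding_score(answer: str, context: str) -> float:
    """Fraction of the answer's content words found in the retrieved
    context. Low scores flag likely hallucinations for review."""
    answer_words = content_words(answer)
    if not answer_words:
        return 1.0
    return len(answer_words & content_words(context)) / len(answer_words)

context = "Locust is a Python load testing tool that scales to many users."
grounded = "Locust is a Python load testing tool."
hallucinated = "Locust was created by NASA in 1969."

print(round(grounding_score(grounded, context), 2))      # fully grounded
print(round(grounding_score(hallucinated, context), 2))  # mostly ungrounded
```

A tester would set a minimum score per test case rather than a strict equality check, mirroring the consistency-over-exactness theme of the episode.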

  • AI Codebase Discovery for Testers with Ben Fellows

    14/12/2025 Duration: 44min

    What if understanding your codebase was no longer a blocker for great testing? Most testers were trained to work around the code — clicking through UIs, guessing selectors, and relying on outdated docs or developer explanations. In this episode, Playwright expert Ben Fellows flips that model on its head. Using AI tools like Cursor, testers can now explore the codebase directly — asking questions, uncovering APIs, understanding data relationships, and spotting risk before a single test is written. This isn't about becoming a developer. It's about using AI to finally see how the system really works — and using that insight to test smarter, earlier, and with far more confidence. If you've ever joined a new team, inherited a legacy app, or struggled to understand what really changed in a release, this episode is for you. Register for Automation Guild 2026 now: https://testguild.me/podag26

  • Gatling Studio: Start Performance Testing in Minutes (No Expertise Required) with Shaun Brown and Stephane Landelle

    07/12/2025 Duration: 40min

    Performance testing has traditionally been one of the hardest parts of QA: slow onboarding, complex scripting, difficult debugging, and too many late-stage surprises. Try Gatling Studio for yourself now: https://links.testguild.com/gatling In this episode, Joe sits down with Stéphane Landelle, creator of Gatling, and Shaun Brown to explore how Gatling is reinventing the load-testing experience. You'll hear how Gatling evolved from a developer-first framework into a far more accessible platform that supports Java, Kotlin, JavaScript/TypeScript, and AI-assisted creation. We break down the thinking behind Gatling Studio, a new companion tool designed to make recording, filtering, correlating, and debugging performance tests dramatically easier. Whether you're a developer, SDET, or automation engineer, you'll learn:
    – How to onboard quickly into performance testing—even without deep expertise
    – Why Gatling Studio offers a smoother way to record traffic and craft tests
    – Where AI is already improving load test authoring

  • AI-Driven Manual Regression: Test Only What Truly Matters With Wilhelm Haaker and Daniel Garay

    01/12/2025 Duration: 39min

    Manual regression testing isn't going away—yet most teams still struggle with deciding what actually needs to be retested in fast release cycles. See how AI can help your manual testing now: https://testguild.me/parasoftai In this episode, we explore how Parasoft's Test Impact Analysis helps QA teams run fewer tests while improving confidence, coverage, and release velocity. Wilhelm Haaker (Director of Solution Engineering) and Daniel Garay (Director of QA) join Joe to unpack how code-level insights and real coverage data eliminate guesswork during regression cycles. They walk through how Parasoft CTP identifies exactly which manual or automated tests are impacted by code changes—and how teams use this to reduce risk, shrink regression time, and avoid redundant testing. What You'll Learn:
    – Why manual regression remains a huge bottleneck in modern DevOps
    – How Test Impact Analysis reveals the exact tests affected by code changes
    – How code coverage + impact analysis reduce risk without expanding the test suite
    – Ways
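Parasoft's impact-analysis engine is proprietary, but the core selection idea described in this episode can be sketched generically: record which source files each test touches during a full run, then, given a change set, re-run only the tests whose coverage intersects it. All test names and file paths below are invented for illustration.

```python
# Coverage map recorded from a previous full run:
# test name -> source files it executed.
coverage: dict[str, set[str]] = {
    "test_login":    {"auth/login.py", "auth/session.py"},
    "test_checkout": {"cart/checkout.py", "payments/stripe.py"},
    "test_profile":  {"auth/session.py", "users/profile.py"},
}

def impacted_tests(changed_files: set[str],
                   coverage: dict[str, set[str]]) -> list[str]:
    """Select only tests whose recorded coverage overlaps the change
    set; everything else can be skipped this regression cycle."""
    return sorted(name for name, files in coverage.items()
                  if files & changed_files)

# A commit touched the session module: both tests that execute it
# must be re-run, but the checkout test can be skipped.
print(impacted_tests({"auth/session.py"}, coverage))
# → ['test_login', 'test_profile']
```

The same selection logic applies whether the "tests" are automated suites or manual test cases mapped to coverage data, which is the manual-regression angle the guests discuss.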

  • Top Automation Guild Survey Insights for 2026 with Joe Colantonio

    24/11/2025 Duration: 08min

    Automation Guild turns 10 this year, and the 2026 survey revealed some of the strongest trends and signals the testing community has ever shared. Register now: https://testgld.link/ag26reg In this episode, Joe breaks down the most important insights shaping Automation Guild 2026 and what they mean for testers, automation engineers, and QA leaders. You'll hear why AI-powered testing is dominating every category, why Playwright has officially become the tool testers want most, the challenges that continue to follow teams year after year, and how testers are navigating shrinking teams, faster releases, and rising expectations. This episode gives you a clear, data-driven snapshot of why Automation Guild 2026 matters — and how this year's event is designed to help you stay relevant, sharpen your skills, and tackle the problems that keep slowing down teams. Perfect for anyone considering joining the Guild, planning their 2026 automation strategy, or just trying to make sense of the rapid changes happening in testin

  • Testing AI Vibe Coding: Stop Vulnerabilities Early with Sarit Tager

    16/11/2025 Duration: 32min

    AI is accelerating software delivery, but it's also introducing new security risks that most developers and automation engineers never see coming. In this episode, we explore how AI-generated code can embed vulnerabilities by default, how "vibe coding" is reshaping developer workflows, and what teams must do to secure their pipelines before bad code reaches production. You'll learn how to prompt more securely, how guardrails can stop vulnerabilities at generation time, how to prioritize real risks instead of false positives, and how AI can be used to protect your applications just as effectively as attackers use it to exploit them. Whether you're using Cursor, Copilot, Playwright MCP, or any AI tool in your automation workflow, this conversation gives you a clear roadmap for staying ahead of AI-driven vulnerabilities — without slowing down delivery. Featuring Sarit Tager, VP of Product for Application Security at Palo Alto Networks, who reveals real-world insights on securing AI-generated code, understanding

  • 4 Free TestGuild Tools Every Tester Should Be Using with Joe Colantonio

    09/11/2025 Duration: 17min

    In this solo episode, Joe Colantonio shares four powerful free TestGuild tools designed to help testers, automation engineers, and QA leaders work smarter. Discover how to instantly find the right testing tool for your team, assess automation risk, check your site's accessibility, and benchmark your automation maturity — all in one session. Whether you're looking to improve test coverage, adopt better practices, or simply save time, these tools were built with you in mind. What You'll Learn:
    – How to choose the right test automation tool fast
    – How to identify and reduce testing risk
    – How to check your site's accessibility compliance
    – How to assess your team's automation maturity level
    Try the tools free:
    – Tool Matcher: https://testgld.link/toolmatcher
    – Accessibility Scanner: https://testgld.link/scanner
    – Risk Calc: https://testgld.link/riskcalc
    – Automation Readiness Quiz: https://testgld.link/scorequiz
    Join us for the 10th Annual Automation Guild Conference: https://testgld.link/IrHaNIVX

  • AI Testing Made Trustworthy using FizzBee

    02/11/2025 Duration: 32min

    As AI tools like Copilot, Claude, and Cursor start writing more of our code, the biggest challenge isn't generating software — it's trusting it. In this episode, JP (Jayaprabhakar) Kadarkarai, founder of FizzBee, joins Joe Colantonio to explore how autonomous, model-based testing can validate AI-generated software automatically and help teams ship with confidence. FizzBee uses a unique approach that connects design, code, and behavior into one continuous feedback loop — automatically testing for concurrency issues and validating that your implementation matches your intent. You'll discover:
    – Why AI-generated code can't be trusted without validation
    – How model-based testing works and why it's crucial for AI-driven development
    – The difference between example-based and property-based testing
    – How FizzBee detects concurrency bugs without intrusive tracing
    – Why autonomous testing is becoming mandatory for the AI era
    Whether you're a software tester, DevOps engineer, or automation architect, this conversation will chang
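The example-based vs property-based distinction mentioned in this episode is easy to show concretely. The sketch below hand-rolls a tiny property check with the standard library rather than using FizzBee itself (whose internals aren't shown here) or a dedicated library like Hypothesis; the `slug` function is an invented example.

```python
import random

def slug(text: str) -> str:
    # Function under test: lowercase and replace spaces with dashes.
    return text.lower().replace(" ", "-")

# Example-based testing: a handful of fixed input/output pairs.
assert slug("Hello World") == "hello-world"
assert slug("TestGuild") == "testguild"

# Property-based testing: assert invariants over many random inputs
# instead of hand-picking specific examples.
rng = random.Random(42)
alphabet = "abc XYZ"
for _ in range(500):
    text = "".join(rng.choice(alphabet) for _ in range(rng.randrange(20)))
    out = slug(text)
    assert out == out.lower()    # result is always lowercase
    assert " " not in out        # spaces never survive
    assert len(out) == len(text) # length is preserved for ASCII input

print("all properties held")
```

The property style is what makes validating AI-generated code tractable: you state what must always be true, then let the harness hunt for counterexamples.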

  • Test Automation Optimus Prime Halloween Special

    19/10/2025 Duration: 41min

    In this Halloween special, Joe Colantonio and Paul Grossman discuss the evolution of automation testing, focusing on the integration of AI tools, project management strategies, and the importance of custom logging. Paul shares insights from his recent job experience, detailing how he inherited a project and the challenges he faced. Paul also goes over his Optimus Prime framework and uses it to explore various automation tools, the significance of dynamic waiting, and how to handle test case collisions. The discussion also highlights the role of AI in enhancing automation frameworks and the importance of version control in software development.

  • Playwright AI Vibe Testing: True Self-healing Tests with Vasusen Patil

    12/10/2025 Duration: 41min

    Flaky Playwright tests got you down? Discover Vibe Testing, a new AI-driven approach that lets Playwright tests understand design intent, adapt to UI changes, and self-heal intelligently. In this episode, Joe Colantonio talks with Vasusen Patil, Co-Founder and CEO of Donobu, about how their platform extends Playwright with AI-powered “Vibe Testing.” You’ll discover how this approach blends visual assertions with contextual understanding to build resilient, low-flake tests that keep shipping smooth. You’ll take away:
    – What “Vibe Testing” really means and why it’s a game-changer
    – How AI-authored Playwright tests can self-heal without false positives
    – The key to balancing autonomy with tester control
    – Why Donobu’s local-first model keeps your data safe while cutting test flakiness under 2%
    – How to try Donobu’s free Playwright AI toolkit
    If you want to see where test automation is heading next — and how to future-proof your QA career — don’t miss this one.

  • Playwright Testing: How to Make UI and API Tests 10x Faster with Naeem Malik

    05/10/2025 Duration: 27min

    Did you know that Playwright offers an elegant, unified framework that seamlessly integrates both UI and API testing within a single language and test runner? Don't miss the early bird Automation Guild discount: https://testguild.me/ag26early This episode explores how Playwright empowers teams to simplify test maintenance, eliminate silos between dev and QA, and gain true full-stack confidence. You’ll discover:
    – How to make your tests 10x faster and more reliable by using API requests for setup instead of brittle UI flows
    – How to write hybrid tests that validate both UI actions and backend APIs in a single flow
    – A modern, unified testing strategy that reduces operational friction and helps teams deliver high-quality applications with confidence
    Our guest, Naeem Malik, brings 15 years of QA and automation expertise. As the creator of Test Automation TV and bestselling Udemy courses, Naeem specializes in making complex test automation concepts simple, practical, and impactful for engineering teams. Whether you’

page 1 of 30