Brno, March 26, 2026 · Passage Hotel · 350+ attendees · 6 talks · Florian Fieber, Petr Škoda, Petr Fifka, Konstiantyn Teltov, Robin Weiss, Kristián Kottfer · Organizer: YES4Q | Passion for Quality · Moderator: Jiří Charvát · Photography: Petr Vokurek · testcrunch.cz

Six talks, one theme, three hundred testers — and a crash test that never ends.

While the world outside was busy rewriting its own requirements mid-sprint, over three hundred testers and QA professionals gathered at Brno's Passage Hotel with one shared mission: to make sure software — unlike everything else — actually works. This year's TestCrunch brought six talks centred on a single theme: artificial intelligence. Not as a threat, but as a challenge, an opportunity, and a compelling argument for why testers matter more than ever.

Host Jiří Charvát set the tone from the first minute: "The whole geopolitical situation looks like one big crash test." Which is precisely why everyone showed up — somebody has to run it.

Opening: Jiří Charvát Breaks the Ice

Florian Fieber: The Future of Testing in the Age of AI

Petr Škoda: QA Strikes Back

Petr Fifka: Lucy, the Bot, and the Journey Through AI Euphoria

Konstiantyn Teltov: Design Patterns Aren’t Boring. They’re the Rails for Your AI.

Robin Weiss: Where Did the Testers Go? And Why Should That Worry Us?

Kristián Kottfer: How Do You Test an Entire Planet? Literally.

The highlight of the day wasn't lunch. It was the man who tests the Earth.


Host Jiří Charvát admitted he didn't want to believe the abstract. He read it three times. He went to check the website to make sure it was real. Then he introduced the talk with a reference to Alex Garland's TV series Devs — about a machine capable of simulating the entire universe, past and future.

"This is the same thing. Just in its early stages."


Kristián Kottfer, QA engineer at BAE Systems OneArc — a company born from Bohemia Interactive Simulations — took the stage. His team builds a real-time graphical simulation of the entire planet Earth at 1:1 scale, used for military, police, and emergency services training in around sixty countries worldwide.


Why simulate the planet instead of just going outside? "Because we need to get better at warfare. Or at defending ourselves." Instead of burning real ammunition or deploying to live exercises, soldiers step into a simulator — on screen, in a headset, or inside a motion-based vehicle simulator — and train safely. Anywhere on Earth. Repeatedly. With a complete recording of every second.


What Are You Actually Testing When You Test a Planet?


VBS4 with the VBS Blue engine is not a game. The goal is not entertainment but training effectiveness and compliance with military doctrine.


Real-time rendering targets 60 frames per second, so each frame must complete in roughly 16 milliseconds. The engine calculates lighting at the triangle level, sun position, atmospheric scattering, weather, and particle effects. Since rendering everything is impossible, the system decides dynamically: what's far away gets simplified, and what the user can't see doesn't get drawn.
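
The frame-budget arithmetic and the distance-based simplification described above can be sketched in a few lines. This is an illustrative sketch only; the thresholds and function names are invented, not VBS Blue's actual values:

```python
# At 60 fps, every frame must fit in 1000/60 ms of work.
FRAME_BUDGET_MS = 1000.0 / 60.0  # ≈ 16.7 ms, often rounded to 16 ms

def lod_level(distance_m, thresholds=(50, 200, 1000)):
    """Pick a level of detail for an object by distance from the camera:
    0 = full geometry, higher numbers = progressively coarser.
    Thresholds here are hypothetical, purely for illustration."""
    for level, limit in enumerate(thresholds):
        if distance_m < limit:
            return level
    return len(thresholds)  # beyond the last threshold: coarsest representation
```

A nearby object (10 m) would render at full detail, while one a kilometre away drops to the coarsest level, which is how the engine keeps each frame inside its budget.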


The data pipeline is a modular network of interconnected plugins — a DAG structure — that generates and modifies world content. Over sixty biomes, each with multiple seasonal layers. And then — what Kristián described as the most fascinating challenge — multimodal sensors. VBS Blue doesn't just simulate what the human eye sees. It simulates night vision and thermal imaging, each with entirely different physical models. Wet asphalt behaves differently in thermal view. Air humidity affects heat dispersion differently depending on conditions. And all three display modes must be physically consistent — with each other and with reality.
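
A plugin network shaped as a DAG is typically executed in dependency order. A minimal sketch using Python's standard-library graphlib, with invented plugin names standing in for the real VBS Blue pipeline stages:

```python
from graphlib import TopologicalSorter

# Hypothetical plugin graph: each plugin maps to the plugins it depends on.
pipeline = {
    "terrain": [],
    "biomes": ["terrain"],
    "seasons": ["biomes"],
    "thermal_layer": ["terrain", "seasons"],  # sensor views build on world state
}

# static_order() yields plugins so that dependencies always run first.
order = list(TopologicalSorter(pipeline).static_order())
```

Running dependencies first is what lets a modular pipeline regenerate only the affected downstream layers when one plugin's output changes.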


"Now you know how our QA feels. When you want to test a system like this, it can sometimes feel like an impossible task."


Seven QA Engineers for 21 Developers — and a System Where Lives Depend on the Output 


The Blue Team has 21 engine developers and 7 QA engineers. Seniority is extreme: some team members have been with the company for ten to twenty years. The testing environment uses no commercial tools — it's a suite of internal applications. BlueView is the production view. DiagManager is a deep control console. Dev script provides deterministic test automation.


The CI/CD pipeline produces daily and nightly builds with visual regression tests — pixel-by-pixel image comparisons, GIF diffs, reports via Hub, Sheets, and Slack. And here appears one of the talk's most interesting problems: non-determinism from the graphics engine itself. Upscaling algorithms produce variable results. Micro-stutters — tiny rendering hiccups invisible to the human eye — cause a visual test to fail because a single pixel changed for a fraction of a second. And every PC renders slightly differently.


Finding the right tolerance threshold that catches real regressions without flooding the team with false alarms is itself a technical challenge.
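
One common way to implement such a tolerance is to ignore small per-pixel deviations (upscaler noise, micro-stutters) and fail only when the fraction of strongly changed pixels crosses a threshold. A minimal sketch; the tolerance values are illustrative, not the team's actual settings:

```python
def images_match(a, b, per_pixel_tol=8, max_diff_fraction=0.001):
    """Compare two equal-size images given as flat lists of 0-255 grayscale
    values. Pixels that differ by at most per_pixel_tol are treated as noise;
    the comparison fails only when too many pixels differ beyond that."""
    assert len(a) == len(b), "images must have the same dimensions"
    changed = sum(1 for pa, pb in zip(a, b) if abs(pa - pb) > per_pixel_tol)
    return changed / len(a) <= max_diff_fraction
```

Tightening per_pixel_tol catches subtler regressions but produces more false alarms from engine non-determinism; loosening it does the opposite, which is exactly the trade-off described above.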


AI in a System Where the Output Affects Real Lives


Kristián was direct and measured on the topic of AI — and all the more valuable for it at the end of a day full of AI discussions. In a company working for the defence sectors of sixty countries, security and compliance requirements are extreme. A random AI system with access to internal infrastructure is simply not an option.

The realistic plan: AI as an assistant for summarising visual test results, log analysis, generating reports, and communicating changes across teams. Not as a replacement for expert judgment in safety-critical parts of the system.

"Most teams that start with AI fail because they bite off more than they can chew. We start small and measure the benefit."


The Day's Final Word


Kristián Kottfer closed with a thought that perfectly bookended the entire TestCrunch 2026 day: "We in QA always work with chaos. And the only way to manage it is slow, steady, iterative progress — solving one problem at a time, not looking for a solution that solves everything at once."

Key Takeaways from TestCrunch 2026

The day was long, the talks diverse — from theory to live demos to the simulation of an entire planet. But through all the variety ran one common thread:

AI doesn't change whether testers are needed. It changes what testers need to be.

Florian Fieber said: there's more work for people who understand risk. Petr Škoda showed how AI saves time — so there's more of it for what matters. Petr Fifka named the euphoria trap and the way out. Konstiantyn Teltov reminded us that without structure, AI is just an expensive chaos generator. Robin Weiss warned that short-sighted optimisation today can mean crisis tomorrow. And Kristián Kottfer closed: iteratively, steadily, with measured benefit.

The Terminator is postponed. But the work is increasing.

The panel discussion "The Terminator Is Postponed — For Now", moderated by Petr Svoboda, is covered in a separate article (link coming soon).

The warm-up day is covered in a separate article: TestCrunch Community (Warm Up) Sessions — Three Practical Workshops the Day Before the Conference (link coming soon).

More articles

Do you need to improve your business?

Contact us for a consultation. We are YES4Q!

+420 777 629 545