Tests as Ceremony: When AI Breaks the Safety Net
AI-generated tests pass. That's the problem.
Passing is not a useful correctness criterion. Mark Seemann makes this argument sharply: AI-generated tests have "little epistemological content." The process that produces them skips the critical step of watching the test fail before the code exists. The test exists, the coverage number goes up, and everyone moves on. But the test never proved anything. It never caught a bug, because it was never designed to catch one.
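A hypothetical sketch of the difference (the function and test names are invented for illustration): a test that restates the implementation can never fail, while a test that pins an independently computed value would have failed first and so actually proves something.

```python
# Hypothetical discount function used to contrast two styles of test.
def apply_discount(price, rate):
    return price * (1 - rate)

# Tautological test: recomputes the expectation with the same formula
# as the implementation, so it passes even if the formula is wrong.
# This is the kind of test that "never proved anything."
def test_discount_tautological():
    assert apply_discount(100, 0.2) == 100 * (1 - 0.2)

# Falsifiable test: the expected value (80.0) was computed independently.
# Written before the code, it fails until apply_discount is correct,
# which is exactly the epistemological step being skipped.
def test_discount_falsifiable():
    assert apply_discount(100, 0.2) == 80.0

test_discount_tautological()
test_discount_falsifiable()
```

Both tests pass here, but only the second one ever could have failed.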

