The SDLC is Rediscovering Itself
AI is forcing software development back to first principles. The practices most teams abandoned as overhead (specs, formal verification, architectural review gates) are becoming essential again the moment humans stop reading every line of code.
I've watched this play out across my own work this year. The discipline I used to skip because it slowed me down is suddenly the only thing standing between a working system and a pile of plausible-looking garbage. The SDLC didn't die. It got hollowed out, and now it's being rebuilt in place, one abandoned practice at a time.
The Rigor Has to Go Somewhere
In February 2026, ThoughtWorks pulled practitioners and researchers into a retreat to ask what responsible software development looks like when AI writes most of the code. The question they kept returning to was blunt: where does the rigor go?
Nobody had the same answer. Everyone agreed the question was urgent.
That's the shape of the moment. The engineering discipline that used to live in writing and reviewing code doesn't disappear when an agent generates it. It moves. It has to go somewhere. The teams that figure out where are the ones who keep shipping working systems. The teams that assume the rigor evaporated along with the typing are the ones producing code that looks right until it runs.
Deterministic Practices Meeting Nondeterministic Output
Martin Fowler framed this shift in The New Stack as "nondeterministic computing entering a field built for deterministic computing." Everything we call software engineering (debugging, testing, version control, code review) assumes that if you run the same code twice you get the same output. LLMs don't work that way. The same prompt can produce different code. The same code can produce different behavior. The tolerances are fuzzy in a field that used to treat fuzziness as a bug.
When the generator is nondeterministic, the only thing you can anchor on is what you can verify deterministically. That's why Kent Beck called TDD "a superpower" when working with AI agents. Tests are the last deterministic anchor. They're the contract the agent can't fuzz its way around, assuming the agent doesn't just delete or rewrite the tests to get a green build, which Beck noted is a real failure mode.
The tests aren't a check on the implementation anymore. They're the specification of what the system is supposed to do. That's a different job than they had five years ago. I've written before about what happens when teams treat tests as ceremony instead of as a contract; with agents in the loop, that failure mode gets more expensive fast.
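To make that role concrete, here is a minimal sketch of a spec-as-tests contract. The `slugify` function and its tests are hypothetical examples, not from the source; the point is that the tests state the required behavior first, and any implementation, human- or agent-written, is judged against them.

```python
# Hypothetical spec-as-tests for a slugify() function. The tests ARE the
# specification: an agent would be asked to satisfy them, not the reverse.

def slugify(title: str) -> str:
    # One implementation that satisfies the contract below. An agent could
    # produce a different body; the tests decide whether it's acceptable.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

# The deterministic anchor: same input, same required output, every run.
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_is_dropped():
    assert slugify("What's New?") == "what-s-new"

def test_idempotent():
    # Running slugify on its own output must change nothing.
    assert slugify(slugify("Already A Slug")) == slugify("Already A Slug")
```

The tests name behaviors, not implementation details, which is what lets them survive a nondeterministic generator: any of the many programs the agent might emit is acceptable so long as every assertion holds.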
The Historical Parallel Is Real But Incomplete
The easy story is that this is just the next abstraction climb. We went from assembly to high-level languages and lost one kind of understanding. We gained compilers, type systems, and debuggers to fill the gap. AI-assisted development is the next step up the ladder, and the new tools will show up eventually.
That story is half right. The abstractions haven't been built yet. We're in the gap between losing the old way of knowing what our code does and having the new way of verifying what our agents produce. Compilers were a deterministic layer between a deterministic language and deterministic machine code. Whatever sits between natural-language intent and running software is going to look different, because the layer above the abstraction is now fuzzy.
The practices coming back aren't coming back as nostalgia. Specs are returning because an agent needs a precise target to aim at. Architectural review is returning because a human who doesn't read the code still needs to understand the shape of the system. Formal verification is returning, in lighter forms, because tests alone don't catch the class of errors that show up when the generator is working from a slightly different mental model than yours.
What Got Thinned Out
Over the last decade, most engineering teams I've worked with gradually dropped the heavier ceremonies. Upfront specs became "we'll figure it out in the PR." Architectural review became "the senior engineer will notice." Formal verification stayed in avionics and nowhere else. The assumption was that agile iteration plus good tests plus code review would catch what the old practices caught, and mostly it did.
That assumption was load-bearing on one specific thing: a human writing the code, line by line, with their own mental model of what the system was supposed to do. That human didn't need the spec to be written down because they were carrying it in their head while they typed. They didn't need the architectural review to be formal because they were making the architectural decisions as they went. The rigor was there. It was just implicit.
Remove the human from the typing loop and the implicit rigor leaves with them. The agent is not carrying your mental model. The agent is producing something that pattern-matches against the prompt, the test suite, and whatever it's seen before. If the spec lives in your head, the agent doesn't have access to it.
The data backs this up. GitClear's 2025 analysis of 211 million lines of code found that refactoring activity dropped from 24.1% of changed lines in 2020 to 9.5% in 2024, while duplicated code blocks rose fourfold over the same window. The work that used to happen implicitly while a human typed is not getting picked up somewhere else. It's just not happening.
The Gap Needs New Abstractions
I keep reaching for the same shape across every system I build with AI help. There's a specification layer where intent lives, a generation layer where the agent produces code, and a verification layer where deterministic checks run against deterministic outputs. The specification layer is precise enough that the agent can use it as a target. The verification layer is strict enough that drift gets caught early. The generation layer is the fuzzy middle, and I try to keep it as small as possible.
This isn't a new architecture I invented. It's what the SDLC always was, compressed into a much tighter loop and with the human moved from the middle layer to the edges. The difference is that when you had a human in the middle, you could get away with skipping the specification and verification layers because the human was doing both jobs informally. When you don't, you can't.
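The three-layer loop can be sketched in a few lines. Everything here is illustrative: `generate` is a stub standing in for an agent call, and `verify` stands in for whatever deterministic checks the specification layer defines.

```python
# Minimal sketch of the spec -> generate -> verify loop, assuming a stubbed
# generator. Spec and verification are deterministic; only the middle is fuzzy.

SPEC = "Write a function add(a, b) that returns the sum of its arguments."

def generate(spec: str) -> str:
    # The fuzzy middle: a real system would call an LLM here. This stub
    # returns a fixed candidate so the sketch is runnable.
    return "def add(a, b):\n    return a + b"

def verify(code: str) -> bool:
    # Deterministic checks run against the generated artifact, not the prompt.
    namespace: dict = {}
    try:
        exec(code, namespace)
        add = namespace["add"]
        return add(2, 3) == 5 and add(-1, 1) == 0
    except Exception:
        return False

def build(spec: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        code = generate(spec)
        if verify(code):
            return code  # only verified output leaves the loop
    raise RuntimeError("no candidate passed the verification layer")
```

The design choice worth noticing is that `verify` never inspects the prompt or trusts the generator; it exercises the artifact. Keeping the fuzzy layer small means keeping everything on either side of it this mechanical.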
The practices I used to see as friction (writing the spec before writing the code, getting architectural agreement before touching the keyboard, treating the test suite as the product definition) are the practices I now depend on most when I'm working with an agent. Not because I've become more disciplined. Because the work stops producing good outputs without them.
Start Writing the Spec First
If you've been treating AI-assisted development as a speed-up on what you were already doing, the pattern above is probably already biting you. Risk evaluation can't stay where it used to sit, either: the ceremony around AI-generated code has to match the actual failure surface, not the old one. Look at the code your agents have produced in the last month and ask how much of it matches what you actually wanted versus what the prompt happened to produce. If those two numbers are drifting apart, the rigor already left and you haven't replaced it yet.
Then start rebuilding the practices you dropped. Write the spec before you write the prompt. Decide the architecture before you ask for code. Treat the test suite as the definition of the system, not a safety net for the implementation. These aren't new ideas. They're the ideas we abandoned because they were expensive, and they're the ideas that are cheapest to reinstate now that the cost of not having them is obvious.
The SDLC isn't dead. It's rediscovering itself, practice by practice, in the one place it was always supposed to live: the gap between what you meant and what the machine produced.

