The Software That Shouldn't Exist
Everyone's worried about AI replacing engineers. The more interesting question is what happens when the cost of building software drops so dramatically that entirely new categories of software become viable.
The industry is calling this "personalized software": custom tools built for a specific person, a specific context, a specific moment. Software that never leaves your machine. Software that could never justify becoming a product. Software that, until recently, simply wouldn't exist.
Shifting Left - How Small Teams Handle Organizational Gaps Without Breaking
Every small organization has gaps. Maybe you have an engineering lead but no dedicated DevOps team. Maybe your product manager is stretched thin and the tech lead is absorbing PM responsibilities. Maybe a designer role is emerging, but nobody owns it yet.
These gaps often emerge in specific domains. Growing organizations typically need four types of engineering leadership, and early-stage teams almost never have all of them covered. This is normal. The question is: how do you respond?
Teams often make the mistake of dumping the entire burden on one person. They identify the gap, find whoever sits closest to it, and expect that person to absorb all the additional work. This breaks people.
There's a better approach I call "shifting left."
Working in the Mud - The Mental Model That Keeps Engineering Teams Moving
Every engineering blog paints a picture of clean microservices, continuous deployment, and comprehensive observability. I've been in this industry for over a decade, and I've never experienced this ideal state across the board. I've seen glimmers. Teams that nail one dimension. But never everything at once.
That gap between the ideal and reality is what I call working in the mud.
AI-Assisted Development Changes What Matters in Framework Selection
The two-minute deploy is killing my productivity.
That sounds wrong until you think about proportions. Two minutes is nothing. But when AI-assisted development shrinks the time spent writing code, those two-minute deploys start consuming a much larger percentage of your development cycle.
I discovered this while building with a managed backend framework that requires redeployment even during local sandbox development. The frontend rebuilds in seconds. The backend takes two minutes. Suddenly, that backend deploy time is where I spend most of my dev cycle waiting.
A caveat before going further: this observation comes from a greenfield project where I'm moving quickly and iterating frequently. AI-assisted development changes the structure of work in existing projects too, but this effect is most pronounced when building something new and small, where rapid iteration is the default.
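The proportion argument above can be made concrete with some back-of-the-envelope arithmetic. The numbers below are purely illustrative assumptions, not measurements from the project described:

```python
# Illustrative sketch: how a fixed two-minute deploy grows as a share of
# each iteration when AI assistance shrinks the coding time.
# All durations here are hypothetical, chosen only to show the effect.

DEPLOY_MIN = 2.0  # fixed backend deploy time per iteration, in minutes

def deploy_share(coding_minutes: float) -> float:
    """Fraction of one dev iteration spent waiting on the deploy."""
    return DEPLOY_MIN / (coding_minutes + DEPLOY_MIN)

# Suppose hand-writing a change takes ~30 minutes, while prompting and
# reviewing an AI-generated change takes ~3 minutes.
before = deploy_share(30.0)  # 2 / 32, roughly 6% of the cycle
after = deploy_share(3.0)    # 2 / 5, a full 40% of the cycle

print(f"deploy share, hand-written: {before:.0%}")
print(f"deploy share, AI-assisted:  {after:.0%}")
```

Nothing about the deploy changed; shrinking everything around it is what turns two minutes from rounding error into the dominant wait.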
Stop Fighting the Wrong Battles - The Three-Level Problem Framework
Most engineering teams waste weeks solving the wrong problems.
They polish user interfaces while core APIs fail. They optimize conversion funnels while databases crash. They redesign onboarding flows while authentication randomly breaks.
This happens because everything gets labeled "high priority" without any systematic way to determine what actually needs fixing first.
Here's a three-level framework that immediately clarifies what to fix first, what can wait, and what's wasting everyone's time.
Risk Evaluation in the Age of AI-Aided Development
Engineering teams have always made judgment calls about risk and speed. With AI development tools becoming standard practice, that judgment call has gained a new dimension that demands more careful consideration.
The Seven Tiers of SaaS Engineering Complexity
In cycling, the pain doesn't decrease as you get better. You just get faster.
This applies directly to software engineering. Engineers don't find work easier as they mature; they tackle increasingly complex problems that maintain the same cognitive challenge. A senior engineer debugging distributed systems experiences mental strain similar to that of a junior fixing their first API bug. The difference is the tier of complexity they're operating within.
The On-Premises Revenue Trap - Why Enterprise SaaS Deployments Kill Engineering Velocity
Enterprise customers love asking for on-prem deployments. The contract values look irresistible: 2-5x your standard SaaS pricing, multi-year commitments, and the validation that comes with enterprise logos. But having managed hybrid and full on-prem deployments across multiple SaaS platforms, I can tell you the operational reality is a trap that strangles engineering teams.
The numbers tell a stark story: research shows that personnel costs represent 50-85% of total on-prem application ownership, with the vast majority of that time spent on monitoring, maintenance, and troubleshooting rather than innovation.
Quality In, Quality Out - The Real Driver of AI Output Quality
Every engineering team is racing to implement AI tools, but most are optimizing the wrong variable. They're tweaking prompts and comparing models while ignoring the fundamental truth: your AI output quality is entirely dependent on your input quality. When you ask a default AI model a question, you're getting the average of the internet. Those 1,000 words generated from your 50-word prompt? They're coming from random web content, not your expertise.

The companies winning with AI aren't using better models. They're feeding those models better inputs through curated knowledge bases, documented processes, and structured organizational wisdom.

This isn't just theory. Research shows that RAG systems pulling from quality knowledge sources increase accuracy by nearly 40% compared to models operating on training data alone.

For engineering leaders, this means the competitive advantage isn't the AI tool itself. It's the quality of information you feed it. Start building your knowledge systems now, because input quality isn't just a performance optimization. It's your strategic moat in an AI-commoditized world.
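The retrieval idea behind RAG can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the knowledge-base snippets are invented, and plain bag-of-words cosine similarity stands in for the learned embeddings a real system would use:

```python
from collections import Counter
import math

# Hypothetical in-house knowledge base. In a real RAG system these would
# be chunks of your docs, runbooks, and recorded design decisions.
KNOWLEDGE_BASE = [
    "Our deploy pipeline requires a canary stage before full rollout.",
    "Database migrations must be backward compatible for one release.",
    "All public APIs are versioned under /v2 and described in OpenAPI.",
]

def _vector(text: str) -> Counter:
    # Crude bag-of-words vector; real systems use embedding models.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base chunks most similar to the question."""
    q = _vector(question)
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: _cosine(q, _vector(doc)),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Ground the model in curated context, not the average of the internet."""
    context = "\n".join(retrieve(question))
    return f"Use only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do database migrations work here?"))
```

The model never changes in this picture; only the context it sees does. That is the "quality in" half of the equation, and it is entirely under your team's control.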

