Brian Conn

Claude Code as an Operational Partner for DevOps

People are building incredible things with AI coding tools. But there's a quieter, equally powerful use case: using Claude Code as an operational partner. DevOps work is half investigation, and AI coding tools are remarkably effective at analysis, script generation, and iterative diagnostics alongside a human who handles execution and judgment.

Read More
Brian Conn

The AI Adoption Ladder - A Practical Framework for Engineering Teams

Most AI adoption failures share the same origin story: someone tries the hardest possible task, it fails spectacularly, and they declare they'll "come back next year." This happens constantly because teams lack a mental model for sequencing adoption.

After helping engineers navigate AI integration, I've developed a staged approach I call the AI Use-Case Ladder. It sequences adoption by risk and blast radius, building confidence and literacy before touching anything that could damage production systems.

Read More
Brian Conn

The Three Levels of AI Product Integration - A Framework for SaaS Leaders

SaaS companies are all AI companies these days. How deep does that AI integration really go, though?

They bolt on a "Generate with AI" button, watch users test their hardest problems, and wonder why adoption craters after week one. The issue isn't AI capability. It's integration depth. After working with multiple SaaS teams on AI implementations, I've seen a clear pattern: companies that understand how deeply AI should touch their product consistently outperform those chasing the latest demo.

Here's the framework I use to help teams navigate this decision.

Read More
Brian Conn

The Three Pillars of Scalable Data Processing

Every unit of work in a data processing system should aspire to be small, independently processable, and consistently sized. When these three properties hold, scaling becomes almost trivial. Reality rarely cooperates, which is why understanding these properties matters so much for platform engineering.
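As a sketch of those three properties, assume a simple record-transforming pipeline; the function names here are illustrative, not from the article:

```python
def chunk_records(records, unit_size=1000):
    """Yield small, consistently sized units of work.

    Uniform sizing keeps per-unit latency predictable, which makes
    autoscaling and capacity planning straightforward.
    """
    for i in range(0, len(records), unit_size):
        yield records[i:i + unit_size]


def transform(record):
    # Stand-in for real per-record work.
    return {**record, "processed": True}


def process_unit(unit):
    """Each unit is independently processable: no shared state and no
    ordering dependency, so any worker can pick up any unit."""
    return [transform(r) for r in unit]
```

Because the units are independent, fanning them out across workers is a map operation rather than a coordination problem.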

Read More
Brian Conn

The Async Decoupling Pattern for Scalable Batch Processing

Batch processing architecture has a clean pattern that scales elegantly: decouple batch systems asynchronously from everything else. When you get this right, your real-time system stays stable regardless of batch volume, and you never need elaborate job scheduling to avoid infrastructure strain.
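A minimal sketch of the decoupling, using Python's in-memory `queue` as a stand-in for a durable broker (the handler names are hypothetical):

```python
import queue

# In production this would be a durable queue (e.g. SQS, Kafka),
# not an in-process structure.
batch_queue = queue.Queue()


def handle_realtime_request(event):
    """Real-time path: do the latency-sensitive work, then hand off.

    The enqueue is fire-and-forget, so batch volume never affects
    real-time latency or stability.
    """
    batch_queue.put(event)
    return {"event": event, "status": "accepted"}


def batch_worker(max_items=100):
    """Batch path: drain whatever has accumulated, at its own pace."""
    drained = []
    while len(drained) < max_items:
        try:
            drained.append(batch_queue.get_nowait())
        except queue.Empty:
            break
    return drained
```

The key property is that neither side waits on the other: the producer returns immediately, and the consumer pulls work whenever capacity allows.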

Read More
Brian Conn

Batch and Real-Time Platforms Have Different Jobs

When designing data platforms, I frequently encounter teams trying to build one unified system that handles both real-time streaming and batch analytics. The instinct makes sense: both workloads operate on the same underlying data, so why not share infrastructure?

Getting this architecture right has real consequences.

The challenge is that these workloads have fundamentally different characteristics. Supporting both well on a single platform is expensive and complex. In most cases, you get better results by separating them early and letting each system lean into its strengths.

Read More
Brian Conn

Making Interviews Objective with AI (Without Making Them Worse)

Everyone has opinions about candidates. That's the problem.

We're supposed to ask standard questions, evaluate people against the job description, and test whether they can do the work. Instead, we dig into areas where we think they're weak, ask different questions for each person, and end up testing our biases instead of their abilities.

Read More
Brian Conn

The Software That Shouldn't Exist

Everyone's worried about AI replacing engineers. The more interesting question is what happens when the cost of building software drops so dramatically that entirely new categories of software become viable.

The industry is calling this "personalized software." Custom tools built for a specific person, a specific context, a specific moment. Software that never leaves your machine. Software that could never justify building a commercial product. Software that, until recently, simply wouldn't exist.

Read More
Brian Conn

Shifting Left - How Small Teams Handle Organizational Gaps Without Breaking

Every small organization has gaps. Maybe you have an engineering lead but no dedicated DevOps team. Maybe your product manager is stretched thin and the tech lead is absorbing PM responsibilities. Maybe a designer role is emerging, but nobody owns it yet.

These gaps often emerge in specific domains. Growing organizations typically need four types of engineering leadership, and early-stage teams almost never have all of them covered. This is normal. The question is: how do you respond?

Teams often make the mistake of dumping the entire burden on one person. They identify the gap, find whoever is closest to it, and expect that person to absorb all the additional work. This breaks people.

There's a better approach I call "shifting left."

Read More
Brian Conn

Working in the Mud - The Mental Model That Keeps Engineering Teams Moving

Every engineering blog paints a picture of clean microservices, continuous deployment, and comprehensive observability. I've been in this industry for over a decade, and I've never experienced this ideal state across the board. I've seen glimmers. Teams that nail one dimension. But never everything at once.

That gap between the ideal and reality is what I call working in the mud.

Read More
Brian Conn

Software Architecture Is a Building - A Mental Model for Technical Decisions

Most architecture discussions devolve into abstract debates about microservices, monoliths, and database choices. After years of explaining these concepts to engineers and product leaders, I've found that thinking about software architecture like a physical building cuts through the noise and makes the tradeoffs viscerally clear.

This isn't just a teaching metaphor. It's a decision framework that surfaces why some changes cost weeks and others cost months, why certain tech debt compounds silently while other debt screams at you daily, and how to gauge the right amount of architectural runway to build.

Read More
Brian Conn

AI-Assisted Development Changes What Matters in Framework Selection

The two-minute deploy is killing my productivity.

That sounds wrong until you think about proportions. Two minutes is nothing. But when AI-assisted development shrinks the time spent writing code, those two-minute deploys start consuming a much larger percentage of your development cycle.

I discovered this while building with a managed backend framework that requires redeployment even during local sandbox development. The frontend rebuilds in seconds. The backend takes two minutes. Suddenly, that backend deploy time is where I spend most of my dev cycle waiting.
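The proportion argument is easy to make concrete with hypothetical (not measured) coding times:

```python
def deploy_share(coding_minutes, deploy_minutes=2.0):
    """Fraction of a code-then-deploy cycle spent waiting on the deploy."""
    return deploy_minutes / (coding_minutes + deploy_minutes)

# Hand-writing a change (~30 min of coding): the 2-minute deploy is
# about 6% of the cycle, effectively a rounding error.
# AI-assisted (~3 min of coding): the same deploy is 40% of the cycle.
```

The deploy didn't get slower; the rest of the cycle got faster around it.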

A caveat before going further: this observation comes from a greenfield project where I'm moving quickly and iterating frequently. AI-assisted development changes the structure of work in existing projects too, but this effect is most pronounced when building something new and small, where rapid iteration is the default.

Read More
Brian Conn

Stop Fighting the Wrong Battles - The Three-Level Problem Framework

Most engineering teams waste weeks solving the wrong problems.

They polish user interfaces while core APIs fail. They optimize conversion funnels while databases crash. They redesign onboarding flows while authentication randomly breaks.

This happens because everything gets labeled "high priority" without any systematic way to determine what actually needs fixing first.

Here's a three-level framework that immediately clarifies what to fix first, what can wait, and what's wasting everyone's time.

Read More
Brian Conn

Tactical Work Shedding - How to Plan for the Plan to Fail

Every sprint plan will go sideways. Every project timeline will hit unexpected problems. This isn't pessimism. It's pattern recognition from years of leading engineering teams through hundreds of delivery cycles.

The question isn't whether your estimates will be wrong. The question is: have you already decided what to cut when they are?

I call this approach Tactical Work Shedding, and it's a reliable practice I've used as a tech lead collaborating with product managers on sprint planning and feature delivery.

Read More
Brian Conn

Risk Evaluation in the Age of AI-Aided Development

Engineering teams have always made judgment calls about risk and speed. With AI development tools becoming standard practice, that judgment call has gained a new dimension demanding more careful consideration.

Read More
Brian Conn

The Seven Tiers of SaaS Engineering Complexity

In cycling, the pain doesn't decrease as you get better. You just get faster.

This applies directly to software engineering. Engineers don't find work easier as they mature; they tackle increasingly complex problems that maintain the same cognitive challenge. A senior engineer debugging distributed systems experiences much the same mental strain as a junior fixing their first API bug. The difference is the tier of complexity they're operating within.

Read More
Brian Conn

Your Dashboards Are a Code Smell (And How to Fix It)

I've been on call for over a decade across production SaaS platforms. I've debugged cascading failures at 3 AM, managed 99.99%+ uptime commitments, and transformed reactive teams into proactive operational excellence cultures. Through all of that, I've learned one uncomfortable truth: if your team relies on dashboards for incident response, you have an observability problem.

Dashboards are the lowest common denominator for monitoring. Over-reliance on them (or truly any reliance on them for production incident response) is a code smell for your observability strategy.

Read More
Brian Conn

Designing Monitoring Tools for the Job to Be Done

Successful monitoring platforms rest on a fundamental principle that many teams overlook: the format of a page should be determined by who you expect to be there and what job they need to accomplish.

This requires purpose-built interfaces, not configuration layers. Different users come to your monitoring platform with completely different needs, and your page design should reflect those differences from the ground up.

Read More
Brian Conn

The On-Premises Revenue Trap - Why Enterprise SaaS Deployments Kill Engineering Velocity

Enterprise customers love asking for on-prem deployments. The contract values look irresistible: 2-5x your standard SaaS pricing, multi-year commitments, and the validation that comes with enterprise logos. But having managed hybrid and full on-prem deployments across multiple SaaS platforms, I can tell you the operational reality is a trap that strangles engineering teams.

The numbers tell a stark story: research shows that personnel costs represent 50-85% of total on-prem application ownership, with the vast majority of that time spent on monitoring, maintenance, and troubleshooting rather than innovation.

Read More