The Three Questions That Tell You What to Automate

Not every repetitive task is worth automating. Some tasks feel tedious but resist automation because they require subtle judgment at every step. Others feel complex but are actually just long sequences of mechanical steps. Knowing the difference saves you from building automation that never delivers, or manually grinding through work a script could handle.

I've found that three questions reliably separate the automatable from the not-yet-automatable. They work whether you're evaluating a candidate for AI assistance, a custom script, or a full workflow tool. This framework also applies to choosing what to build versus what to shed, as I explored in Tactical Work Shedding.

Question One: Is It Mechanical or Does It Require Judgment?

Picture a spectrum. On one end, you have work that follows exact steps every time. On the other end, you have work that requires human judgment, pattern recognition, or contextual decision-making.

The key distinction is between conditional logic and actual judgment. Conditional work sounds complex because it has branches: if it's this type, do these steps; if it meets these requirements, do those steps instead. Conditional logic can be written down. It can be codified. It can be automated.

Judgment means you're weighing factors that change based on context, making calls that depend on experience, or evaluating quality in ways that resist simple rules. A decision tree with fifty branches is still mechanical. A single decision that requires reading the room is judgment. These judgment calls often deserve to stay manual.

When evaluating a task, ask yourself: could I write a decision tree that covers every scenario? If yes, even if that tree is large and branching, the work is mechanical. If you keep finding scenarios where the answer is "it depends on things I can't easily articulate," you're looking at judgment work.
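To make the distinction concrete, here's a toy expense-routing rule sketched in Python. The categories and thresholds are invented for illustration, but the point stands: every branch can be written down, so the whole decision is mechanical, no matter how many branches you add.

```python
# A hypothetical expense-approval rule set, written as plain conditionals.
# Every branch is explicit, so the decision is mechanical, not judgment.

def route_expense(amount: float, category: str, has_receipt: bool) -> str:
    """Return who (or what) handles an expense report."""
    if not has_receipt:
        return "reject: receipt required"
    if category == "travel":
        if amount <= 500:
            return "auto-approve"
        return "manager review"
    if amount <= 100:
        return "auto-approve"
    return "manager review"
```

If you can keep extending a function like this until it covers every case, the task is conditional. The moment you catch yourself writing a branch like "if it feels off, escalate," you've hit judgment.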

Most tasks people dismiss as "too complex to automate" are actually just conditional. They have a lot of branches, which makes them feel complex, but every branch follows predictable rules. McKinsey's 2025 research reinforces this: current technology could automate about 57 percent of U.S. work hours, yet over 70 percent of employer-sought skills span both automatable and non-automatable work. Most roles aren't purely mechanical or purely judgment. They're a mix, and the real leverage comes from separating the two.

Question Two: What's the Work-to-Review Ratio?

Even mechanical tasks often need a human check before the results go live. That's fine. The question isn't whether review is needed, but the ratio between automated work and human review time.

Consider an offer letter. Someone needs to fill out a detailed document with the right compensation, title, start date, benefits elections, and legal language. Tedious, time-consuming, and almost entirely mechanical. Once it's filled out, a human reads through it in five minutes to make sure the details are correct. That's a great ratio: two hours of work condensed into an automated step, followed by five minutes of review.

Now consider the opposite. If automating a task only saves three minutes before requiring human review, what's the point? You can't do anything meaningful in a three-minute window. The automation hasn't freed you up; it's just added a context switch. Gloria Mark's research at UC Irvine found it takes an average of 23 minutes to resume a task after an interruption. If your automation only buys three minutes before requiring review, the context switch costs more than the time saved.

The ratio matters because automation isn't free. Every handoff carries a minimum overhead: you stop what you're doing, load the context of the automated work, review it, approve it, and return to your previous task. If the automated portion is small relative to that overhead, the automation creates more friction than it removes.

Look for tasks where automation handles a substantial block of work before needing human input. The sweet spot: automated work runs long enough for you to genuinely focus on something else, and review is quick enough that it doesn't become its own project.
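The ratio argument is easy to check with back-of-envelope arithmetic. This sketch uses the numbers from the examples above; the flat 10-minute switch cost is an assumption, so substitute whatever overhead matches your own environment.

```python
# Back-of-envelope check of the work-to-review ratio. The 10-minute
# switch cost is an assumed placeholder for stopping your current task,
# loading context, reviewing, and resuming.

SWITCH_COST_MIN = 10

def net_minutes_saved(automated_work: float, review: float) -> float:
    """Manual minutes eliminated, minus review time and switch overhead."""
    return automated_work - review - SWITCH_COST_MIN

print(net_minutes_saved(120, 5))  # offer letter: 105, clearly worth it
print(net_minutes_saved(3, 1))    # tiny task: -8, costs more than it saves
```

The exact overhead number matters less than the shape of the result: small automated blocks go negative almost immediately, large ones stay positive even with generous overhead.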

Question Three: How Batchy Can You Make the Review?

This builds on the second question, but addresses a subtlety most people miss.

Even when the work-to-review ratio is good for individual steps, the overall workflow might still require too many interruptions. If you automate step one, review it, automate step two, review it, automate step three, review it, you've created a sequence that keeps pulling you back. Each review might only take a minute, but the constant context switching destroys any time savings. Microsoft's 2025 Work Trend Index found employees face a ping every two minutes during core hours, totaling 275 interruptions per day. Poorly designed automation that adds to that interrupt stream makes the problem worse, not better.

The solution is batching. Instead of reviewing each step individually, can you group the review into larger checkpoints?

Take an employee onboarding workflow. The first step might be generating an offer letter, which requires review before sending. That's an unbatched review step: you need to verify the details before the letter goes out. But the next phase might involve setting up accounts across seven different systems: email, Slack, HR platform, code repositories, project management, VPN access, and internal wiki. Each is mechanical, and none is irreversible.

So instead of reviewing each account setup individually, you let the automation run through all seven. It takes an hour. You're doing other work. When it's done, you spend five or ten minutes on a spot check: verify the accounts exist, confirm the permissions look right, make sure nothing was missed. Then you hit send.

That's batchy automation. The unbatched review (the offer letter) happens once upfront. The batched review (the system setup) happens once at the end. In between, you're free. This same principle applies beyond workflow automation. In systems architecture, batching and real-time processing have fundamentally different jobs.
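The onboarding flow above can be sketched as two gates around one batched phase. The system names and setup function here are hypothetical placeholders, not a real provisioning API; the structure is what matters.

```python
# Sketch of the batched onboarding flow: one unbatched gate up front,
# one batched spot check at the end, no interruptions in between.
# setup_account is a hypothetical stand-in for real provisioning calls.

SYSTEMS = ["email", "slack", "hr_platform", "repos",
           "project_mgmt", "vpn", "wiki"]

def setup_account(system: str, user: str) -> dict:
    # Placeholder for the real per-system provisioning call.
    return {"system": system, "user": user, "status": "created"}

def onboard(user: str) -> list:
    # Gate 1 (unbatched): offer letter is generated and human-approved
    # before anything else runs.
    # Batched phase: all seven setups run with no human in the loop.
    results = [setup_account(s, user) for s in SYSTEMS]
    # Gate 2 (batched): one spot check over the whole batch.
    failures = [r for r in results if r["status"] != "created"]
    if failures:
        raise RuntimeError(f"spot check failed: {failures}")
    return results
```

Notice that the mechanical steps only get batched because they're reversible; the irreversible step (sending the letter) keeps its own gate.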

Putting It Together

These three questions form a quick evaluation framework:

  1. Is the work mechanical (conditional) or does it require judgment? If mechanical, proceed.
  2. What's the ratio of automated work to human review time? If the ratio is favorable (lots of work, quick review), proceed.
  3. Can you batch the review steps to minimize interruptions? If yes, you've found something worth automating.
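The three questions can even be collapsed into a rough screening function. The 10:1 ratio threshold below is an illustrative assumption, not a rule from the framework itself; tune it to your own tolerance for review overhead.

```python
# A rough screen built from the three questions. The 10:1 ratio
# threshold is an assumed cutoff chosen for illustration.

def automation_candidate(mechanical: bool, work_min: float,
                         review_min: float, batchable: bool) -> bool:
    """True if a task passes all three questions."""
    good_ratio = review_min > 0 and work_min / review_min >= 10
    return mechanical and good_ratio and batchable
```

A two-hour offer letter with a five-minute review passes; a three-minute task with a one-minute review fails on the ratio alone, no matter how mechanical it is.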

When all three answers line up, you're looking at a strong automation candidate: predictable rules, substantial automated blocks, and batchable review checkpoints.

When answers don't line up, the task isn't necessarily impossible to automate. You may need to restructure the workflow: extract mechanical portions from judgment-heavy tasks, or redesign processes to batch review by grouping non-destructive steps and gating only where mistakes are costly to reverse.

The Automation Instinct

Over time, this framework becomes instinctive. You start seeing tasks through the lens of these three questions automatically. Someone describes a pain point, and before they finish explaining, you're already categorizing it: mechanical work, good review ratio, batchable checkpoints. That's automatable.

Or: judgment-heavy, constant oversight needed, tiny increments between reviews. That's a human task, at least for now.

The value isn't just in identifying what to automate. It's in identifying what not to automate, and being able to articulate why. That saves you from building automation that technically works but doesn't save time because review overhead eats the gains.

The simplest version of this: if someone can go off and do two hours of mechanical work, and you can review the output in five minutes, automate it. If someone needs you standing over their shoulder for every button click, the automation isn't ready yet.

