
When NOT to use an agent

Agents are powerful, but they're not the answer to everything. Sometimes a script, a form, or a human is the better choice. Here's how to tell the difference.

The AI industry wants you to believe agents can do everything. They can’t. And using an agent where a simpler tool would do is like driving a Ferrari to your mailbox. Technically possible. Wildly impractical. Kind of embarrassing if anyone’s watching.

I’ve watched teams spend weeks building agent workflows for problems that a 10-line script could solve. The allure is real: agents feel futuristic, they’re fun to build, and “we’re using AI” looks great in a slide deck. But the best engineers and the savviest business owners know when to reach for an agent and when to reach for something boring.

When a script is the better choice

If a task is the same every single time, with no judgment involved, you don’t need an agent. You need a script.

Renaming 500 files according to a pattern? Script. Moving data from one folder to another every night at midnight? Script. Converting a CSV to JSON? Script. Sending the same automated email when someone fills out a form? Script.

These tasks are deterministic. The input is predictable. The output is predictable. Nothing requires reasoning or interpretation. An agent would technically work here, but it would be slower, more expensive, and occasionally get it wrong for no good reason. A script runs in milliseconds and does exactly what you told it. Every time. Forever.

Here’s the litmus test: if you can write out the exact steps as a flowchart with no decision diamonds (no “it depends” branches), a script is better. The moment you have to add “use your judgment here” to the flowchart, that’s when an agent starts making sense.
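To make the contrast concrete, here is a sketch of the "rename 500 files according to a pattern" task from above. The folder layout and the target naming pattern are hypothetical; the point is that the behavior is fully deterministic, with no decision diamonds anywhere.

```python
from pathlib import Path

def rename_reports(folder: str) -> list[str]:
    """Rename every CSV in `folder` to report_001.csv, report_002.csv, ...

    Same input, same output, every time. No judgment, no interpretation.
    """
    renamed = []
    # sorted() makes the numbering stable across runs
    for i, path in enumerate(sorted(Path(folder).glob("*.csv")), start=1):
        new_name = f"report_{i:03d}.csv"
        path.rename(path.with_name(new_name))
        renamed.append(new_name)
    return renamed
```

Ten lines, runs in milliseconds, and it will do exactly this forever. An agent asked to do the same job would cost tokens every run and might one day "creatively" reinterpret the pattern.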

When a form is the better choice

This one drives me crazy, because I see it constantly. Someone builds a chatbot agent to collect structured information from users. Name, email, phone number, what service they’re interested in, preferred appointment time.

You know what collects that information perfectly every time with zero ambiguity? A form. Five fields. A submit button.

When you use an agent for structured data collection, you’re introducing a dozen failure modes that didn’t need to exist. The user might provide their info in an unexpected order. The agent might misparse something. It might ask a question the user already answered. The “conversation” takes three minutes when a form takes thirty seconds.

I’ve seen a real estate company replace their contact form with an AI chatbot. The chatbot would ask “What’s your budget?” and people would say things like “somewhere around 400 maybe 450 but we could stretch to 500 if the right place came along.” Now the agent has to figure out if that’s $400K or $500K and which number to store. A dropdown menu with price ranges would have handled this instantly.
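The ambiguity disappears the moment the input is constrained. A minimal sketch of the dropdown idea, with hypothetical price ranges:

```python
# A form constrains the answer to a fixed set of choices, so there is
# nothing to interpret. These ranges are illustrative, not from a real app.
PRICE_RANGES = ("Under $300K", "$300K-$400K", "$400K-$500K", "Over $500K")

def validate_budget(selection: str) -> str:
    # Deterministic: the value is either one of the options or rejected.
    if selection not in PRICE_RANGES:
        raise ValueError(f"not a listed budget range: {selection!r}")
    return selection
```

"Somewhere around 400 maybe 450 but we could stretch to 500" never enters the system, because the form never lets it in.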

If you need specific information in a specific format, use a form. Save agents for the conversations that actually benefit from being conversations.

When a human is the better choice

This is the one nobody in the AI industry wants to talk about. There are tasks where putting an agent in the loop is not just inefficient but actively harmful.

Firing someone. Delivering a medical diagnosis. Responding to a customer who just lost a family member and needs to cancel a service. Negotiating a contract where the subtext matters as much as the text. Talking a user through a moment of crisis.

These situations require empathy, and not the performed empathy of an AI that’s been trained to say “I understand how you must feel.” Real empathy. The kind where another person reads the room, adjusts their tone on the fly, and says the thing that needs to be said even if it’s uncomfortable.

AI agents are getting better at sounding empathetic. They’re not getting better at being empathetic. There’s a difference, and the people on the receiving end can usually tell. That said, agents can handle routine customer support well when designed with the right guardrails. Agents for customer support explores where this works and where it breaks down.

There’s also the accountability question. If an agent gives bad advice and someone gets hurt, who’s responsible? If it mishandles a sensitive HR situation, who takes the fall? For decisions with real consequences for real people, a human should be in the loop. Not optional. Required. (For more on this pattern, see Human in the loop.)

When search is the better choice

“What year was the Eiffel Tower built?” Don’t spin up an agent for this. Google it. It takes two seconds and the answer is certain.

Agents add value when you need to synthesize information from multiple sources, reason about it, and produce something new. If you just need a fact, you’re burning tokens and time for no reason.

This applies to developers too. If you need to know the syntax for a Python list comprehension, checking the docs is faster than asking an agent. If you need to know what arguments a function accepts, read the function signature. An agent will give you the answer, sure, but it might also give you a slightly wrong answer from an older version of the library. The docs won’t.

There’s a useful rule of thumb: if the answer exists in one place and you roughly know where that place is, go get it directly. If the answer requires pulling from multiple places and reasoning about what you find, that’s agent territory.

The cost angle people ignore

Agents are not free. Every time an agent runs, it consumes tokens, which cost money. It takes time, usually seconds, sometimes minutes for complex tasks. And it can fail.

A database query returns in milliseconds and never hallucinates. An API call to a weather service gives you exact data. A script that renames files doesn’t suddenly decide to rename them differently because it “interpreted” your instructions in a creative way.

For businesses running agents at scale, the cost difference is real. I’ve seen companies running thousands of agent calls per day to answer questions that could be handled by a simple FAQ page with a search bar. Each call costs fractions of a cent, but multiply that by thousands of calls and months of operation. Then compare it to a static web page that costs essentially nothing to serve.
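The back-of-envelope math is worth doing explicitly. Every number below is assumed for illustration; real per-call costs vary widely with model and prompt size.

```python
cost_per_call = 0.004    # dollars per agent call (assumed)
calls_per_day = 5_000    # assumed volume
days = 180               # roughly six months

agent_cost = cost_per_call * calls_per_day * days
print(f"${agent_cost:,.0f}")  # → $3,600 — vs. ~$0 to serve a static FAQ page
```

Fractions of a cent per call still compound into real money at scale, and that's before counting the cost of the calls that fail.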

The most expensive solution is the one that fails unpredictably. Agent failures can cascade. If an agent hallucinates a wrong answer and a downstream system acts on it, you’re debugging something that a deterministic system would never have produced in the first place.

Signs you’ve over-engineered it

If your agent setup has any of these symptoms, you probably went too far:

- Your agent has 40+ tools and you're not sure which ones it actually uses.
- You've written elaborate instructions explaining when to use Tool A vs. Tool B vs. Tool C for similar tasks.
- Your agent regularly picks the wrong tool and you keep adding more instructions to fix it.
- The agent workflow takes 30 seconds to do something a direct API call could do in 200 milliseconds.
- You spend more time debugging the agent than the agent saves you.

The classic tell: when someone asks “why did the agent do that?” and nobody can explain it. If your system’s behavior is a mystery to the people who built it, you’ve got a complexity problem, not an AI problem.

The decision framework

When you’re deciding whether to use an agent, ask yourself four questions.

Is the task ambiguous? Does it require interpreting natural language, dealing with vague inputs, or handling situations that vary each time? If the task is identical every time, you don’t need intelligence. You need automation.

Does it require judgment? Is there a “right answer” that depends on context, trade-offs, or subjective evaluation? If the answer is always the same regardless of context, a lookup table or a rule engine works fine.

Does it combine multiple skills? Does solving the problem require reading data, then reasoning about it, then taking action, then evaluating the result? Single-step tasks rarely need agents. Multi-step tasks with branching logic are where agents earn their keep.

Is the cost of failure acceptable? If the agent gets it wrong, what happens? A bad social media caption draft? Annoying but fixable. A wrong medication dosage? Unacceptable. Match the reliability requirements to the tool’s reliability.
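The four questions above can be sketched as a crude checklist. The hard stop on failure cost and the two-out-of-three threshold are my own illustrative choices, not a real scoring system.

```python
def agent_is_a_fit(ambiguous: bool, needs_judgment: bool,
                   multi_step: bool, failure_acceptable: bool) -> bool:
    # Hard stop: if failure is unacceptable, keep a human or a
    # deterministic system in control regardless of the other answers.
    if not failure_acceptable:
        return False
    # Otherwise an agent starts to make sense once the task is
    # ambiguous or judgment-heavy, especially when it spans steps.
    return sum([ambiguous, needs_judgment, multi_step]) >= 2
```

Renaming 500 files scores (False, False, False, True) and correctly comes out as "use a script." A multi-source research task with fuzzy inputs scores three yeses and clears the bar.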

If you answered “yes” to the first three questions and the cost of failure is acceptable, an agent is probably the right call. If you answered “no” to most of them, reach for something simpler and save yourself the headaches.

Agents are a tool, not a religion

The best approach isn’t “use agents everywhere” or “avoid agents completely.” It’s knowing which tool fits which job. Sometimes that’s an agent. Sometimes it’s a bash script, a Google form, a phone call, or a spreadsheet formula.

Understanding how agents work helps you make this call, because once you see the think-act-observe loop and all its overhead, you develop an instinct for when that overhead is worth it and when it’s just waste.

The smartest people I know in this space aren’t the ones who use AI for everything. They’re the ones who know exactly when to use it and when to close the laptop and pick up the phone.

Pair this with AI demos are misleading if you’re in the middle of a tool evaluation. Vendor demos hide the failure modes this article describes; together, the two pieces make a skeptical-evaluation toolkit.