AI agent vs chatbot vs assistant — the differences that matter
A chatbot answers prompts. An assistant augments a specific task. An agent pursues a goal end-to-end. The choice between them depends on whether you want faster answers, faster tasks, or replaced work.
Every AI product pitch uses at least two of these three words — chatbot, assistant, agent. They're often used interchangeably. They shouldn't be. Each describes a different model of how software and humans interact, and picking the wrong one produces the wrong ROI.
Chatbot: turn-based, stateless, reactive
A chatbot takes a prompt, returns a response, and waits. It has no memory between turns (or very shallow memory). It acts only when asked. Classic examples: ChatGPT, Claude, Gemini, customer-service bots.
What a chatbot is good at: answering questions faster than Google, producing first-pass drafts, helping the user think through something. What it isn't good at: finishing work on its own, tracking state across time, making decisions without being asked.
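The turn-based, stateless shape above can be sketched in a few lines. This is an illustration, not any vendor's API: `call_model` is a hypothetical stand-in for an LLM call, stubbed so the example runs on its own.

```python
def call_model(prompt: str) -> str:
    # Stub: a real chatbot would call an LLM API here.
    return f"response to: {prompt}"

def chatbot_turn(prompt: str) -> str:
    # Stateless and reactive: each call is independent.
    # No history, no context, nothing happens until asked.
    return call_model(prompt)
```

The defining property is what's missing: no memory argument, no trigger, no goal. One prompt in, one response out.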
Assistant: task-scoped, partially stateful, interactive
An assistant sits beside a specific task or tool. GitHub Copilot in your IDE. Notion AI in your doc. Gmail Smart Compose in your inbox. The assistant has context — what file you're in, what you just typed, what's on screen — but it doesn't act until you invoke it.
What an assistant is good at: reducing friction inside a task (auto-complete, summarize, rewrite). What it isn't good at: running multi-step work end-to-end.
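The assistant shape differs in one input: the tool supplies context, but the human still pulls the trigger. A minimal sketch, with the same hypothetical stubbed `call_model` and an illustrative editor context (the field names are assumptions, not a real plugin API):

```python
from dataclasses import dataclass

def call_model(prompt: str) -> str:
    # Stub: a real assistant would call an LLM API here.
    return f"result for: {prompt}"

@dataclass
class EditorContext:
    filename: str   # what file you're in
    selection: str  # what's on screen / highlighted

def assist(ctx: EditorContext, action: str) -> str:
    # Human-initiated: runs only when the user invokes `action`.
    # Context-aware: the prompt carries what the tool already knows.
    prompt = f"{action} this text from {ctx.filename}: {ctx.selection}"
    return call_model(prompt)
```

Compared with the chatbot sketch, the only change is the extra `ctx` input — which is exactly why assistants reduce friction inside a task but can't run work end-to-end.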
Agent: goal-directed, persistent, autonomous
An agent takes an objective and runs. Between input and output it may take dozens of steps — planning, calling tools, reading responses, adjusting. It has persistent memory. It can work without a human present. Classic examples: a recruiting agent that sources candidates overnight, a sales agent that preps briefings at 7am, an ops agent that runs morning digests.
What an agent is good at: replacing a recurring process. What it isn't good at: one-off questions (use a chatbot) or single-task help (use an assistant).
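The agent shape is a loop, not a call: plan, act, observe, repeat until the goal is met or a step budget runs out. A minimal sketch — the tool (`search_candidates`), the goal check, and the step budget are all illustrative assumptions, not a real agent framework:

```python
def search_candidates(query: str) -> list[str]:
    # Stub tool: a real agent would call a live data source here.
    return [f"candidate matching {query}"]

def agent_run(objective: str, max_steps: int = 20) -> list[str]:
    memory: list[str] = []  # persistent state across steps
    for step in range(max_steps):
        # Plan: check the goal against what's in memory.
        if len(memory) >= 3:  # illustrative goal: three results gathered
            break
        # Act and observe: call a tool, record the result.
        memory.extend(search_candidates(f"{objective} #{step}"))
    return memory  # a finished work product, not a reply
```

Note what changed versus the other two sketches: the human appears nowhere inside the loop, and the return value is completed work rather than a response.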
A side-by-side comparison
How long the interaction lasts
- Chatbot: seconds — you ask, it answers
- Assistant: minutes — you work on something, it helps
- Agent: hours to weeks — it runs until the goal is met
Who initiates
- Chatbot: human always initiates
- Assistant: human invokes it, prompted by in-tool context
- Agent: agent can initiate (scheduled, event-driven, or continuous)
What the output is
- Chatbot: a response
- Assistant: a transformation of what you were working on
- Agent: completed work product (a shortlist, a report, a briefing, a closed ticket)
Which one do you actually need?
The practical test:
- If you want to ask questions or think out loud, you want a chatbot.
- If you want the tool you're already using to be 20% faster, you want an assistant inside that tool.
- If you want a piece of recurring work to happen without you, you want an agent.
Most enterprise AI spend flows to chatbots and assistants, because those are easier to deploy. The leverage is in agents — but agents require a different adoption pattern: you hand off a process, not a seat.
"Chatbots save minutes. Assistants save hours. Agents replace functions.
— The Spawnlabs take
Why vendors conflate the three
Because "agent" sells. Every vendor wants to claim agentic behavior, even if what they ship is a chatbot in a trench coat. The test is simple: give the product a 24-hour objective and see if it produces finished work when you come back. Most don't.