5. What Agents Really Are

26.11.25 02:29 PM - Comment(s) - By Ines Almeida

Agentic Architectures Series: How Business Leaders Build Systems That Learn
PART II — THE FOUNDATIONS: The Anatomy of Agentic Systems



"A procurement team asked its chatbot to 'renew the contract with Vendor A.' The assistant confidently produced a summary of the existing contract and nothing else. The task required checking expiry dates, validating compliance terms, retrieving the latest pricing, updating the request in the workflow system, and initiating approval. The model was capable of answering questions but incapable of coordinating action. The gap became clear: agents are not extended chat interfaces; they are systems built to reason, retrieve, and act."

Photo by Aideal Hwa on Unsplash


The term “agent” has been stretched, diluted, and overused. In practice, most of what is labeled an “agent” today is a chat interface wrapped around a model. This creates confusion and unrealistic expectations about what these systems can actually do inside an enterprise.


To design reliable AI, leaders need an exact definition. In this series, an agent is not a persona, not an assistant, and not a chatbot. An agent is a system with four core capabilities:

  1. Reasoning: deciding what action to take next,
  2. Retrieval: accessing the information required to act,
  3. Tool use: executing actions in external systems,
  4. Memory: maintaining continuity across steps.


Everything else is implementation detail.
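The four capabilities can be written down as a minimal interface. This is a sketch, not a definitive implementation, and every name in it is hypothetical:

```python
from abc import ABC, abstractmethod
from typing import Any


class Agent(ABC):
    """Minimal contract for the four core capabilities (illustrative names)."""

    @abstractmethod
    def reason(self, task: str, context: dict) -> str:
        """Decide what action to take next."""

    @abstractmethod
    def retrieve(self, query: str) -> list:
        """Access the information required to act."""

    @abstractmethod
    def use_tool(self, name: str, **params: Any) -> Any:
        """Execute an action in an external system."""

    @abstractmethod
    def remember(self, key: str, value: Any) -> None:
        """Maintain continuity across steps."""
```

Anything that implements all four is an agent in the sense used here; anything missing one of them is something else wearing the label.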


Agents are architectural components, not characters. They perform structured work using defined rules, boundaries, and shared context.


1. Agents are systems that make decisions, not predictions.


Models generate predictions. They output text based on probability. Agents, by contrast, must:

  • evaluate the task,
  • select a tool or information source,
  • decide whether more context is needed,
  • determine when to stop,
  • escalate when unsure,
  • coordinate sequential or multi-step actions,
  • comply with organizational constraints.


This shifts the focus from “output quality” to decision quality.


An agent is not judged by how fluent its responses are but by how consistently it makes appropriate decisions within the boundaries defined for it.


2. Agents operate through explicit, inspectable steps.


A well-designed agent follows a structured sequence:

  1. Interpret the task,
  2. Check memory and context,
  3. Retrieve information if needed,
  4. Select and invoke tools,
  5. Evaluate the tool output,
  6. Decide whether further actions are required,
  7. Produce a final response or escalate.


These steps must be:

  • observable,
  • reproducible,
  • auditable,
  • adjustable.


If a system cannot expose its reasoning steps, it is not an agent; it is an opaque model wrapped in a workflow.
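The seven steps above can be sketched as one inspectable loop. Everything here is hypothetical: the `task["steps"]` plan, the callable signatures, and the trace format are assumptions made for illustration, not a production design:

```python
def run_agent(task, memory, retriever, tools, max_steps=5):
    """Run a task as explicit steps; return (result, trace).

    Every step lands in `trace`, so the run is observable,
    reproducible, auditable, and adjustable.
    """
    trace = [("interpret", task)]                # 1. interpret the task
    context = dict(memory)                       # 2. check memory and context
    if "facts" not in context:                   # 3. retrieve information if needed
        context["facts"] = retriever(task)
        trace.append(("retrieve", context["facts"]))
    for name, params in task.get("steps", []):   # 4. select and invoke tools
        if name not in tools or len(trace) >= max_steps:
            trace.append(("escalate", name))     # 7. escalate when blocked
            return "ESCALATE", trace
        output = tools[name](**params)
        trace.append(("tool", name, output))     # 5. evaluate the tool output
        context[name] = output                   # 6. carry the result forward
    return context, trace                        # 7. produce a final response
```

The point of the sketch is the `trace`: every decision leaves a record that can be replayed and audited after the fact.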


3. Agents rely on retrieval, not improvisation.


Agents should not “know” answers in the way models do. They should access the information required to answer. This includes:

  • documents,
  • knowledge bases,
  • enterprise data stores,
  • APIs,
  • historical cases,
  • structured rules,
  • environmental state,
  • system memory.


Improvisation increases risk. Retrieval increases reliability.


An agent becomes more accurate and more aligned as its retrieval improves, not because the model is smarter, but because the environment is clearer.
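The contrast between improvising and retrieving can be made concrete. In this sketch (all names hypothetical), the agent answers only from retrieved sources and escalates rather than guessing when nothing is found:

```python
def answer_with_retrieval(question, search):
    """Answer only from retrieved sources; refuse rather than improvise.

    `search` is any callable returning a list of (source_id, text) pairs.
    """
    hits = search(question)
    if not hits:
        # No grounding available: escalate instead of guessing.
        return {"answer": None, "sources": [], "action": "escalate"}
    sources = [sid for sid, _ in hits]
    evidence = " ".join(text for _, text in hits)
    return {"answer": evidence, "sources": sources, "action": "respond"}
```

Because every answer carries its sources, a reviewer can check the agent's grounding instead of trusting its fluency.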


4. Agents act through tools, not hallucination.


The defining feature of an agent is its ability to execute actions using external systems:

  • search tools,
  • data lookup tools,
  • workflow engines,
  • transaction APIs,
  • content generators,
  • communication channels,
  • calculators or evaluators,
  • application interfaces.


Tools give the agent the ability to do, not just say.


The role of the model is to decide which tools to invoke, with what parameters, and in what order. The role of the architecture is to ensure those decisions are safe, constrained, and aligned.
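One way the architecture enforces that split is a tool registry: the model proposes a tool name and parameters, and the registry refuses anything outside its defined surface. A minimal sketch, with illustrative names:

```python
import inspect


class ToolRegistry:
    """Constrains which tools exist and validates parameters before execution."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def invoke(self, name, **params):
        if name not in self._tools:
            # The model asked for a tool the architecture never granted.
            raise PermissionError(f"unknown tool: {name}")
        fn = self._tools[name]
        allowed = set(inspect.signature(fn).parameters)
        extra = set(params) - allowed
        if extra:
            raise ValueError(f"unexpected parameters: {sorted(extra)}")
        return fn(**params)
```

The model can only choose among registered tools; the registry, not the model, decides what is executable at all.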


5. Agents require boundaries to behave predictably.


An unconstrained model is not an agent. An agent becomes predictable when the organization defines:

  • what it is allowed to answer,
  • which tools it can access,
  • when it must escalate,
  • where it must retrieve context,
  • how uncertainty is handled,
  • unacceptable actions or domains,
  • required verification steps,
  • measurable indicators of success.


These boundaries ensure that an agent acts as a controlled component of a larger system, not as a free-form generator. Boundaries are what transform capability into reliability.
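Several of the boundaries above can be checked before any action executes. A sketch of such a pre-flight check, where the policy fields and thresholds are assumptions chosen for illustration:

```python
def check_boundaries(action, policy):
    """Return (allowed, reason) for a proposed action under a policy dict."""
    if action["tool"] not in policy["allowed_tools"]:
        return False, "tool not allowed"
    if action.get("domain") in policy.get("forbidden_domains", []):
        return False, "forbidden domain"
    if action.get("confidence", 1.0) < policy.get("min_confidence", 0.0):
        # How uncertainty is handled: below threshold, escalate to a human.
        return False, "escalate: below confidence threshold"
    return True, "ok"
```

The returned reason doubles as a measurable indicator: counting refusals by reason shows where the agent is hitting its boundaries.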


6. Agents depend on memory to maintain state.


Agents must keep track of:

  • what has already been done,
  • which tools were invoked,
  • what the intermediate results are,
  • what remains unresolved,
  • what information has been confirmed,
  • the current objective,
  • constraints and conditions,
  • errors that occurred in previous steps.


Without memory, multi-step tasks fail because the system cannot reason across time. Memory turns sequences into coherent workflows.


Unlike human memory, this is engineered: short-term state, long-term storage, working memory, and episodic logs must be designed explicitly to avoid drift, noise, and contradiction.
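Engineered memory means these items are explicit fields, not implicit model state. A sketch of such a state object, with field names invented to mirror the list above:

```python
from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    """Explicitly engineered working state for a multi-step task."""
    objective: str = ""                              # the current objective
    completed: list = field(default_factory=list)    # what has already been done
    tool_calls: list = field(default_factory=list)   # which tools were invoked
    results: dict = field(default_factory=dict)      # intermediate results
    unresolved: list = field(default_factory=list)   # what remains unresolved
    confirmed: dict = field(default_factory=dict)    # confirmed information
    errors: list = field(default_factory=list)       # errors from previous steps

    def record_tool(self, name, output):
        self.tool_calls.append(name)
        self.results[name] = output
```

Because each field is explicit, drift and contradiction become inspectable: the system can be asked what it believes is confirmed versus still unresolved at any step.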


7. Agents are not autonomous; they are components in a controlled architecture.


Despite common language, agents are not independent actors. They:

  • operate inside designed workflows,
  • follow organizational rules,
  • use tools made available to them,
  • rely on human oversight,
  • depend on curated context,
  • act according to governance,
  • cannot self-correct without feedback,
  • cannot update their own knowledge without direction.


Autonomy is not the goal. Reliability is.


The purpose of agents is to perform specific forms of reasoning and action within a defined environment, not to replicate human independence.


8. The value of agents emerges from coordination, not capability.


Individual agents can be effective. But the real value scales when:

  • multiple agents collaborate,
  • retrieval, memory, and tool layers unify,
  • instructions and definitions remain consistent,
  • workflows embed verification and escalation,
  • the operating model adapts to support them.


Agents are building blocks of larger architectures. The organization's advantage comes from how well these blocks are integrated, aligned, and governed, not from the isolated performance of any single agent.
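Coordination itself can be sketched: a sequence of agents sharing one context, with a verification step between them that can halt and escalate. All names here are illustrative assumptions:

```python
def coordinate(agents, task, verify):
    """Run agents in sequence over a shared context, verifying each output.

    `agents` is an ordered list of callables; `verify` rejects bad outputs,
    embedding verification and escalation directly into the workflow.
    """
    context = {"task": task}
    for agent in agents:
        output = agent(context)
        if not verify(output):
            # A failed check stops the pipeline and surfaces where it stopped.
            return {"status": "escalated", "at": agent.__name__, "context": context}
        context[agent.__name__] = output
    return {"status": "done", "context": context}
```

The shared `context` is the unified retrieval-and-memory layer in miniature: each agent builds on what the previous ones produced, under a common verification rule.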


Conclusion


Agents are not a technological novelty. They are a design pattern for organizing intelligence so it can operate safely and effectively inside real work.

Understanding what agents are enables leaders to understand what they require: clear context, structured tools, defined boundaries, reliable memory, and a coherent architecture around them.


The next article examines the most foundational layer of that architecture: context, and why its quality determines the predictability of any intelligent system.
