Agentic Architectures Series: How Business Leaders Build Systems That Learn
PART II — THE FOUNDATIONS: The Anatomy of Agentic Systems

"A compliance agent delivered two contradictory answers in the same week. The tasks were identical, but the teams weren’t. One team’s instructions included a recently updated policy; the other relied on a legacy document stored in a separate folder. The agent didn’t contradict itself, the organization did. The system behaved faithfully to the inputs provided. The inconsistency wasn’t the model’s drift but the environment’s fragmentation."
Intelligent systems do not operate on knowledge alone. They operate on context: the instructions, constraints, definitions, data, and rules that shape how they interpret a task and decide what to do next. When context is coherent, systems behave more predictably. When context is inconsistent or incomplete, systems improvise.
Some failures attributed to “model behaviour” originate in missing or conflicting context rather than in limitations of the model. Designing reliable AI begins with understanding what context is, how it is assembled, and how it must be maintained.
1. Context is the set of signals that define how a system should act.
For an agentic system, context includes:
- the user request,
- relevant documents or data,
- domain rules and definitions,
- historical memory,
- tool availability and constraints,
- step-by-step instructions,
- organizational policies,
- role expectations,
- environmental information needed to complete a task.
Context is not secondary; it is the primary driver of system behaviour. A model cannot infer what it has not been given. When context is thin, the system fills gaps with guesses. When context is clear, the system’s behaviour aligns with the organization’s intent.
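To make this concrete, the signals above can be represented as one explicit, assembled object rather than scattered assumptions. The sketch below is illustrative only; the field names (user_request, domain_rules, and so on) are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    """Illustrative bundle of the signals an agent receives for one task."""
    user_request: str                                       # the task as the user stated it
    documents: list[str] = field(default_factory=list)      # relevant documents or data
    domain_rules: list[str] = field(default_factory=list)   # definitions, policies, domain rules
    memory: list[str] = field(default_factory=list)         # historical state carried forward
    tools: list[str] = field(default_factory=list)          # tools the agent is allowed to call
    instructions: str = ""                                   # step-by-step guidance, role expectations
    constraints: list[str] = field(default_factory=list)    # restrictions and red lines

    def is_thin(self) -> bool:
        """Crude signal that the system will be left to fill gaps with guesses."""
        return not (self.documents or self.domain_rules or self.instructions)
```

The exact fields will differ by use case; the point is that every signal the system depends on is named, populated, and inspectable.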
2. Context must be explicit, not assumed.
Human teams rely heavily on shared assumptions: unwritten norms, implicit rules, informal shortcuts, and tacit knowledge developed through experience. AI systems cannot access any of this unless it is deliberately encoded. This requires the organization to make explicit what was previously implicit:
- operational definitions,
- exceptions and edge cases,
- decision criteria,
- allowed inputs and expected outputs,
- restrictions and red lines,
- required sources of truth,
- correct workflows,
- rules for verification or escalation.
When these are not surfaced, the system fills silence with probability. The organization — not the model — creates ambiguity.
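One practical way to surface the implicit is to encode each rule as a structured record rather than as prose buried in a policy document. The example below is hypothetical; the refund rule and its field names are invented purely for illustration.

```python
# A hypothetical, machine-readable encoding of knowledge that usually lives
# in people's heads: definition, exceptions, decision criteria, red lines,
# source of truth, and escalation rule.
refund_policy_rule = {
    "term": "eligible_refund",
    "definition": "Purchase returned within 30 days with proof of payment.",
    "exceptions": ["digital goods", "clearance items"],
    "decision_criteria": {"max_days_since_purchase": 30, "receipt_required": True},
    "red_lines": ["never approve a refund above 500 EUR without identity verification"],
    "source_of_truth": "policies/refunds_v3.md",
    "escalation": "route to a human reviewer if any criterion is ambiguous",
}
```

Once knowledge is captured in this form, it can be retrieved, versioned, and checked for conflicts like any other data.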
3. Missing or inconsistent context results in predictable failure modes.
When context lacks clarity or alignment, systems exhibit consistent patterns of error:
- Contradiction: different documents or prompts define the same concept differently.
- Ambiguity: key terms or criteria are not defined at all.
- Drift: instructions diverge across teams, tools, or channels.
- Noise: retrieval pulls irrelevant or outdated information.
- Overconfidence: the system generates answers without adequate grounding.
- Fragmentation: context varies between use cases or environments, causing inconsistent behaviour.
- Misalignment: the system applies general rules to domain-specific tasks.
These failures are architectural, not algorithmic. Fixing them requires improving the environment.
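Some of these failure modes can be caught mechanically before they ever reach a model. Below is a minimal sketch of a contradiction check, assuming context items are stored as records with a term, a definition, and a source; the structure is illustrative, not a specific tool’s format.

```python
from collections import defaultdict

def find_contradictions(context_items: list[dict]) -> dict[str, set[str]]:
    """Group context items by term and flag terms defined in more than one way.

    Each item is assumed to look like:
    {"term": ..., "definition": ..., "source": ...}
    """
    definitions = defaultdict(set)
    for item in context_items:
        definitions[item["term"]].add(item["definition"].strip().lower())
    return {term: defs for term, defs in definitions.items() if len(defs) > 1}

# Example: two sources define "active customer" differently.
items = [
    {"term": "active customer", "definition": "Purchased in the last 90 days", "source": "sales.md"},
    {"term": "active customer", "definition": "Logged in within the last 30 days", "source": "product.md"},
]
assert "active customer" in find_contradictions(items)
```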
4. High-quality context is structured, consistent, and validated.
A reliable context layer has the following characteristics:
- Structured: information is broken into well-defined units (e.g., rules, parameters, instructions, examples, definitions) rather than long, uncurated text.
- Consistent: different sources agree on terminology, thresholds, and requirements.
- Relevant: only information that affects the outcome is included; noise is filtered out.
- Current: outdated documents or deprecated logic are removed proactively.
- Traceable: every element has a clear origin (document, database, rule set, or workflow owner).
- Portable: context is accessible across tools, interfaces, and systems.
Without deliberate curation, context decays faster than data, because operational reality changes faster than documentation.
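These characteristics translate naturally into metadata attached to every unit of context. A hedged sketch, assuming context is stored as discrete records rather than monolithic documents:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ContextUnit:
    """One well-defined unit of context and the metadata that keeps it reliable."""
    kind: str          # structured: "rule", "definition", "instruction", "example"
    body: str          # curated content, not a long uncurated text dump
    source: str        # traceable: originating document, database, or rule set
    owner: str         # who is accountable for keeping it correct
    version: str       # consistent: the revision other teams should agree on
    review_by: date    # current: when it must be re-validated or retired

    def is_stale(self, today: Optional[date] = None) -> bool:
        """True when the unit has passed its review date."""
        return (today or date.today()) > self.review_by
```

A unit that carries its own source, owner, version, and review date can be audited; a paragraph buried in a shared folder cannot.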
5. Context is assembled through retrieval, not embedded in the model.
Large models provide general knowledge, not company-specific intelligence. Everything that makes a workflow operationally correct must be retrieved, not assumed.
Context assembly typically includes:
- retrieving relevant documents,
- identifying governing rules,
- filtering for relevance,
- normalizing language or definitions,
- applying formatting rules,
- integrating memory or historical state,
- augmenting user requests with clarifying information,
- applying role or domain constraints.
The quality of retrieval directly determines the quality of reasoning. If retrieval is wrong, everything downstream inherits the error.
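The assembly steps above can be sketched as a single function. This is a simplified illustration under stated assumptions: `retriever` stands in for whatever search or retrieval component the organization uses, and the keyword-overlap filter is a deliberately crude placeholder for real relevance scoring.

```python
def assemble_context(request: str, retriever, rules: list[str],
                     memory: list[str], max_items: int = 5) -> str:
    """Assemble context for one task: retrieve, filter, and combine into a prompt."""
    candidates = retriever(request)                        # retrieve candidate documents
    words = request.lower().split()
    relevant = [c for c in candidates                      # crude relevance filter
                if any(w in c.lower() for w in words)][:max_items]
    sections = [
        "## Governing rules\n" + "\n".join(rules),         # identify governing rules
        "## Retrieved context\n" + "\n".join(relevant),    # only what affects the outcome
        "## History\n" + "\n".join(memory[-3:]),           # integrate recent memory
        "## Task\n" + request,                             # the user request itself
    ]
    return "\n\n".join(sections)                           # formatted for the model

# Example with a trivial in-memory retriever (a real system would use a search index).
docs = ["Refund policy: 30 days with proof of payment.", "Office hours: 09:00-17:00."]
prompt = assemble_context(
    "What is the refund policy?",
    retriever=lambda q: docs,
    rules=["Answer only from the retrieved documents."],
    memory=[],
)
```

If the retrieval step returns the wrong passages, every later section of the assembled prompt inherits that error, which is why retrieval quality dominates reasoning quality.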
6. Context requires governance, not one-time setup.
Context is dynamic. It changes as:
- policies evolve,
- regulations are updated,
- workflows shift,
- new tools are introduced,
- exceptions accumulate,
- teams diverge in their practices,
- definitions are refined.
Without governance, context fragments. Fragmentation leads to inconsistent behaviour, increased risk, and growing reliance on hidden assumptions. Governance must ensure:
- consistent definitions across teams,
- versioning and audit trails,
- regular review cycles,
- removal of deprecated information,
- alignment across interfaces and tools,
- enforcement of boundaries and constraints.
In a well-run architecture, context is treated as an operational asset, not as documentation.
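Part of that governance can be automated as a recurring audit over the context registry. A minimal sketch, assuming each entry carries an identifier, an owner, a review date, and a deprecation flag (all illustrative field names):

```python
from datetime import date

def audit_context_registry(entries: list[dict], today: date) -> list[str]:
    """Return human-readable findings for a periodic context review."""
    findings = []
    for e in entries:
        if e.get("deprecated"):
            findings.append(f"{e['id']}: deprecated entry still present, remove it")
        if not e.get("owner"):
            findings.append(f"{e['id']}: no owner, no one is accountable for updates")
        if e.get("review_by") and e["review_by"] < today:
            findings.append(f"{e['id']}: past its review date, content may be outdated")
    return findings

# Example: one entry slipped past its review cycle.
registry = [{"id": "refund-rule-3", "owner": "ops", "review_by": date(2024, 1, 1)}]
print(audit_context_registry(registry, today=date(2025, 1, 1)))
```

The value is less in the code than in the discipline it enforces: every entry has an owner, a version, and a date by which it must be reviewed or retired.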
7. Context determines whether systems scale safely.
When context is coherent, organizations see:
- more predictable outputs,
- more consistent decisions across teams,
- easier debugging,
- lower operational risk,
- faster onboarding of new systems and workflows,
- reduced need for fine-tuning,
- greater transparency and trust.
When context is weak, organizations experience:
- contradictory answers,
- escalation bottlenecks,
- system drift,
- inaccessible assumptions,
- inflated risk in regulated workflows,
- inconsistent customer experiences,
- increased reliance on human rework,
- failures that are difficult to trace.
Scaling requires the ability to reproduce behaviour across environments. Reproducibility depends on context, not model power.
8. The organization is responsible for the air the system breathes.
Context does not emerge from the model. It is created by:
- business owners,
- domain experts,
- legal and risk teams,
- data and platform teams,
- operations and support,
- architecture and governance functions.
AI simply reflects what these groups provide or fail to provide.
A system cannot be more aligned than the environment that shapes it. It cannot be more precise than the definitions it receives. It cannot be more reliable than the context it is given.
Conclusion
Reliable AI depends on the environment in which it operates. Context is the primary mechanism through which organizations express intent, enforce constraints, and direct system behaviour.
The next article examines how retrieval — the process of finding and assembling relevant information — acts as the backbone of context, and why retrieval quality determines whether an agent can reason effectively about the work it is asked to perform.


