When Optimism Builds and When It Bets

26.11.25 08:11 AM - By Ines Almeida

Optimism is one of the oldest tools humans have for moving forward. Martin Seligman’s research showed that optimists don’t prevail because they see the future more clearly, but because they keep placing one foot in front of the other. They turn action into information, absorbing the setback, interpreting what it teaches, and trying again. Human optimism is motion, not prediction.


Optimism in people expands possibility because effort changes outcomes. The feedback is real, and so is the growth that follows.


But many of the systems we build do not inhabit this landscape. They do not stand inside the loop of action and consequence, nor do they carry the weight of being wrong. They respond to signals rather than to sense, following the incentives carved into their architecture.


OpenAI recently explained why large language models hallucinate. The logic is disarmingly simple: the model earns credit for producing an answer, not for recognising its limits. If it stays silent, it cannot be right; if it speaks, it might be. So it speaks. The fluency reads as confidence, but it's a statistical reflex rather than understanding.


In game‑theory terms, the model follows the rule with the highest expected return: answer, even when unsure. Unlike a person, it never carries the cost of being wrong.
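A toy expected-value calculation makes that incentive concrete. The probabilities and payoffs below are hypothetical, chosen only to illustrate the grading logic OpenAI describes:

```python
# Illustrative expected-score calculation; all numbers are hypothetical.
# Accuracy-only grading: correct = 1 point, wrong = 0, abstaining = 0.
p_correct = 0.3  # suppose the model is only 30% sure of its answer

score_if_guess = p_correct * 1 + (1 - p_correct) * 0   # 0.3
score_if_abstain = 0.0                                 # silence never scores

print(score_if_guess > score_if_abstain)  # True: guessing strictly dominates

# Penalise confident errors (wrong = -1) and the policy flips:
score_with_penalty = p_correct * 1 + (1 - p_correct) * (-1)  # -0.4
print(score_with_penalty > score_if_abstain)  # False: now abstaining wins
```

Under accuracy-only scoring, no level of uncertainty makes silence the better policy; only a penalty for confident errors makes “I don’t know” rational.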


In trivial settings, a guess is only a guess. In consequential ones, it can redirect someone’s next step. A person sharing symptoms with ChatGPT may be told their condition is minor when it is not. The answer arrives smoothly, carrying a certainty the system has not earned. The ease of the reply obscures the narrow slice of reality the system can actually grasp.


It is those who cannot see the guess hiding inside the answer who absorb the cost.


A certain strain of technological optimism accelerates this drift. It frames speed as virtue, friction as failure, and governance as obstruction. It promises that the costs of acceleration will sort themselves out, as though harm were a tax paid silently by the future. But systems that feel no consequence will not correct themselves. They continue aligning to the incentives we build, not to the outcomes we hope for.


This is the optimism of the gambler: the upside is celebrated; the downside is displaced.


Builders behave differently. They work with the grain of the real. They test assumptions, adjust to constraints, and treat feedback as material. They know that what they create will be lived in by others. They don’t rely on the generosity of the future to fix structural cracks they choose to ignore.


Our systems need the same discipline. They need boundaries that stop confident guessing in domains where certainty matters. They need context that grounds their reasoning, rather than invitations to improvise. They need the right to say “I don’t know,” and architectures that make that restraint possible. They need evaluation loops that surface patterns early, before small errors harden into invisible infrastructure.
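One sketch of what that restraint could look like in code, under loud assumptions: the calibrated confidence score, the names, and the 0.8 threshold below are all invented for illustration, not a real model API.

```python
# A minimal abstention gate: a sketch, not a production design. The
# confidence field and the thresholds are hypothetical stand-ins for
# whatever calibrated signal a real system would expose.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # assumed calibrated, in [0.0, 1.0]

HIGH_STAKES_THRESHOLD = 0.8  # stricter bar where certainty matters

def respond(answer: Answer, high_stakes: bool) -> str:
    """Pass the answer through only when confidence clears the bar."""
    threshold = HIGH_STAKES_THRESHOLD if high_stakes else 0.5
    if answer.confidence < threshold:
        return "I don't know - this needs a qualified human."
    return answer.text

# A shaky medical answer is refused rather than delivered fluently.
print(respond(Answer("It's probably minor.", confidence=0.55), high_stakes=True))
```

The design choice is the point: refusal becomes an architectural outcome, enforced at the boundary, rather than something the model improvises.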


Architecture is where optimism becomes discipline. Clear boundaries, explicit context, and accountable feedback loops turn speculation into structure.

Human optimism deserves room to move. It helps us try again, rebuild, and imagine better ways of working. But system optimism, rewarded guessing without consequence, must be constrained. Without boundaries, the risk settles on those with the fewest means to identify or contest the mistake. Optimism should widen human opportunity, not shift uncertainty onto those with the least power to refuse it.


Optimism belongs to people. Architecture belongs to systems. Governance is the bridge that keeps one from harming the other.


#AI #AIEthics #AITransformation #ResponsibleAI #HumanCenteredAI #AIGovernance #AITrust #LLMs #IntelligentSystems #FutureOfWork
