
TALKING POINT

THE RISKS OF OVER-HUMANISING AI IN ENTERPRISE

By Gastón Milano, CTO, Globant Enterprise AI.

Anthropic has announced a new constitution for Claude, reigniting an uncomfortable question: should we be humanising AI at all?

The document outlines the model's values, reasoning and behaviours, acting as a blueprint for why Claude should behave in certain ways. It embeds values into the model's reasoning rather than imposing them as external constraints, prioritising safety, ethics, compliance with guidance and helpfulness. According to reporting, this marks a rare moment where an AI company openly entertains questions about consciousness and even moral status.
The notion of morally aware AI still feels like science fiction, echoing stories such as Star Trek's famous courtroom debate over Data's personhood. Those questions remain unresolved, yet they now reappear in real-world strategy, not fiction.
Across the industry, models are no longer built solely to function but to feel conversational, empathetic and confident. This shift improves usability and engagement, yet introduces subtle risks many organisations underestimate.
Human-like systems can encourage emotional attachment, weakening critical thinking and accountability. Research shows these systems are being adopted widely, driven by convenience and intuitive interfaces. However, fluency creates a psychological shortcut: people equate articulate language with authority.
Generative AI does not know; it predicts patterns. Even the most advanced systems still make errors. Hallucinations demonstrate this clearly, producing confident but incorrect outputs. Real incidents, including fabricated legal citations, reveal how easily users can trust polished responses.
Studies suggest models may sound more confident when wrong than when correct. Although accuracy is improving, domain-specific error rates remain significant, especially without oversight.
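To see what testing this claim might look like inside an organisation, consider a minimal sketch. It assumes answers are logged with a stated confidence score and a later human verdict; the function, field layout and example figures are all illustrative, not a real benchmark or any vendor's API.

```python
# Minimal sketch: does a model's stated confidence track its actual accuracy?
# Assumes an internal review log of (confidence, verified correct?) pairs;
# all names and numbers are hypothetical.

def calibration_report(records):
    """records: pairs of (stated confidence in [0, 1], verified correct?)."""
    correct = [conf for conf, ok in records if ok]
    wrong = [conf for conf, ok in records if not ok]

    def avg(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return {
        "accuracy": len(correct) / len(records),
        "avg_confidence_when_correct": avg(correct),
        "avg_confidence_when_wrong": avg(wrong),  # a high value here is the warning sign
    }

# Illustrative log: polished-but-wrong answers surface as high confidence, low accuracy.
log = [(0.95, False), (0.90, True), (0.97, False), (0.60, True)]
print(calibration_report(log))
```

Even a crude report like this makes the pattern visible: if average confidence on wrong answers rivals or exceeds confidence on correct ones, fluency is masking error.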
For businesses, mistakes are costly. Losses linked to AI errors have reached billions, while employees spend hours verifying outputs. Not every organisation can sustain that, turning design choices into governance concerns.
The answer is not rejecting conversational AI but reinforcing transparency. Systems should signal uncertainty, provide traceable sources, define clear limits and remain auditable over time.
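What those four properties could look like in practice is sketched below as a response envelope that carries explicit confidence, traceable sources, a declared scope and an audit timestamp. The structure and every field name are assumptions for illustration, not a description of any particular product.

```python
# Minimal sketch of the transparency properties named above: uncertainty is
# explicit, sources are traceable, limits are declared, answers are auditable.
# All names, fields and thresholds are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedAnswer:
    text: str
    confidence: float   # model- or verifier-assigned, in [0, 1]
    sources: list       # traceable citations; empty means "do not rely on this"
    scope: str          # declared limit, e.g. "general guidance only"
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def render(self) -> str:
        # Refuse to present an unsourced or uncertain answer as authoritative.
        label = ("low confidence - verify before use"
                 if self.confidence < 0.7 else "high confidence")
        cites = "; ".join(self.sources) or "NO SOURCES - do not rely on this"
        return f"[{label}] {self.text}\nSources: {cites}\nScope: {self.scope}"

answer = AuditedAnswer(
    text="Notice periods are typically one to three months.",
    confidence=0.55,
    sources=[],
    scope="General guidance, not legal advice",
)
print(answer.render())  # surfaces the low confidence and the missing sources
```

The design choice is deliberate: the system cannot emit a bare answer, because uncertainty, provenance and scope travel with every response by construction.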
Leaders must embed human oversight, demand visible confidence indicators and strengthen AI literacy across organisations. Understanding both capability and limitation is essential at every level.
Ultimately, the issue is not whether AI can seem human, but whether people are ready for the consequences. As debates about machine morality move into policy, confusion between resemblance and reality becomes dangerous.
Systems like Claude may express values and empathy, yet remain probabilistic tools, not conscious beings. Organisations must embrace benefits without surrendering judgement.
Success will belong to those who resist mistaking fluency for intelligence, building cultures grounded in transparency, oversight and informed scepticism. In an age where AI feels increasingly human, clarity about its true nature will define responsible and sustainable adoption.
Organisations should also revisit risk frameworks, ensuring accountability remains clearly assigned to humans, not delegated to systems. Training programmes must reinforce verification habits, especially in high-stakes environments such as healthcare, finance and law.
Designers play a crucial role, shaping interactions that encourage questioning rather than passive acceptance. Small cues, like uncertainty labels or alternative suggestions, can shift user behaviour meaningfully.
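One hedged illustration of such a cue: below an assumed confidence threshold, the interface presents ranked alternatives under an uncertainty label instead of a single polished answer. The threshold and function names are hypothetical, a sketch of the design pattern rather than any shipped interface.

```python
# Illustrative design cue: when no answer clears the confidence threshold,
# show labelled alternatives to prompt questioning, not passive acceptance.

def present(answers, threshold=0.8):
    """answers: list of (text, confidence) pairs; threshold is an assumption."""
    best_text, best_conf = max(answers, key=lambda a: a[1])
    if best_conf >= threshold:
        return best_text
    # Uncertainty label plus ranked alternatives, never a lone confident answer.
    ranked = sorted(answers, key=lambda a: a[1], reverse=True)
    lines = [f"Uncertain ({best_conf:.0%}) - consider these alternatives:"]
    lines += [f"  - {text} ({conf:.0%})" for text, conf in ranked]
    return "\n".join(lines)

print(present([("Option A", 0.55), ("Option B", 0.40)]))
```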
Regulation will likely follow, but internal discipline must come first. Responsible adoption depends not only on technical progress but on organisational maturity, governance and cultural readiness to challenge machine outputs. •