Intelligent CIO Europe Issue 94 | Page 35

FEATURE: AGENTIC AI

AI’ s silent threat:

Navigating the risks of

autonomous agents

Salvatore Gariuolo, Senior Threat Researcher at Trend Micro, has spent years delving into the evolving landscape of cyberthreats. He sat down with Intelligent CIO to tell us how autonomous AI agents are changing the game, from the risks of their self-guided actions to the new security models required to keep them in check.

t

Tell us about the potential threats from an AI Agent that thinks and acts of its own accord? What are the potential consequences of that?
An AI agent that thinks and acts autonomously carries the risk of performing unintended, potentially harmful actions without the user realising it. It might, for example, send emails to the wrong recipients, alter calendar events, or delete important files. What’ s more concerning is that users may not notice these actions as they unfold, losing the chance to intervene before damage occurs. This risk is amplified by the agent’ s deep integration within digital ecosystems, where it routinely accesses and processes data from multiple services. Because of this integration, attackers don’ t need to hack systems directly; they can manipulate the agent’ s inputs or environments – such as embedding crafted prompts within the webpages it visits – to subtly steer the agent toward undesired behaviours. And since the assistant is no longer limited to answering questions or providing information, its actions can have real, tangible impacts in the user’ s reality.
What’ s being done to mitigate this threat? How can the industry ensure that AI agents don’ t become a new, silent attack vector across different platforms and services?
Mitigating these risks starts with embedding safeguards that maintain user supervision and limit the agent’ s autonomous reach. For example, OpenAI is building in explicit confirmation steps before their agent take sensitive actions – sending emails or making purchases – and blocking high-risk operations outright, like bank transfer. Access should be limited to only what’ s truly necessary – not giving the agent full reach across the user’ s entire digital ecosystem – striking a careful balance between convenience and control.
What are the key ethical and compliance challenges that need to be addressed at a systemic level?
At the systemic level, ethical and compliance challenges revolve around accountability, privacy, and informed consent. When AI agents act autonomously, it’ s critical to define who is responsible if things go wrong – developers, users, or service providers. Privacy becomes complex as agents continuously learn about users and access interconnected services, raising questions about data handling and transparency. Ensuring users understand and consent to what these agents do, especially when decisions can have real-world impacts, is another challenge.
Salvatore Gariuolo, Senior Threat Researcher at Trend Micro
www. intelligentcio. com INTELLIGENTCIO EUROPE 35