EXPERT COLUMN
BEN LEICH CXO Cyber Connections and Digital Content Manager
YOUR AI AVATAR IS SMILING... BECAUSE YOU JUST TOLD IT EVERYTHING!
Over the past few weeks, social feeds have been flooded with people sharing AI-generated caricatures of themselves. But beneath the novelty sits a problem those in the IT world are more attuned to than most – these models aren't just drawing you; they're profiling you.
Every time someone uploads a selfie or reveals their job title, hobbies and pets to an AI generator, they're feeding a system. Once that data is in the model, it's effectively irreversible. You can delete the app, but you can't delete what the model has learned. For consumers, this is a quirky trend. For IT leaders, it's a warning sign.
Can you trust AI platforms?
Many of these AI tools are built by companies with opaque data policies, vague retention timelines and servers hosted who-knows-where. Through trends like these, users hand over biometric data and personal details without reading a single line of the privacy notice. And why would they? Who actually does that?
But biometric data isn't like an email address. You can't rotate your face the way you rotate a password; and personal information is just as sensitive. When combined, these datasets create a highly detailed profile that can be reused, repurposed or, in the worst cases, sold on.
A new opportunity
From a security perspective, this trend broadens the scope for social engineering. If AI can generate a convincing caricature of you, it can also generate a convincing impersonation of you. Deepfake-style attacks are no longer the domain of nation-state actors; they're becoming accessible to anyone with a browser and a few minutes to spare.
Imagine a phishing email accompanied by an AI-generated "video message" from a colleague. Or a fraudulent account created using a synthetic likeness that passes basic verification checks. These scenarios aren't theoretical. They're emerging.
And when attackers can pair a synthetic face with voluntarily supplied personal details, the social engineering potential multiplies.
Educating the unaware
The instinctive response is to shrug and say, "It's just a trend." But if millions of people normalise handing over biometric and personal data to unvetted platforms, that behaviour will bleed into society.
IT teams should be educating staff about the implications of these tools, reviewing policies around biometric and personal data and assessing whether corporate devices are being used to access consumer AI apps. Governments, too, should be more active in raising awareness of the dangers of trends like these. This isn't about banning fun. It's about recognising that misusing AI carries real risks. •
INTELLIGENT CIO EUROPE www.intelligentcio.com