Intelligent CIO Europe Issue 70 | Page 36

EDITOR'S QUESTION
PETER KLIMEK, DIRECTOR OF TECHNOLOGY AT IMPERVA

There are a number of ways in which Generative AI is rapidly advancing cyberthreats and the way online criminals operate. To start with, Generative AI and Large Language Models (LLMs) can dramatically improve the sophistication of threat actors. For example, AI will greatly accelerate the discovery of vulnerabilities in existing software (both commercial off-the-shelf and open source libraries). The MOVEit vulnerability, for instance, showed a fairly high level of sophistication from the attackers in discovering and chaining together multiple vulnerabilities. While we don't know if they were assisted by AI tools in discovering these vulnerabilities, we can safely predict that such tools will be used by attackers in similar attacks in the future.
Secondly, there is the impact on bad bots, which now account for almost a third of all web traffic. Using Generative AI tools, hackers are able to iteratively develop more sophisticated bots faster than ever before, putting businesses at risk of mass disruption through account compromise, data theft, spam, degraded online services and reputational damage.
However, it's important to note that LLMs don't just pose an external threat. Given how many blissfully oblivious employees are using third-party Generative AI chatbots and other tools to complete tasks like writing code, there is already a huge insider threat: these tools may end up ingesting and sharing backend code and other sensitive company information.
There's no malicious intent from these employees, but that doesn't make 'shadow AI' any less dangerous. The genie isn't going back in the bottle – outright bans simply won't work – so businesses are going to need to come up with strategies to deal with the data security implications of Generative AI. Yet currently, only 18% of businesses have an insider risk management strategy in place, meaning that in the majority of cases, both employees and the business are completely ignorant about what's at risk.
In order to get a handle on the issue, businesses need to focus on identifying, classifying, managing and protecting their data. Just controlling how data is accessed or shared would go a long way towards making things safer. Here are some key steps every business should be taking:
• Visibility: Organisations must have visibility over every data repository in their environment so that important information stored in shadow databases isn't forgotten or abused.
• Classification: The next step is classifying every data asset according to type, sensitivity and value to the organisation. Effective data classification helps an organisation understand the value of its information assets, whether the data is at risk and which risk mitigation controls should be implemented.
• Monitoring and analytics: Finally, data monitoring and analytics capabilities are essential to detect threats such as anomalous behaviour, data exfiltration, privilege escalation or suspicious account creation.
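The classification step above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's product logic: the regex rules, sensitivity labels and risk tiers below are assumptions, and a real programme would rely on a data security platform's classifiers rather than hand-written patterns.

```python
import re
from dataclasses import dataclass

# Illustrative sensitivity rules (assumed, not exhaustive): pattern -> label.
RULES = [
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "payment-card"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "email-address"),
    (re.compile(r"(?i)api[_-]?key|secret|password"), "credential"),
]

@dataclass
class DataAsset:
    name: str    # e.g. a database table, storage bucket or shadow-database dump
    sample: str  # sampled content used for classification

def classify(asset: DataAsset) -> set:
    """Return the set of sensitivity labels matched in the asset's sample."""
    return {label for pattern, label in RULES if pattern.search(asset.sample)}

def risk_tier(labels: set) -> str:
    """Map labels to a coarse tier that drives which mitigation controls apply."""
    if "credential" in labels or "payment-card" in labels:
        return "high"
    return "medium" if labels else "low"

# Example inventory: the asset names and samples are made up for illustration.
assets = [
    DataAsset("crm_exports", "contact: jane@example.com"),
    DataAsset("ci_logs", "API_KEY=abc123 deploy ok"),
    DataAsset("public_docs", "product brochure text"),
]
for a in assets:
    labels = classify(a)
    print(a.name, sorted(labels), risk_tier(labels))
```

Running the sketch tiers the credential-bearing log dump as high risk and the brochure text as low, which is the point of classification: the tier, not the raw inventory, tells you where monitoring and access controls are needed first.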