Technology

How to Fight the Security Risks of Generative AI

Generative AI is a powerful tool, but it's not without its own issues and risks. Here's how you can proactively protect yourself against the security risks of generative AI.

Generative artificial intelligence (generative AI) is a subset of machine learning concerned with algorithms that generate new data, typically by learning from existing data. This has a wide range of benefits, including but not limited to:

  • High-quality output produced by self-learning from multiple data sources. 
  • Creating designs with less risk. 
  • Improving the accuracy of machine learning models. 
  • Enabling systems to create new content, such as text, audio, video, images, and code, from previously created content.  

A prominent example of generative AI is ChatGPT, a popular chatbot that uses AI to answer questions and create content ranging from poetry to code. 

What are the security risks for generative AI?

Like most technologies, generative AI has its own issues and risks, including security risks, data privacy risks, reduced creativity, and copyright issues. Let's talk about security risks. The main concern is that generative AI can be used to create fake or fraudulent content and data that fraudsters and cyber criminals then exploit. 

When I attended a couple of COSAC conferences in 2006 and 2007, a member of an underground virus-creation forum demonstrated how they create and test viruses and malware, and how they perform thorough quality assurance on them, continuously tuning and mutating them until they evade most known anti-virus platforms. 

Generative AI will make such processes quick, easy, and effective. Underground hacking forums are already rife with discussions on how to use ChatGPT to recreate malware strains and techniques described in research publications, such as those from CheckPoint Research, and in write-ups about common malware. 

CheckPoint noted, “. . . some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all. Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”  

In other words, if ChatGPT can make it easier for less sophisticated hackers to learn how to commit harm, imagine what it can do for a skilled cybercriminal.   

How to fight these threats   

So how can you deal with these risks? Here are our recommendations:

  • Every company must tighten its data loss protection (DLP) controls, at both the endpoints and the perimeter. This ensures that the company's digital assets won't leak and fall into the hands of fraudsters. Understand that any keyword-based DLP tool offers only limited success because of its false positives; instead, create rules based on fingerprints of the digital assets to be monitored and tracked.
  • Companies should use zero-trust platforms that rely on anomaly detection rather than signatures, instead of routine anti-virus platforms. 
  • Companies must fine-tune their processes with checks and balances to weed out any fake or fraudulent content, rather than relying on fully automated processes.
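To make the fingerprinting recommendation concrete, here is a minimal sketch of the idea behind fingerprint-based DLP matching: hash overlapping word "shingles" of a protected document, then score outbound messages by how many of their shingles match. All names, the shingle size, and the sample text are illustrative assumptions, not any specific DLP product's implementation.

```python
import hashlib

SHINGLE_SIZE = 8  # words per shingle; an illustrative choice


def fingerprints(text: str) -> set[str]:
    """Hash overlapping word shingles of a document into a fingerprint set."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + SHINGLE_SIZE]).encode()).hexdigest()
        for i in range(max(1, len(words) - SHINGLE_SIZE + 1))
    }


def leak_score(outbound: str, asset_prints: set[str]) -> float:
    """Fraction of an outbound message's shingles that match a protected asset."""
    out_prints = fingerprints(outbound)
    if not out_prints:
        return 0.0
    return len(out_prints & asset_prints) / len(out_prints)


# Hypothetical protected document.
asset = ("quarterly revenue forecast shows a projected decline of twelve "
         "percent across all regions due to supply chain constraints")
prints = fingerprints(asset)

# A message quoting the protected document scores high; unrelated text scores zero.
print(leak_score("fyi the quarterly revenue forecast shows a projected "
                 "decline of twelve percent across all regions", prints))
print(leak_score("see you at lunch tomorrow", prints))
```

Unlike a keyword rule, a shingle match requires a multi-word span copied from the tracked asset, which is why fingerprinting produces far fewer false positives on ordinary business language.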

Employing a zero trust model will minimize some of the risks of evolving technologies like generative AI. 

Contact Investis Digital  

The Investis Digital on-demand hosting platform is built from the ground up with security and data protection by design. Our cyber threat prevention system offers complete DDoS protection along with malicious-traffic analysis and prevention, and it underpins every website we build. Combined with the atomized modular architecture of the Connect.ID CMS platform, we can deploy beautifully designed, highly performant websites in as little as two weeks from ideation to build. Contact us to learn how we can protect you.