Cybersecurity in the Age of AI

The Cybersecurity Readiness Podcast Series

Jul 12 2023 • 38 mins

While large language models such as ChatGPT can be used to write malicious code, AI tools are increasingly used to proactively detect and thwart cyber-attacks. There is growing recognition of AI's potential to fight cybercrime. Ian L. Paterson, CEO, Plurilock, sheds light on how AI has impacted the cybersecurity industry, especially how generative AI is changing it. Describing the role of AI as a co-pilot, he says, "The way I think about leveraging AI is typically having a human do the first 10% and the last 10%; an AI is really good at doing the 80% in the middle. So it's not a replacement for the human, but it's an enabler for that human and allows them to do more with less."

Time Stamps

00:02 -- Introduction

02:26 -- Ian L. Paterson's professional highlights

04:56 -- What is generative AI and how does it work?

10:34 -- How can we protect ourselves from phishing attacks?

16:12 -- Leveraging AI for behavioral biometrics

21:21 -- How are generative AI tools being used to thwart cyber attacks?

24:45 -- How do we speed up detection and remediation?

28:20 -- Cybersecurity is a team sport

32:29 -- Guidance and recommendations

36:19 -- Final thoughts

Memorable Ian Paterson Quotes/Statements

"What we see today is that large language models can appear as if they are themselves intelligent."

"One of the chief dangers of this new (AI) type of technology is that you can now author convincing text at scale."

"What we are seeing today is both an increase in the volume of attacks and an increase in the severity and the convincingness of some of these attacks. I call them multimodal attacks because you're using not only the modality of text but you can also use the modality of video or audio. I think we're going to have to deal with these types of attacks, with these problems, for many years to come."

"You're not going to have a ransomware attack on Monday at 10 am when everybody's refreshed from the weekend; it's going to be Friday afternoon, it's going to be on Christmas Day, it's going to be when you don't want to deal with those types of situations."

"You can certainly use large language models to accelerate or help cut down on some of the minutiae when writing code."

"Large language models are being used as co-pilots in the Security Operations Center to do log analysis and to speed up monitoring, identification, and notification of potential threats."

"We've always had this need in cybersecurity to increase productivity because there are not enough people to do the work needed to stay safe. So, AI will help, it will be a productivity boon."

"The way I think about leveraging AI is you typically have a human do the first 10% and the last 10%; an AI is really good at doing the 80% in the middle. It's not a replacement for the human, but it's an enabler for that human and allows them to do more with less, and hopefully highlight the areas they need to focus on."

"The reality is that cybersecurity is a team sport, and you need a host of products and solutions working in harmony to adequately address the threats out there and reduce the attack surface."

"In summation, AI is good, we're certainly going to see cybersecurity-related innovations, but it's not going to replace the people it takes to deploy and leverage those solutions."

"It's really about having that defense-in-depth strategy. I think that makes a difference between somebody with pretty good security and somebody with great security."

Connect with Host Dr. Dave Chatterjee and Subscribe to the Podcast

Please subscribe to...