Adam Currie, the Chief Security Officer & Head of IT Engineering at HCLSoftware, discusses the challenges of cybersecurity in the age of advanced AI. He examines the risks, including ransomware through social engineering and the creation of fake content through GenAI.
Currie highlights the importance of strategic response to protect data and the delicate balance between tech, regulation, and ethics in addressing these threats. He notes AI’s role as both a risk and an innovation tool in cybersecurity.
ITSWBureau: Given that AI models are developing at a speed that outruns human detection, how big is the threat? How likely is it that some disruption will happen?
AC: Unfortunately, while there are good security and detection tools out there that do a good job of protecting company data, nothing is 100 percent foolproof. Organizations are in a constant state of catch-up, always having to react.
Ransomware attacks are typically disguised through social engineering or email: once someone clicks on a link, the next thing you know, they are downloading a malicious file.
There is always going to be that ‘patient zero’ that is going to initiate a ransomware outbreak or attack unknowingly. Each individual company’s ability to protect, identify, and respond to these attacks will determine their level of disruption.
ITSWBureau: We see fake videos, misinformation, and lies among the dangerous content. How is this content created and disseminated? And why is the content so dangerous?
AC: The evolution of generative AI, and the continuously growing sophistication in how it is being used, is dangerous for all businesses, large and small. Generative AI can learn and emulate a person's voice, facial expressions, or writing style from publicly available samples, such as an executive's LinkedIn videos or a subject-matter expert's blogs and white papers.
These attacks are typically used for some type of revenue generation. Not only can the business be affected monetarily, its brand or image is at stake as well, since ransomware attacks typically become public knowledge. We've seen this with several large organizations where the malicious actors hold the company for ransom and gain monetary value. That's not to say that there aren't malicious actors or state actors that are purposely targeting you for the purpose of getting into your environment.
Those aren't necessarily ransomware-type attacks; these targeted attacks are more about gathering information, understanding the environment and its systems, and trying to exfiltrate data. The difference is that a ransomware attack is typically communicated, because that's how the attackers hold the company hostage, whereas infiltration for information gathering is typically hidden.
ITSWBureau: How can the threat be mitigated by tech firms, and what is the role of regulations in tackling it?
AC: Mitigation of these types of threats comes down to your ability to protect and respond. That is why it is critical for an organization to develop response playbooks, practice them through tabletop exercises, and be able to execute them when an attack occurs.
These have to become muscle memory because in today’s environment, zero-day attacks will happen, and the more practiced you are to be able to respond, the better off you’re going to be. This is across the whole organization. This is not simply the responsibility of the security team.
Everybody has a role to play when it comes to a security event, and educating people, practicing, and building that muscle memory are critical for a successful response.
In terms of the role of regulations, countries around the world are trying to put some level of governance around this. For instance, India has released some guidance around ethical AI requirements and regulations. It's very subjective and high level because it's all new technology, so it's not very prescriptive at this point.
ITSWBureau: We keep saying that AI is a double-edged sword: the reason for higher risks, but also the tool to mitigate them. What's your opinion?
AC: We're starting to see new standards around AI and using it in an ethical way. A lot of organizations are starting to build out AI centers of excellence, along with standards and criteria around both the use and consumption of AI, and, for software developers, the ability to provide AI services. You're starting to see a lot of traction here, probably much faster than you would have seen with past technologies.
There is no question that as we continue to evolve and leverage AI capabilities, AI is going to play a very crucial role in almost every aspect of business and in our daily lives.
ITSWBureau: Threats can be countered if detected in time. What do you think CISOs can do better to stay prepared with predictive tools?
AC: AI provides a speed and a processing capability that’s unmatched by humans. So yes, I do think that AI is going to be an answer to some of the generative AI threat, but only part of the answer. It’s not going to be the entire answer.
I do think regulatory requirements, ethical and legal generation of AI and use of AI and corporate responsibility, and personal awareness and situational awareness, all these are going to play into mitigating the risks that AI brings.
Additionally, it is vital that CISOs understand what types of threats are most likely to hit their organization and how prepared they are for those attacks. These are the attacks you typically train for, and that is where threat modeling comes in: understanding your corporate risk posture, what types of risks exist in your environment, what the potential attack vectors are, and what types of malicious actors would potentially target you as an organization.
Are you a target for state actors? Are you a target for potential industrial espionage? Are you a target for some new hacker trying to make a name for themselves? You really need to understand your threat landscape and do a lot of threat hunting and analysis to get a clear idea of where the threats are going to come from and how they will arrive.
Wrap Up
As Adam Currie highlights, navigating the cyber threat landscape requires balancing risks like ransomware and using AI for innovation. Firms must develop effective response strategies, promote security awareness, and adhere to ethical standards to mitigate threats.
Success hinges on vigilance and using AI’s benefits while protecting against potential threats.