Expect AI to be a tool and a target for threat actors in 2024

It’s been another 12 months full of incidents in cybersecurity. But for business and IT leaders, the hard work never stops, says Bharat Mistry, Technical Director at Trend Micro. With cyber an increasingly important component of business risk, predicting where emerging threats may strike next year is an important task to help build strategic growth plans.

Building resilience today can enable organizations to optimize their use of advanced technologies tomorrow, in order to drive competitive advantage while minimizing the risk of something going wrong. So what’s new for 2024? Some old themes blended with modern technologies.

Private blockchains could come back to bite

Cyber-criminals are always looking for new ways to make money. And boasting funds and users in ever greater numbers, private blockchains offer a perfect target environment. These may be restricted-access networks, but in many cases private blockchains aren’t as well stress-tested as their better-known public equivalents. That could open the door to a variation on the extortion attack.

Techniques that worked so well in the past with ransomware or simple data theft attacks could be applied in this sphere. Key theft would enable attackers to write malicious data into a blockchain or edit existing records. With the right privileges, they could even encrypt an entire blockchain and demand a ransom to restore access, and/or to keep the whole thing out of the public eye. Financial and healthcare organizations are particularly at risk.
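
To make the integrity stakes concrete, here is a minimal, purely illustrative Python sketch (not from the article) of a hash-chained ledger: editing any existing record without recomputing every subsequent hash breaks verification, which is exactly the kind of check a poorly stress-tested private blockchain implementation may skimp on.

```python
# Illustrative only: a toy hash-chained ledger showing why record tampering is detectable.
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Build a simple hash-chained ledger from a list of records."""
    chain, prev = [], "0" * 64
    for i, record in enumerate(records):
        h = block_hash(i, record, prev)
        chain.append({"index": i, "data": record, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    """Return True only if every block's hash and back-link are intact."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["index"], block["data"], prev):
            return False
        prev = block["hash"]
    return True

ledger = build_chain(["payment:100", "payment:250", "payment:75"])
print(verify_chain(ledger))           # True
ledger[1]["data"] = "payment:999999"  # tampering with an existing record...
print(verify_chain(ledger))           # ...breaks verification: False
```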

GenAI up-levels social engineering

Generative AI (GenAI) opens the door to a number of potential use cases for cyber-adversaries. The technology’s ability to generate compelling and convincing written content has already been demonstrated many times, so it won’t be a huge leap to see these same capabilities manipulated for malicious ends. It may soon mean that users can no longer rely on shoddy grammar and spelling to identify phishing emails. GenAI could deliver instant, grammatically perfect campaigns across multiple languages, lowering the bar significantly for budding scammers.

Other nefarious uses for GenAI could be the generation of fake medical records or prescriptions in the healthcare sector, or impersonation of medical professionals to access sensitive data. In education, fraudsters could use GenAI to create diplomas and student transcripts, or to impersonate teachers and administrators in order to gain access to student records or steal financial aid funds.

Already, it’s been shown how legitimate tools like ChatGPT can be hijacked to generate such content. Threat actors could do so using stolen user credentials in order to maintain anonymity, or they may simply wait for another malicious, WormGPT-like tool to emerge.

GenAI can also help create deepfake campaigns like virtual kidnapping or business email compromise. What was once out of the reach of most will be democratised on cybercrime forums and marketplaces. However, given the economics involved, expect more voice cloning than video deepfake content, for now. Until global governments have passed stricter laws to regulate the industry, businesses should take a zero trust approach in order to mitigate these risks.

Targeting AI for other ends

AI is not just a tool for attackers; it is also a target in its own right. That’s especially true of cloud-based machine learning models, which will increasingly be at risk from data poisoning attacks. Why? Because they’re relatively easy to execute, don’t cost much, and can be used to achieve many objectives, from exfiltrating data to disrupting fraud protection systems and serving up biased or harmful content. Attackers could strike in the data collection phase, or look to compromise data residing in storage or data pipeline infrastructure.
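
To show just how cheap such an attack can be in principle, the toy Python sketch below (an illustration using scikit-learn, which the article does not mention) flips a fraction of training labels and compares the resulting model against one trained on clean data.

```python
# Toy label-flipping poisoning demo (illustrative only; assumes scikit-learn and numpy are installed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips 20% of the training labels during the data collection phase.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```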

Retraining models corrupted by data poisoning is still possible if the attack is detected early enough, but it is expensive, so prevention is always preferable. Organizations should achieve this by always validating any data used to train machine learning models, using role-based access control (RBAC), regularly auditing and monitoring cloud storage, and generally practicing defence-in-depth.
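
As a rough sketch of what that validation step might look like in practice, the Python example below runs a few schema, range, distribution and hash checks before a batch is allowed into training; the field names and thresholds are illustrative assumptions, not prescribed controls.

```python
# Illustrative pre-training validation gate (field names and thresholds are assumptions).
import hashlib
import pandas as pd

EXPECTED_COLUMNS = {"amount", "merchant_id", "label"}
MAX_LABEL_SKEW = 0.95  # reject batches where one class dominates suspiciously

def validate_training_batch(df: pd.DataFrame, known_good_sha256: str | None = None) -> list[str]:
    """Return a list of problems; an empty list means the batch may proceed to training."""
    problems = []

    # 1. Schema check: unexpected or missing columns can signal a tampered pipeline.
    if set(df.columns) != EXPECTED_COLUMNS:
        problems.append(f"unexpected columns: {sorted(df.columns)}")

    # 2. Range / sanity checks on values.
    if "amount" in df.columns and (df["amount"] < 0).any():
        problems.append("negative transaction amounts found")

    # 3. Label distribution check: a sudden skew can indicate label flipping.
    if "label" in df.columns and not df.empty:
        skew = df["label"].value_counts(normalize=True).max()
        if skew > MAX_LABEL_SKEW:
            problems.append(f"label distribution suspiciously skewed ({skew:.2f})")

    # 4. Optional integrity check against a hash recorded when the batch was collected.
    if known_good_sha256 is not None:
        digest = hashlib.sha256(df.to_csv(index=False).encode()).hexdigest()
        if digest != known_good_sha256:
            problems.append("batch hash does not match the recorded value")

    return problems
```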

Threats worming their way through the cloud

It’s not just cloud-based ML models that will be targeted with greater frequency in 2024. The cloud is so mission critical to business transformation that it will offer rich pickings for data thieves and extortionists. Expect the arrival of new worming capabilities designed to rapidly propagate in the cloud, using misconfigured APIs as a common entry point. By doing so, threat actors will be able to impact multiple assets at once, such as containers, and even expand the blast radius beyond a particular cloud account – all in a highly automated fashion.

Proactive scanning of cloud environments and adopting a zero trust approach will become a critical endeavour for rooting out cloud worms and minimizing the chances of an attack taking hold.
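
As one concrete example of such proactive scanning, the sketch below uses the AWS boto3 SDK (an assumption, as the article names no provider or tool) to flag S3 buckets whose ACLs grant access to all users, one of the classic misconfigurations a cloud worm would hunt for.

```python
# Illustrative sketch: flag S3 buckets readable by everyone (assumes AWS credentials and boto3).
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets():
    """Return bucket names whose ACL grants permissions to any or all AWS users."""
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES:
                public.append(bucket["Name"])
                break
    return public

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"WARNING: bucket {name} is exposed to all users")
```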

Software supply chain attacks to target lower hanging fruit

Not everything we’ll see next year will be brand new. Many of the most effective attacks of 2023 targeted the software supply chain, so these are likely to return in ’24. As usual, threat actors will focus on widely used software that may be poorly defended, probably targeting vendors through their continuous integration and continuous delivery (CI/CD) pipelines. One of the most effective ways to do this is by implanting malware in upstream open source libraries and components, or by taking advantage of an outdated, vulnerable component that a developer may have downloaded in error.

If an attacker can compromise a CI/CD environment, they can inject malicious code into the software development pipeline, which can then be deployed to production environments without being detected. It’s critical that software vendors, and others developing applications in sectors such as financial services, healthcare and manufacturing, scan libraries and containers for malware and vulnerabilities before using them. They should also adopt application security tools that can spot suspicious behaviour and provide visibility into the CI/CD pipeline.
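
One lightweight way to approximate that scanning, sketched here as an assumption rather than a recommendation of any particular vendor tool, is to query the public OSV.dev vulnerability database for every pinned dependency in a Python requirements file before it enters the build.

```python
# Illustrative pre-build check (assumptions: a pinned requirements.txt, the public OSV.dev API,
# and the requests library; a real pipeline would typically use a dedicated SCA scanner).
import sys
import requests

OSV_URL = "https://api.osv.dev/v1/query"

def check_requirements(path="requirements.txt"):
    """Query OSV for each 'name==version' line and report known vulnerabilities."""
    findings = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # only exact pins are checked in this sketch
            name, version = line.split("==", 1)
            resp = requests.post(OSV_URL, json={
                "version": version,
                "package": {"name": name, "ecosystem": "PyPI"},
            }, timeout=10)
            resp.raise_for_status()
            for vuln in resp.json().get("vulns", []):
                findings.append((name, version, vuln["id"]))
    return findings

if __name__ == "__main__":
    hits = check_requirements()
    for name, version, vuln_id in hits:
        print(f"{name}=={version}: {vuln_id}")
    sys.exit(1 if hits else 0)  # fail the pipeline stage if anything is found
```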

There’s plenty to think about as we head into 2024. But organizations that know where to focus their cybersecurity resources most effectively will have an undoubted competitive advantage. Start planning now.
