AI-Generated Threats that Are Worrisome for 2024

AI-Generated Threats
  • The darker side of generative AI is becoming evident in the expanding cyber threat landscape.
  • A significant number of companies are sounding warning bells about AI-generated threats.
  • In 2024, AI-generated threats will accelerate as companies expand their use of the technology.

A report by Trend Micro, Security Predictions for 2024, notes that the increasing role of generative AI (GenAI) in the cyber threat landscape is a warning sign for enterprises in 2024.

The coming wave of refined social engineering tactics, GenAI-powered identity theft, and spear phishing is only the tip of the iceberg of GenAI's potential in cybercrime.

According to the predictions, these tactics are already taking center stage in scams.

As generative AI advances in sophistication, accessibility, and scalability, it will be difficult to trust online interactions for some time.

This article focuses on the rising AI-generated threats as we enter 2024.

Increase in Generative AI-enhanced Attacks

Google's Cybersecurity Forecast 2024 report notes that cybercriminals will use GenAI and large language models (LLMs) to improve the effectiveness and scale of social engineering attacks in 2024.

  • Generative AI will eliminate the spelling and grammar errors that give phishing emails away: LLMs can also handle translation, making phishing even harder for users to spot. Using LLMs, attackers can take legitimate content and generate a modified version that looks, reads, and flows like the original.
  • Generative AI will increase the productivity and scalability of cybercriminals: Attackers will use AI to create fake news and fake phone calls that interact directly with people, making malicious content and materials appear more legitimate. LLMs and other generative AI tools will increasingly be offered as a service, letting paying customers compromise targets through phishing campaigns.
  • Cybercriminals will exploit more zero-day vulnerabilities: One reason is that attackers want to maintain persistent access to devices, networks, and other business assets for longer periods.
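
The first bullet above can be illustrated with a toy example: traditional phishing filters often score messages on surface signals such as misspellings and urgency keywords, and LLM-polished text no longer trips those signals. Below is a minimal, hypothetical heuristic scorer (the word lists and scoring rules are illustrative assumptions, not a production filter):

```python
import re

# Hypothetical surface-signal heuristics of the kind that
# fluent, LLM-polished phishing text increasingly evades.
MISSPELLINGS = {"recieve", "acount", "verfy", "pasword"}
URGENCY_PHRASES = {"urgent", "immediately", "suspended", "verify now"}

def phishing_score(text: str) -> int:
    """Naive score: +1 per misspelling or urgency phrase found."""
    lowered = text.lower()
    words = set(re.findall(r"[a-z]+", lowered))
    score = len(words & MISSPELLINGS)
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in lowered)
    return score

crude = "URGENT: your acount is suspended, verfy your pasword immediately"
polished = "Hello, please review the attached invoice at your convenience."

print(phishing_score(crude))     # high score: surface errors present
print(phishing_score(polished))  # 0: fluent text slips past the heuristic
```

The polished message scores zero even though it could be the opening of a BEC attempt, which is exactly why defenders are shifting toward behavioral and sender-identity signals rather than text-quality signals.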

AI-Generated Threats to Watch Out for in 2024

1. Deepfake Attacks

Recent advancements in generative AI have accelerated deepfake attacks. Threat actors use generative AI to create convincing deepfakes of CEOs and other C-level executives to defraud organizations of money.

Andrew Newell, Chief Scientific Officer at iProov, explains:

“CEO fraud is targeting at least 400 companies per day and poses a significant threat to organizations worldwide. In this type of crime, attackers pose as senior company executives and attempt to deceive employees into transferring funds, disclosing sensitive information, or initiating other fraudulent activities. They often involve sophisticated social engineering attacks, making them challenging to detect. Fraudsters widely use generative AI tools to create deepfakes to imitate a person. Bad actors can now create convincing AI-generated audio and imagery and deploy it across platforms, including Zoom, Microsoft Teams, and Slack. Without sophisticated monitoring and detection tools, it’s almost impossible to detect this type of synthetic imagery. As such, we fully expect to see an AI-generated Zoom call lead to the first billion-dollar CEO fraud in 2024.”

Deepfake attacks are not new, but the tools used to create them are becoming more advanced.

According to Sumsub Research, global deepfake incidents surged tenfold from 2022 to 2023.

Recent deepfake attacks use synthetic imagery, combining existing images and videos and superimposing them on the source material. The resulting image or video looks dangerously real.

James Hinton, Director of CST Services at Integrity360, explains, “It’s a topic that’s top of minds at the minute. In surveying 205 IT security decision-makers in August 2023, Integrity360 found that more than two-thirds (68%) are worried about cybercriminals’ use of deep fakes in targeting organizations, for example.

The impact of deep fakes and audio synthetization used for nefarious purposes has already been demonstrated.”

2. Threat of Business Email Compromise

Social engineering attacks will remain responsible for billions of dollars in business losses. Business email compromise (BEC) remains the preferred attack method.

The 2022 FBI IC3 Report and Microsoft's Cyber Signals report both underscore the scale of losses attributed to BEC.
Predictions for BEC point to the rising risk of hybrid vishing attacks, in which attackers combine phishing emails with phone numbers that mislead users into calling them.

Threat actors will use generative AI tools to build phishing kits and will work from lists of verified email addresses to target C-suite leaders. The resulting emails can be nearly indistinguishable from the source, as everything about them will look like official brand communication.

Recently, attackers have impersonated leading brands like PayPal, Norton, and McAfee in vishing attacks.

As Alex Beavan, ex-British Intelligence officer and Head of Ethics and Anti-Corruption at Convera, explains,

“Entering the new year, Business Email Compromise (BEC) looms as a formidable challenge, intensified by organized crime groups leveraging the potency of AI in their deceitful schemes. Brace for an impending surge in investment scams over the upcoming months…The battleground against cyber threats escalates, demanding heightened vigilance and decisive action.”

In 2024, as attackers advance their attack techniques, companies will have to advance their existing defensive measures.

  • Deploy advanced email authentication protocols, such as Domain-based Message Authentication, Reporting, and Conformance (DMARC), to verify the sender's identity and keep spoofed emails from reaching targets.
  • Adopt AI-driven anomaly detection systems to analyze email traffic and flag anomalies.
  • Run employee training and awareness programs that teach staff about BEC attacks and how to recognize and respond to them effectively.
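
As an illustration of the first bullet: a domain owner publishes its DMARC policy as a DNS TXT record at the `_dmarc` subdomain, and receiving mail servers parse its tag=value pairs, such as `p=` (policy for failing mail) and `rua=` (aggregate-report address). A minimal parser sketch, assuming a hypothetical record string (a real deployment would fetch it via a DNS lookup):

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record ('v=DMARC1; p=reject; ...') into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record as it might appear at _dmarc.example.com
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)

print(policy["p"])  # "reject": mail failing SPF/DKIM alignment is rejected
```

A `p=reject` policy is the strictest setting; organizations typically start with `p=none` (monitor only) and tighten to `quarantine` and then `reject` as the aggregate reports confirm legitimate mail is passing.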

3. Attacks on Private Blockchain Networks for Extortion

Private blockchain networks can become targets for generative AI-equipped threat actors in 2024 as more enterprises turn to blockchain.

Since private blockchains are typically less resilient than public ones, cybercriminals will likely be more interested in attacking the administration of a private network.

Criminal attention on Web3 technologies is expected to grow in 2024, aided by generative AI tools. Attackers will target self-executing smart contracts on blockchain networks.

Threat actors will weaponize smart contracts, mainly against decentralized finance (DeFi) platforms.

Ransomware attacks on major DeFi projects are rising, making them prime victims. Blockchain remains an attractive target for malicious actors thanks to Bitcoin transactions, ETFs, and rising token prices.

Considering these threats, Blockchain cybersecurity in 2024 requires constant attention and comprehensive preventive solutions. For this, companies should:

  • Improve smart contract security
  • Improve network security
  • Develop quantum-resistant encryption methods
  • Improve privacy and confidentiality of smart contract data
  • Practice regular updates and patches
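
The integrity concerns behind this checklist rest on blockchain's core tamper-evidence property: each block commits to the hash of its predecessor, so altering any past block breaks every later link. A minimal sketch of that hash-chain check (illustrative only, not a real blockchain implementation):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's sorted JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain: list) -> bool:
    """Each block must reference the actual hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: A->B 5", block_hash(genesis))]
chain.append(make_block("tx: B->C 2", block_hash(chain[-1])))

print(chain_is_valid(chain))       # True
chain[1]["data"] = "tx: A->B 500"  # tamper with a past transaction
print(chain_is_valid(chain))       # False: later links no longer match
```

This is why attackers increasingly go after the softer surfaces listed above, such as smart contract logic, network access, and key management, rather than the chain's cryptographic links themselves.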

Enterprises relying on generative AI tools to boost business productivity must stay alert to the associated real-time threats.

It will be critical for them to remain watchful, build resilience, and adopt preventive measures at every potential point of attack, because together these practices form the multi-dimensional security strategy companies need to enforce across their systems.
