OpenAI's GPT-4 can exploit real-world cybersecurity vulnerabilities faster and more cheaply than human professionals, according to researchers at the University of Illinois Urbana-Champaign (UIUC). The team tested several large language models (LLMs) against a set of real "one-day" vulnerabilities (flaws that have been publicly disclosed but not yet patched) and found that GPT-4 successfully exploited 87% of them, while every other model and the open-source vulnerability scanners they tried failed on all of them.
There is an important caveat, however: GPT-4 only succeeded when given the CVE description of a known vulnerability. Asked to find and exploit the same bugs without that description, its success rate fell to 7%. Even so, the paper estimates that running the GPT-4 agent already costs roughly $8.80 per exploit, about 2.8× cheaper than equivalent human labor, and that future LLMs will only grow stronger in these abilities.
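For readers curious what such an "LLM agent" looks like in practice: the paper reportedly wired GPT-4 into a ReAct-style tool-use loop with access to a terminal, a web browser, and a code interpreter, then handed it the CVE advisory. The Python sketch below is not the authors' code; it is a minimal, hypothetical illustration of that pattern, with an invented SHELL:/DONE: reply convention, intended only for authorized testing in an isolated lab.

```python
# Hypothetical sketch of an LLM exploit-agent loop, for authorized testing in an
# isolated lab only. The UIUC paper reportedly built its agent on the ReAct
# tool-use framework with a terminal, web browser, and code interpreter; the
# SHELL:/DONE: protocol and all names below are invented for illustration.
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a penetration-testing agent working in a sandboxed lab. "
    "Reply with 'SHELL: <command>' to run a terminal command, or "
    "'DONE: <summary>' once the exploit succeeds or you give up."
)

def exploit_agent(cve_description: str, max_steps: int = 15) -> str:
    """Feed the model a CVE advisory, then loop: run each proposed command
    in the sandbox and return its output to the model as the next turn."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Target advisory:\n{cve_description}"},
    ]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE:"):
            return reply
        if reply.startswith("SHELL:"):
            cmd = reply.removeprefix("SHELL:").strip()
            # Never run model-chosen commands outside an isolated VM/container.
            result = subprocess.run(cmd, shell=True, capture_output=True,
                                    text=True, timeout=60)
            messages.append({"role": "user",
                             "content": f"Output:\n{result.stdout}{result.stderr}"})
    return "DONE: step limit reached without a working exploit."
```

The key design point is the feedback loop: the model never acts directly. A harness executes each proposed command in a sandbox and returns the output as the next conversational turn, which is what lets the agent adapt its approach step by step.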
Read more: ChatGPT can craft attacks based on chip vulnerabilities – GPT-4 model tested by UIUC computer scientists