Bugcrowd Introduces AI Penetration Testing to Improve Customers’ Confidence in AI Adoption

Bugcrowd Introduces AI Penetration Testing to Improve Customers' Confidence in AI Adoption

Bugcrowd, the leader in crowdsourced security, today introduced the availability of its AI Pen Testing on the Bugcrowd Platform to help AI adopters detect common security flaws before threat actors take advantage. AI Pen Testing is now part of  Bugcrowd’s AI Safety and Security Solutions portfolio, in addition to the recently announced AI Bias Assessment offering.

Commoditized access to AI is revolutionizing how work is done in every industry. AI also presents new categories of potential security vulnerabilities, as reflected in President Biden’s Executive Order 14110 that calls for “AI red teaming” (methods unspecified) by all government agencies.

The conversational interfaces in Large Language Model (LLM) applications can be vulnerable to prompt injection, training data extraction, data poisoning, and other types of attacks. Bugcrowd AI Pen Tests are designed to uncover the most common flaws in these areas using a testing methodology based on its open-source Vulnerability Rating Taxonomy – which draws from the OWASP Top 10 for LLM Applications while adding other flaws reported by hackers on our platform.

Many AI applications are highly integrated with other systems, amplifying risk by serving as a potential access point for wider infiltration by attackers. As generative AI becomes universally adopted, the expanded attack surface will require Bugcrowd’s unique brand of rigorous pressure testing to detect the new vulnerabilities that come along with it.

Pentesters are curated from a deep bench of trusted testers selected from the global hacker community for their skills and track record. The Bugcrowd Platform’s data-driven approach to researcher/hacker/pentester sourcing and activation, known as CrowdMatch AI, allows it to rapidly create and optimize crowds with virtually any skill set, to meet any risk reduction goal.

For over a decade, Bugcrowd’s “skills-as-a-service” approach to security has been shown to uncover more high-impact vulnerabilities than traditional methods for customers like T-Mobile, Netskope, and Telstra Corporation, while offering clearer line of sight to ROI. With unmatched flexibility and access to more than a decade of vulnerability intelligence data, the Bugcrowd Platform has evolved over time to reflect the changing nature of the attack surface – including the adoption of mobile apps, hybrid work, APIs, crypto, cloud workloads, and now AI.

“AI serves as a tool for enhancing attacker productivity, a target for exploitation of weaknesses in AI systems, and a threat due to the unintended security consequences stemming from its use,” said Dave Gerry, CEO of Bugcrowd. “With our new AI Pen Testing offering, our customers now have a solution to address any AI-based risks—ranging from standard tests for web apps, mobile apps, and networks to continuous, crowd-powered testing of complex apps, cloud services, APIs, IoT devices, and now AI systems, for maximum risk reduction.”

“The rapid adoption of LLMs in government and enterprise use cases has led to an unprecedented growth in attack surface that adversaries are already exploiting,” said Julian Brownlow Davies, VP of Advanced Services for Bugcrowd. “Bugcrowd’s world-class crowdsourced security platform with CrowdMatch AI has enabled us to bring to market high-impact AI/LLM penetration testing delivered by trusted testers with deep domain experience, providing safety and security to our customers against these evolving threats.”

For more such updates follow us on Google News ITsecuritywire News. Please subscribe to our Newsletter for more updates.