The concept of “zero trust” is quickly gaining traction in the cybersecurity industry, where policies, structures, and processes are increasingly dictated by the assumption that no one and nothing should be trusted. In the same breath, however, the industry is advancing a growing number of AI-driven security solutions that rely on a trustworthy “ground truth” as a reference point.
As organizations incorporate AI models into their security practices, one of the most pressing questions is whether regulators, compliance officers, security experts, and employees will be able to trust these models at all.
Developing confidence in an AI-dominated world is challenging because AI models are complex, opaque, automated, and constantly evolving. Without trust and accountability, some of these models may be deemed too risky, and could end up underutilized, marginalized, or outright forbidden.
Data, and more specifically assuring data quality and integrity, is one of the primary stumbling blocks related to AI trustworthiness. After all, AI models are only as good as the data they consume.
Nonetheless, these roadblocks haven’t deterred cybersecurity providers, who have demonstrated an unyielding desire to build solutions around AI models. Vendors are taking a risk by presuming that the datasets (whether public or proprietary) their models consume accurately represent the real-world scenarios those models will face.
A variety of issues affect the data used to power AI-based cybersecurity systems:
Data poisoning
Training data can also be ‘poisoned’ by tampering with the datasets (and even the pre-trained models) that AI models rely on. This could allow attackers to slip past cybersecurity controls while the organization at risk remains completely unaware that the ground truth it relies on to secure its infrastructure has been compromised. Such tampering could result in minor inconsistencies, such as security systems mislabeling harmful activity as benign, or have a more significant impact by disrupting or disabling security controls altogether.
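As a rough illustration of how label poisoning can quietly corrupt a “ground truth,” the sketch below trains a detector twice on the same synthetic events, once with clean labels and once after most of the malicious training labels have been flipped to benign. The synthetic data, the scikit-learn RandomForestClassifier, and the 80% flip rate are all illustrative assumptions, not a reconstruction of any specific vendor’s pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "network events": two features per event, label 1 = malicious, 0 = benign.
benign = rng.normal(loc=0.0, scale=1.0, size=(2000, 2))
malicious = rng.normal(loc=3.0, scale=1.0, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

def malicious_detection_rate(train_labels):
    """Train on the given (possibly poisoned) labels; report recall on true malicious events."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, train_labels)
    return clf.predict(X_test[y_test == 1]).mean()

# An attacker with write access to the training pipeline flips most of the
# malicious training labels to "benign" without touching the features.
poisoned = y_train.copy()
malicious_idx = np.flatnonzero(poisoned == 1)
flipped = rng.choice(malicious_idx, size=int(0.8 * len(malicious_idx)), replace=False)
poisoned[flipped] = 0

print(f"Detection rate, clean ground truth:    {malicious_detection_rate(y_train):.2f}")
print(f"Detection rate, poisoned ground truth: {malicious_detection_rate(poisoned):.2f}")
```

The features never change in this toy setup; only the labels do, which is precisely why the organization consuming the model has little chance of noticing the degradation.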
Data dynamism
AI models are designed to deal with “noise,” but malicious errors in cyberspace are not random. Security specialists are up against sophisticated opponents who learn and adapt over time. Accumulating more security-related data could help AI-powered security models improve, but it could also prompt adversaries to adapt their strategies, eroding the effectiveness of the existing data and the models built on it. In this situation, data is not a static snapshot of an observed reality; it actively shapes that reality.
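This drift can be made concrete with a toy experiment: a detector fitted to yesterday’s attack pattern loses most of its recall once the simulated adversary shifts its behaviour closer to benign traffic. The feature layout, the LogisticRegression model, and the size of the shift are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_traffic(attack_center):
    """Synthetic traffic: a benign cluster plus an attack cluster centered at attack_center."""
    benign = rng.normal(0.0, 1.0, size=(1000, 2))
    attacks = rng.normal(attack_center, 1.0, size=(100, 2))
    X = np.vstack([benign, attacks])
    y = np.array([0] * len(benign) + [1] * len(attacks))
    return X, y

# Train on the attack behaviour observed so far (well separated from benign traffic).
X_old, y_old = make_traffic(attack_center=4.0)
detector = LogisticRegression().fit(X_old, y_old)

# The adversary adapts, shifting its behaviour toward benign statistics.
X_new, y_new = make_traffic(attack_center=1.5)

print(f"Recall on historical attacks:      {detector.predict(X_old[y_old == 1]).mean():.2f}")
print(f"Recall after the adversary adapts: {detector.predict(X_new[y_new == 1]).mean():.2f}")
```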
Unknown unknowns
Since there are so many unknowns in cyberspace, many service providers advise their customers to construct their security strategy as though they have already been breached. The difficulty for AI models is that these unknown unknowns, or blind spots, are seamlessly absorbed into the models’ training datasets, where they effectively receive a stamp of approval, so AI-based security controls may never raise a concern about them.
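One way to picture such a blind spot: if an intruder’s activity is already sitting in the training set under a benign label, a model will not merely miss it, it will confidently endorse it. The sketch below uses invented cluster positions and a LogisticRegression classifier; it stands in for any supervised security model trained on a compromised baseline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

benign = rng.normal(0.0, 1.0, size=(3000, 2))
known_attacks = rng.normal(4.0, 1.0, size=(300, 2))
# Activity from an intruder nobody has noticed: statistically distinct from
# normal traffic, but swept into the training set under the "benign" label.
undetected_intruder = rng.normal(-3.0, 0.5, size=(200, 2))

X = np.vstack([benign, known_attacks, undetected_intruder])
y = np.array([0] * 3000 + [1] * 300 + [0] * 200)  # the blind spot is labeled benign

model = LogisticRegression().fit(X, y)

flagged = (model.predict(undetected_intruder) == 1).mean()
mean_prob = model.predict_proba(undetected_intruder)[:, 1].mean()
print(f"Intruder events flagged as malicious: {flagged:.0%}")
print(f"Mean malicious probability assigned:  {mean_prob:.2f}")
```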
Privacy-preserving omissions
Privacy-preserving and privacy-enhancing technologies, ranging from de-identification to the generation of synthetic data, are gaining appeal as a way to make sensitive datasets accessible to security experts within and across organizations. The entire point of these technologies is to eliminate, change, or mask sensitive data, such as personally identifiable information (PII). However, fundamental features of the datasets and statistically important attributes can be lost in the process. Furthermore, what appears to be insignificant “noise” to some security models might affect outcomes in significant and unforeseen ways.
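A small sketch of how a well-intentioned de-identification step can erase the signal a detector quietly depended on: bucketing a quasi-identifying login time into coarse bands protects individuals but flattens the attacker’s temporal signature. The field names, the masking scheme, and the scikit-learn pipeline are assumptions made up for this illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

n = 4000
is_malicious = rng.random(n) < 0.1                 # 10% of sessions are malicious
login_hour = np.where(is_malicious,
                      rng.normal(22.0, 0.5, n),    # attacker favours ~10 PM
                      rng.normal(14.0, 4.0, n))    # staff favour business hours
bytes_sent = rng.normal(500.0, 200.0, n)           # uninformative in both cases

X_raw = np.column_stack([login_hour, bytes_sent])

# De-identification step: generalize the quasi-identifying timestamp into broad
# 12-hour bands to lower re-identification risk. This also erases the fine-grained
# temporal signature the detector depended on.
X_masked = np.column_stack([(login_hour // 12) * 12, bytes_sent])

model = make_pipeline(StandardScaler(), LogisticRegression())
for name, X in [("raw features   ", X_raw), ("masked features", X_masked)]:
    auc = cross_val_score(model, X, is_malicious, cv=5, scoring="roc_auc").mean()
    print(f"ROC AUC with {name}: {auc:.2f}")
```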