Privacera Introduces AI Governance Solution

Privacera, the cloud data governance and security leader founded by the creators of Apache Ranger™, today announced the private preview of Privacera AI Governance (PAIG). From the continuous scanning and classification of training data to the securing and auditing of AI models, model outputs, and user requests, PAIG empowers organizations to efficiently manage the entire AI data security lifecycle.

Generative AI and large language models (LLMs) have the potential to revolutionize enterprise operations and customer engagements, but the privacy and compliance risks associated with the presence of personal, private, and confidential information in training data and subsequent models have enterprises scrambling to ensure proper security and access controls are in place.

With native enforcement of security and privacy controls across diverse data estates and architectures, and built on open standards, Privacera’s latest innovation helps companies reduce sensitive data exposure, strengthen privacy and ethical safeguards, and address regulatory and legal compliance issues with AI.

PAIG fosters powerful AI data security governance and federated stewardship between IT departments and business teams. The solution brings together comprehensive data security governance for relational and unstructured data, as well as for AI model training and access. PAIG combats the unpredictability of generative AI by helping companies avoid the potential misuse of data, address challenges in compliance policy enforcement, and reduce the complexities that arise when runtime contexts are added during inference.

With PAIG, organizations can tap into Privacera’s proven history of innovation in building massively scalable data and access security across AI and diverse data estates. PAIG is powered by the company’s Unified Data Security Platform, which has set the standard for data security in the big data ecosystem and the modern cloud data estate. This allows for a common security administration and monitoring platform across all data, along with consistent policies, roles, and controls across all AI models.

Combined, PAIG and the Unified Data Security Platform provide compliance support for CCPA, GDPR, and HIPAA during the training, deployment, and use of AI models.

“The potential of generative AI and large language models (LLMs) to transform enterprise operations is immense, but their inherent unpredictability can unknowingly reveal intellectual property, Personally Identifiable Information (PII) and sensitive data,” said Privacera co-founder and CEO Balaji Ganesan. “By providing organizations with an intelligent and adaptive AI data governance solution, Privacera continues its mission to empower enterprises to utilize their data as a strategic asset.”

PAIG’s core capabilities include:

  • High-performance, AI-driven data governance and security for AI use cases: Builds on Privacera’s existing strengths and combines them with purpose-built AI and large language models to drive dynamic security, privacy, and access governance.
  • Real-time discovery, classification, and tagging: Training data for generative AI models, as well as embeddings, is continuously scanned for sensitive data attributes, which are then tagged. More than 160 classifications and rules come pre-built in Privacera, and organizations can extend these to meet their own requirements (see the first sketch following this list).
  • Data access controls, masking, and encryption: Based on the discovery and tagging above, data-level controls are established to redact, de-identify, mask, encrypt, or even remove sensitive data, as well as data that could introduce vulnerabilities into the pre-training data pipeline.
  • Allow/deny prompts or responses based on governance policies: Real-time scanning of user inputs and queries for sensitive data elements, with appropriate privacy controls applied based on user identity and data access permissions.
  • Redact/de-identify sensitive data in prompts and responses: Real-time scanning of model responses for sensitive data elements, with appropriate controls applied based on the user’s identity and data access permissions (see the second sketch following this list).
  • AI-powered auditing and monitoring: Continuous monitoring and collection of model usage and user behavior, fed into large language models to power analytics on usage, security, and risk patterns.
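
To make the discovery-and-tagging capability concrete, the first sketch below shows the general pattern of scanning a training record for sensitive attributes and attaching tags. It is a minimal, hypothetical illustration assuming simple regex-based classifiers; the names (CLASSIFIERS, scan_and_tag) and patterns are assumptions for demonstration only and do not represent Privacera’s actual implementation, which ships with more than 160 pre-built classifications.

```python
import re
from dataclasses import dataclass, field

# Hypothetical, simplified classifiers for illustration only; a production
# system would use a much richer, vendor-maintained rule set.
CLASSIFIERS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

@dataclass
class ScanResult:
    text: str
    tags: set = field(default_factory=set)

def scan_and_tag(record: str) -> ScanResult:
    """Scan one training record and tag any sensitive attributes found."""
    result = ScanResult(text=record)
    for tag, pattern in CLASSIFIERS.items():
        if pattern.search(record):
            result.tags.add(tag)
    return result

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com, SSN 123-45-6789."
    tagged = scan_and_tag(sample)
    print(sorted(tagged.tags))  # ['EMAIL_ADDRESS', 'US_SSN']
```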
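
The second sketch illustrates the allow/deny and redaction controls for prompts and responses: detected sensitive elements are checked against the caller’s entitlements before text is released. Again, this is a hypothetical outline; the POLICIES mapping, role names, and enforce function are illustrative assumptions, not PAIG’s API.

```python
import re

# Hypothetical tag-to-role policy: which roles may see each sensitive
# attribute in clear text. Structure and names are illustrative only.
POLICIES = {
    "EMAIL_ADDRESS": {"support_agent", "compliance_officer"},
    "US_SSN": {"compliance_officer"},
}

PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def enforce(text: str, user_roles: set) -> str:
    """Redact any sensitive element the calling user is not entitled to see."""
    for tag, pattern in PATTERNS.items():
        allowed_roles = POLICIES.get(tag, set())
        if user_roles.isdisjoint(allowed_roles):
            text = pattern.sub(f"<<{tag}>>", text)
    return text

if __name__ == "__main__":
    response = "Reach the customer at jane.doe@example.com; SSN on file is 123-45-6789."
    # A support agent may see email addresses but not SSNs.
    print(enforce(response, {"support_agent"}))
    # Reach the customer at jane.doe@example.com; SSN on file is <<US_SSN>>.
```

The same check can run on the inbound prompt and on the model’s response, so policy enforcement stays consistent regardless of where the sensitive element appears.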