Credo AI, the company behind the world’s first comprehensive and contextual governance solution for AI, announced today the availability of its Responsible AI Platform, a SaaS product that empowers organizations with tools to standardize and scale their approach to Responsible AI.
While standards, benchmarks, and clear regulations are still emerging, many organizations struggle to put their AI principles into practice and to determine what “good” looks like for their AI systems. Credo AI’s Responsible AI Platform helps companies operationalize Responsible AI by providing context-driven AI risk and compliance assessment wherever they are in their AI journey.
Credo AI helps cross-functional teams align on Responsible AI requirements for fairness, performance, transparency, privacy, security and more based on business and regulatory context by selecting from out-of-the-box, use-case-driven Policy guardrails. Moreover, the platform makes it easy for teams to evaluate whether their AI use cases are meeting those requirements through technical assessments of ML models, datasets and interrogation of development processes.
The platform, which was built on cross-industry learnings in both regulated and unregulated spaces, is complemented by Credo AI Lens, Credo AI’s open source assessment framework that makes comprehensive Responsible AI assessment more structured and interpretable for organizations of all sizes.
The release of Credo AI’s Responsible AI Platform also includes the following features:
- Seamless assessment integrations: Credo AI ingests programmatic model and dataset assessments from Credo AI Lens and automatically translates them into risk scores across identified AI risk areas such as fairness, performance, privacy, and security
- Multi-stakeholder alignment: Credo AI brings together product, data science, and oversight teams to align on the right governance requirements based on business and regulatory context
- Tunable risk-based oversight: Credo AI allows teams to fine-tune the level of human-in-the-loop governance needed based on the use case risk level
- Out-of-the-box regulatory readiness: Credo AI provides gap analysis across out-of-the-box guardrails that operationalize industry standards, as well as existing and upcoming regulations
- Assurance and attestation: Credo AI serves as a central repository for governance evidence and automates the creation of critical governance artifacts, including audit trails of decision provenance, Model and AI Use Case Cards, and attested AI risk and compliance reports
- AI vendor risk management: Credo AI also makes it easy for organizations to assess the AI risk and compliance of third-party AI/ML products and models via a dedicated vendor risk assessment portal
“Credo AI aims to be a sherpa for enterprises in their Responsible AI initiatives to bring oversight and accountability to artificial intelligence, and define what good looks like for their AI framework,” said Navrina Singh, founder and CEO of Credo AI. “We’ve pioneered a context-centric, comprehensive, and continuous solution to deliver Responsible AI. Enterprises must align on Responsible AI requirements across diverse stakeholders in technology and oversight functions, and take deliberate steps to demonstrate action on those goals and take responsibility for the outcomes.”
Multiple government AI regulations are on the horizon, including the European Union’s Artificial Intelligence Act (AIA) and New York City’s law requiring that AI employment decision tools be audited for bias before January 1, 2023. Organizations need to responsibly leverage or build their AI governance frameworks in anticipation of these and other evolving regulations.
To date, AI governance has been manual, unscalable, and incapable of providing the oversight needed to prevent AI from behaving in unintended ways. This leaves enterprises exposed to significant risk and operational overhead, creating demand for a new kind of solution. Credo AI’s Responsible AI Platform is the first AI governance platform that creates accountability structures throughout the AI lifecycle, from data acquisition to model deployment. With Credo AI, governance becomes an enabler, allowing organizations to deploy AI systems faster while managing risk exposure.
Credo AI Lens
Credo AI Lens is an open-source Responsible AI (RAI) assessment framework. It provides a single entrypoint to a curated ecosystem of assessments developed by Credo AI and the broader open-source community.
Lens covers diverse AI risk areas including fairness, security, performance, transparency and others, which can be assessed on many model and dataset types. In addition, Lens is extensible, and easily augmented with custom modules derived from other tools or proprietary code.
By standardizing AI assessment, Lens makes it easy for ML practitioners to integrate RAI assessment of models and datasets into their existing workflow, and accelerates the time to productionize new solutions. When paired with the Responsible AI governance platform, Lens assessments are translated into an actionable form understandable by diverse stakeholders, making the development of AI systems a more collaborative process.
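To make the idea of a programmatic RAI assessment concrete, here is a minimal, self-contained sketch of the kind of fairness check such a framework runs against a model’s predictions. The function name and the metric choice (demographic parity difference, i.e., the gap in positive-outcome rates between groups) are illustrative assumptions for this article, not Credo AI Lens’s actual API.

```python
# Hypothetical sketch of a fairness assessment: compute the demographic
# parity difference, the largest gap in positive-prediction rates across
# demographic groups. This illustrates the style of check an RAI
# assessment framework performs; it is not Lens's real interface.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Max difference in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: a model that flags 75% of group "a" positive but only 25% of group "b"
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A governance platform would then translate a raw score like this into a risk rating against the guardrails a team has selected, so non-technical stakeholders can act on it.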
With Credo AI, organizations are able to better understand and troubleshoot potential governance blind spots within their AI applications, ML models, datasets, and processes. Existing customers have access to the current version and the full feature set of Credo AI Responsible AI Governance.