Credo AI, the company behind the world’s first Responsible AI (RAI) Governance Platform, today announced the close of a $12.8 million Series A financing round led by Sands Capital, with follow-on investments from existing investors Decibel and AI Fund.
Enterprises today are struggling to adapt to emerging AI governance regulations, benchmarks and standards for the AI systems they have already deployed. Recently featured by IDC as a key Responsible AI governance platform opening new frontiers in the AI revolution, Credo AI helps organizations operationalize AI governance so they can deploy compliant, responsible AI systems faster and with greater confidence through its context-sensitive risk framework.
This latest funding round will enable Credo AI to bring the transformative impact of Responsible AI governance to more enterprises around the world. The funding will be used to accelerate product development, build a strong go-to-market team to extend Credo AI’s leadership of the Responsible AI category, and strengthen its tech policy function to support emerging standards and policies.
“Credo AI is at the forefront of the Responsible AI movement with a leadership team whose primary mission is to help organizations create AI systems with the highest ethical standards,” said Scott Frederick, Partner at Sands Capital. “AI has the power to augment essentially every part of our lives but can cause irreparable damage to our society without proper governance. Credo AI helps enterprises on their Responsible AI governance journey by bridging the gap between their technical and business stakeholders to successfully keep AI applications compliant and aligned with industry-specific standards.”
Existing solutions, such as MLOps platforms and open-source ML fairness tools, are often too narrowly focused on the technical aspects of an AI system to address the larger issue at hand: a lack of governance and accountability across the entire AI development lifecycle. Credo AI’s Responsible AI Platform helps cross-functional teams align on Responsible AI requirements for fairness, performance, transparency, privacy, security and more, based on business and regulatory context, across industries such as financial services, banking, retail, insurance and defense.
“Mastercard has been an early champion of Responsible AI development and governance and has been committed to the ethical use of data to power a wide range of business results as well as drive financial inclusion across the world,” said JoAnn Stonier, EVP and Chief Data Officer, Mastercard. “We are committed to constantly evaluating our efforts in this area and are happy to be partnering with Credo AI’s Responsible AI platform to extend our governance model further. These types of partnerships, tools and techniques assist us in furthering our governance process while improving our time to market.”
Credo AI was founded in 2020 by Navrina Singh and Eli Chen with the mission to help organizations build AI aligned with human values. Singh was recently appointed as one of 27 leaders on the National Artificial Intelligence Advisory Committee (NAIAC), which will advise President Biden and the National AI Initiative Office on a range of issues related to AI.
“We’re laser-focused on ushering in industry standards of oversight and accountability for Responsible AI,” said Navrina Singh, founder and CEO of Credo AI. “AI should be in service of humanity, and we intend to continue driving the industry towards an equitable future.”
As part of its commitment to making Responsible AI the global standard, Credo AI is bringing together experts, including policymakers, data scientists and business leaders across risk, compliance and audit, to be part of the solution.