Several major AI software providers have joined a new security project launched by the nonprofit Cloud Security Alliance (CSA) to develop reliable best practices for generative AI.
Tech giants Microsoft, Amazon, Google, OpenAI, and Anthropic have joined the new AI Safety Initiative, which aims to develop templates, tools, and data for implementing AI/LLM technology responsibly, ethically, and legally.
The AI Safety Initiative is creating practical safeguards for the generative AI in use today while helping to prepare for the far more powerful AI systems expected in the future. Its objective is to reduce risks and amplify AI's beneficial impact across all industries, the group said in a statement.
Read More: New AI Safety Initiative Aims to Set Responsible Standards for Artificial Intelligence