Avast, a global leader in digital security and privacy products, and Borsetta, an AI software-defined secure computing hardware services company, today announced that they will join Intel in launching the Private AI Collaborative Research Institute. The new collaboration’s sole purpose will be to advance and develop technologies that strengthen privacy and trust for decentralized AI.
The Private AI Collaborative Research Institute was originally established by Intel’s University Research & Collaboration Office (URC), which then broadened the collaboration by inviting Avast and Borsetta to join forces. Together, the partners issued a call for research proposals earlier this year and selected the first nine research projects to be supported by the Institute, hosted at eight universities:
- Carnegie Mellon University, U.S.
- University of California, San Diego, U.S.
- University of Southern California, U.S.
- University of Toronto, Canada
- University of Waterloo, Canada
- Technical University of Darmstadt, Germany
- Université Catholique de Louvain, Belgium
- National University of Singapore
The Private AI Collaborative Research Institute will encourage and support fundamental research aimed at solving real-world challenges for society, and will be dedicated to an ethical approach to AI development. By decentralizing AI and moving AI analytics to the network edge, the companies aim to liberate data from silos, protect privacy and security, and maintain efficiency.
The Private AI Collaborative Research Institute will aim to overcome a number of challenges faced by industries and societies today. These include, but are not limited to:
- Training data is decentralized in isolated silos and often inaccessible for centralized training.
- Today’s solutions requiring a single trusted data center are brittle – centralized training can be easily attacked by modifying data anywhere between collection and the cloud. There is no existing framework for decentralized secure training among potentially untrusting participants.
- Centralized models become obsolete quickly as data at the edge changes frequently. Today, infrequent batch cycles for collection, training and deployment can lead to outdated models; continuous and differential retraining is not possible.
- Centralized computing resources are costly and throttled by communication and latency.
- Federated Machine Learning, a technique for training a model across multiple decentralized edge devices, is limited (a minimal illustration of the technique follows this list). While today’s federated AI can access data at the edge, it cannot simultaneously guarantee accuracy, privacy, and security.
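To make the federated approach above concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical federated-learning technique. The linear model, simulated client silos, and hyperparameters are illustrative assumptions for this sketch and do not represent the Institute's research code.

```python
# Minimal sketch of federated averaging (FedAvg): each edge client trains a
# linear model locally on its own data; only model weights (never raw data)
# are sent to a coordinator, which averages them into a new global model.
# Illustrative only -- the model, data, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of gradient descent for linear regression on one client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

# Simulated decentralized data: three clients, each holding a private silo.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated rounds: broadcast global weights, train locally, average the updates.
global_w = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)   # simple unweighted average

print("learned weights:", global_w)   # approaches [2.0, -1.0]
```

The point of the sketch is the data flow: each client trains on data that never leaves its silo and only model parameters are exchanged, which is the property today's federated AI provides but cannot yet combine with simultaneous guarantees of accuracy, privacy, and security.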
Michal Pechoucek, Chief Technology Officer at Avast, added: “With our skilled AI research team, and the Avast AI and Cybersecurity Laboratory (AAICL) located on campus at the Czech Technical University (CTU), we are already witnessing great results from our scientific research into the intersection of AI, ML, and cybersecurity. Industry and academic collaboration is key to tackling the big issues of our time, including ethical and responsible AI. As AI continues to grow in strength and scope, we have reached a point where action is necessary, not just talk. We’re delighted to be joining forces with Intel and Borsetta to unlock AI’s full potential for keeping individuals and their data secure.”
Borsetta has joined the Private AI Collaborative Research Institute because of its strong belief in driving a privacy-preserving framework to support a future hyperconnected world empowered by AI. Pamela Norton, CEO of Borsetta, stated: “The mission of the Private AI Collaborative Research Institute is aligned with our vision for future-proof security, where data is provably protected with edge computing services that can be trusted. Trust will be the currency of the future, and we need to design AI-embedded edge systems with trust, transparency, and security while advancing the human-driven values they were intended to reflect.”
Richard Uhlig, Intel Senior Fellow, Vice President and Director of Intel Labs, said: “AI will continue to have a transformational impact on many industries, and is poised to become a life-changing force in the healthcare, automotive, cybersecurity, financial, and technology industries. That said, research into responsible, secure, and private AI is crucial for its true potential to be realized. The Private AI Collaborative Research Institute will be committed to advancing technologies using ethical principles that put people first and keep individuals safe and secure. We invited Avast and Borsetta to join us on our mission to identify the true impact of AI on the world around us. We are excited to have them on board to mitigate the potential downsides and dangers of AI.”