Many CISOs view the internal processes and regulations created to manage and secure data as red tape that obstructs innovation.
Security leaders acknowledge that protecting valuable data has never been easy. However, as data boundaries blur and machine learning and other forms of AI take on greater prominence, handling and securing data has become exponentially more complex for security personnel.
Additionally, application programming interfaces (APIs), the Internet of Things, ML, AI, mobile computing, and other services continue to drastically reshape the data landscape by altering security and computational requirements. Not only have data volumes grown, data sprawl has also become increasingly common. In most cases, training algorithms span multiple data sets held in different clouds and at different enterprises. This has heightened the need to combine and adapt data dynamically.
Going beyond the database
Enterprises' growing desire to compute at the edge and in the cloud has had significant ramifications. For one, enterprises must comply with increasingly strict data regulations, such as the European Union's General Data Protection Regulation (GDPR), the US Health Insurance Portability and Accountability Act (HIPAA), and the California Consumer Privacy Act (CCPA). At the same time, they need to unlock the full value of data within business partnerships and supply chains without exposing personally identifiable information (PII) or trade secrets.
Leaders say that machine learning and AI have introduced entirely new and varied cybersecurity and computational needs that are tightly connected. Constantly adjusting data protection to shifting data-localization requirements and data-sharing strategies is difficult, and it is impractical for individual departments to assess each situation on their own using a patchwork of technologies and tools.
Enterprises must address multiple data privacy and security risks as data moves across systems, clouds, and devices. These include internal and external risks, as well as third-party access to sensitive and confidential data.
The popular method of capturing a Privacy Impact Assessment (PIA) is nothing more than a snapshot in time, security leaders say. This way of detecting and managing privacy risks may be effective in conventional waterfall development models, but in modern agile workflows with frequent code updates, PIAs are obsolete by the time they are completed.
Thus leaders prefer a more automated, API-based approach to building privacy into the development process. Adding to the complications, enterprises must also ensure that data and AI are used ethically. Secure AI consists mainly of confidential AI and responsible AI. Responsible AI addresses the privacy, trust, ethics, and regulatory concerns relevant to decision making based on ML and AI models. Confidential AI concerns how enterprises share data with others to address common business problems.
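To make the API-based idea concrete, the sketch below shows one way a privacy check could run automatically on every code change, for example as a CI step, rather than as a one-off PIA. The function name, the pattern set, and the changeset format are all illustrative assumptions, not any specific vendor's API; a real privacy-scanning service would be far more sophisticated.

```python
import re

# Hypothetical examples of patterns a privacy-scanning API might flag.
# These two patterns (email, US SSN) are illustrative assumptions only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_changeset(files):
    """Return (filename, pii_type) findings for a {name: text} changeset."""
    findings = []
    for name, text in files.items():
        for pii_type, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append((name, pii_type))
    return findings

# In a CI pipeline, a non-empty result would fail the build, so privacy
# risk is assessed on every commit instead of in a point-in-time PIA.
changeset = {
    "fixtures/users.json": '{"contact": "jane.doe@example.com"}',
    "src/app.py": "def handler(event): return 200",
}
print(scan_changeset(changeset))  # [('fixtures/users.json', 'email')]
```

The design point is that the check lives inside the development workflow itself, so each incremental change is evaluated automatically, which is what keeps the assessment from going stale the way a static PIA does.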
Security leaders point out that regulatory compliance is merely a starting point for a robust, digital-centric data management infrastructure. Privacy and security must extend across data ecosystems and to customers and their PII.