The First of Six Key Challenges
The following sections provide a deeper dive into the key areas suggested in section 2.
The chain of assurance box at the end of each section provides questions that could be asked by stakeholders, auditors, or regulators seeking assurance in the key area. These questions draw on the useful work of the EU's High-Level Expert Group on Ethical AI.6
4.1 Security
For security audits, the verifiability of security claims remains the basis for assurance: it enables relying parties to make informed decisions about the appropriate use of a system. Under a legislative regime, or where liability is at stake, such as the potential for human injury or environmental harm, a high level of audit and documentation is required for all systems. A chain of assurance approach enables AI systems to meet those audit requirements.
We distinguish two aspects of security in relation to the implementation of trustworthy AI systems: cybersecurity in general and AI-specific security.
4.1.1 Cybersecurity
The aim of cybersecurity in the traditional sense is to implement systems that protect the confidentiality and integrity of assets while guaranteeing an appropriate level of availability.
Traditional cybersecurity is a constantly evolving field with many approaches, but at Arm we believe a security-by-design system is based on four key principles:
The foundations of a secure hardware platform implementation are:
More information about secure hardware platforms can be found at the PSA Certified website. For more information about using confidential computing to increase the security of AI systems, see section 5.2.
We cannot stress enough the importance of platform security, and of secure lifecycle management in particular. It lies at the heart of many of the mitigations mentioned above, and it provides the ability to revoke or update components in the system, whether they are models or parts of the compute subsystem.
Source: PSA Certified 2022 Security Report
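To make the lifecycle point concrete, the following is a minimal sketch, in Python, of the update checks that secure lifecycle management enables: authenticity, revocation, and anti-rollback. The HMAC key, revocation list, and version counter here are illustrative stand-ins; a production design would anchor them in a hardware Root of Trust and use an asymmetric signing scheme.

```python
# Minimal sketch of update acceptance checks under secure lifecycle management.
# The key, revocation list, and counter are illustrative stand-ins, not a real
# scheme; in practice these would be rooted in hardware.
import hashlib
import hmac

SIGNING_KEY = b"device-provisioned-secret"           # stand-in for a RoT-held key
REVOKED_DIGESTS = {"<digest-of-a-withdrawn-model>"}  # hypothetical revocation list
INSTALLED_VERSION = 3                                # monotonic anti-rollback counter

def verify_update(blob: bytes, version: int, tag: bytes) -> bool:
    """Accept a model/firmware update only if it is authentic, not revoked,
    and strictly newer than what is already installed."""
    expected = hmac.new(SIGNING_KEY, blob + version.to_bytes(4, "big"),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False                                 # authenticity check failed
    if hashlib.sha256(blob).hexdigest() in REVOKED_DIGESTS:
        return False                                 # component has been revoked
    if version <= INSTALLED_VERSION:
        return False                                 # rollback attempt
    return True
```

The same three checks apply whether the component being updated is a model or part of the compute subsystem; what changes is where the keys and counters live.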
4.1.2 AI-Specific Security
There are also specific threats introduced by the development and deployment of AI systems, primarily related to the data and models inherent in those systems. We can categorize them as attacks that compromise data (section 4.1.2.1) and attacks on models and algorithms (section 4.1.2.2).
Adversarial data poisoning is an effective attack against machine learning: it threatens model integrity by introducing poisoned data into the training dataset.
4.1.2.1 Compromising Data
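To illustrate the poisoning threat described above, the following is a minimal, self-contained sketch of a label-flipping attack, assuming scikit-learn and NumPy are available. The dataset and model are synthetic stand-ins for a real training pipeline, and the exact accuracy drop will vary.

```python
# Sketch of label-flip data poisoning: flip a fraction of training labels
# and observe the effect on held-out accuracy. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip the labels of a fraction of training rows, then train and score."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # invert the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned fraction {frac:.0%}: "
          f"test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Mitigations follow from the same observation: provenance tracking and integrity checks over training data shrink the window in which poisoned records can enter the pipeline.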
4.1.2.2 Attacks on Models and Algorithms
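One widely used mitigation in this category is throttling inference queries, which raises the cost of model-extraction attacks that harvest large numbers of input/output pairs. The following is a minimal sketch using only the Python standard library; the window length and per-client budget are illustrative assumptions, not recommended values.

```python
# Sketch of per-client query throttling as a model-extraction mitigation.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60.0            # sliding window length
MAX_QUERIES_PER_WINDOW = 100     # per-client inference budget
_history = defaultdict(deque)    # client_id -> timestamps of recent queries

def allow_query(client_id: str) -> bool:
    """Return True if this client may run another inference right now."""
    now = time.monotonic()
    window = _history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()         # drop timestamps that have aged out
    if len(window) >= MAX_QUERIES_PER_WINDOW:
        return False             # over budget: possible extraction attempt
    window.append(now)
    return True
```

Throttling complements, rather than replaces, the platform measures above: signed and attested models address the integrity of the model, while query budgets address its confidentiality.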
Seeking assurance
Have the technology developer, system integrator, and device provider considered security risks and vulnerabilities?
Have the technology suppliers considered relevant trustworthy AI vulnerabilities during the design and development phase?
Providing detailed information
Does the hardware design include a hardware Root of Trust?
Does the system use secure lifecycle management to protect it throughout design, deployment, and maintenance?
What technology does the system use to ensure the integrity of data sets?
How does the system protect the security of models and algorithms?