Building Trustworthy AI
How a Chain of Assurance Can Build Trust
1. Introduction and Overview
The application of artificial intelligence (AI)1 to many aspects of our lives promises to be the most transformative technology trend in our lifetimes. It is crucial to the long-term development of the metaverse, which we understand as a massive computer-generated virtual world that will be deeply intertwined with our physical world. The metaverse will enable humans to live and interact in parallel or in superposition in both physical and digital worlds.
However, if AI systems are not considered trustworthy, we will miss out on all the benefits they can bring. Mass adoption of metaverse applications will have to rely on trustworthy AI principles, just like any other digital transformation. Assessing the trustworthiness of AI systems will help avoid potential harm.
Arm has been looking at trustworthy AI for the past few years, and we published our Arm AI Trust Manifesto2 describing some of the key principles that we believe should be at the heart of the debate.
Our manifesto joined the various industry attempts in recent years to establish principles for ethical and trustworthy AI. This now requires urgent focus. A concerted effort by the sector to look at how to put our principles into practice will help build public trust. Some regulatory authorities are also on the point of proposing regulation. The technology sector needs to be able to show we have thought about how to put regulatory objectives into practice.
In this paper, we outline what we are calling a chain of assurance, which in essence would require any company in an AI supply chain to state which ethical risks relevant to its role it has identified, and how it has addressed them.
We also look at how some emerging advances in technology can help. Our focus is on developments around security and privacy technologies, including trusted execution environments, and how they can be used to deliver a chain of assurance, and in turn, build trustworthy AI systems.
Our ambitious exploration of these ideas is built on the fact that we have met this type of problem before, in dealing with the need to advance security for the IoT. Here we have seen how various organizations alongside regulators have shaped thinking, by offering practical proposals for putting IoT security into practice.
These include the Platform Security Architecture (PSA) approach, of which Arm is a founding member. PSA Certified offers a detailed checklist of measures designed to help IoT device developers ensure their device is designed with security in mind right from the start. We believe that the industry should arrive at a similar point for trustworthy AI.
It would be impossible for us to provide an overview of all the different approaches to ethical AI: the World Economic Forum suggests that more than 175 organizations have proposed their own sets of ethical AI principles.3 An overview of the regulatory landscape in three major legal jurisdictions is provided in Appendix A: Regulator and Government Initiatives.
Despite the variety of approaches, there is significant convergence on what the ethical guidelines for AI should be. For an AI system to be considered trustworthy, it must adhere to the following principles:
• Security
• Safety
• Privacy
• Fairness
• Explainability
• Accountability
The core idea of a chain of assurance is that all stakeholders in the AI supply chain issue a statement describing:
• What trust-related issues relevant to their piece of IP they have considered.
• How they have addressed these issues.
This does not necessarily mean that the stakeholder has resolved all the issues listed. They may have concluded that others in the supply chain were better placed than they to do so.
As a minimum, this would provide the company finally placing an AI service on the market with a suite of statements from the supply chain showing how potential trust issues had been tackled.
IBM floated a similar approach in 2018, noting that: ‘Industries use transparent, standardized, but often not legally required documents called Supplier’s Declarations of Conformity (SDoCs) to describe the lineage of a product along with the safety and performance testing it has undergone.’4
In the article “Towards Trustworthy AI: Mechanisms for Supporting Verifiable Claims,”5 OpenAI and others also emphasized the importance of verifiable claims and suggested various steps that different stakeholders in AI development can take to demonstrate responsible behavior.
A chain of assurance could look something like the following diagram. The drawing is necessarily a simplification: for one product, there may well be hundreds of technology developers, tens of system integrators, and a hierarchy of service providers involved. For more information about the types of stakeholders we suggest, see Appendix B: Stakeholders.
The chain of assurance unites all these stakeholders in their common interests. For example, for the key area of security:
- The technology developer reassures the system integrator that the technology was developed using a secure development lifecycle and in a secure environment.
- The system integrator reassures the service provider that all security requirements have been met in the integration process.
- The service provider reassures the end user that information stored on their service benefits from state-of-the-art security. To prove this claim, the service provider can use statements provided by the technology developer and the system integrator.
Security
The First of Six Key Challenges
The following sections provide a deeper dive into the key areas suggested in section 2.
The chain of assurance box at the end of each section provides questions that could be asked by stakeholders, auditors, or regulators seeking assurance in the key area. These questions have been compiled drawing on the useful work of the EU’s High Level Expert Group on Ethical AI.6
4.1 Security
For security audits, verifiability of security claims continues to be the basis for assurance. It enables relying parties to make decisions about the appropriate usage of systems. Under a legislative regime or in situations where there is an issue of liability, such as potential for human injury and environmental harm, a high level of audit and documentation is required for all systems. Using a chain of assurance approach enables AI systems to meet those audit requirements.
We distinguish two aspects of security in relation to the implementation of trustworthy AI systems: cybersecurity in general and AI-specific security.
4.1.1 Cybersecurity
The aim of cybersecurity in a more traditional sense is to implement systems that protect the confidentiality and integrity of assets, while guaranteeing a certain level of availability.
Traditional cybersecurity is a constantly evolving field and there are many approaches, but at Arm we believe a security-by-design system is based on four key principles:
- Analyze: Make a threat model to determine your security requirements.
- Architect: Use an established security architecture.
- Implement: Create a high-quality implementation.
- Certify: Get an independent, unbiased security evaluation of the system to create assurance.
The foundations of a secure hardware platform implementation are:
- A hardware Root of Trust that is resistant to certain physical attacks.
- Hardware-backed isolation primitives, such as the Realms in the Arm Confidential Computing Architecture, which protect against software and hardware attacks by untrusted third parties.
- Robust use of encryption to protect data at rest and in transit against attacks on communication.
- Secure lifecycle management to protect against supply chain attacks.
More information about secure hardware platforms can be found at the PSA Certified website. For more information about using confidential computing to increase the security of AI systems, see section 5.2.
We cannot stress enough the importance of platform security, especially secure lifecycle management. This lies at the heart of many of the mitigations we mentioned. It also gives us the ability to revoke or update components in the systems, whether they are models or part of the compute subsystem.
4.1.2 AI-Specific Security
There are also specific threats that are introduced by the development and deployment of AI systems, primarily related to the data and models inherent in these AI systems. We can categorize them in terms of attacks on:
- Data: at the input stage, to achieve a future adverse outcome, or at the inference stage, to gain knowledge of the input data used.
- Models and algorithms: to achieve an adverse outcome, or to gain unintended knowledge of the model or algorithm itself.
“Adversarial data poisoning is an effective attack against machine learning and threatens model integrity by introducing poisoned data into the training dataset.” (Cornell University, 2020)
4.1.2.1 Compromising Data
- Poisoned Data Backdoor Attacks. Poisoned sample data is injected at the development stage (affecting the model developer) or at the deployment stage (affecting the service provider), such that a specific unintended outcome (backdoor) can be triggered by a specific set of inputs.7
- Data Protection and Privacy Risks. For information about privacy risks, see section 4.3. One specific example of a privacy risk is a violation of data privacy by adversaries using membership inference attacks.
- Membership Inference Attacks. A membership inference attack attempts to establish whether a subject belongs to a specific data set by probing APIs and running numerous inferences through machine-learning-as-a-service.8 This presents privacy risks for private data sets, or when the data set is facial or other biometric data.
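To illustrate the shape of such an attack, the following minimal sketch (the model, data, and threshold are all hypothetical) shows the common confidence-thresholding baseline: records a model was trained on tend to receive higher-confidence predictions than unseen records, and an attacker with query access exploits that gap.

```python
import numpy as np

def membership_inference_baseline(model, records, labels, threshold=0.9):
    """Naive confidence-thresholding membership inference.

    `model` is any classifier exposing predict_proba() (scikit-learn style);
    `labels` are integer class indices. A record is guessed to be a training-set
    member if the model assigns its true label a probability above `threshold`.
    """
    probs = model.predict_proba(records)                  # shape: (n, n_classes)
    confidence = probs[np.arange(len(labels)), labels]    # P(true label) per record
    return confidence > threshold                         # True = "probably a member"

# In practice the attacker only needs query access (e.g. an ML-as-a-service API)
# and calibrates `threshold` using shadow models trained on similar data.
```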
4.1.2.2 Attacks on Models and Algorithms
- Physical Adversarial Example Attacks on Deep Learning Models. This is a well-known attack vector, where small perturbations in input can lead to a high rate of misclassification and mislead systems to potentially dangerous outcomes.9, 10 A minimal sketch follows this list.
- Model Extraction Attacks. Like membership inference attacks, model extraction attacks attempt to infer properties of a model through manipulation of input data and output analysis.11
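The sketch below, referenced in the first item above, illustrates the adversarial example idea with a plain binary logistic-regression model, chosen only because its input gradient can be written out by hand; the weights and inputs are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Fast Gradient Sign Method against a binary logistic-regression model.

    For loss L = -[y*log(p) + (1-y)*log(1-p)] with p = sigmoid(w.x + b),
    the input gradient is dL/dx = (p - y) * w. FGSM nudges x by epsilon
    in the direction of the sign of that gradient.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# x_adv = fgsm_perturb(x, y_true, weights, bias)
# A correctly classified input can flip to the wrong class after this tiny perturbation.
```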
Chain of assurance questions for security
Seeking assurance
- Have the technology developer, system integrator, and device provider considered security and vulnerabilities?
- Have the technology suppliers considered relevant trustworthy AI vulnerabilities during the design and development phase?
Providing detailed information
- Does the hardware design include a hardware Root of Trust?
- Does the system use secure lifecycle management to protect it throughout its design, deployment, and maintenance?
- What technology does the system use to ensure the integrity of data sets?
- How does the system protect the security of models and algorithms?
Safety
The Second of the Six Key Challenges
AI systems may pose new challenges to human safety, a key concern for regulators. Human safety must be a primary consideration in the design of any AI system.12
We distinguish a few key areas in assessing the safety of AI technologies:
- Predictability
- Reliability
- Controllability
- Security
The general topic of reliability engineering, which ensures that products operate as intended with defined performance characteristics, without failures, and under diverse but anticipated hostile environmental conditions, is not within scope here.
Controllability is a specific risk associated with AI systems, which are designed to perform actions without human intervention, and this risk is heightened in systems with the ability to self-repair, self-improve, or self-replicate.13 In this paper, controllability is understood in terms of system failure prevention (reliability) and ensuring outcomes are as intended (predictability), not in the general sense of the ability to control and contain a generic AI system.
Security is crucial to ensure that the AI system is safe from malicious actors. For more information about security, see section 4.1.
4.2.1 Predictability
The following areas strongly affect the predictability of an AI system:
Constraining the Outputs. An AI system is almost always implemented as part of a larger system or application, with other components or indeed other AI systems relying on its output. To achieve some measure of predictability, the outputs of an AI system must be bounded and designed to serve as input in a larger system.
Reproducibility. Predictability is also associated with the issue of reproducibility specific to an AI system. Should the system react in the same way when inputs are equivalent? The answer to this question has implications for how the entire system responds and is a key measurement for the predictability of the entire system (see the sketch below).
Access and Availability of Internal Tooling and Infrastructure. Lack of access to resources used by AI systems is cited as one of the main blocking points for predictability.14 For predictability, the relevant resources used by the system include the data and the code base of the frameworks used, as well as the hardware version and the associated software releases.
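To make the first two points concrete, here is a minimal sketch under stated assumptions (a model exposed as a plain scoring function): it bounds the model's output to the range a downstream system expects and pins the usual sources of nondeterminism.

```python
import os
import random
import numpy as np

def fix_seeds(seed=42):
    """Pin the common sources of nondeterminism so runs are reproducible."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    # Framework-specific seeds (torch, tensorflow, ...) would be set here too.

def bounded_output(model_fn, x, lower=0.0, upper=1.0):
    """Constrain a model's raw output to the interval the wider system expects."""
    raw = model_fn(x)
    return float(np.clip(raw, lower, upper))

fix_seeds(42)
# score = bounded_output(my_model.predict, features)   # always within [0.0, 1.0]
```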
4.2.2 Reliability
We deal specifically with the risk of unreliability in safety-critical applications. To prevent loss of reliability, we need to understand potential sources of failure. The following causes of failure were highlighted in recent research.15
Bad or inadequate data. Errors introduced through bad or inadequate data at development or deployment stage can lead to differential performance, to the extent that the data is not fit for purpose for certain cases.
Shifts in environment. Differences or shifts in environment between development and deployment can, again, lead to worse performance in unanticipated environments. This is where reproducibility and predictability mitigations are important considerations.
Faulty model assumptions and/or fragile models. Errors can be introduced both by faulty model assumptions and/or fragile models. For more information on recommendations for protecting models, see section 5.2.4.
Chain of assurance questions for safety
Seeking Assurance
- Have the technology developer, system integrator, and device provider considered safety and prevention of harms?
- Can this AI be approved for a high-risk or high-liability industry?
Providing Detailed Information
- What technologies does the system use to ensure the integrity of data sets?
- Does the system use attestable compute environments to ensure reproducibility?
- Does the device have a manual override or fallback mechanism?
Privacy
The Third of Six Key Challenges
AI systems are increasingly trained on highly sensitive personal data, both in centralized data lakes and on edge devices. Many modern machine learning techniques rely on access to large data sets. The more data that the data set contains, and the more attributes that each record in the data set possesses, the more useful a data set tends to be for machine learning purposes.
These data sets pose a privacy hazard, both during the machine learning training phase when sensitive data is pooled together — potentially on an untrusted device — and during inference, when a trained machine learning model can be “probed” by a malefactor to infer information about the data set used to train the model.
What are the latest state-of-the-art technologies and best practices that can balance the tension between context-relevant personalization and society’s concerns about mass surveillance and secondary use?
There are several ways to address the privacy hazards associated with large data sets. One approach is to modify the data set or limit the types of questions that can be asked about the underlying data in the data set, using anonymization techniques.
4.3.1 Anonymization
Naïve Data Anonymization
Data sets can be anonymized naïvely by removing attributes that appear to be particularly sensitive – such as names, addresses, dates of birth, and so on. However, several kinds of privacy attacks can be used to reconstruct or de-anonymize data that were manipulated using careless anonymization techniques:
- Linkage Attacks can be used to link records in an anonymized data set with records appearing in a public data set. These attacks can be surprisingly powerful: in one infamous example, a linkage attack was used to reveal the Governor of Massachusetts’ health records after an anonymized medical data set was linked against freely available voter registration data.16 In addition, background knowledge can also be used to de-anonymize data. Knowing that heart attacks occur at a reduced rate in Japanese patients, compared to other nationalities, could be used to narrow the range of values of an attribute in medical data sets.17
- Differencing Attacks use carefully constructed sets of complementary queries over data sets – even very large ones – to infer the attributes of private records. By issuing the complementary pair of queries, “How many people in this database are known to have cancer?” and, “How many people in this database, not named John Smith, have cancer?” an attacker can infer information about the health of John Smith without having to directly query for that information.
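The differencing attack described above can be reproduced in a few lines against any interface that only permits aggregate queries; the sketch below uses a small, hypothetical pandas DataFrame standing in for the database.

```python
import pandas as pd

# Hypothetical medical data set exposed only through aggregate (counting) queries.
db = pd.DataFrame({
    "name":   ["John Smith", "Alice", "Bob", "Carol"],
    "cancer": [True, False, True, False],
})

def count_with_cancer(frame):
    """The only 'query' the interface allows: how many rows have cancer?"""
    return int(frame["cancer"].sum())

q1 = count_with_cancer(db)                                # everyone
q2 = count_with_cancer(db[db["name"] != "John Smith"])    # everyone except John Smith

# The difference of two innocuous-looking aggregates reveals a single record.
print("John Smith has cancer:", (q1 - q2) == 1)
```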
Query Auditing Techniques
Query auditing techniques use explicit checks on data queries to try to gauge if the results of those queries can cause a privacy breach, before being applied to the data set. If a query can cause a privacy breach, it is blocked.
Query auditing techniques would appear to be a good first defense against certain types of differencing attack.
Lacking Confidence: only 4% of organizations are completely confident in their security policy when it comes to protecting third-party data. Source: Arm/Pulse 2021.
Unfortunately, this is not the case, as refusing to process a query in the context of a series of previous queries may itself reveal sensitive information about the underlying data set. Moreover, depending on the expressiveness of the underlying query language, detecting a potential breach of privacy from a series of queries may not even be computationally feasible.
Given the problems with naïve data anonymization, computer scientists have sought to give a precise definition of data privacy and an associated framework within which informative queries over data sets can be made without necessarily sacrificing privacy.
k-Anonymity
Early attempts at providing a framework within which data privacy could be evaluated, such as k-anonymity18 – and its many refinements such as t-closeness19 and l-diversity20 – focused on formalizing the naïve idea of anonymizing a data set by removing record attributes, as discussed above.
Intuitively speaking, a data set has the k-anonymity property – where k is a privacy parameter – if the record of any one individual appearing in the data set cannot be distinguished from the records of any other k – 1 individuals also appearing in the data set. This property, therefore, entails that any one individual in the data set has a form of “plausible deniability” with respect to the results of queries over the data set.
Data sets can be manipulated by removing attributes, or by making the range of values that an attribute can take less precise, so that the data set eventually satisfies the k-anonymity property for some desired privacy parameter, k. Unfortunately, k-anonymity and its variants are still subject to a range of privacy attacks.
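A minimal check of whether a table satisfies k-anonymity over its quasi-identifier columns follows directly from the definition; the column names and records below are hypothetical.

```python
import pandas as pd

def satisfies_k_anonymity(frame, quasi_identifiers, k):
    """True if every combination of quasi-identifier values is shared by >= k rows."""
    group_sizes = frame.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

records = pd.DataFrame({
    "zip_code":  ["13053", "13053", "13068", "13068"],
    "age_band":  ["20-29", "20-29", "30-39", "30-39"],
    "diagnosis": ["flu", "cancer", "flu", "flu"],   # sensitive attribute, not generalized
})

print(satisfies_k_anonymity(records, ["zip_code", "age_band"], k=2))  # True
```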
Differential Privacy
Differential privacy’s modern form21 was perfected by cryptographers and makes use of a modified form of a central concept in the theory of cryptography: indistinguishability.22 Whereas techniques such as k-anonymity try to protect sensitive data by manipulating a data set, differential privacy focuses on the queries, or generalized ‘algorithms’ that can be made about, or computed over, a data set.
The central observation behind differential privacy is inherently intuitive: an individual’s privacy cannot be compromised by an inadvertent release of data from a data set if that data set does not contain any data related to that individual. As a result, a query over a data set is differentially private if it is indistinguishable to an external observer whether the query was computed over a data set containing an individual’s data, or over a data set with the data removed.
Indistinguishability is achieved by adding carefully chosen random noise to the output of a query. Naturally, the amount of noise to add to the output of a query is a function of the data set itself. For example, if a data set contains information about only a single person, then the amount of statistical noise needed to achieve this indistinguishability property is necessarily much greater than is needed to mask the inclusion or exclusion of an individual’s data in a query over a data set containing data about all 500 million Europeans.
Differential privacy is now routinely deployed as a means of guaranteeing the privacy of individuals appearing in large data sets. For example, both Apple23 and Microsoft24 use variants of differential privacy to anonymize telemetry information originating from devices running their operating systems, and the US Census Bureau also uses differential privacy when aggregating population-wide statistics.25
However, despite real-world deployment, differential privacy is not a panacea and does not guarantee perfect privacy, but merely places an upper bound on the amount of information that leaks from a query over a data set. Moreover, significant amounts of noise may be required to obtain the indistinguishability property, making differential privacy inappropriate for some uses.
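To give a flavor of how the noise is added in practice, the following sketch implements the classic Laplace mechanism for a counting query, whose sensitivity is 1 because adding or removing one individual changes the count by at most one; the data and the choice of epsilon are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def private_count(records, predicate, epsilon=0.5):
    """Differentially private counting query: a count has sensitivity 1."""
    true_count = sum(1 for r in records if predicate(r))
    return laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)

# Example: an approximate count that masks any one individual's presence.
patients = [{"name": "Alice", "cancer": False}, {"name": "Bob", "cancer": True}]
print(private_count(patients, lambda r: r["cancer"], epsilon=0.5))
```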
4.3.2 Federated Learning
An alternative approach to addressing the privacy hazards associated with large machine learning data sets is to avoid collecting large pools of potentially sensitive data in one centralized data set. Federated learning26 is a distributed technique wherein a collection of nodes – for example, mobile phones, tablets, or edge devices – each possessing their own private data set, co-operate to build a combined machine learning model without explicitly exchanging records from their private data sets. Instead, each node trains a local model; these local models are then combined, either by a central server or in a decentralized fashion, to produce an aggregate machine learning model.
Note that this aggregate model is obtained without any private data set ever leaving its node.
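The core aggregation step, often called federated averaging, is simple enough to sketch: each node contributes only its locally trained weights and its example count, never its raw data. The local training function referenced in the comments is assumed, not shown.

```python
import numpy as np

def federated_average(local_weights, local_sizes):
    """Combine local model weights into a global model, weighted by data set size.

    local_weights: list of weight vectors (one per node); local_sizes: list of ints.
    Only weights leave each node; the private data sets never do.
    """
    total = sum(local_sizes)
    stacked = np.stack(local_weights)                    # shape: (n_nodes, n_params)
    coeffs = np.array(local_sizes, dtype=float) / total  # contribution of each node
    return coeffs @ stacked                              # weighted average of weights

# One round: each node trains locally (train_locally is assumed), then the server averages.
# updates = [train_locally(global_model, node.data) for node in nodes]
# global_model = federated_average([u.weights for u in updates], [u.n for u in updates])
```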
4.3.3. Cryptographic Techniques
Several cryptographic techniques can be used to guard privacy. Homomorphic encryption schemes allow computation to take place directly on encrypted ciphertexts, without requiring the data to first be decrypted. Using this technique, encrypted data can be freely shared with untrusted third parties for processing without sharing the data itself, or the results of the data processing thereafter.
In a machine learning context, private data originating from a device can be encrypted and transmitted to a central server, where inference takes place using pre-trained models. The result is then transmitted back to the originating device without any private data being revealed.
Secure multiparty computations27 allow a group of distrusting individuals to jointly compute a function over their private data sets, without revealing those data sets to each other. Many protocols for collaborative, privacy-preserving machine learning have been developed by cryptographers.28
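As a small illustration of the idea behind secure multiparty computation, the sketch below uses additive secret sharing to let several parties compute the sum of their private values without revealing any individual value; it omits the authenticated channels and malicious-party protections a real protocol would need.

```python
import secrets

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value, n_parties):
    """Split value into n random-looking shares that sum to value (mod MODULUS)."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def secure_sum(private_values):
    """Each party shares its value; summing the shares reveals only the total."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # Party i receives one share from every other party and publishes their sum.
    partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]
    return sum(partial_sums) % MODULUS

print(secure_sum([12, 7, 30]))  # 49, with no individual value ever revealed
```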
4.3.4 Hardware-Based Trusted Execution Environments
Hardware-based trusted execution environments (or TEEs) can be used to build systems that address many of the same use cases as the cryptographic techniques that we surveyed above: namely, the protection of data while in use, and when it is pooled amongst a group of mistrusting individuals as a means of computing a joint function over that combined data.
Naturally, the use of TEEs for this purpose has both disadvantages and advantages. To deploy software within a TEE, the user needs to first establish that the TEE is trustworthy. This typically involves an attestation protocol, which is used to verify the provenance of the hardware platform, and supporting firmware, that need to be trusted for the TEE to meet its security guarantees. The hardware, firmware, and attestation protocol need to be explicitly trusted to enable use of TEEs.
By contrast, with cryptographic techniques, one must only trust the correctness of the design and implementation of the underlying cryptographic primitive. To their advantage, systems built around TEEs can be more flexible, being easier to deploy and configure, and easier to understand, design, and program for programmers who are not experts in applied cryptography. Moreover, the use of hardware-based approaches can offer significant performance benefits, often being orders of magnitude faster than comparable cryptographic techniques and capable of handling much larger data sets, running at near-native speeds.
Chain of assurance questions for privacy
Seeking assurance
Providing detailed information
- If possible, does the system use edge processing to protect data privacy?
- Which techniques are being used to provide data privacy?
Fairness and Bias
The Fourth of the Six Key Challenges
The issue of what constitutes unfair bias will inevitably be context specific and will involve many wider factors, such as local culture and societal attitudes.
This white paper explores the broad classes of bias and looks at how and where bias is likely to enter. These classes can be used as a first step toward identifying the key questions AI developers should consider.
The key classes of bias identified in this paper are development and training bias, sample/data bias, outcome bias, and other biases, such as those arising in federated learning.
4.4.1 Development & Training Bias
The strategic or business intent should be properly – and fairly – reflected in the AI model development. The developer has the key role of reflecting the goal and objectives of the business and its strategic intent, creating a set of attributes that will then be used for AI training and inference. The choice of which attributes should be included is a major source of possible bias.
4.4.2 Sample/Data Bias
Bias can enter the data set as a result of the distribution of the sample data or of the character of the samples themselves.
Bias due to the data itself has been the focus of attention in the debate so far. There is also a risk that bias may relate to data access. For example, certain private data might be excluded from selected processes, which, had the data been accessible to the system, could have helped remove bias. For recommendations on how to protect the integrity of a data set, see section 5.1.
By extension, being able to get access to certain classes of data (for example, private data) could be the only way to verify if an AI is unbiased.
4.4.3 Outcome Bias
Two individuals with similar characteristics in respect of the metrics defined for a particular task should get a similar outcome. But even if an outcome bias is, at its root, linked to an implementation, training, or data-sample bias, a completely outcome-bias-free state is inherently not achievable, because AI systems are never perfectly accurate. If we want to ensure all groups and individuals are treated the same way, then algorithm developers must ensure that the probabilities of a false positive and a false negative for different groups are equal.
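A simple audit of this criterion can be computed directly from a model's predictions; the arrays and group labels below are hypothetical, and large gaps between groups' error rates indicate an outcome bias worth investigating.

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Return {group: (false positive rate, false negative rate)} for each group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        yt, yp = y_true[groups == g], y_pred[groups == g]
        fp = np.sum((yp == 1) & (yt == 0))
        fn = np.sum((yp == 0) & (yt == 1))
        rates[g] = (fp / max(np.sum(yt == 0), 1),   # FPR, guarding against empty groups
                    fn / max(np.sum(yt == 1), 1))   # FNR
    return rates

# print(error_rates_by_group(y_true, y_pred, demographic_group))
```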
4.4.4 Other Biases
In federated learning, there are many possible causes of bias:
- Bias introduced by the sampling of parties and how they are queried (e.g. network availability may bias how each party contributes).
- A model may be trained on a smaller, specific set of data with a strongly heterogeneous source (e.g. due to the geo-location of the different parties).
- The fusion algorithm, depending on how it weights the contribution from different parties, may further amplify or introduce bias.
The complexity is in the integration of models trained using heterogeneous data.
Chain of assurance questions for fairness and bias
Seeking Assurance
Providing Detailed Information
- Has the system integrator considered possible bias in the collection of data?
- Has the technology developer considered possible bias in the implementation of the data?
- Has the system user considered possible bias in the interpretation and use of the AI-based recommendation system?
Explainability
The Fifth of the Six Key Challenges
AI systems exhibit different levels of explainability. Some can effectively introspect and explain why decisions were made, others less so.
If an AI system’s accuracy and decision-making process cannot be understood by a human being, then it is hard to assess the potential risk for high-liability industries.
What evidence or documentation, produced during the problem definition, design, and development stages, improves interpretability?
There is a popular misconception that machine learning models are necessarily inscrutable black boxes. However, several classes of machine learning models can be examined introspectively and their reasoning explained.
Decision trees and other rule-based algorithms are the classic examples, producing machine learning models that are cascading chains of "if-then" rules.
The field of eXplainable AI (XAI) is steadily developing and introducing new machine learning algorithms for which model introspection and explainability is a first-class concern.29, 30
In a machine learning context, explainability is important for several reasons: it can be used to fully document the software engineering process, to deduce the data and the training regimen used during model learning, and to evaluate the overall system performance.
Moreover, a machine learning model that can explain its own reasoning is much easier to audit for compliance with relevant regulations. And, if the output of a machine learning model leads to bad or unwarranted outcomes in the real world, an explainable model can be used to pinpoint the reasoning that led to these outcomes, which can then be modified and fixed.
However, despite recent advances in XAI, many state-of-the-art machine learning techniques do, in fact, act as impenetrable black boxes to external observers. This may be because the training algorithm was not designed with model explainability in mind.
Alternatively, an algorithm could, in principle, be designed with explainability in mind, but the size of any learned model is so gargantuan, or otherwise complex, that the chances of comprehension of any explanation by a human are slim. Many modern deep learning techniques, such as convolutional neural networks (CNNs),31 long short-term memory networks (LSTM),32 and others, fall into this pattern.
Unfortunately, these techniques also represent the state of the art in several application areas of modern machine learning. We must therefore recognize that for the near future, there will be different levels of explainability for different machine learning systems. Careful analysis is needed to understand the appropriate level of explainability on an application-by-application basis, and the use of explainable models should be preferred where this is appropriate and possible.
Chain of Assurance Questions for Explainability
Seeking Assurance
- How did the AI system give this outcome, and what is the reasoning behind the decision?
- Why can this AI system be approved for a high-risk or high-liability industry?
Providing Detailed Information
- Has the system integrator or service provider considered the explainability requirements for the use case?
- Has the system integrator or service provider considered the model chosen in the context of explainability requirements?
Accountability
The Sixth of the Six Key Challenges
The development and deployment of AI systems inevitably involves many different subsystems and actors working together to achieve a common goal. Often these subsystems are designed and engineered by different teams within the same company, or even different companies. How can steps and decisions taken at each stage of problem definition, design, development, deployment, and operation be designed and logged to help establish liability and remediation?
During design, all stakeholders should be able to explain and justify development decisions to other system-design stakeholders. They should also be able to explore how decisions could ultimately impact the system. This helps to improve the overall system and to define the responsibilities of those involved.
After deployment, each element should be able to explain what happened and why a decision (good or bad) was made. Tamper-resistant immutable logs, sometimes called append-only logs, are one approach to solving this problem.
Systems can be designed so that details of all decisions made by the system, including details of appropriate inputs which led to the decision being made, are appended to these logs.
If an AI system goes awry, logs can be used by investigators in the same way as aircraft flight data recorders, or ‘black boxes’, to trace through the series of decisions and the stimuli that led to those decisions being made.
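A minimal sketch of such a tamper-evident, append-only decision log can be built from a hash chain, in which each entry commits to the one before it; a production system would add signing, replication, and secure storage.

```python
import hashlib
import json
import time

class AppendOnlyLog:
    """Hash-chained decision log: altering any past entry breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, decision, inputs):
        record = {
            "timestamp": time.time(),
            "decision": decision,
            "inputs": inputs,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)

    def verify(self):
        """Recompute the chain; returns False if any entry was modified."""
        prev = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True

log = AppendOnlyLog()
log.append(decision="loan_rejected", inputs={"score": 512, "model": "v1.3"})
assert log.verify()
```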
Chain of Assurance Questions for Accountability
Seeking Assurance
- Who is accountable for which part of an AI-based system or product, and the AI decision or outcome?
- How is this investigated if something goes wrong?
Providing Detailed Information
Recommendations
And Three Critical Ideas
For each trustworthy AI principle, we outlined the challenges facing its proper implementation and suggested some questions that stakeholders should be able to answer.
In this section, we suggest some possible recommendations to counter these challenges and questions.
We also welcome all other suggestions and recommendations from the industry that would form a better and more useful chain of assurance for trustworthy AI.
Most of the recommendations discussed in this paper fall within the following groups:
- Cybersecurity.
- Data provenance.
- Confidential computing.
Traditional cybersecurity was discussed earlier in section 4.1.1. More information about hardware security can also be found at the PSA Certified website. In this section, we discuss data provenance, confidential computing, and related recommendations in more detail.
The recommendations we give differ: some are well-recognized solutions (such as hardware TEEs), some require more research (for example, developing unbiased data sets). Recommendations for different principles often overlap — especially for security, safety, and privacy.
Trustworthy AI Principles and Recommendations: A Visual Representation
5.1 Data Provenance
Provenance or lineage metadata describes the modification history of a data set, or the origin and transformation history of data derived from a data set.
Traditionally, pools of data were centralized and assumed to be under the management of trustworthy authorities – dedicated database administrators who restricted who could add data to the data set and in what form.
With such curation, it was reasonable to assume that the data contained within any such data set was consistent, reliable, and had been vetted before being added, with trust in the content of the data set being largely implicit and unstated.33
Today, these assumptions no longer apply for a variety of reasons, but notably, because the rise of the internet has led to an explosion in data creation and synthesis of data sets.
Data is now constantly created and modified, giving rise to potentially huge, decentralized data sets with few guarantees around integrity or form.
Combined with the size increase in modern data sets, this has spurred the development of provenance-tracking techniques.
A significant body of work in this area has been carried out by the database research community, who aimed to answer questions about transformations of data by relating the inputs and outputs of these transformations.34
In particular, the database research community studies data provenance to understand the why, how, and where:
- Given the output of a data transformation or query, can we identify inputs to the transformation that explain why the output was produced?
- Can we track how input data was transformed to demonstrate how a particular output was obtained?
- Can we identify which data sets, or which records from a particular data set, contributed to the output of some query or data transformation? That is, can we identify where the inputs that contributed to an output originated?
Being able to answer these questions is also useful in a machine learning and AI context, especially during the training of machine learning models. For example, by understanding why a model produced an answer, which data sets the model was trained upon, and how the answer was produced, we are better able to debug the model when it produces unfavorable results and to pinpoint sources of bias when they are revealed.
Further, there is a reproducibility crisis in the machine learning community, wherein a large percentage of research papers presenting new machine learning algorithms and techniques cannot be reproduced by third parties. Provenance tracking has therefore been seen as a potential defense against this issue.
The integration of provenance tracking, and provenance metadata, into machine learning pipelines makes it easier to pinpoint exactly which data set was used in an experiment, how it was transformed, and so on, making it easier to replicate.
If machine learning models start to be regularly deployed in regulated domains – such as automotive and medicine – then this reproducibility is vital.
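In a machine learning pipeline, the provenance metadata itself can be as simple as a chain of content-hashed records describing each data set and the transformation that produced it. The fields below are a hypothetical minimal schema, not a standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    dataset_name: str
    content_hash: str      # the "where": hash of this data set's contents
    derived_from: list     # content hashes of parent data sets
    transformation: str    # the "how": script or query that produced it
    parameters: dict       # part of the "why": arguments to that transformation

def sha256_of(raw_bytes: bytes) -> str:
    return hashlib.sha256(raw_bytes).hexdigest()

raw = b"id,label,feature\n1,0,0.3\n"   # stand-in for the real data set bytes
record = ProvenanceRecord(
    dataset_name="training_data_v2",
    content_hash=sha256_of(raw),
    derived_from=["<hash of training_data_v1>"],
    transformation="dedupe_and_balance.py",
    parameters={"min_samples_per_class": 1000},
)
print(json.dumps(asdict(record), indent=2))
```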
75% of UK and US consumers said there should be more strict regulations on how companies use data to train AI without their knowledge, according to a poll by Audio Analytic. CEO Chris Mitchell discusses the data provenance challenge on Arm Blueprint.
5.1.1 Logging Transactions
To guarantee the quality and integrity of data according to data provenance recommendations, both the integrity of data sets and precisely what has been done with each piece of data, and why, must be logged. This log allows for auditing and potential accident diagnosis in the development, integration, and deployment stages.
Logging transactions also mitigates against membership inference attacks in the deployment stage. Transactions must be logged to detect and deter bad actors in the system.35
5.2 Confidential Computing
Confidential computing refers to a series of techniques where a computation, or the inputs to that computation, are protected from untrusted onlookers, who can neither observe nor interfere with the computation.
Cryptographic or hardware-based TEEs may be used to implement these protected computations, or a mixture of the two. In this section, we focus on the use of TEEs in confidential computing.
Notably, many facets of trustworthy AI can be implemented using trusted hardware. For example, TEEs can be used as a mechanism for ensuring data privacy due to the strong confidentiality and integrity guarantees they provide to loaded data and software.
Moreover, TEEs and associated remote attestation procedures can be used as primitives when implementing a provenance chain for data sets.
Several TEE technologies are now available for commodity hardware, including Arm TrustZone,36 Intel’s Software Guard Extensions (SGX),37 AMD’s Secure Encrypted Virtualization (SEV),38 and AWS Nitro Enclaves.39
Arm also has the Realm Management Extensions, part of Arm’s Confidential Compute Architecture (CCA). Some commonalities can be identified:
Strong isolation against a privileged attacker. All the above technologies aim to provide strong integrity and confidentiality guarantees for data and code against a privileged attacker. Arm TrustZone conceptually provides two ‘worlds’ – Secure and Non-Secure – with memory addresses tagged with their originating world.
This insulates code and data in the Secure world from privileged code, even the OS or hypervisor, executing in the Non-Secure world.
Similarly, Intel SGX provides a Secure Enclave which protects code and data from privileged code, including the OS, executing on the same machine.
Additionally, AMD SEV-protected virtual machines, Arm CCA Realms, and Intel SGX Secure Enclaves are backed by integrity-protected encrypted memory, providing some defense, even against physical attackers.
Support for remote attestation. Remote attestation protocols enable a device to authenticate its hardware and software configuration to a third party.
Intuitively, remote attestation protocols allow a skeptical challenger to obtain compelling cryptographic proof – via an attestation token – that a device is configured in a particular way.
For example, to ascertain that a certain piece of software known to the challenger is installed on the device, or the device has known good configuration options set.
Small trusted computing base (TCB). The technologies above aim to reduce the amount of code that is included in the TCB. Notably, this includes moving the ‘rich’ operating system and other privileged system code, such as a hypervisor, out of the TCB.
Numerous startups and small businesses are already offering privacy-preserving compute platforms built around TEEs, such as Cosmian40 in France, Decentriq41 in Switzerland, IoTeX in the United States,42 Scalys43 in the Netherlands, and SCONE44 in Germany.
Established companies, including major cloud providers like Microsoft Azure45 and AWS,46 are also offering access to TEEs, and major financial institutions such as Ant Financial47 in China and JP Morgan Chase in the US are exploring the use of strong isolation technology to protect customers’ data.48
Confidential computing is central to implementing the key areas of security, safety, and privacy in the trustworthy AI chain of assurance.
5.2.1 Hardware-Based Trusted Execution Environments
Using TEEs with attestable properties enables transparency and the reproduction of the compute environment for both development and deployment stages. TEEs can also be used to protect the confidentiality and integrity of sensitive data sets and machine learning models.
When pooling data or using potentially untrusted third-party devices to host a computation, and where advanced cryptographic techniques are inapplicable, we recommend using hardware TEEs to protect computations on user data.
Distributed systems built around TEEs are a pragmatic solution for strong privacy guarantees.
While TEE security and privacy guarantees fall short of pure cryptography, they are widely deployed, efficient, and easier for the average programmer to use.
5.2.2 Edge Processing
Use edge processing where possible to minimize the systemic risk of data pooling. In some distributed system designs that would otherwise pool data in a central location or share it with untrusted third parties, it may be possible to design the system so that raw data never leaves a user’s device, providing strong data privacy guarantees.
This is the case with federated learning discussed in section 4.3.2.
We recommend that designers of data-intensive distributed systems consider ways to limit the amount of data pooled in centralized services, and how to move more compute onto a user’s device, where possible.
5.2.3 Attestable Runtime
The ability to report, monitor, and correlate potential errors during operation is also a critical component of reliability.
Point-wise reliability, also known as real-time anomaly detection, is an increasingly important approach in the prevention as well as logging of catastrophic failures.
Implementing a computing infrastructure with attestable properties helps reduce potential attacks on models during use.
Remote attestation and runtime measurements enable transparency and more granular monitoring of the operational environment used during both development, as well as deployment stages.
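Attestation token formats and verification APIs differ between platforms, so the sketch below shows only the general shape of the relying party's final step: comparing claims extracted from a hypothetical, already signature-verified token against an expected policy.

```python
# Hypothetical claims as they might appear after a platform-specific attestation
# token has been cryptographically verified by the appropriate verifier service.
verified_claims = {
    "firmware_hash": "a1b2c3...",
    "security_lifecycle": "secured",
    "model_package_hash": "d4e5f6...",
}

expected_policy = {
    "firmware_hash": "a1b2c3...",        # known-good firmware measurement
    "security_lifecycle": "secured",     # device not in a debug or decommissioned state
    "model_package_hash": "d4e5f6...",   # the exact model we intend to run
}

def environment_is_trustworthy(claims, policy):
    """Accept the runtime only if every expected claim matches exactly."""
    return all(claims.get(key) == value for key, value in policy.items())

if not environment_is_trustworthy(verified_claims, expected_policy):
    raise RuntimeError("Attestation failed: refusing to deploy the model")
```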
5.2.4 Model Encryption and Decryption
As more models are deployed to the edge, model confidentiality is achieved not only by executing the model in an isolated environment. Model encryption and decryption on a per-device basis at the deployment stage are becoming increasingly important,49 especially in addressing the model extraction attacks mentioned earlier in section 4.1.
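A minimal sketch of per-device model protection with an authenticated cipher (the Python cryptography package's AES-GCM) is shown below. How the per-device key is provisioned and stored, ideally sealed by the device's root of trust or TEE, is assumed here and is the part that matters most in practice.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_model_for_device(model_bytes: bytes, device_key: bytes, device_id: str):
    """Encrypt a model blob so only the holder of device_key can use it."""
    nonce = os.urandom(12)                                # unique per encryption
    ciphertext = AESGCM(device_key).encrypt(nonce, model_bytes, device_id.encode())
    return nonce + ciphertext                             # ship alongside the application

def decrypt_model_on_device(blob: bytes, device_key: bytes, device_id: str) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    # Decryption fails if the blob was tampered with or targets another device.
    return AESGCM(device_key).decrypt(nonce, ciphertext, device_id.encode())

device_key = AESGCM.generate_key(bit_length=256)          # provisioned per device
blob = encrypt_model_for_device(b"serialized-model", device_key, "device-001")
assert decrypt_model_on_device(blob, device_key, "device-001") == b"serialized-model"
```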
5.2.5 Advanced Cryptographic Techniques
Given their strong security and privacy guarantees, use advanced cryptographic techniques for centralized data processing where viable.
In situations that require pooling data in a centralized location, or where computations require more computational power than is available on a user’s device, it may make sense to consider deploying advanced cryptographic techniques, such as Homomorphic Encryption and Secure Multi-party Computations.
While these have long been deemed too inefficient for widespread industrial adoption, many techniques are now reaching a state of maturity and can be deployed profitably in restricted situations.
5.3 Manual Override or Fallback Mechanism
The ability to fail gracefully, by having a fallback that can be relied on in the event of failure or unanticipated situations, is critically important as a last resort.
Where possible, human intervention should be requested. Otherwise, a deterministic, rule-based application or component (rather than a non-deterministic algorithm), held in a verified good state, needs to be able to take over certain functions. A fallback mechanism must be deterministic in nature, and an attestable, signed version of the compute environment, including certified firmware, would normally suffice to ensure reproducibility and provide the foundation for implementing such a fallback.
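A minimal sketch of this pattern, with hypothetical function names and thresholds: if the learned model is unavailable or insufficiently confident, control passes to a human or to a deterministic, rule-based fallback.

```python
CONFIDENCE_THRESHOLD = 0.8

def rule_based_fallback(sensor_reading):
    """Deterministic, verifiable rule: always errs on the side of safety."""
    return "stop" if sensor_reading["obstacle_distance_m"] < 5.0 else "proceed"

def decide(model, sensor_reading, allow_human=True):
    try:
        action, confidence = model.predict(sensor_reading)   # hypothetical model API
        if confidence >= CONFIDENCE_THRESHOLD:
            return action
    except Exception:
        pass  # treat model failure the same as low confidence
    if allow_human:
        return "request_human_intervention"
    return rule_based_fallback(sensor_reading)
```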
5.4 Careful Selection of Data Sets
Engineers and data scientists should keep in mind that machine learning models trained on biased data sets may produce inequitable outcomes once deployed. As a result, they must ensure that sources of potential bias are eliminated from their training data sets, for example by ensuring that the demographics of training data sets used to produce computer vision models are reflective of the wider society.
Coupling carefully considered and curated training data sets with a chain of provenance allows designers to further pinpoint the source of any observed inequity.
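A first-pass check of this kind can be automated by comparing the demographic make-up of a training set against reference proportions; the column name and reference figures below are purely illustrative.

```python
import pandas as pd

def demographic_skew(frame, column, reference_proportions):
    """Compare a training set's group proportions against a reference distribution."""
    observed = frame[column].value_counts(normalize=True)
    report = {}
    for group, expected in reference_proportions.items():
        report[group] = round(observed.get(group, 0.0) - expected, 3)
    return report   # positive = over-represented, negative = under-represented

training_data = pd.DataFrame({"age_band": ["18-30"] * 70 + ["31-60"] * 25 + ["60+"] * 5})
reference = {"18-30": 0.25, "31-60": 0.50, "60+": 0.25}
print(demographic_skew(training_data, "age_band", reference))
# {'18-30': 0.45, '31-60': -0.25, '60+': -0.2}
```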
Call To Action
The Next Steps for Responsible AI
Laying a Foundation for the Metaverse
In conclusion, developing a chain of assurance scheme would help gain trust in AI and consequently build the gateway to the metaverse.50 Whatever the precise shape of regulatory approaches, it is quite likely that such a scheme could play a key role in helping companies fully meet any future regulatory requirements.
But even without the stimulus of emerging regulatory interest, a chain of assurance would be an important step toward building trust. As a sector, we should not sit back and wait for regulation before acting ourselves.
In this paper, we have set out some of the key considerations. But we recognize that key aspects of a chain of assurance scheme remain unresolved.
Firstly, for it to be successful there must be some standardization in the way such assurances are given to ensure that they cover the same ground and address the right issues.
Secondly, we must decide whether the assurance will be in the form of a self-declaration by a company stating what it has done, or whether there will be a third-party verification or ‘endorsement.’
Both options may run simultaneously and be respectively appropriate for different use cases, with the higher risk use cases aiming for third-party involvement.
Most importantly, we need a critical mass of companies interested in exploring these ideas to work together. Arm’s success in launching PSA, which as explained above describes how companies can confirm they have addressed IoT security issues, shows that this kind of approach can work. As a first step, Arm is using this paper to engage our partners and others all over the world, including trade associations, in taking forward this concept for Trustworthy AI.
Appendix A: Regulator and Government Initiatives
In this section, we take a brief look at the principles and the frameworks on which various jurisdictions are trying to formulate their approach to promoting trustworthy AI. This includes a quick overview of the regulatory approaches of the EU, the US, and China.
A.1 European Union
In the EU, the European Commission published its wide-ranging proposals for AI regulation in April 2021. These proposals prohibit certain uses of AI, such as AI-based ‘social scoring’, real-time remote use of biometric recognition for law enforcement in public spaces (with some exceptions), and AI which is aimed at ‘manipulating behavior’ to limit free will. Other AI use cases are divided into the following categories:
• High-risk AI
o High-risk AI would require prior assessment of conformity in areas such as privacy protection, human in the loop, and so on.
o Examples: High risk includes transport, education, recruitment, and credit scoring (and the list could be extended).51
• Limited-risk AI
o The customer should simply be informed about the use of AI.
o Examples: Chatbots.
• Minimal-risk AI
o There are no formal requirements.
o Examples: Video games.
Earlier, the Commission had convened the EU High-Level Expert Group on Artificial Intelligence (AI-HLEG). This group concluded that trustworthy AI should be:
• Lawful, complying with all applicable laws and regulations.
• Ethical, ensuring adherence to ethical principles and values.
• Robust, both from a technical and social perspective.
The core relevant ethical principles for the AI-HLEG were: respect for human autonomy, prevention of harm, fairness, and explicability. To translate these concepts into practice, the EU’s approach is likely to focus on:
• Human agency and oversight
• Technical robustness and safety
• Privacy and data governance
• Transparency
• Diversity, non-discrimination, and fairness
• Societal and environmental wellbeing
• Accountability
A.2 United States
Federal government thinking on the role of government action is less focused on regulation than that of the European Commission. But the US government has not excluded regulatory interventions where needed and acknowledges the importance of building trustworthy AI.
In 2019 the president signed an executive order on AI which stressed the importance of standards, including standards for building trust. Further, in 2020 the US Congress passed the National AI Initiative Act,52 which among other things created the National AI Initiative Office to coordinate US government R&D and policy.
The National Institute for Standards and Technology (NIST) has been looking at AI trustworthiness. NIST is looking into how trust can be increased through the development and adoption of strong technical and non-technical standards in the areas of:
• Accuracy
• Explainability
• Resiliency
• Safety
• Reliability
• Objectivity
• Security
It is expected that recommendations for policy in those areas will be made in due course. NIST notes that significant standards work is being done in many of these areas, but that these standards will need to be revised and updated as the technology advances.
One of the focus areas NIST has discussed in its various AI workshops is to work on the development of technical standards and related tools to support reliable, robust, and trustworthy systems that use AI technologies, specifically:
- Data sets in standardized formats, including metadata for training, validation, and testing of AI systems.
- Tools for capturing and representing knowledge and reasoning in AI systems.
- Fully documented use cases.
- Benchmarks.
- Testing methodologies.
- Metrics to quantifiably measure and characterize AI technologies, including but not limited to aspects of hardware (at device/circuit/system level), trustworthiness (e.g. accuracy, explainability, safety, reliability, objectivity, and security).
- AI testbeds.
NIST is also in the process of developing an AI risk-management framework aimed at better managing potential risks to individuals, organizations, and society that could result from the broader use of AI.53 Further, the US federal government has created a central repository at ai.gov for AI-related activity.
A.3 China
The Chinese government is committed to the development of China’s AI industry and has been working on AI standards and ethical guidelines. Trustworthy AI, responsible AI, and AI ethics are key parts of their discussions.
In 2017, China’s State Council published the ‘New Generation AI Development Plan’.54 This set a strategic goal of drafting an initial approach to laws, regulations, and ethical norms related to AI. The aim is to have more comprehensive frameworks in place by 2030.
In 2019, the Ministry of Science and Technology (MoST) issued ’Development of Responsible AI: A New Generation of AI Governance Principles.’55 The AI governance principles provide a framework and action guidelines for AI governance, aiming to “ensure that AI is safe/secure, reliable, and controllable.”
The following eight principles of AI governance are proposed:
• Harmony and friendliness
• Fairness and justice
• Inclusiveness and sharing
• Respect for privacy
• Security and controllability
• Shared responsibility
• Open cooperation
• Agile governance
The New Generation AI Governance Expert Committee was established by the MoST in 2019 to research policy recommendations for AI governance and identify areas for international cooperation.56 Issues of focus include data monopoly, algorithm bias, abusive use of intelligence, deep fake, data poisoning, privacy protection, ethical norms, and inequality.
The Committee released a code of ethics for AI development, titled the ‘New Generation Artificial Intelligence Ethics Specifications,’ in September 2021. The document outlines six fundamental ethical principles for implementing and using AI technologies in society:
(1) improving human welfare
(2) promoting fairness and justice
(3) protecting privacy and security
(4) ensuring controllability and trustworthiness
(5) enhancing responsibility
(6) improving ethical literacy.
Under the principle of protecting privacy and security, the document states that AI users must be informed about how their data is being handled and must consent to or reject the usage of AI systems.
Personal data must be held in accordance with “the principles of lawfulness, fairness, necessity, and integrity.” The document also requires the security and transparency of the R&D and application of AI technologies.
Under the principle of “ensuring controllability and trustworthiness,” the document stresses that humans must “have full autonomous decision-making power and the right to choose whether to accept the services provided by AI and the right to withdraw from the interaction with an AI system at any time.”
To complement the policy work, China’s standards organizations work on proposals to guide and standardize the behavior of AI practitioners. TC260 (National Information Security Standardization Technical Committee), the main body for drafting technical standards for information security, produced a report in January 2021, ‘Cybersecurity Standard Practice Guide-Guidelines for Ethical Security Risk Prevention of Artificial Intelligence,’ which provides guidelines for AI ethics and ethical issues throughout the technology lifecycle, including research and development, design and manufacturing, deployment and application, and other related activities.57
Based on these guidelines, in August 2021 TC260 drafted the national recommended standard ‘Information Security Technology – Assessment Specification for Machine Learning Algorithms’ and made it available for public comment. The draft standard describes security requirements for machine learning algorithms throughout their lifecycle; confidentiality and privacy are included in the security assessment criteria.
The MIIT-affiliated China Electronics Standardization Association released its group standard ‘Information Technology - Artificial Intelligence - Risk Assessment Model’ for public consultation in July 2021. The proposed risk assessment model has three top-level risk factors: technical risk, application risk, and management risk. Technical risk is further broken down into data risk, algorithm model risk, and systematic risk.
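To make the shape of the proposed model concrete, the following minimal Python sketch models the taxonomy described above. The class and field names, the example scores, and the worst-case aggregation rule are illustrative assumptions only; the group standard itself defines the authoritative risk criteria and how they are evaluated.

```python
# Illustrative only: a minimal sketch of the three-factor risk structure described
# in the group standard. All names, the example scores, and the worst-case
# aggregation rule are hypothetical assumptions, not part of the standard.
from dataclasses import dataclass
from typing import Dict


@dataclass
class TechnicalRisk:
    # The group standard breaks technical risk into three sub-categories.
    data_risk: float = 0.0
    algorithm_model_risk: float = 0.0
    systematic_risk: float = 0.0

    def score(self) -> float:
        # Hypothetical aggregation: report the worst (highest) sub-risk.
        return max(self.data_risk, self.algorithm_model_risk, self.systematic_risk)


@dataclass
class RiskAssessment:
    # The three top-level risk factors: technical, application, and management risk.
    technical: TechnicalRisk
    application_risk: float = 0.0
    management_risk: float = 0.0

    def overall(self) -> Dict[str, float]:
        return {
            "technical": self.technical.score(),
            "application": self.application_risk,
            "management": self.management_risk,
        }


if __name__ == "__main__":
    assessment = RiskAssessment(
        technical=TechnicalRisk(data_risk=0.7, algorithm_model_risk=0.4, systematic_risk=0.2),
        application_risk=0.5,
        management_risk=0.3,
    )
    print(assessment.overall())  # {'technical': 0.7, 'application': 0.5, 'management': 0.3}
```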
In the same month, at the World Artificial Intelligence Conference (WAIC) in Shanghai, the Shanghai municipal government announced the establishment of the Shanghai Municipal AI Standardization Technical Committee. Standards related to information security and ethics are among the priorities of the committee’s near-term plan.
In addition, another MIIT-affiliated institute, the China Electronics Standardization Institute (CESI), stated in its ‘AI Standardization White Paper (2021)’ that it plans to formulate a standard for the technical requirements of machine learning systems for privacy protection.
In addition to government activities, there are also several Chinese state-affiliated industry associations and think tanks focusing on guidelines for trustworthy AI development:
• Beijing Academy of Artificial Intelligence (BAAI)58: In 2019, BAAI released the ‘Beijing AI Principles’ for the “research, development, use, governance, and long-term planning of AI.”59 “Be Ethical” is listed as one of the seven principles for the research and development of AI, calling for ethical design approaches to make AI systems trustworthy, i.e., “making the system as fair as possible, reducing discrimination and biases, improving its transparency, explainability, and predictability, and making the system more traceable, auditable and accountable, etc.” Last year, BAAI released a report on AI governance and ethics, advocating the ‘Beijing AI Principles’ in R&D, use, and governance.[7]
• China’s Artificial Intelligence Industry Alliance (AIIA)lvi: In 2019, AIIA released a draft joint pledge covering, among other things, the principles of secure and trustworthy AI, transparency and explainability, privacy protection, clear responsibilities, and diversity and inclusiveness. A year later, AIIA published the ‘Trustworthy AI Operation Guidelines (V0.5),’ providing practical guidance, based on those principles, for implementing trustworthy AI requirements via an “ethics by design” approach. Soon afterwards, AIIA published the ‘Management Measures for Trusted AI Demonstration Zone,’ planning pilot programs in China to promote trustworthy AI. In July 2021, AIIA announced the establishment of the AI Governance and Trustworthiness Committee. To date, AIIA’s efforts related to trustworthy AI have mainly involved the trustworthiness assessment of AI applications, such as facial recognition systems and robotic process automation (RPA).
• Tsinghua University: Tsinghua University founded the Institute for AI International Governance (AIIG) in April 2020. The former Vice Minister of the Ministry of Foreign Affairs, Ms. Ying Fu, is the institute’s honorary president. Over the past year, AIIG has established its first academic committee, consisting of 11 leading scholars from China and abroad, and has organized dozens of workshops, high-level conferences, and enterprise visits. It is currently undertaking research projects on AI governance commissioned by both the Chinese government and enterprises.
B.1 Technology Developer
An organization or individual involved in the development of technology suitable for multiple applications.
Examples of technology developers:
• Hardware manufacturers designing FPGAs or specialized chips used for inference or training.
• Data aggregators obtaining or producing curated and metadata-tagged data sets.
• Neural network model designers.
• Algorithm developers producing new mathematical methods for optimizing training or inference, or for improving model accuracy.
B.2 System Integrator
An organization or individual involved in designing and producing systems or products that are tailored for market- and sector-specific applications.
Examples of development by system integrators:
• Profiling and recommendation engines that suggest new movies, content, or advertising.
• Facial recognition CCTV system manufacturers.
• Credit scoring systems for loan applicants.
• Imaging-based cancer diagnostic systems.
• Robotics systems.
B.3 Service Provider
An organization or individual providing end-user facing services or products.
Examples of service providers:
• Video streaming services providing “watch next” recommendations and/or advertising to increase subscriber engagement and spending.
• Police force deploying criminal detection CCTV systems at train stations.
• Financial institutions using automated risk assessment systems to approve loan applications.
• Medical practitioners using automated diagnostic systems.
• Car manufacturer deploying automated robotics assembly lines.
B.4 End-User
An organization or individual impacted by the decisions made through the application of AI systems or products.
Examples of end-users:
• Consumers and subscribers to online video platforms.
• Members of the public traveling through train stations.
• Consumer credit loan applicants.
• Patients with tumor-like symptoms.
• Workers in a car assembly plant.
Appendix C: Chain of Compliance Assurance Models
Chains of assurance vary in their motivation, in the authority that grants a certification mark, and in the rigor and liability with which compliance is enforced.
C.1 Self-Declaration Branding and Initiative
Actor: A single company.
Motivation: Usually to improve the company’s brand reputation with its end consumers.
Certifying authority: A single company aiming to hold its supply chain and ecosystem partners to a specified standard.
Rigor and liability: The rigor for assurance varies, but often this is in the form of a self-declaration enforced with contractual terms and conditions.
C.2 Industry or Sector-Specific Standards
Actor: An industry consortium consisting of many companies.
Motivation: Aims to enable a common standard for scale and improved interoperability between products and suppliers.
Certifying authority: The authority granting the certification mark is usually an evaluation body appointed by the industry consortium. While the process is voluntary, suppliers that do not comply face significant market friction if the standard is widely recognized by consumers.
Rigor and liability: Assurance usually consists of compliance test suites, as well as interoperability tests run by independent test labs.
Examples: HDMI and HDCP.
C.3 Consumer Protection or Other Mandating Legislation
Actor: Government
Motivation: Governments impose baseline safety standards, usually for consumer protection, together with product liability and compliance guidelines, which are enshrined in legislation.
Certifying authority: Government.
Rigor and liability: Failure to comply can result in lawsuits and penalties as outlined in legislation.
Examples: CE compliance regime for electronic goods.
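Taken together, the supply chain roles in Appendix B and the compliance models above suggest a simple record that each actor could produce and pass along the chain. The Python sketch below is illustrative only: every class, enum, and field name is a hypothetical assumption, and no existing certification scheme is implied. It shows one possible way a declaration of identified risks and mitigations could be attached to each link and checked for completeness; in practice, the rigor applied to such a record would depend on which of the models in C.1–C.3 governs the relevant market.

```python
# Illustrative only: a hypothetical record format combining the supply chain roles
# of Appendix B with the compliance models of Appendix C. All names are assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Role(Enum):
    # Roles from Appendix B; end-users (B.4) are the impacted parties rather than
    # declarants, so they are not modeled as links in the chain.
    TECHNOLOGY_DEVELOPER = "technology developer"   # B.1
    SYSTEM_INTEGRATOR = "system integrator"         # B.2
    SERVICE_PROVIDER = "service provider"           # B.3


class AssuranceModel(Enum):
    # Compliance models from Appendix C.
    SELF_DECLARATION = "self-declaration"               # C.1
    INDUSTRY_STANDARD = "industry or sector standard"   # C.2
    LEGISLATION = "consumer protection legislation"     # C.3


@dataclass
class AssuranceDeclaration:
    # Hypothetical record: which actor, in which role, under which compliance model,
    # declares which risks it identified and how it addressed them.
    actor: str
    role: Role
    model: AssuranceModel
    identified_risks: List[str]
    mitigations: List[str]


@dataclass
class ChainOfAssurance:
    links: List[AssuranceDeclaration] = field(default_factory=list)

    def add(self, declaration: AssuranceDeclaration) -> None:
        self.links.append(declaration)

    def incomplete_links(self) -> List[str]:
        # Flag any actor that identified risks but declared no mitigations.
        return [d.actor for d in self.links if d.identified_risks and not d.mitigations]


if __name__ == "__main__":
    chain = ChainOfAssurance()
    chain.add(AssuranceDeclaration(
        actor="Example neural network model designer",
        role=Role.TECHNOLOGY_DEVELOPER,
        model=AssuranceModel.INDUSTRY_STANDARD,
        identified_risks=["training-data bias"],
        mitigations=["documented dataset provenance and bias testing"],
    ))
    print(chain.incomplete_links())  # [] -> every link has declared its mitigations
```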
The scope of this white paper is limited to current AI development practices. The paper does not cover more speculative ethical questions regarding AI, such as superintelligence or the singularity.
The Arm AI Trust Manifesto, published 2019.
‘World Economic Forum launches new global initiative to advance the promise of responsible artificial intelligence’, published 2021.
IBM Research Paper Feb 2019, FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity
https://arxiv.org/pdf/2004.07213.pdf
Ethics Guidelines for trustworthy AI, published 2019.
arXiv:2007.08745v3 [cs.CR] 26 Oct 2020.
arXiv:1807.09173.
arXiv:1707.08945
arXiv:1412.6572
arXiv:1805.02628
The Arm AI Trust Manifesto, published 2019.
https://jair.org/index.php/jair/article/view/12202/26642
arXiv:2003.00898
arXiv:1904.07204
Barth-Jones, Daniel. The ‘re-identification’ of Governor William Weld’s medical information: a critical re-examination of health data identification risks and privacy protections, then and now. 2012.
Machanavajjhala, Ashwin, and Kifer, Daniel, and Gehrke, Johannes. L-Diversity: privacy beyond k-anonymity. 2007.
Samarati, Pierangela, and Sweeney, Latanya. Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. 1998.
Li, Ninghui, and Li, Tiancheng, and Venkatasubramanian, Suresh. T-Closeness: privacy beyond k-anonymity and l-diversity. 2007.
Machanavajjhala, Ashwin, and Kifer, Daniel, and Gehrke, Johannes. L-Diversity: privacy beyond k-anonymity. 2007.
Dwork, Cynthia, and McSherry, Frank, and Nissim, Kobbi, and Smith, Adam. Calibrating noise to sensitivity in private data analysis. 2006.
Computational indistinguishability. See: https://en.wikipedia.org/wiki/Computational_indistinguishability or any standard textbook on the theory of cryptography.
Apple. Differential privacy overview. See: https://www.apple.com/privacy/docs/Differential_Privacy_Overview.pdf. Accessed 6th April 2021.
Ding, Bolin, and Kulkarni, Jana, and Yekhanin, Sergey. Collecting telemetry data privately. 2017.
Hawes, Michael B. Implementing differential privacy: seven lessons from the 2020 United States census. 2020.
McMahan, H. Brendan, and Moore, Eider, and Ramage, Daniel, and Hampson, Seth, and Agüera y Arcas, Blaise. Communication-efficient learning of deep networks from decentralized data. 2016.
Yao, Andrew Chi-Chih. Protocols for secure computations. 1982.
See, for example, Gascón, Adrià, and Schoppmann, Phillipp, and Balle, Borja, and Raykova, Mariana, and Doerner, Jack, and Zahur, Samee, and Evans, David. Privacy-Preserving Distributed Linear Regression on High-Dimensional Data. 2016.
The Arm AI Trust Manifesto, published 2019.
Ribeiro, Marco Tulio, and Singh, Sameer, and Guestrin, Carlos. Local interpretable model-agnostic explanations. https://arxiv.org/abs/1602.04938
Miller, Tim. Contrastive explanation: a structural-model approach. https://arxiv.org/abs/1811.03163
Homma, Toshiteru, and Atlas, Les, and Marks II, Robert. An artificial neural network for spatio-temporal bipolar patterns: application to phoneme classification. 1988.
Hochreiter, Sepp, and Schmidhuber, Juergen. Long short-term memory. 1997.
Lynch, Clifford A. When documents deceive: trust and provenance as new factors for information retrieval in a tangled web. 2001.
Cheney, James, and Chiticariu, Laura, and Tan, Wang-Chiew. Provenance in databases: why, how, and where. 2009.
Verifiable Data Audit https://deepmind.com/blog/article/trust-confidence-verifiable-data-audit
Arm TrustZone.
Intel Software Guard Extensions. Accessed 6th April 2021.
AMD Secure Encrypted Virtualization. See: https://developer.amd.com/sev/. Accessed 6th April 2021.
Amazon AWS Nitro Enclaves. See: https://aws.amazon.com/ec2/nitro/nitro-enclaves/. Accessed 6th April 2021.
Arm Confidential Compute Architecture. See: https://www.arm.com/architecture/security-features/arm-confidential-compute-architecture
Cosmian. See: https://cosmian.com. Accessed 6th April 2021.
Decentriq. See: https://decentriq.com. Accessed 6th April 2021.
IOTEX. Accessed 6th April 2021.
Scalys. See: https://scalys.com. Accessed 6th April 2021.
SCONE. See: https://scontain.com. Accessed 6th April 2021.
Microsoft Azure confidential computing. See: https://azure.microsoft.com/en-gb/solutions/confidential-compute/. Accessed 6th April 2021.
Amazon AWS Nitro Enclaves. See: https://aws.amazon.com/ec2/nitro/nitro-enclaves/. Accessed 6th April 2021.
Shen, Youren, and Tian, Hongliang, and Chen, Yu, and Chen, Kang, and Wang, Runji, and Xu, Yi, and Xia, Yubin, and Yan, Shoumeng. Occlum: secure and efficient multitasking inside a single enclave of Intel SGX. 2020.
Silicon Angle. Google debuts Confidential VMs that keep data encrypted while it’s in use. Accessed 6th April 2021.
https://community.arm.com/developer/ip-products/processors/b/ml-ip-blog/posts/using-psa-security-toolbox-to-protect-ml-on-the-edge
https://www.arm.com/blogs/blueprint/metaverse
Ethics Guidelines for trustworthy AI, published 2019.
National Artificial Intelligence Initiative Act of 2020, 116th Congress (2019-2020).
https://www.nist.gov/itl/ai-risk-management-framework
The New Generation AI Development Plan, published by China’s State Council, July 8, 2017.
The Development of Responsible AI: A New Generation of AI Governance Principles published by the Ministry of Science and Technology (MoST), June 17, 2019.
On March 28, 2019, the MoST held the first meeting of the New Generation AI Governance Expert Committee, chaired by Lan Xue, Dean of Schwarzman College, Tsinghua University.
The Cybersecurity Standard Practice Guide – Guidelines for Ethical Security Risk Prevention of Artificial Intelligence, released by TC260, January 5, 2021.
The Beijing Academy of Artificial Intelligence is guided and supported by the MoST and Beijing municipal government.
The Beijing AI Principles released by the BAAI, May 28, 2019.
The Joint Pledge on Artificial Intelligence Industry Self Discipline (Draft for Comment) released by the AIIA, May 31, 2019.
The Trustworthy AI Operation Guidelines (V0.5) and the Management Measures for Trusted AI Demonstration Zone, published by AIIA, August 2020.
Non-Confidential Proprietary Notice
This document is protected by copyright and other related rights and the practice or implementation of the information contained in this document may be protected by one or more patents or pending patent applications. No part of this document may be reproduced in any form by any means without the express prior written permission of Arm. No license, express or implied, by estoppel or otherwise to any intellectual property rights is granted by this document unless specifically stated.
Your access to the information in this document is conditional upon your acceptance that you will not use or permit others to use the information for the purposes of determining whether implementations infringe any third party patents.
THIS DOCUMENT IS PROVIDED “AS IS”. ARM PROVIDES NO REPRESENTATIONS AND NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY, NON-INFRINGEMENT OR FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE DOCUMENT. For the avoidance of doubt, Arm makes no representation with respect to, has undertaken no analysis to identify or understand the scope and content of, patents, copyrights, trade secrets, or other rights.
This document may include technical inaccuracies or typographical errors.
TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL ARM BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF ARM HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
This document consists solely of commercial items. You shall be responsible for ensuring that any use, duplication or disclosure of this document complies fully with any relevant export laws and regulations to assure that this document or any portion thereof is not exported, directly or indirectly, in violation of such export laws. Use of the word “partner” in reference to Arm's customers is not intended to create or refer to any partnership relationship with any other company. Arm may make changes to this document at any time and without notice.
This document may be translated into other languages for convenience, and you agree that if there is any conflict between the English version of this document and any translation, the terms of the English version of the Agreement shall prevail.
The Arm corporate logo and words marked with ® or ™ are registered trademarks or trademarks of Arm Limited (or its affiliates) in the US and/or elsewhere. All rights reserved. Other brands and names mentioned in this document may be the trademarks of their respective owners. Please follow Arm's trademark usage guidelines at https://www.arm.com/company/policies/trademarks.
Copyright © 2021-2022 Arm Limited (or its affiliates). All rights reserved.
Arm Limited. Company 02557590 registered in England.
110 Fulbourn Road, Cambridge, England CB1 9NJ.
(LES-PRE-20349)
Confidentiality Status
This document is Non-Confidential. The right to use, copy and disclose this document may be subject to license restrictions in accordance with the terms of the agreement entered into by Arm and the party that Arm delivered this document to. Unrestricted Access is an Arm internal classification.
Web Address
developer.arm.com
Inclusive language commitment
Arm values inclusive communities. Arm recognizes that we and our industry have used language that can be offensive. Arm strives to lead the industry and create change. We believe that this document contains no offensive language. To report offensive language in this document, email terms@arm.com.