In this section, we take a brief look at the principles and frameworks on which various jurisdictions are formulating their approaches to promoting trustworthy AI. This includes a quick overview of the regulatory approaches of the EU, the US, and China.
A.1 European Union
In the EU, the European Commission published its wide-ranging proposals for AI regulation in April 2021. These proposals prohibit certain uses of AI, such as AI-based ‘social scoring’, real-time remote biometric identification for law enforcement in public spaces (with some exceptions), and AI aimed at ‘manipulating behavior’ in ways that limit free will. Other AI use cases are divided into the following categories:
• High-risk AI
  o Requires a prior conformity assessment in areas such as privacy protection, human-in-the-loop oversight, and so on.
  o Examples: transport, education, recruitment, and credit scoring (and the list could be extended).51
• Limited-risk AI
  o The customer simply needs to be informed that AI is being used.
  o Examples: chatbots.
• Minimal-risk AI
  o No formal requirements apply.
  o Examples: video games.
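To make the tiered structure concrete, the sketch below shows how a compliance team might encode the proposal's risk tiers and the obligations each tier triggers. This is purely illustrative: the Python names, the use-case mapping, and the default behavior for unknown use cases are our own assumptions, not part of the EU text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative encoding of the risk categories in the EU proposal."""
    PROHIBITED = "prohibited"  # e.g. social scoring, behavior manipulation
    HIGH = "high"              # prior conformity assessment required
    LIMITED = "limited"        # transparency obligation only
    MINIMAL = "minimal"        # no formal requirements

# Hypothetical mapping of use cases to tiers, based on the examples above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "credit_scoring": RiskTier.HIGH,
    "recruitment":    RiskTier.HIGH,
    "chatbot":        RiskTier.LIMITED,
    "video_game":     RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.PROHIBITED: "Deployment is not permitted.",
    RiskTier.HIGH: "Carry out a prior conformity assessment (privacy, human-in-the-loop, ...).",
    RiskTier.LIMITED: "Inform the customer that AI is in use.",
    RiskTier.MINIMAL: "No formal requirements.",
}

def obligation_for(use_case: str) -> str:
    """Look up a use case's tier; treat unclassified use cases as high-risk."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligation_for("chatbot"))           # transparency obligation
    print(obligation_for("smart_thermostat"))  # unknown: treated as high-risk
```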
Earlier, the Commission had convened the EU High-Level Expert Group on Artificial Intelligence (AI-HLEG). This group concluded that trustworthy AI should be:
• Lawful, complying with all applicable laws and regulations.
• Ethical, ensuring adherence to ethical principles and values.
• Robust, both from a technical and a social perspective.
The core relevant ethical principles for the AI-HLEG were: respect for human autonomy, prevention of harm, fairness, and explicability. To translate these concepts into practice, the EU’s approach is likely to focus on:
• Human agency and oversight
• Technical robustness and safety
• Privacy and data governance
• Transparency
• Diversity, non-discrimination, and fairness
• Societal and environmental wellbeing
• Accountability
A.2 United States
US federal government thinking on the role of government action is less focused on regulation than that of the European Commission. However, the US government has not ruled out regulatory intervention where needed, and it acknowledges the importance of building trustworthy AI.
In 2019, the US president signed an executive order on AI that stressed the importance of standards, including standards for building trust. Further, in 2020 the US Congress passed the National AI Initiative Act,52 which among other things created the National AI Initiative Office to coordinate US government R&D and policy.
The National Institute of Standards and Technology (NIST) has been examining AI trustworthiness. NIST is looking into how trust can be increased through the development and adoption of strong technical and non-technical standards in the areas of:
• Accuracy
• Explainability
• Resiliency
• Safety
• Reliability
• Objectivity
• Security
It is expected that recommendations for policy in those areas will be made in due course. NIST notes that significant standards work is being done in many of these areas, but that these standards will need to be revised and updated as the technology advances.
One of the focus areas NIST has discussed in its various AI workshops is the development of technical standards and related tools to support reliable, robust, and trustworthy systems that use AI technologies.
NIST is also in the process of developing an AI risk-management framework aimed at better managing potential risks to individuals, organizations, and society that could result from the broader use of AI.53 Further, the US federal government has created a central repository at ai.gov for AI-related activity.
A.3 China
The Chinese government is committed to the development of China’s AI industry and has been working on AI standards and ethical guidelines. Trustworthy AI, responsible AI, and AI ethics are key parts of these discussions.
In 2017, China’s State Council published the ‘New Generation AI Development Plan’.54 This set a strategic goal of drafting an initial approach to laws, regulations, and ethical norms related to AI, with the aim of having more comprehensive frameworks in place by 2030.
In 2019, the Ministry of Science and Technology (MoST) issued ‘Development of Responsible AI: A New Generation of AI Governance Principles’.55 The AI governance principles provide a framework and action guidelines for AI governance, aiming to “ensure that AI is safe/secure, reliable, and controllable.”
The following eight principles of AI governance are proposed:
• Harmony and friendliness
• Fairness and justice
• Inclusiveness and sharing
• Respect for privacy
• Security and controllability
• Shared responsibility
• Open cooperation
• Agile governance
The New Generation AI Governance Expert Committee was established by the MoST in 2019 to research policy recommendations for AI governance and identify areas for international cooperation.56 Issues of focus include data monopoly, algorithm bias, abusive use of intelligence, deepfakes, data poisoning, privacy protection, ethical norms, and inequality. In September 2021, the Committee released a code of ethics for AI development titled the ‘New Generation Artificial Intelligence Ethics Specifications.’ The document outlines six fundamental ethical principles for implementing and using AI technologies in society:
1. Improving human welfare
2. Promoting fairness and justice
3. Protecting privacy and security
4. Ensuring controllability and trustworthiness
5. Enhancing responsibility
6. Improving ethical literacy
Under the principle of protecting privacy and security, the document states that AI users must be informed about how their data is being handled and must consent to or reject the usage of AI systems.
Personal data must be handled in accordance with “the principles of lawfulness, fairness, necessity, and integrity.” The document also calls for security and transparency in the R&D and application of AI technologies.
Under the principle of “ensuring controllability and trustworthiness,” the document stresses that humans must “have full autonomous decision-making power and the right to choose whether to accept the services provided by AI and the right to withdraw from the interaction with an AI system at any time.”
To complement the policy work, China’s standards organizations are working on proposals to guide and standardize the behavior of AI practitioners. TC260 (the National Information Security Standardization Technical Committee), the main body for drafting technical standards for information security, produced a report in January 2021, ‘Cybersecurity Standard Practice Guide – Guidelines for Ethical Security Risk Prevention of Artificial Intelligence,’ which provides guidelines for AI ethics and ethical issues throughout the technology lifecycle, including research and development, design and manufacturing, deployment and application, and other related activities.57
Based on these guidelines, in August 2021, TC260 drafted the recommended national standard, ‘Information security technology – Assessment specification for machine learning algorithms,’ and made it available for public comment. The draft standard describes the security requirements for machine learning algorithms throughout their lifecycle; confidentiality and privacy are included in the security assessment criteria. The MIIT-affiliated China Electronics Standardization Association released its group standard ‘Information Technology – Artificial Intelligence – Risk Assessment Model’ for public consultation in July 2021. The risk assessment model proposed by the group standard has three risk factors: technical risk, application risk, and management risk. Technical risk is further broken down into data risk, algorithm model risk, and systematic risk.
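The three-factor structure of the draft risk assessment model lends itself to a simple hierarchical representation. The following Python sketch is our own illustration of that hierarchy; the 0-to-1 scores and the equal weighting are assumptions for illustration, not something the group standard prescribes.

```python
from dataclasses import dataclass

@dataclass
class TechnicalRisk:
    """Sub-factors of technical risk named in the draft model (scores assumed in [0, 1])."""
    data_risk: float
    algorithm_model_risk: float
    systematic_risk: float

    def score(self) -> float:
        # Equal weighting is an assumption made here for simplicity.
        return (self.data_risk + self.algorithm_model_risk + self.systematic_risk) / 3

@dataclass
class RiskAssessment:
    """Top-level factors of the draft risk assessment model."""
    technical: TechnicalRisk
    application_risk: float
    management_risk: float

    def overall(self) -> float:
        return (self.technical.score() + self.application_risk + self.management_risk) / 3

# Example assessment with made-up scores.
assessment = RiskAssessment(
    technical=TechnicalRisk(data_risk=0.4, algorithm_model_risk=0.6, systematic_risk=0.2),
    application_risk=0.5,
    management_risk=0.3,
)
print(f"Overall risk score: {assessment.overall():.2f}")  # prints 0.40
```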
In July 2021, at the World Artificial Intelligence Conference (WAIC) in Shanghai, the Shanghai municipal government announced the establishment of the Shanghai Municipal AI Standardization Technical Committee. Standards related to information security and ethics are among the priorities of the committee’s near-term plan.
Another MIIT-affiliated institute, the China Electronics Standardization Institute (CESI), stated in its ‘AI Standardization White Paper (2021)’ that it plans to formulate a standard for the technical requirements of machine learning systems for privacy protection. In addition to government activities, several Chinese state-affiliated industry associations and think tanks are focusing on guidelines for trustworthy AI development:
• Beijing Academy of Artificial Intelligence (BAAI)58: In 2019, BAAI released the ‘Beijing AI Principles’ for the “research, development, use, governance, and long-term planning of AI.”59 “Be Ethical” is one of the seven principles for the research and development of AI, calling for ethical design approaches to make AI systems trustworthy, i.e., “making the system as fair as possible, reducing discrimination and biases, improving its transparency, explainability, and predictability, and making the system more traceable, auditable and accountable, etc.” Subsequently, BAAI released a report on AI governance and ethics, advocating the Beijing AI Principles in R&D, use, and governance.[7]
• China’s Artificial Intelligence Industry Alliance (AIIA)lvi: In 2019, AIIA released a draft joint pledge covering, among other things, the principles of secure and trustworthy AI, transparency and explainability, privacy protection, clear responsibilities, and diversity and inclusiveness. A year later, AIIA published the ‘Trustworthy AI Operation Guidelines (V0.5),’ providing practical guidance for executing the trustworthy AI requirements via an “ethics by design” approach. Soon after, AIIA published the ‘Management Measures for Trusted AI Demonstration Zone,’ planning pilot programs in China to promote trustworthy AI. In July 2021, AIIA announced the establishment of the AI Governance and Trustworthiness Committee. To date, AIIA’s efforts related to trustworthy AI have mainly involved trustworthiness assessments of AI applications, such as facial recognition systems and robotic process automation (RPA).
• Tsinghua University: Tsinghua University founded the Institute for AI International Governance (AIIG) in April 2020. Ms. Ying Fu, a former Vice Minister of Foreign Affairs, is the institute’s honorary president. Over the past year, AIIG established its first academic committee, consisting of 11 leading scholars from China and abroad, and organized dozens of workshops, high-level conferences, and enterprise visits. It is currently undertaking research projects on AI governance commissioned by both the Chinese government and enterprises.
B.1 Technology Developer
An organization or individual involved in the development of technology suitable for multiple applications. Examples of technology developers:
• Hardware manufacturers designing FPGAs or specialized chips used in inferencing or training.
• Data aggregators obtaining or producing curated and metadata-tagged data sets.
• Neural network model designers.
• Algorithm developers producing new mathematical methods for optimizing training/inferencing, or for improving model accuracy.
B.2 System Integrator
An organization or individual involved in designing and producing systems or products that are tailored for market- and sector-specific applications. Examples of development by system integrators:
• Profiling and recommendation engines that suggest new movies, content, or advertising.
• Facial recognition CCTV system manufacturers.
• Credit scoring systems for loan applicants.
• Imaging-based cancer diagnostic systems.
• Robotics systems.
B.3 Service Provider
An organization or individual providing end-user facing services or products. Examples of service providers:
• Video streaming services providing “watch next” suggestions and/or advertising to increase the engagement and spending of their subscribers.
• Police forces deploying criminal detection CCTV systems at train stations.
• Financial institutions using automated risk assessment systems to approve loan applications.
• Medical practitioners using automated diagnostic systems.
• Car manufacturers deploying automated robotics assembly lines.
B.4 End-User
An organization or individual impacted by decisions made through the application of AI systems or products. Examples of end-users:
• Consumers and subscribers to online video platforms.
• Members of the public traveling through train stations.
• Consumer credit loan applicants.
• Patients with tumour-like symptoms.
• Workers in a car assembly plant.
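The four roles above form a supply chain in which artifacts flow from technology developers toward end-users. The short sketch below models that chain for the credit-scoring example; every name in it is hypothetical, and the structure is only one way of representing the relationships.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    role: str  # technology developer, system integrator, service provider, or end-user

@dataclass
class Link:
    upstream: Actor
    downstream: Actor
    artifact: str  # what the upstream party supplies to the downstream one

# Hypothetical chain for the credit-scoring example used in B.1 to B.4.
chip_maker = Actor("ChipCo", "technology developer")
integrator = Actor("ScoreSys", "system integrator")
bank       = Actor("RetailBank", "service provider")
applicant  = Actor("loan applicant", "end-user")

chain = [
    Link(chip_maker, integrator, "inference hardware and model tooling"),
    Link(integrator, bank, "credit-scoring product"),
    Link(bank, applicant, "automated loan decision"),
]

for link in chain:
    print(f"{link.upstream.name} -> {link.downstream.name}: {link.artifact}")
```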
Chains of assurance vary in their motivation, in the authority that grants a certification mark, and in the rigor and liability to which participants are held.
C.1 Self-Declaration Branding and Initiative
Actor: A single company.
Motivation: Usually better brand reputation with end consumers.
Certifying authority: The company itself, which aims to hold its supply chain and ecosystem partners to a specified standard.
Rigor and liability: The rigor of assurance varies, but it often takes the form of a self-declaration enforced through contractual terms and conditions.
C.2 Industry or Sector-Specific Standards
Actor: An industry consortium consisting of many companies.
Motivation: To enable a common standard for scale and improved interoperability between products and suppliers.
Certifying authority: Usually an evaluation body appointed by the industry consortium. While the process is voluntary, there is significant market friction if the standard is generally recognized by consumers.
Rigor and liability: Assurance usually consists of compliance test suites, as well as interoperability tests through independent test labs.
Examples: HDMI and HDCP.
C.3 Consumer Protection or Other Mandating Legislation
Actor: Government.
Motivation: Governments impose baseline safety standards, usually for consumer protection, with product liability and compliance guidelines enshrined in legislation.
Certifying authority: Government.
Rigor and liability: Failure to comply can result in lawsuits and penalties as outlined in legislation.
Examples: The CE compliance regime for electronic goods.
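The three models above differ in actor, motivation, certifying authority, and rigor, but they share a common shape. As a closing illustration, the sketch below captures that shared schema in Python; the field values simply restate the descriptions in C.1 to C.3, and the type and its fields are our own naming.

```python
from dataclasses import dataclass

@dataclass
class AssuranceScheme:
    """Common attributes of a chain of assurance (see C.1 to C.3)."""
    actor: str
    motivation: str
    certifying_authority: str
    rigor_and_liability: str

SCHEMES = [
    AssuranceScheme(
        actor="a single company",
        motivation="better brand reputation with end consumers",
        certifying_authority="the company itself",
        rigor_and_liability="self-declaration enforced through contractual terms",
    ),
    AssuranceScheme(
        actor="an industry consortium (e.g. HDMI, HDCP)",
        motivation="a common standard for scale and interoperability",
        certifying_authority="an evaluation body appointed by the consortium",
        rigor_and_liability="compliance and interoperability test suites via independent labs",
    ),
    AssuranceScheme(
        actor="government (e.g. the CE regime)",
        motivation="baseline safety and consumer protection",
        certifying_authority="government",
        rigor_and_liability="lawsuits and penalties as outlined in legislation",
    ),
]

for s in SCHEMES:
    print(f"{s.actor}: certified by {s.certifying_authority}")
```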
The scope of this white paper is limited to current AI development practices. The paper does not cover more speculative ethical questions regarding AI, such as superintelligence or the singularity.
References
• The Arm AI Trust Manifesto, published 2019.
• ‘World Economic Forum launches new global initiative to advance the promise of responsible artificial intelligence’, published 2021.
• IBM Research, February 2019. FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity.
• https://arxiv.org/pdf/2004.07213.pdf
• Ethics Guidelines for Trustworthy AI, published 2019.
• arXiv:2007.08745v3 [cs.CR], 26 Oct 2020.
• arXiv:1807.09173.
• arXiv:1707.08945.
• arXiv:1412.6572.
• arXiv:1805.02628.
• The Arm AI Trust Manifesto, published 2019.
• https://jair.org/index.php/jair/article/view/12202/26642
• arXiv:2003.00898.
• arXiv:1904.07204.
• Barth-Jones, Daniel. The ‘re-identification’ of Governor William Weld’s medical information: a critical re-examination of health data identification risks and privacy protections, then and now. 2012.
• Machanavajjhala, Ashwin, and Kifer, Daniel, and Gehrke, Johannes. L-Diversity: privacy beyond k-anonymity. 2007.
• Samarati, Pierangela, and Sweeney, Latanya. Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. 1998.
• Li, Ninghui, and Li, Tiancheng, and Venkatasubramanian, Suresh. T-Closeness: privacy beyond k-anonymity and l-diversity. 2007.
• Machanavajjhala, Ashwin, and Kifer, Daniel, and Gehrke, Johannes. L-Diversity: privacy beyond k-anonymity. 2007.
• Dwork, Cynthia, and McSherry, Frank, and Nissim, Kobbi, and Smith, Adam. Calibrating noise to sensitivity in private data analysis. 2006.
• Computational indistinguishability. See: https://en.wikipedia.org/wiki/Computational_indistinguishability or any standard textbook on the theory of cryptography.
• Apple. Differential privacy overview. See: https://www.apple.com/privacy/docs/Differential_Privacy_Overview.pdf. Accessed 6th April 2021.
• Ding, Bolin, and Kulkarni, Janardhan, and Yekhanin, Sergey. Collecting telemetry data privately. 2017.
• Hawes, Michael B. Implementing differential privacy: seven lessons from the 2020 United States census. 2020.
• McMahan, H. Brendan, and Moore, Eider, and Ramage, Daniel, and Hampson, Seth, and Agüera y Arcas, Blaise. Communication-efficient learning of deep networks from decentralized data. 2016.
• Yao, Andrew Chi-Chih. Protocols for secure computations. 1982.
• See, for example, Gascón, Adrià, and Schoppmann, Phillipp, and Balle, Borja, and Raykova, Mariana, and Doerner, Jack, and Zahur, Samee, and Evans, David. Privacy-Preserving Distributed Linear Regression on High-Dimensional Data. 2016.
• The Arm AI Trust Manifesto, published 2019.
• Ribeiro, Marco Tulio, and Singh, Sameer, and Guestrin, Carlos. Local Interpretable Model-Agnostic Explanations. https://arxiv.org/abs/1602.04938
• Miller, Tim. Contrastive Explanation: A Structural-Model Approach. https://arxiv.org/abs/1811.03163
• Homma, Toshiteru, and Atlas, Les, and Marks II, Robert. An artificial neural network for spatio-temporal bipolar patterns: application to phoneme classification. 1988.
• Hochreiter, Sepp, and Schmidhuber, Juergen. Long short-term memory. 1997.
• Lynch, Clifford A. When documents deceive: trust and provenance as new factors for information retrieval in a tangled web. 2001.
• Cheney, James, and Chiticariu, Laura, and Tan, Wang-Chiew. Provenance in databases: why, how, and where. 2009.
• Verifiable Data Audit. https://deepmind.com/blog/article/trust-confidence-verifiable-data-audit
• Arm TrustZone.
• Intel Software Guard Extensions. See: Accessed 6th April 2021.
• AMD Secure Encrypted Virtualization. See: https://developer.amd.com/sev/. Accessed 6th April 2021.
• Amazon AWS Nitro Enclaves. See: https://aws.amazon.com/ec2/nitro/nitro-enclaves/. Accessed 6th April 2021.
• Arm Confidential Compute Architecture. See: https://www.arm.com/architecture/security-features/arm-confidential-compute-architecture
• Cosmian. See: https://cosmian.com. Accessed 6th April 2021.
• Decentriq. See: https://decentriq.com. Accessed 6th April 2021.
• IOTEX. See: Accessed 6th April 2021.
• Scalys. See: https://scalys.com. Accessed 6th April 2021.
• SCONE. See: https://scontain.com. Accessed 6th April 2021.
• Microsoft Azure confidential computing. See: https://azure.microsoft.com/en-gb/solutions/confidential-compute/. Accessed 6th April 2021.
• Amazon AWS Nitro Enclaves. See: Accessed 6th April 2021.
• Shen, Youren, and Tian, Hongliang, and Chen, Yu, and Chen, Kang, and Wang, Runji, and Xu, Yi, and Xia, Yubin, and Yan, Shoumeng. Occlum: secure and efficient multitasking inside a single enclave of Intel SGX. 2020.
• Silicon Angle. Google debuts Confidential VMs that keep data encrypted while it’s in use. See: Accessed 6th April 2021.
• https://community.arm.com/developer/ip-products/processors/b/ml-ip-blog/posts/using-psa-security-toolbox-to-protect-ml-on-the-edge
• https://www.arm.com/blogs/blueprint/metaverse
• Ethics Guidelines for Trustworthy AI, published 2019.
• National Artificial Intelligence Initiative Act of 2020, 116th Congress (2019-2020).
• https://www.nist.gov/itl/ai-risk-management-framework
• The New Generation AI Development Plan, published by China’s State Council, July 8, 2017.
• The Development of Responsible AI: A New Generation of AI Governance Principles, published by the Ministry of Science and Technology (MoST), June 17, 2019.
• On March 28, 2019, the MoST held the first meeting of the New Generation AI Governance Expert Committee, chaired by Lan Xue, Dean of Schwarzman College, Tsinghua University.
• The Cybersecurity Standard Practice Guide – Guidelines for Ethical Security Risk Prevention of Artificial Intelligence, released by TC260, January 5, 2021.
• The Beijing Academy of Artificial Intelligence is guided and supported by the MoST and the Beijing municipal government.
• The Beijing AI Principles, released by the BAAI, May 28, 2019.
• The Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment), released by the AIIA, May 31, 2019.
• The Trustworthy AI Operation Guidelines (V0.5) and the Management Measures for Trusted AI Demonstration Zone, published by AIIA, August 2020.
Non-Confidential Proprietary Notice
This document is protected by copyright and other related rights and the practice or implementation of the information contained in this document may be protected by one or more patents or pending patent applications. No part of this document may be reproduced in any form by any means without the express prior written permission of Arm. No license, express or implied, by estoppel or otherwise to any intellectual property rights is granted by this document unless specifically stated. Your access to the information in this document is conditional upon your acceptance that you will not use or permit others to use the information for the purposes of determining whether implementations infringe any third party patents.
THIS DOCUMENT IS PROVIDED “AS IS”. ARM PROVIDES NO REPRESENTATIONS AND NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY, NON-INFRINGEMENT OR FITNESS FOR A PARTICULAR PURPOSE WITH RESPECT TO THE DOCUMENT. For the avoidance of doubt, Arm makes no representation with respect to, has undertaken no analysis to identify or understand the scope and content of, patents, copyrights, trade secrets, or other rights.
This document may include technical inaccuracies or typographical errors.
TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL ARM BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF ARM HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
This document consists solely of commercial items. You shall be responsible for ensuring that any use, duplication or disclosure of this document complies fully with any relevant export laws and regulations to assure that this document or any portion thereof is not exported, directly or indirectly, in violation of such export laws. Use of the word “partner” in reference to Arm's customers is not intended to create or refer to any partnership relationship with any other company. Arm may make changes to this document at any time and without notice.
This document may be translated into other languages for convenience, and you agree that if there is any conflict between the English version of this document and any translation, the terms of the English version of the Agreement shall prevail.
The Arm corporate logo and words marked with ® or ™ are registered trademarks or trademarks of Arm Limited (or its affiliates) in the US and/or elsewhere. All rights reserved. Other brands and names mentioned in this document may be the trademarks of their respective owners. Please follow Arm's trademark usage guidelines at https://www.arm.com/company/policies/trademarks.
Copyright © 2021-2022 Arm Limited (or its affiliates). All rights reserved. Arm Limited. Company 02557590 registered in England. 110 Fulbourn Road, Cambridge, England CB1 9NJ. (LES-PRE-20349)
Confidentiality Status
This document is Non-Confidential. The right to use, copy and disclose this document may be subject to license restrictions in accordance with the terms of the agreement entered into by Arm and the party that Arm delivered this document to. Unrestricted Access is an Arm internal classification.

Web Address
developer.arm.com

Inclusive language commitment
Arm values inclusive communities. Arm recognizes that we and our industry have used language that can be offensive. Arm strives to lead the industry and create change. We believe that this document contains no offensive language. To report offensive language in this document, email terms@arm.com.