For AI to Grow, It Requires an Ethical Approach
Read how Arm and its partners are working to increase security, privacy and trustworthiness across all computing markets.
The application of artificial intelligence (AI)1 to many aspects of our lives promises to be the most transformative technology trend in our lifetimes. It is crucial to the long-term development of the metaverse, which we understand as a massive computer-generated virtual world that will be deeply intertwined with our physical world. The metaverse will enable humans to live and interact, in parallel or in superposition, in both physical and digital worlds.
However, if AI systems are not considered trustworthy, we will miss out on all the benefits they can bring. Mass adoption of metaverse applications will have to rely on trustworthy AI principles, just like any other digital transformation. Assessing the trustworthiness of AI systems will help avoid potential harm.
Arm has been looking at trustworthy AI for the past few years, and we published our Arm AI Trust Manifesto2 to describe some of the key principles that we believe should be at the heart of the debate.
Our manifesto joined various industry attempts in recent years to establish principles for ethical and trustworthy AI. Putting those principles into practice now requires urgent focus: a concerted effort by the sector to work out how to do so will help build public trust. Some regulatory authorities are also on the point of proposing regulation, and the technology sector needs to be able to show that it has thought about how to put regulatory objectives into practice.
In this paper, we outline what we are calling a chain of assurance, which, in essence, would require each company in an AI supply chain to state which ethical risks it has identified as relevant to its role, and how it has addressed them.
We also look at how some emerging advances in technology can help. Our focus is on developments in security and privacy technologies, including trusted execution environments, and on how they can be used to deliver a chain of assurance and, in turn, build trustworthy AI systems.
Our exploration of these ideas is ambitious, but it builds on the fact that we have met this type of problem before, in the drive to advance security for the IoT. There, we have seen how various organizations, alongside regulators, have shaped thinking by offering practical proposals for putting IoT security into practice.
These include the Platform Security Architecture (PSA) approach, of which Arm is a founding member. PSA Certified offers a detailed checklist of measures designed to help IoT device developers ensure their devices are designed with security in mind right from the start. We believe that the industry should arrive at a similar point for trustworthy AI.
The PSA Certified 2022 Report found that 30% of respondents say they can charge a premium for secure products.
It would be impossible for us to provide an overview of all the different approaches to ethical AI: the World Economic Forum suggests that more than 175 organizations have proposed their own sets of ethical AI principles.3 An overview of the regulatory landscape in three major legal jurisdictions is provided in Regulator and Government Initiatives.
Despite the variety of approaches, there is significant convergence on what the ethical guidelines for AI should be. For an AI system to be considered trustworthy, it must adhere to the following principles:
• Security
• Safety
• Privacy
• Fairness
• Explainability
• Accountability
The core idea of a chain of assurance is that all stakeholders in the AI supply chain issue a statement describing:
• What trust-related issues relevant to their piece of IP they have considered.
• How they have addressed these issues.
This does not necessarily mean that the stakeholder has resolved all the issues listed; they may have concluded that others in the supply chain are better placed to do so.
At a minimum, this would provide the company that finally places an AI service on the market with a suite of statements from across the supply chain, showing how potential trust issues have been tackled.
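To make the idea concrete, here is a minimal sketch of how such a statement could be captured as a structured record. The TrustIssue and AssuranceStatement types, and all of the field names, are illustrative assumptions on our part rather than a proposed standard format.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TrustIssue:
    """One trust-related issue the stakeholder has considered."""
    principle: str        # e.g. "Security", "Privacy", "Fairness"
    description: str      # the issue as it applies to this piece of IP
    resolution: str       # how it was addressed, or why it is deferred
    deferred_to: str = "" # downstream stakeholder better placed to resolve it


@dataclass
class AssuranceStatement:
    """A single stakeholder's statement in the chain of assurance."""
    stakeholder: str      # e.g. "technology developer"
    component: str        # the piece of IP the statement covers
    issues: List[TrustIssue] = field(default_factory=list)


# Hypothetical example: a technology developer's statement for an inference engine.
statement = AssuranceStatement(
    stakeholder="technology developer",
    component="neural network inference engine",
    issues=[
        TrustIssue(
            principle="Security",
            description="Model and weights could be tampered with in transit",
            resolution="Developed under a secure development lifecycle; binaries are signed",
        ),
        TrustIssue(
            principle="Privacy",
            description="Inference may run on personal data",
            resolution="Not resolved at this level",
            deferred_to="service provider",
        ),
    ],
)
```

A record like this could accompany each piece of IP as it moves down the supply chain, alongside the documentation a supplier already provides.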
IBM floated a similar approach in 2018, noting that: ‘Industries use transparent, standardized, but often not legally required documents called Supplier’s Declarations of Conformity (SDoCs) to describe the lineage of a product along with the safety and performance testing it has undergone.’4
In the article “Towards Trustworthy AI: Mechanisms for Supporting Verifiable Claims,”5 OpenAI and others also explored the importance of verifiable claims and suggested various steps that different stakeholders in AI development can take to demonstrate responsible behavior.
A chain of assurance could look a bit like the following diagram. The drawing is necessarily a simplification: for a single product, there may well be hundreds of technology developers, tens of system integrators, and a hierarchy of service providers involved. For more information about the types of stakeholders we suggest, see Appendix B: Stakeholders.
The chain of assurance unites all these stakeholders in their common interests. For example, for the key area of security:
• The technology developer reassures the system integrator that the technology was developed using a secure development lifecycle and in a secure environment.
• The system integrator reassures the service provider that all security requirements have been met in the integration process.
• The service provider reassures the end user that information stored on their service benefits from state-of-the-art security. To prove this claim, the service provider can use the statements provided by the technology developer and the system integrator.
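As an illustration only, the following sketch shows one way these statements could be linked so that the service provider's claim can be traced back through the chain: each statement carries a fingerprint of the upstream statement it relies on. The digest-based linking, the field names, and the verify_chain helper are our own assumptions; a production scheme would more likely use digital signatures and, potentially, attestation rooted in a trusted execution environment.

```python
import hashlib
import json


def digest(statement: dict) -> str:
    """Stable fingerprint of a statement (illustrative; a real scheme would sign it)."""
    return hashlib.sha256(json.dumps(statement, sort_keys=True).encode()).hexdigest()


# Each stakeholder's statement records the fingerprints of the upstream
# statements it relies on, linking the chain together.
developer = {
    "stakeholder": "technology developer",
    "claim": "Developed under a secure development lifecycle in a secure environment",
    "relies_on": [],
}
integrator = {
    "stakeholder": "system integrator",
    "claim": "All security requirements were met during integration",
    "relies_on": [digest(developer)],
}
provider = {
    "stakeholder": "service provider",
    "claim": "Information stored on the service benefits from state-of-the-art security",
    "relies_on": [digest(integrator)],
}


def verify_chain(chain: list) -> bool:
    """Check that each statement references the fingerprint of its predecessor."""
    for upstream, downstream in zip(chain, chain[1:]):
        if digest(upstream) not in downstream["relies_on"]:
            return False
    return True


print(verify_chain([developer, integrator, provider]))  # True
```

In this simplified model, an end user or auditor who trusts the service provider's statement can also check that the integrator's and developer's statements it cites have not been altered.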