The Next Steps for Responsible AI
In conclusion, developing a chain of assurance scheme would help build trust in AI and, in doing so, open the gateway to the metaverse.50 Whatever precise shape regulatory approaches take, it is likely that such a scheme would play a key role in helping companies fully meet any future regulatory requirements.
But even without the stimulus of emerging regulatory interest, a chain of assurance scheme would be an important step toward building trust. As a sector, we should not sit back and wait for regulation before acting ourselves.
In this paper, we have set out some of the key considerations:
- The major issues to be addressed.
- The goals to be covered.
- The process and organizations involved.
But we recognize that key aspects of a chain of assurance scheme remain unresolved.
Firstly, for the scheme to be successful, there must be some standardization in the way such assurances are given, to ensure that they cover the same ground and address the right issues.
Secondly, we must decide whether the assurance will take the form of a self-declaration by a company stating what it has done, or of third-party verification or ‘endorsement.’
Both options could run simultaneously, each appropriate for different use cases, with higher-risk use cases calling for third-party involvement.
Most importantly, we need a critical mass of companies interested in exploring these ideas to work together. Arm’s success in launching PSA, which, as explained above, sets out how companies can confirm they have addressed IoT security issues, shows that this kind of approach can work. As a first step, Arm is using this paper to engage our partners and others around the world, including trade associations, in taking this concept for Trustworthy AI forward.