The Sixth of the Six Key Challenges
The development and deployment of AI systems inevitably involves many different subsystems and actors working together to achieve a common goal. Often these subsystems are designed and engineered by different teams within the same company, or even by different companies. How can the steps and decisions taken at each stage of problem definition, design, development, deployment, and operation be designed and logged so as to help establish liability and support remediation?
During design, every stakeholder should be able to explain and justify development decisions to the others involved in the system's design. They should also be able to explore how those decisions could ultimately affect the system's behaviour. This helps to improve the overall system and to define the responsibilities of those involved.
After deployment, each component should be able to explain what happened and why a decision (good or bad) was made. Tamper-resistant immutable logs, sometimes called append-only logs, are one approach to solving this problem. Systems can be designed so that details of every decision the system makes, including the relevant inputs that led to that decision, are appended to these logs.
If an AI system goes awry, logs can be used by investigators in the same way as aircraft flight data recorders, or ‘black boxes’, to trace through the series of decisions and the stimuli that led to those decisions being made.
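One common way to make such a log tamper-evident is hash chaining: each entry commits to the hash of the entry before it, so altering any past record invalidates every later hash. The sketch below illustrates the idea; the class name, record fields, and in-memory storage are illustrative assumptions, not a reference to any particular system described above.

```python
import hashlib
import json

class AppendOnlyLog:
    """Minimal hash-chained log sketch: each entry's hash commits to the
    previous entry, so tampering with any earlier record breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self._entries = []  # list of (record_json, chain_hash) tuples

    def append(self, record):
        """Serialise a decision record and chain it to the previous hash."""
        prev_hash = self._entries[-1][1] if self._entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        chain_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self._entries.append((payload, chain_hash))
        return chain_hash

    def verify(self):
        """Recompute the chain from the start; False if any entry was altered."""
        prev_hash = self.GENESIS
        for payload, stored_hash in self._entries:
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if expected != stored_hash:
                return False
            prev_hash = stored_hash
        return True

# Hypothetical decision records, for illustration only
log = AppendOnlyLog()
log.append({"decision": "approve", "inputs": {"score": 712}, "model": "v3.1"})
log.append({"decision": "deny", "inputs": {"score": 540}, "model": "v3.1"})
print(log.verify())  # True: chain intact
```

An investigator replaying such a log sees both the decisions and the inputs that produced them, in order, and `verify()` confirms that no record was silently rewritten after the fact. Production systems would additionally need durable, access-controlled storage and, typically, external anchoring of the chain head.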
Seeking Assurance
Who is accountable for each part of an AI-based system or product, and for the AI decision or outcome?
How is this investigated if something goes wrong?
Providing Detailed Information
Has there been maximum transparency at every level of the design, including a record of how the intelligent system operates?