Act Upon Instructions

Transparency in systems – both human and AI – is crucial for building trust and acceptance: when users understand how decisions are made and can provide feedback, they're more likely to accept imperfect processes.

While studying Responsible AI, we're introduced to guardrails that designers of Responsible AI systems are encouraged to follow. For example, the introduction to my course describes transparency in AI systems as follows (paraphrased):

Usefully explaining the behaviour of AI systems and their components – that is, improving intelligibility – means that those affected by AI's decisions will understand how and why those decisions are made and be able to provide feedback. With this feedback, stakeholders can identify performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes – leading to improvements for everybody.

We could remove the initialism (AI) from the paragraph above and the point would still hold: when the systems we use, human or machine, lack transparency, this often leads to unintended consequences for users.

Think about the human systems you've interacted with recently and write down the number of times you left the encounter dissatisfied. Now ask whether your satisfaction might have improved had the person or process you were dealing with taken the time to explain how they reached a decision.

I tend to agree with Peter Hosey when he suggests that "The algorithmic-only model admits only one remedy: Improve the algorithm" and that we may be playing a game of whac-a-mole forever. But I have a caveat: when people, processes, and AI systems operate transparently, users are more likely to accept their imperfections, knowing their feedback helps make things better over time.