AI Oversight Is Not A Public Responsibility
The public doesn’t need to know how artificial intelligence works to trust it – they just need to know that someone with the necessary skillset is examining AI and has the authority to impose sanctions should AI applications cause harm.
“I’m certain that the public is incapable of determining the trustworthiness of individual AIs…but we don’t need them to do this – it’s not their responsibility to keep AI honest,” argues Dr Bran Knowles, a senior lecturer in data science at Lancaster University.
Dr Knowles recently presented the research paper ‘The Sanction of Authority: Promoting Public Trust in AI’, co-authored by John T. Richards of IBM’s T.J. Watson Research Center, at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT). The authors argue that greater transparency and more accessible explanations of how AI systems work, often perceived as a means of increasing trust, do not address the public’s concerns.
Instead, they argue that a regulatory ecosystem is the only way AI can be held meaningfully accountable to the public and thereby earn its trust.
“The public does not routinely concern itself with the trustworthiness of food, aviation, and pharmaceuticals because it trusts there is a system which regulates these things and punishes any breach of safety protocols,” says Dr Richards.
And, adds Dr Knowles: “Rather than asking that the public gain skills to make informed decisions about which AIs are worthy of their trust, the public needs the same guarantees that any AI they might encounter is not going to cause them harm.”
Dr Knowles stresses the critical role of AI documentation in enabling this trustworthy regulatory ecosystem. As an example, the paper discusses work by IBM on AI Factsheets, documentation designed to capture key facts regarding an AI’s development and testing.
But while such documentation can provide the information internal auditors and external regulators need to assess compliance with emerging frameworks for trustworthy AI, Dr Knowles cautions against relying on it to directly foster public trust.
“If we fail to recognise that the burden to oversee trustworthiness of AI must lie with highly skilled regulators, then there’s a good chance that the future of AI documentation is yet another terms and conditions-style consent mechanism — something no one really reads or understands,” she says.
The paper calls for AI documentation to be properly understood as a means to empower specialists to assess trustworthiness.
“AI has material consequences in our world which affect real people; and we need genuine accountability to ensure that the AI that pervades our world is helping to make that world better,” says Dr Knowles.
The findings could point a way forward for government and industry to make the most of AI without undermining public trust.