The Ethics of AI in Government

“How was the decision made?” The department’s minister and permanent secretary sat before a select committee looking into a recent tragedy.

“The system made the decision using artificial intelligence,” came the reply.  The decision had been a poor one, and it had triggered a catastrophic sequence of events.

“How did the system make the decision?” the chairman persevered.  The minister and permanent secretary looked at each other, dumbfounded.  They could not explain what had led to the decision because too much trust had been placed in artificial intelligence (AI) without regard for how it reached its conclusions.

This hypothetical scenario could play out for real unless due consideration is given to how artificial intelligence should be exploited.  That consideration includes the ethical use of AI, not merely the algorithms and technology.  Ethics is about decision making, and the select committee had uncovered the department’s negligence in justifying its actions:  it was unable to explain itself.

Each decision that Government makes in delivering services to individual citizens has follow-on consequences in their lives.  Government is unlike the commercial world in this respect: there is no option of a return or refund.  Ethical decision making is not an exact science, and it is for this reason that the purpose of AI is to augment – not replace – human intelligence;  AI assists people in making decisions.  The judgement required is illustrated by a Venn diagram originally put together as part of a study I participated in a few years ago on the ethics of analytics.

Artificial Intelligence and Ethics

In my example, the department lost public trust.  If artificial intelligence is used to help make important decisions, it must be explainable.  This means having clarity over who trains AI systems, what data was used in that training and, most importantly, what went into the algorithm’s recommendations.  Governments need to understand from technology providers what the AI is doing.  It is easy to generate recommendations and alerts, but harder to understand the extent to which you should trust them and to measure the performance of the models.
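One concrete way to make a recommendation explainable is to expose the individual factors and weights that counted for and against it. The sketch below illustrates the idea with a simple linear scoring model; the feature names, weights, and threshold are invented for illustration and do not describe any real government system or vendor product:

```python
# Illustrative sketch: explaining a decision by exposing each
# factor's contribution to a linear risk score. All names and
# numbers here are hypothetical.

def explain_decision(weights, features, threshold):
    """Return the score, the decision, and per-factor contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "flag for review" if score >= threshold else "no action"
    return score, decision, contributions

weights = {"missed_payments": 0.6, "income_verified": -0.4, "prior_claims": 0.3}
applicant = {"missed_payments": 2, "income_verified": 1, "prior_claims": 1}

score, decision, contributions = explain_decision(weights, applicant, threshold=1.0)
print(f"Score: {score:.2f} -> {decision}")
# List the factors in order of influence, signed for or against.
for factor, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor}: {value:+.2f}")
```

A department that can produce this kind of breakdown for every automated recommendation is in a far better position in front of a select committee than one that can only point at a black box.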

AI offers measurable improvements to users, even though ambiguity about what is the best or right decision will remain.  One example is tackling bias in decision making.  IBM has made available the largest annotated data set for detecting and addressing bias in facial analysis.

Then last month, IBM released the first comprehensive bias mitigation toolkit, AI Fairness 360.  It brings bias mitigation research to industry practitioners to increase fairness in machine learning algorithms.  Its bias mitigation algorithms can act at the data set, classifier or prediction stage.  The toolkit can first be used to measure bias and assess it against legal or policy tolerances by exposing the factors and weights counting for and against individual decisions, and to identify the parts of a data set that might be the source of unfair outcomes.  Furthermore, IBM’s new service captures metadata across the lifecycle of AI systems.  This provenance information ensures that complete records are maintained, allowing Governments to sustain compliance with regulations such as GDPR.
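The kind of measurement such a toolkit automates can be sketched in plain Python. The example below computes one widely used fairness metric, disparate impact: the ratio of favourable-outcome rates between the unprivileged and privileged groups. The data, group labels, and the 0.8 tolerance (the common “four-fifths rule”) are illustrative assumptions, and this is a sketch of the concept, not the AI Fairness 360 API itself:

```python
# Illustrative sketch of one bias metric: disparate impact.
# Records are (group, outcome) pairs; the data is invented.

def disparate_impact(records, privileged_group, favourable_outcome):
    """Ratio of favourable-outcome rates: unprivileged / privileged."""
    priv = [r for r in records if r[0] == privileged_group]
    unpriv = [r for r in records if r[0] != privileged_group]

    def rate(group):
        return sum(1 for _, o in group if o == favourable_outcome) / len(group)

    return rate(unpriv) / rate(priv)

# Hypothetical decisions: group A approved 8/10, group B approved 5/10.
decisions = ([("A", "approved")] * 8 + [("A", "denied")] * 2
             + [("B", "approved")] * 5 + [("B", "denied")] * 5)

di = disparate_impact(decisions, privileged_group="A",
                      favourable_outcome="approved")
print(f"Disparate impact: {di}")  # 0.5 / 0.8
if di < 0.8:  # four-fifths rule tolerance, used here for illustration
    print("Potential adverse impact: investigate the data set")
```

A ratio well below 1.0 does not prove discrimination, but it flags where in the pipeline a department should look before trusting the model’s recommendations.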

Clearly, some of the responsibility for the appropriate use of AI falls to developers.  The systems they build must be calibrated and continuously monitored to ensure that the probabilities they generate remain in line with expectations.  Ethical systems are built so that users can perceive and detect when they are using AI, and understand its decision process.

A subtler but important ethical concern for Government lies in the terms and conditions associated with the use of AI technologies, especially cloud services.  IBM believes that data and insights belong to their creator.  Our clients are not required to relinquish rights to their data, nor the insights derived from that data, to benefit from IBM’s solutions and services.  The owner of the data gets the value.  Government data, and the insights produced from it on IBM’s cloud or with IBM’s AI, therefore remain the property of Government.

Government departments need to be able to explain the decisions they make in order to sustain public trust in services.  Taking advantage of what AI offers does not change that obligation.  It means that the policies governing AI systems must ensure that people understand how a conclusion or recommendation has been reached.  Leaders should form and apply principles for the trusted and transparent use of data to govern the application of AI in the public sector.  Find out more about principles for trust and transparency in this article on responsible use of data.

Global Technical Leader for Defence & Security
