
Will AI ever be completely fair?


Amidst the celebration and anticipation of the good that AI can bring, cases of bias in AI have rightly been called out. Bias in AI is cited as a main reason why, in Edelman’s 2021 Trust Barometer, trust in tech is at an all-time low.

As AI’s capabilities expand, can we assume that more AI will simply mean more bias? Or will solutions from researchers, governments and AI developers root out bias? Indeed, fairness in AI is expected to be addressed in the UK government’s AI strategy, due to be published later this year. The strategy, aimed at unleashing the transformational power of Artificial Intelligence, has “Ethical, safe and trustworthy development of responsible AI” as a major focus. An AI Roadmap submitted to the UK Government by the independent advisory body, the AI Council, states that “There is a fundamental need to consider transparency and data bias as public sector adoption increases.”

IBM recently brought together 35 leading journalists and influencers with four of AI’s foremost experts. In a pacey, succinct session the speakers addressed burning questions about AI fairness under four headings. Below is a summary of their views. As moderator of the panel, I’ve captured the highlights of the discussion so everyone can explore this important topic.


1. The various dimensions of AI fairness – How can bias in AI generate discrimination?

Fairness in AI has less to do with technology and more to do with the existing frameworks behind it. To address bias, we need to go “beyond data, algorithms and models” to also look at the background of those who design and deploy AI, said Catelijne Muller, President of ALLAI, an organization advocating for responsible AI. “Bias comes from the general power dynamics in society,” said Gry Hasselbalch, senior researcher and co-founder of DataEthics.eu. When AI is seen to discriminate, it’s often because AI models are adopted uncritically and given priority over human decision-making. Gry emphasized that “we need to preserve our human critical agencies to always be able to challenge biased AI decisions.”

AI bias is something new to grapple with when it comes to human rights, said Catelijne, who has advised the Council of Europe (which she calls “the house of human rights”) on laws around AI. People have the right not to be discriminated against, but it goes further than that – people can only be arrested if there is a reasonable suspicion of a crime. If AI is used for predictive policing that draws on correlations with other cases, a person might be arrested merely because they share features with those cases, without any reasonable suspicion.


2. How can we keep human bias out of AI?

“When we start trying to clean bias out of systems, it’s like a game of whack-a-mole,” said Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University. She stressed that we cannot completely remove bias from AI systems, just as we cannot completely remove bias from humans. Nonetheless, practices such as data calibration can be used for fairer AI results: data from less-represented and more-represented groups are combined so that the system is trained in a calibrated, balanced way.
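To make the idea of calibration concrete, here is a minimal sketch of one common rebalancing approach: weighting training examples inversely to the size of their group, so that under- and over-represented groups contribute equally during training. The group labels and data below are hypothetical illustrations, not anything discussed by the panel.

```python
# A minimal sketch of inverse-frequency rebalancing: each example gets a weight
# inversely proportional to the size of its group, so every group contributes
# the same total weight during training. Groups and data are hypothetical.
from collections import Counter

def balanced_sample_weights(groups):
    """Return one weight per example so that every group's total weight is equal."""
    counts = Counter(groups)          # number of examples per group
    n_groups = len(counts)
    total = len(groups)
    # Each group should contribute total / n_groups "effective" examples.
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical training set: group A is heavily over-represented.
groups = ["A"] * 80 + ["B"] * 20
weights = balanced_sample_weights(groups)

print(sum(w for w, g in zip(weights, groups) if g == "A"))  # 50.0
print(sum(w for w, g in zip(weights, groups) if g == "B"))  # 50.0
# Most training libraries accept per-example weights like these
# (for example via a sample_weight argument) when fitting a model.
```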

But the real questions are “How much risk do we want to take? And what is the threshold for acceptance?” said Virginia. “This is a broader societal question which should not only be left to algorithm developers.”

IBM Fellow and AI Ethics Global Leader Francesca Rossi also emphasized the education and training of developers – on tools and methodologies, but also on awareness of their own biases. Additionally, bias in a system should be transparently communicated. For example, IBM uses the concept of an AI factsheet, which developers complete to help everyone understand what kind of bias is in the system. Explainability – allowing people to understand how decisions were made – is also practiced by IBM and can be useful for identifying bias. But while technology will help, Francesca’s ultimate goal is “to advance people through technology, making sure they’re more aware of their values and they’re more conscious about their actions and decisions.”
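As a rough illustration of how a factsheet and a bias check might fit together, here is a small sketch in Python. The factsheet fields, model name and numbers are hypothetical, and the disparate-impact helper is a generic fairness metric used for illustration, not IBM’s actual FactSheets format.

```python
# An illustrative sketch: record known bias in a factsheet-style dictionary,
# and feed it a simple fairness metric (disparate impact) computed from
# hypothetical model outputs. Fields and numbers are made up for illustration.

def disparate_impact(outcomes, groups, protected, favourable=1):
    """Ratio of favourable-outcome rates: protected group vs. everyone else."""
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]
    rate_prot = sum(o == favourable for o in prot) / len(prot)
    rate_rest = sum(o == favourable for o in rest) / len(rest)
    return rate_prot / rate_rest      # 1.0 means parity; < 0.8 is a common warning threshold

# Hypothetical model decisions on a validation set (1 = favourable outcome).
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups   = ["B", "B", "A", "A", "B", "A", "B", "A", "A", "A"]

factsheet = {
    "model": "loan-approval-v2",                      # hypothetical model name
    "intended_use": "pre-screening; a human reviews every rejection",
    "training_data": "2018-2020 applications, one region only",
    "known_bias": "group B under-represented in the training data",
    "disparate_impact_group_B": round(disparate_impact(outcomes, groups, "B"), 2),
}
print(factsheet)
```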


3. Will the EU regulation framework on AI be a game-changer?

Note that this discussion was held before the European Commission’s publication of its proposed AI rules on 21 April 2021.

“AI does not operate in a lawless world,” said Catelijne Muller. From labour laws to the GDPR, “there already exists a vast legal field that also applies to AI.” Moreover, the obligation to explain certain decisions exists whether AI was used or not: a judge has to explain how a decision came about, and a government is legally obliged to explain why someone was refused social benefits. Catelijne also refutes the claim that AI regulation could hamper innovation: “First of all, if it hampers anything, it hampers harmful innovation. And secondly, regulation creates a level playing field where you know that your competitor is bound by the same rules as you.” With the GDPR the EU has shown it is a trendsetter in the regulatory space; if it sets the right boundaries with the AI regulation, the rest of the world will need to live up to them in order to reach 500 million Europeans.

Whatever is proposed will be negotiated in the context of the power and interests of the different EU institutions and member states, and of their priorities and alliances, according to Gry Hasselbalch. Virginia Dignum added that European policies around technology should focus on the impact of digital technology in general, irrespective of whether we call that technology AI or not.


4. Where do we stand from a research perspective?

Explainability, transparency, and verification of AI systems are the focus of Virginia Dignum’s research. This involves setting boundaries for AI systems and then verifying that the systems can be trusted to stay within those boundaries. It also involves building modular AI systems and examining how the composed model behaves in terms of fairness and bias compared with each of its modular parts. This research is crucial because, in the future, there will probably be a market of AI components used across Europe by many different people.

Francesca Rossi and her team are focused on three key areas of research. The first is how to visualize bias in AI models that are analyzed by third parties. The second targets collective decision-making: how do we achieve a reasonable trade-off between properties like fairness, privacy, and social welfare? The third is how to leverage cognitive theories of human reasoning and decision making to advance AI’s capabilities to make decisions and to support human reasoning.

The four speakers were all part of the EU’s High-Level Expert Group on AI, which worked from 2018 to 2020 to advise the European Commission on its AI strategy by defining the Ethics Guidelines for Trustworthy AI.


Watch the roundtable


Influencer Relations Leader, IBM Europe
