Public Input Could Make AI Fairer, Glasgow Study Finds
The post Public Input Could Make AI Fairer, Glasgow Study Finds appeared first on DIGIT.
Involving people without AI expertise in the development and evaluation of artificial intelligence applications could help create better, fairer and more trustworthy automated decision-making systems, new research from the University of Glasgow suggests.
After enlisting members of the public to evaluate the potential impacts of two real-world applications, researchers from across the UK, led by Glasgow University, found that ‘participatory AI auditing’ could improve AI decision-making.
Responsibility for ensuring the applications make fair and impartial decisions usually lies with the engineers and data scientists who develop them, but when developers fail to consider the full social or economic conditions of people affected by the tools’ outcomes, unexpected problems can arise.
For example, in 2019, a healthcare algorithm trained to predict patients’ health risk scores was found to underrate the severity of Black patients’ conditions relative to those of their white peers, while an early Amazon AI tool for ranking job applicants discriminated against women after being trained on ten years of CVs submitted predominantly by men.
By involving a wider group of people in the early stages of AI development, participatory audits might prevent those problems before they occur. Although participants may lack specific technical knowledge of how the systems work, the research found they can offer unexpected insight into social and ethical considerations that traditional audits overlook.
However, for this to work, the researchers argue that participants need significant support to help them provide useful feedback and new tools to guide them through the audit process.
Setting up co-design workshops, the team tasked seventeen people without AI expertise with auditing two real-world AI tools designed for use in healthcare and education.
The auditors were tasked with identifying the applications’ potential impacts, determining how those impacts should be measured, and suggesting how tools to support audits might work.
Although the study initially focused on identifying harms, participants were keen to ensure that the applications’ benefits for affected groups were also captured. They reported feeling constrained by having to mark aspects of the systems as simply ‘passed’ or ‘failed’, and suggested a third option is needed when an impact defies binary categorisation.
The research team is now working to build a framework for responsible AI auditing based on this collaborative approach, which they argue will improve public trust in AI as well as benefit the organisations that develop these applications.
“Regulations like the European Union’s AI Act, introduced in 2024, are seeking to limit the harms that badly-designed AI applications could inflict on the people affected by their decisions,” said Professor Simone Stumpf, of Glasgow’s School of Computing Science, the project’s lead investigator.
“Our research aims to provide a systematic framework and tools to help people without AI expertise use their lived experience to identify and report those harms through participatory audits, and ultimately be more involved in creating more trustworthy AI systems.”