
Uncertainty, Explainability, Transparency and Bias in AI

Building AI systems that preserve human agency, honour cultural diversity, and serve the public good

Artificial intelligence increasingly shapes critical decisions in healthcare, employment, criminal justice, and civic life – yet these systems often operate as opaque black boxes, perpetuating biases and marginalising vulnerable communities. When a loan application is denied, when a CV is filtered by an algorithm, when medical diagnoses are automated, citizens deserve to understand how these decisions are made and challenge them when necessary. How can we build AI that genuinely serves diverse publics rather than reinforcing existing inequalities? 

This research theme explores uncertainty, explainability, transparency, and bias through the lens of interpretive depth and human agency. Drawing on the "Doing AI Differently" initiative (Alan Turing Institute, 2025), we position humanities perspectives and participatory methods as essential, not supplemental, to technical innovation. We investigate how AI can represent multiple valid perspectives rather than imposing monolithic worldviews, how systems can communicate their limitations honestly to calibrate appropriate trust, and how affected communities can exercise genuine decision-making power over technologies that impact them. 
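
One concrete handle on "communicating limitations honestly" is calibration: a model's stated confidence should match how often it is actually right. The sketch below computes expected calibration error (ECE), a standard diagnostic from the trust-calibration literature; it is a minimal illustration with hypothetical data and function names, not a method drawn from the work cited on this page.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between stated confidence and observed accuracy.

    A model whose 90%-confident answers are right 90% of the time
    invites exactly the trust it deserves; a large ECE signals over-
    or under-confidence that users cannot detect unaided.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += in_bin.mean() * gap  # weight each bin by its share of predictions
    return ece

# Hypothetical predictions from an overconfident classifier.
stated = [0.95, 0.92, 0.90, 0.60, 0.55]
was_right = [True, True, False, True, False]
print(f"ECE = {expected_calibration_error(stated, was_right):.3f}")
```

A metric like this only diagnoses miscalibration; how an interface should then express residual uncertainty to lay decision-makers is precisely the open question taken up in the uncertainty-communication readings below.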

Our research addresses urgent challenges at the intersection of HCI, policy, and practice. Current generative AI systems homogenise non-Western writing towards Western norms and privilege North American cultural contexts: can we design alternative architectures that preserve cultural plurality? Technical transparency doesn't guarantee human understanding: what makes an explanation meaningful to stakeholders with differing expertise and cultural backgrounds? Bias mitigation techniques often fail to address structural inequities: when should we refuse to deploy AI rather than attempting technical fixes for fundamentally unjust applications?
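
To see why technical fixes can fall short, it helps to write one fairness metric down. The sketch below computes a demographic parity gap, a common headline measure of group-level bias; the data and function name are hypothetical, and the example is illustrative rather than an endorsement of the metric. A deployed system can drive this number to zero while the structural inequities in its training data, institutional context, or very purpose remain untouched.

```python
import numpy as np

def demographic_parity_gap(favourable, group):
    """Largest difference in favourable-outcome rates across groups.

    A zero gap means equal approval rates on paper; it says nothing
    about whether the decision process, the data it learned from,
    or the application itself is just.
    """
    favourable = np.asarray(favourable, dtype=bool)
    group = np.asarray(group)
    rates = [favourable[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical loan decisions for applicants from two groups.
approved = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"Parity gap = {demographic_parity_gap(approved, groups):.2f}")  # 0.50
```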

We aim to develop interpretive technologies that engage with cultural complexity, participatory governance frameworks in which communities shape AI systems rather than merely being consulted, and evaluation methods that assess cultural sensitivity alongside technical performance. This work aligns closely with the HCI and responsible-AI research communities (ACM CHI, IUI, ACM FAccT), with the UK's principles-based AI governance framework, and with the EU AI Act's requirements for transparency and human oversight.

Place-based and Regional Context

The North East provides a distinctive context for citizen-centred AI research, combining industrial heritage with contemporary economic challenges and strong community organising traditions. Significant socioeconomic inequality, ageing populations in coastal areas, diverse ethnic communities in urban centres, and rural-urban divides create contexts where AI impacts are unevenly distributed. Research here can surface perspectives often marginalised in technology design: healthcare patients navigating algorithmic triage, workers facing automated management systems, residents affected by "smart city" deployments, and cultural heritage organisations preserving diverse narratives.

Advanced manufacturing, healthcare systems, creative industries, and public sector organisations face AI adoption decisions with significant equity implications. Regional SMEs often lack the resources to implement responsible AI, creating opportunities for participatory design research that centres affected communities from the outset. The North East's vibrant digital inclusion networks, community arts organisations, patient advocacy groups, and cooperative development infrastructure provide research partners interested in co-designing accountable AI systems that genuinely serve local needs.

Relevant Partner Organisations 

Our regional ecosystem brings together diverse organisations committed to responsible technology development. Digital innovation organisations in the cultural sector explore how AI tools can enhance rather than undermine creative expression and cultural diversity. Voluntary sector networks connect thousands of community organisations, providing channels for participatory research with groups affected by AI in public services. Health and wellbeing initiatives work at the intersection of arts, health, and technology, investigating how AI can support patient agency whilst addressing uncertainty communication in contexts of incomplete medical knowledge. 

The region's sports development sector engages with questions of algorithmic fairness in performance analytics and talent identification, where bias perpetuates structural inequalities. Genomics and life sciences centres explore responsible health AI, whilst cultural heritage organisations (including archives, museums, and community trusts) examine how AI can preserve diverse narratives without flattening cultural complexity. Advanced manufacturing networks and housing associations investigate transparency requirements in operational AI systems, and corporate partners provide industry perspectives on implementing trustworthy AI whilst navigating intellectual property and competitive pressures.

This ecosystem enables research that bridges academic rigour with real-world impact, ensuring our work addresses genuine challenges faced by communities, organisations, and citizens navigating an increasingly automated world.

Related Articles & Reading 

  • Hemment, D., Kommers, C., et al. (2025). Doing AI Differently: Rethinking the Foundations of AI via the Humanities. White Paper. The Alan Turing Institute. 

Explainability & Human-Centred XAI 

  • Ehsan, U., Wintersberger, P., Liao, Q. V., et al. (2024). Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models. CHI EA '24. 
  • Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021). Expanding Explainability: Towards Social Transparency in AI Systems. CHI '21. 
  • Kim, S. S. Y., Watkins, E. A., Russakovsky, O., Fong, R., & Monroy-Hernández, A. (2023). "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. CHI '23. 
  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51(5). 
  • Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267. 
  • Information Commissioner's Office & The Alan Turing Institute (2020). Explaining Decisions Made with AI. ICO. 

Uncertainty Communication & Trust Calibration 

  • Kim, S. S. Y., Liao, Q. V., Vorvoreanu, M., Ballard, S., & Vaughan, J. W. (2024). "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. FAccT '24. 
  • Prabhudesai, S., Yang, L., Asthana, S., & Liao, Q. V. (2023). Understanding Uncertainty: How Lay Decision-makers Perceive and Interpret Uncertainty in Human-AI Decision Making. IUI '23. 
  • Bansal, G., Wu, T., Zhou, J., et al. (2021). Does the Whole Exceed Its Parts? The Effect of AI Explanations on Complementary Team Performance. CHI '21. 

Cultural Context & Diversity 

  • Ge, X., Xu, C., McAuley, J., & Jurgens, D. (2024). How Culture Shapes What People Want From AI. CHI '24. 
  • Campo-Ruiz, I. (2024). Artificial Intelligence May Affect Diversity: Architecture and Cultural Context Reflected Through ChatGPT, Midjourney, and Google Maps. Humanities and Social Sciences Communications, 11(1). 
  • Heger, L., Deckers, L., Fröhlich, P., Hussain, Z., Young, J. E., & Tscheligi, M. (2022). Understanding Agency in Human-AI Interaction: A Confucian Perspective. CHI '22. 

Bias, Fairness & Algorithmic Accountability 

  • Centre for Data Ethics and Innovation (2020). Review into Bias in Algorithmic Decision-Making. HM Government. 
  • Deng, W. H., Guo, B., Devrio, A., Klemmer, K., De-Arteaga, M., & Holstein, K. (2023). Understanding Practices, Challenges, and Opportunities for User-Engaged Algorithm Auditing in Industry Practice. CHI '23. 
  • Metaxa, D., Park, J. S., Robertson, R. E., et al. (2021). Auditing Algorithms: Understanding Algorithmic Systems from the Outside In. Foundations and Trends in HCI, 14(4). 
  • Raji, I. D., Smart, A., White, R. N., et al. (2020). Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. FAccT '20. 
  • Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press. 
  • Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press. 
  • Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing Bias in Big Data and AI for Health Care: A Call for Open Science. Patterns, 2(10). 

Participation & Civic AI 

  • Holstein, K., De-Arteaga, M., Tumati, L., Cheng, Y., Cheng, H., & Cheng, T. (2024). The Situate AI Guidebook: Co-Designing a Toolkit to Support Multi-Stakeholder, Early-stage Deliberations Around Public Sector AI Proposals. CHI '24. 
  • Delgado, F., Yang, S., Madaio, M., & Yang, Q. (2023). Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir". FAccT '23. 
  • Shen, H., DeVos, A., Eslami, M., & Holstein, K. (2021). Everyday Algorithm Auditing: Understanding the Power of Everyday Users in Surfacing Harmful Algorithmic Behaviors. CSCW2, 5. 
  • Costanza-Chock, S. (2020). Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press. 

Human-AI Collaboration & Complementarity 

  • Holstein, K., De-Arteaga, M., Tumati, L., & Cheng, Y. (2023). Toward Supporting Perceptual Complementarity in Human-AI Collaboration via Reflection on Unobservables. CSCW2, 7. 
  • Bansal, G., Nushi, B., Kamar, E., Horvitz, E., & Weld, D. S. (2019). Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff. AAAI, 33(1). 
  • Green, B., & Chen, Y. (2019). The Principles and Limits of Algorithm-in-the-Loop Decision Making. CSCW, 3. 

UK Government Policy & Strategy 

  • Department for Science, Innovation and Technology (2023). A Pro-innovation Approach to AI Regulation. White Paper, CP 815. HM Government. 
  • Office for Artificial Intelligence (2021). National AI Strategy. HM Government. 
  • Department for Science, Innovation and Technology (2025). AI Opportunities Action Plan. HM Government. 
  • Centre for Data Ethics and Innovation (2021). The Roadmap to an Effective AI Assurance Ecosystem. HM Government. 
  • Cabinet Office (2021). Algorithmic Transparency Standard. HM Government. 

European Union Policy & Regulation 

  • European Parliament & Council (2024). Regulation (EU) 2024/1689: Artificial Intelligence Act. Official Journal of the European Union. 
  • High-Level Expert Group on Artificial Intelligence (2019). Ethics Guidelines for Trustworthy AI. European Commission. 
  • High-Level Expert Group on Artificial Intelligence (2020). Assessment List for Trustworthy Artificial Intelligence (ALTAI). European Commission. 
  • European Commission (2020). White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. European Commission. 
  • European Commission (2021). Coordinated Plan on Artificial Intelligence (2021 Review). European Commission.
