Identities, Security and Society

Building interpretive, collaborative AI systems that enhance human agency in matters of identity, trust, privacy and security

Identity, privacy and security in AI present a fundamental challenge. When AI systems verify identity, detect threats, or make security decisions, they aren't simply processing data but engaging with deeply cultural questions: What does identity mean across different communities? How much sensitive identity information should a person share to get the best response from an AI agent? How do citizens make sense of trust in automated systems? Whose values are embodied in AI decision-making?

Your research could explore how AI systems might represent multiple valid perspectives on identity rather than imposing homogeneous categories. You might investigate how human-AI ensembles in privacy or security contexts could enhance rather than replace human judgment, or how citizens interpret and make meaning of AI-driven identity verification in ways that differ from designers' intentions. Potential directions include developing interpretive frameworks for identity systems that respect cultural diversity, creating collaborative human-AI approaches that preserve human agency, or examining how participatory design methods can enable communities to shape the AI systems that capture and share identity information. 
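To make this concrete, the sketch below is a minimal Python illustration, not a proposed system: every name, value and threshold in it is hypothetical. It stores identity attributes as claims qualified by the context asserting them, and the automated component escalates to a human reviewer whenever those perspectives diverge, rather than collapsing them into a single canonical category.

```python
from dataclasses import dataclass, field

# Toy model: identity as a set of context-qualified claims rather than
# one canonical record. All names and thresholds are hypothetical.

@dataclass
class IdentityClaim:
    attribute: str     # e.g. "name", "nationality"
    value: str         # the value asserted in one context
    context: str       # the community or record system asserting it
    confidence: float  # how strongly that context supports the value

@dataclass
class PluralIdentity:
    claims: list = field(default_factory=list)

    def perspectives(self, attribute: str) -> list:
        """Return every valid perspective on an attribute, not just one."""
        return [c for c in self.claims if c.attribute == attribute]

def decide(identity: PluralIdentity, attribute: str) -> str:
    """Human-AI ensemble: automate only when all perspectives agree confidently."""
    views = identity.perspectives(attribute)
    values = {c.value for c in views}
    if len(values) == 1 and all(c.confidence >= 0.9 for c in views):
        return f"auto-accept: {values.pop()}"
    # Divergent or weak perspectives: preserve human agency by escalating.
    return "defer to human reviewer: " + "; ".join(
        f"{c.value} ({c.context}, {c.confidence:.2f})" for c in views)

person = PluralIdentity([
    IdentityClaim("name", "Seán Ó Briain", "civil registry", 0.95),
    IdentityClaim("name", "John O'Brien", "anglicised workplace record", 0.80),
])
print(decide(person, "name"))  # defers: two valid perspectives on one name
```

The design point is that disagreement between perspectives becomes a first-class outcome routed to human judgment, rather than being silently resolved by the machine.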

This theme addresses a critical gap: AI systems increasingly respond to identity information and are asked to manage privacy and security concerns, yet they often lack frameworks for interpreting the cultural complexity these domains entail. The UK's 2023 Online Safety Act has made questions such as age verification pressing, whilst the 2022 National Cyber Strategy and the 2024 Digital Identity and Attributes Trust Framework are designed to reach millions of citizens; the effectiveness of all of these depends on engaging meaningfully with diverse human contexts. Your work could help ensure that AI systems in these sensitive domains enhance human capabilities whilst reflecting the plurality and contextual nuance essential to democratic societies.

Place-based and Regional Context 

The North East presents exceptional research opportunities at the intersection of digital transformation and social challenge. The region has the UK's second-highest proportion of internet non-users (12.1%), and 20.7% of its population is over 65 (projected to reach 25% by 2043), creating acute digital identity and security needs. Yet major investment is transforming the landscape: a £4.2 billion devolution deal over 30 years, a £10 billion Blackstone investment in the AI Growth Zone at Cambois, and Sunderland's award-winning Smart City programme, whose 30 Digital Health Hubs served 465,000 visits in 10 months. In 2025 the North East Combined Authority (NECA) announced £30 billion of investment to establish the North East as an ‘AI Growth Zone’.

However, stark gaps persist: only 18% of North East businesses have integrated AI (versus 37% in London), 33% of Sunderland residents are digitally excluded, and 65% live in the most deprived areas. The region's £750 million pharmaceutical sector, £2 billion digital sector, and 3.1 million NHS patients all face cybersecurity, digital identity, and AI adoption challenges. This combination of vulnerable populations, major industry, emerging AI infrastructure, and active digital transformation creates ideal conditions for research into how AI systems can engage meaningfully with diverse cultural contexts and community needs rather than imposing homogeneous technical solutions.

Relevant Partner Organisations 

Your research could benefit from partnerships spanning government, industry, and community sectors. Ofcom is the UK's regulator for the Online Safety Act. The National Cyber Security Centre (NCSC) recognises Northumbria as an Academic Centre of Excellence and coordinates with the Department for Science, Innovation and Technology (DSIT) on the £1 billion AI investment and the Cyber Security Bill. The Cabinet Office's GOV.UK One Login serves over 11 million users, whilst the Department for Work and Pensions handles identity verification for 20 million customers through its Dynamic Trust Hub.

Industry partners include Yoti (14 million wallet downloads), which pioneers privacy-preserving biometric identity verification. Northumbria Police and Cleveland Police operate the Regional Cyber Crime Unit with established university research collaborations. Newcastle City Council (UK Smart City 2019) deploys IoT sensors and digital services across the city. CyberNorth CIC operates cybersecurity innovation centres, whilst Digital Safety CIC develops AI-powered cyber range training platforms through Innovate UK funding. 
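As a way of picturing what privacy-preserving attribute verification aims at, the following toy Python sketch shows selective disclosure for an age check: the issuer signs only a derived "over 18" attribute, so a relying party can verify age without ever seeing a date of birth. This is an illustration only, not Yoti's protocol, any partner's API, or production cryptography; the key handling is deliberately simplified.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for the demo; real systems use asymmetric
# keys (PKI) so that verifiers cannot forge tokens.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_token(is_adult: bool) -> dict:
    """Identity provider signs only the derived attribute, never the DOB."""
    claim = json.dumps({"over_18": is_adult}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_token(token: dict) -> bool:
    """Relying party checks integrity, then reads the one disclosed attribute."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    return json.loads(token["claim"]).get("over_18", False)

token = issue_age_token(True)
print(verify_age_token(token))  # True: age proven, date of birth never shared
```

A deployed scheme would add digital signatures, audited key management, and standards such as verifiable credentials; the sketch captures only the selective-disclosure idea of sharing the minimum identity information a decision needs.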

Related Reading 

Human-Centered Explainable AI 

  • Cenci (2025). Citizen Science and Negotiating Values in the Ethical Design of AI-Based Technologies Targeting Vulnerable Individuals. AI and Ethics, 1-19.
  • Ehsan et al. (2022). Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI. CHI '22 
  • Kaur et al. (2022). Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory. FAccT '22 
  • Kim et al. (2023). "Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction. CHI '23 
  • Miller (2023). Explainable AI is Dead, Long Live Explainable AI! FAccT '23 

Trust and Appropriate Reliance 

  • Being Trustworthy is Not Enough: How Untrustworthy AI Can Deceive the End-Users and Gain Their Trust. CSCW '23
  • Should Users Trust Advanced AI Assistants? Justified Trust As a Function of Competence and Alignment. FAccT '24
  • Impact of Model Interpretability and Outcome Feedback on Trust in AI. CHI '24
  • Rong et al. (2023). Exploring the Effects of Human-centered AI Explanations on Trust and Reliance. Frontiers in Computer Science 
  • Sigfrids et al. (2023). Human-Centricity in AI Governance: A Systemic Approach. Frontiers in Artificial Intelligence, 6, 976887.

Algorithmic Fairness and Public Administration 

  • Why the Fine, AI? The Effect of Explanation Level on Citizens' Fairness Perception of AI-based Discretion in Public Administrations. CHI '24
  • Ghasemaghaei et al. (2025). Ethics in the Age of Algorithms: Unravelling the Impact of Algorithmic Unfairness on Data Analytics Recommendation Acceptance. Information Systems Journal 
  • Lewicki et al. (2023). Out of Context: Investigating the Bias and Fairness Concerns of "Artificial Intelligence as a Service". CHI '23 

Identity Representation and Bias 

  • Products of Positionality: How Tech Workers Shape Identity Concepts in Computer Vision. CHI '24
  • Yang (2025). Racial Bias in AI-Generated Images. AI & SOCIETY, 1-13.

Participatory Design and Citizen Engagement 

  • Bratteteig & Verne (2018). Does AI Make PD Obsolete? Exploring Challenges from Artificial Intelligence to Participatory Design. PDC '18 
  • Delgado et al. (2023). The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. EAAMO '23
  • Joshi et al. (2024). Investigating How Generative AI Affects Decision-Making in Participatory Design: Shifting the Space to Make Design Choices. NordiCHI '24

Organizational Accountability and Auditing 

  • Deng et al. (2023). Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice. FAccT '23 
  • Deng et al. (2023). Understanding Practices, Challenges, and Opportunities for User-Engaged Algorithm Auditing in Industry Practice. CHI '23 
  • Accountability in Algorithmic Systems: From Principles to Practice. CHI '23

AI-Generated Threats and Disinformation 

  • The Role of Explainability in Collaborative Human-AI Disinformation Detection. FAccT '24
  • Ahmad et al. (2022). A Systematic Literature Review on Fake News in the COVID-19 Pandemic: Can AI Propose a Solution? Applied Sciences 

Human-AI Decision Making 

  • Lai et al. (2023). Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies. FAccT '23 
  • System Safety and Artificial Intelligence. FAccT '22

Privacy and Surveillance 

  • Saheb (2023). Ethically Contentious Aspects of Artificial Intelligence Surveillance: A Social Science Perspective. AI and Ethics 
  • Canhoto et al. (2023). Snakes and Ladders: Unpacking the Personalisation-Privacy Paradox in the Context of AI-Enabled Personalisation. Information Systems Frontiers 
  • Lee et al. (2024). Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. CHI '24

Cybersecurity and AI Applications 

  • Kaur et al. (2023). Artificial Intelligence for Cybersecurity: Literature Review and Future Research Directions. Information Fusion 
  • Malatji & Tolah (2025). Artificial Intelligence (AI) Cybersecurity Dimensions: A Comprehensive Framework for Understanding Adversarial and Offensive AI. AI and Ethics, 5(2), 883-910.
  • Ray (2023). ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope. Internet of Things and Cyber-Physical Systems 

Generative AI and Emerging Technologies 

  • Park et al. (2023). Generative Agents: Interactive Simulacra of Human Behavior. UIST '23 
  • User Experience Design Professionals' Perceptions of Generative Artificial Intelligence. CHI '24

UK Government Policy and Strategy 

  • UK Government (2025). AI Opportunities Action Plan. Department for Science, Innovation and Technology 
  • UK Government (2023). A Pro-innovation Approach to AI Regulation: White Paper. Department for Science, Innovation and Technology 
  • UK Government (2022). National Cyber Strategy 2022. Cabinet Office
  • Office for Digital Identities and Attributes (2024). UK Digital Identity and Attributes Trust Framework Gamma 0.4 

Foundational Reviews and Reports 

  • Kim et al. (2024). Understanding Human-Centred AI: A Review of its Defining Elements and a Research Agenda. Behaviour & Information Technology 
  • National Academies of Sciences, Engineering, and Medicine (2022). Human-AI Teaming: State-of-the-Art and Research Needs 
  • Hemment, D., Kommers, C., et al. (2025). Doing AI Differently: Rethinking the Foundations of AI via the Humanities. The Alan Turing Institute
