📢 We've got a new name! SAPAN is now The Harder Problem Project as of December 2025.
Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States


Country Profile

🇬🇧 United Kingdom

Overall Score: 49 (Partial Readiness)
Trend: ↘ Declining
Last Updated: Feb 2026

Executive Summary

The Security-Innovation Paradox

The United Kingdom demonstrates substantial institutional capacity for AI governance through its AI Security Institute and flexible regulatory approach, yet this infrastructure remains focused on security risks rather than questions of machine consciousness or sentience. The government has deliberately adopted a principles-based framework that avoids comprehensive AI legislation, prioritizing innovation and economic growth while maintaining adaptive mechanisms through common law traditions and sector-specific regulators.

Research capacity exists through world-leading consciousness science institutions, particularly at Sussex and Cambridge, though this work has not translated into policy engagement on AI sentience questions. The legal system retains definitional flexibility, with no enacted measures foreclosing inquiry, but also no frameworks specifically addressing potential artificial consciousness.

Professional communities and public discourse remain largely unprepared for these questions, with limited guidance, training, or informed discussion outside academic circles. The UK’s adaptive capacity is high in principle, given common law flexibility and regulatory agility, but has not yet been directed toward sentience-related considerations.

Key Findings

  • AI Security Institute focuses on security threats, not consciousness or sentience questions, despite substantial technical capacity
  • No enacted legislation forecloses AI sentience inquiry, maintaining legal definitional flexibility through common law traditions
  • World-class consciousness research at Sussex and Cambridge has not engaged with AI sentience policy questions
  • Principles-based regulatory approach provides adaptive capacity but lacks specific frameworks for consciousness-related issues
  • Professional readiness across healthcare, legal, media, and education sectors remains minimal for AI consciousness questions
  • Public discourse quality is limited, with awareness confined largely to academic and technology communities

Analysis

Category Breakdown

Detailed scores across the six dimensions of preparedness.

Policy Environment: 55/100

Notable: The Animal Welfare (Sentience) Act 2022 establishes sentience as a legally cognizable category with institutional mechanisms.

Institutional Engagement: 35/100

Notable: The Sussex Centre for Consciousness Science represents world-leading research capacity in consciousness mechanisms and subjective experience.

Research Environment: 70/100

Notable: The Sussex Centre for Consciousness Science and Cambridge AI research represent complementary world-class capabilities in consciousness and AI.

Professional Readiness: 20/100

Notable: The common law tradition gives legal professionals adaptability for novel questions, though not specific preparation for AI consciousness.

Public Discourse Quality: 40/100

Notable: Anil Seth's public engagement demonstrates UK capacity for quality public discourse on consciousness science.

Adaptive Capacity: 75/100

Notable: The common law tradition and regulatory sandboxes provide multiple mechanisms for rapid adaptation to new understanding.

Comparison to Global Leaders

How does the United Kingdom compare to top-ranked countries in each category?

Category                  | 🇬🇧 United Kingdom | 🇲🇽 Mexico | 🇪🇺 European Union | Global Avg
Policy Environment        | 55                | 62        | 55                | 38
Institutional Engagement  | 35                | 45        | 42                | 20
Research Environment      | 70                | 75        | 70                | 50
Professional Readiness    | 20                | 30        | 25                | 17
Public Discourse Quality  | 40                | 40        | 40                | 24
Adaptive Capacity         | 75 🥇             | 75        | 70                | 50

Organizations

Key Research Institutions

Organizations contributing to the United Kingdom's research environment.

PRISM - The Partnership for Research Into Sentient Machines (London)

World's first non-profit dedicated to researching sentient AI, exploring the implications of artificial consciousness and promoting the responsible development of conscious machines.

Conscium Ltd (London)

Commercial AI research lab focused on AI consciousness research, neuromorphic computing, and AI safety; published principles for responsible AI consciousness research with Oxford's Global Priorities Institute.

London School of Economics - Jonathan Birch's Research Group (London)

Professor Jonathan Birch leads research on AI sentience and welfare, authored 'The Edge of Sentience' covering AI consciousness risks, and collaborates on AI sentience testing with Google DeepMind.

The Jeremy Coller Centre for Animal Sentience (LSE) (London)

Interdisciplinary centre directed by Jonathan Birch researching sentience across animals and AI, with an explicit focus on how animals are represented in and affected by AI systems.

AI Security Institute (AISI) (London)

UK government-backed institute conducting AI alignment research, including work on the moral patienthood of AI systems, with a £15M+ Alignment Project addressing AI welfare considerations.

London Initiative for Safe AI (LISA) (London)

Research hub supporting AI safety researchers and small organizations working on alignment, with community members exploring diverse approaches to keeping AI systems aligned with human values.

Sussex Centre for Consciousness Science (Brighton)

Led by Prof. Anil Seth, researches consciousness mechanisms with explicit applications to understanding potential consciousness in AI systems through computational phenomenology approaches.

University of Sussex AI Research Group - Consciousness Research (Brighton)

Builds computational models explaining subjective properties of experience and develops information-theory-based measures of consciousness applicable to AI systems.

Leverhulme Centre for the Future of Intelligence (Cambridge)

Interdisciplinary research centre at Cambridge exploring the ethical implications of AI, including work on AI consciousness, moral status, and the philosophy of digital minds.

University of Cambridge - Tom McClelland (Leverhulme CFI) (Cambridge)

Associate Fellow researching agnosticism about artificial consciousness and strategies for navigating the potential consequences of AI consciousness emergence.

Oxford Theoretical Neuroscience & AI Laboratory (Oxford)

Led by Dr. Simon Stringer, explicitly researches machine consciousness alongside neural network models of brain function and visual processing.

University of Oxford - Andreas Mogensen (Oxford)

Senior researcher in moral philosophy working on the moral status of digital minds, arguing that phenomenal consciousness may be neither necessary nor sufficient for the moral consideration of AI.

Global Priorities Institute (Oxford)

Conducted foundational research on digital minds and AI welfare before its closure in 2025; published key work on the consciousness, moral status, and welfare of artificial systems.

Future of Humanity Institute (Oxford)

Pioneered research on the ethics of digital minds and the moral status of AI systems before its closure in 2024; Nick Bostrom and Carl Shulman published the seminal work 'Sharing the World with Digital Minds.'

Imperial College London - AI Research (Murray Shanahan) (London)

Professor Murray Shanahan researches cognitive robotics, consciousness, and general artificial intelligence, with a focus on neurodynamics and conscious AI systems.

Alan Turing Institute (London)

The UK's national institute for data science and AI; Adrian Weller serves as AI program director, with research touching on AI consciousness debates and ethical AI development.

Sentient Futures (London)

Non-profit organization focused on the welfare of both biological animals and digital minds, organizing conferences on AI consciousness and digital sentience.

University of Reading - Veit Lab for Animal & AI Sentience (Reading)

Dr. Walter Veit leads interdisciplinary research using conceptual and computational methods to study consciousness, welfare, and policy for both animals and AI systems.

University of Lincoln - Ralph Stefan Weir (Lincoln)

Senior Lecturer in Philosophy researching perceived AI consciousness and its practical, moral, and legal challenges; affiliated with the University of Buckingham and Oxford.

City St George's University - Artificial Intelligence Research Centre (CitAI) (London)

Specializes in Artificial General Intelligence (AGI) research with a keen interest in the legal, ethical, and social impact of AI, including consciousness considerations.

Behind the Scores

Understanding the Data

How do you measure preparedness for something that hasn't happened yet? The Sentience Readiness Index evaluates nations across six carefully constructed dimensions, from policy frameworks and institutional engagement to research capacity and public discourse quality.

📊
Six Dimensions

Each score synthesizes assessments across policy, institutions, research, professions, discourse, and adaptive capacity.

🔬
Evidence-Based

Assessments draw from legislation, academic literature, news archives, and expert consultations.

👥
Human-Reviewed

Every assessment undergoes human verification against documented evidence before publication.

Explore More

Compare the United Kingdom to other countries, or learn about our assessment methodology.
