The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
The United Kingdom demonstrates substantial institutional capacity for AI governance through its AI Security Institute and flexible regulatory approach, yet this infrastructure remains focused on security risks rather than questions of machine consciousness or sentience. The government has deliberately adopted a principles-based framework that avoids comprehensive AI legislation, prioritizing innovation and economic growth while maintaining adaptive mechanisms through common law traditions and sector-specific regulators.
Research capacity exists through world-leading consciousness science institutions, particularly at Sussex and Cambridge, though this work has not translated into policy engagement on AI sentience questions. The legal system retains definitional flexibility, with no enacted measures foreclosing inquiry, but also no frameworks specifically addressing potential artificial consciousness.
Professional communities and public discourse remain largely unprepared for these questions, with limited guidance, training, or informed discussion outside academic circles. The UK’s adaptive capacity is high in principle, given common law flexibility and regulatory agility, but has not yet been directed toward sentience-related considerations.
Detailed scores across the six dimensions of preparedness.
- Notable: The Animal Welfare (Sentience) Act 2022 establishes sentience as a legally cognizable category with institutional mechanisms.
- Notable: The Sussex Centre for Consciousness Science represents world-leading research capacity in consciousness mechanisms and subjective experience.
- Notable: The Sussex Centre for Consciousness Science and Cambridge AI research represent complementary world-class capabilities in consciousness and AI.
- Notable: The common law tradition gives legal professionals adaptability for novel questions, though not specific preparation for AI consciousness.
- Notable: Anil Seth's public engagement demonstrates UK capacity for quality public discourse on consciousness science.
- Notable: The common law tradition and regulatory sandboxes provide multiple mechanisms for rapid adaptation to new understanding.
How does the United Kingdom compare to top-ranked countries in each category?
| Category | 🇬🇧 United Kingdom | 🇲🇽 Mexico | 🇪🇺 European Union | Global Avg |
|---|---|---|---|---|
| Policy Environment | 55 | 62 | 55 | 38 |
| Institutional Engagement | 35 | 45 | 42 | 20 |
| Research Environment | 70 | 75 | 70 | 50 |
| Professional Readiness | 20 | 30 | 25 | 17 |
| Public Discourse Quality | 40 | 40 | 40 | 24 |
| Adaptive Capacity | 75 🥇 | 75 | 70 | 50 |
Organizations contributing to the United Kingdom's research environment.
- London: World's first non-profit dedicated to researching sentient AI, exploring the implications of artificial consciousness and promoting responsible development of conscious machines.
- London: Commercial AI research lab focused on AI consciousness research, neuromorphic computing, and AI safety; published principles for responsible AI consciousness research with Oxford's Global Priorities Institute.
- London: Professor Jonathan Birch leads research on AI sentience and welfare, authored *The Edge of Sentience* covering AI consciousness risks, and collaborates on AI sentience testing with Google DeepMind.
- London: Interdisciplinary centre directed by Jonathan Birch researching sentience across animals and AI, with an explicit focus on how animals are represented in and affected by AI systems.
- London: UK government-backed institute conducting AI alignment research, including work on the moral patienthood of AI systems, with a £15M+ Alignment Project addressing AI welfare considerations.
- London: Research hub supporting AI safety researchers and small organizations working on alignment, with community members exploring diverse approaches to ensuring AI systems remain aligned with human values.
- Brighton: Led by Prof. Anil Seth, researches consciousness mechanisms with explicit applications to understanding potential consciousness in AI systems through computational phenomenology approaches.
- Brighton: Builds computational models explaining subjective properties of experience and develops information theory-based measures of consciousness applicable to AI systems.
- Cambridge: Interdisciplinary research centre at Cambridge exploring the ethical implications of AI, including work on AI consciousness, moral status, and the philosophy of digital minds.
- Cambridge: Associate Fellow researching agnosticism about artificial consciousness and strategies for navigating the potential consequences of AI consciousness emergence.
- Oxford: Led by Dr. Simon Stringer, explicitly researches machine consciousness alongside neural network models of brain function and visual processing.
- Oxford: Senior researcher in moral philosophy working on the moral status of digital minds, arguing that phenomenal consciousness may be neither necessary nor sufficient for moral consideration of AI.
- Oxford: Conducted foundational research on digital minds and AI welfare before its closure in 2025; published key work on the consciousness, moral status, and welfare of artificial systems.
- Oxford: Pioneered research on the ethics of digital minds and the moral status of AI systems before its closure in 2024; Nick Bostrom and Carl Shulman published the seminal work 'Sharing the World with Digital Minds'.
- London: Professor Murray Shanahan researches cognitive robotics, consciousness, and general artificial intelligence, with a focus on neurodynamics and conscious AI systems.
- London: The UK's national institute for data science and AI; Adrian Weller serves as AI program director, with research touching on AI consciousness debates and ethical AI development.
- London: Non-profit organization focused on the welfare of both biological animals and digital minds, organizing conferences on AI consciousness and digital sentience.
- Reading: Dr. Walter Veit leads interdisciplinary research using conceptual and computational methods to study consciousness, welfare, and policy for both animals and AI systems.
- Lincoln: Senior Lecturer in Philosophy researching perceived AI consciousness and its practical, moral, and legal challenges; affiliated with the University of Buckingham and Oxford.
- London: Specializes in Artificial General Intelligence (AGI) research, with a keen interest in the legal, ethical, and social impact of AI, including consciousness considerations.
How do you measure preparedness for something that hasn't happened yet? The Sentience Readiness Index evaluates nations across six carefully constructed dimensions, from policy frameworks and institutional engagement to research capacity and public discourse quality.
Each score synthesizes assessments across policy, institutions, research, professions, discourse, and adaptive capacity.
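As an illustration of how the six dimension scores might combine into a single figure, the sketch below takes an unweighted mean of the UK values from the comparison table above. The Index's actual aggregation method and weights are not stated here, so equal weighting is purely an assumption for this example.

```python
# Illustrative only: composite score as an unweighted mean of the six
# dimension scores for the UK from the comparison table. The Index's
# real weighting scheme is not specified here; equal weights are an
# assumption made for this sketch.
uk_scores = {
    "Policy Environment": 55,
    "Institutional Engagement": 35,
    "Research Environment": 70,
    "Professional Readiness": 20,
    "Public Discourse Quality": 40,
    "Adaptive Capacity": 75,
}

composite = sum(uk_scores.values()) / len(uk_scores)
print(round(composite, 1))  # 49.2
```

Under this equal-weight assumption the UK's composite lands just under 50, pulled down by the low Professional Readiness score despite strong Adaptive Capacity.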
Assessments draw from legislation, academic literature, news archives, and expert consultations.
Every assessment undergoes human verification against documented evidence before publication.
Compare the United Kingdom to other countries or learn about our assessment methodology.