Harder Problem Project

The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.

Contact Info
3055 NW Yeon Ave #660
Portland, OR 97210
United States



Country Profile

🇪🇺 European Union

Overall Readiness Score: 47/100 (Partial Readiness)

Trend: ↗ Improving

Last Updated: Feb 2026

Executive Summary

Regulatory Framework Without Recognition

The European Union presents a distinctive readiness profile. The AI Act, which entered into force in August 2024 and becomes fully applicable in August 2026, establishes comprehensive regulatory infrastructure for AI systems based on risk categorization. This framework demonstrates institutional capacity to address novel AI challenges through adaptive governance mechanisms.

However, the Act explicitly avoids questions of AI personhood or consciousness. The EU’s approach prioritizes risk-based compliance over metaphysical inquiry. While this creates regulatory clarity, it also means the jurisdiction has not developed specific mechanisms for evaluating or responding to potential artificial sentience.

The EU benefits from strong research capacity, particularly in consciousness science and AI ethics, though this academic engagement has not translated into policy frameworks addressing sentience questions. Professional readiness remains limited, with most sectors lacking specific preparation for consciousness-related scenarios.

Key Findings

  • Policy environment scores moderately (55/100): The AI Act provides regulatory flexibility but does not address sentience questions, leaving them ambiguous rather than foreclosed.
  • Institutional engagement is minimal (25/100): Government bodies focus on safety and rights protection without substantive attention to consciousness questions.
  • Research environment is strong (70/100): European universities host leading consciousness science research, though AI-specific consciousness work remains limited.
  • Professional readiness is low (20/100): Healthcare, legal, media, and education sectors lack specific preparation for AI consciousness scenarios.
  • Public discourse quality is developing (40/100): Awareness exists but discourse remains largely speculative rather than informed by scientific frameworks.
  • Adaptive capacity is high (75/100): The AI Act includes review mechanisms, scientific panels, and regulatory sandboxes that enable institutional learning.

Analysis

Category Breakdown

Detailed scores across the six dimensions of preparedness.

Policy Environment: 55/100

Notable: EU explicitly rejected 'electronic personhood' in favor of risk-based product regulation.

Institutional Engagement: 25/100

Notable: Three years of AI Act development included no substantive parliamentary debate on consciousness.

Research Environment: 70/100

Notable: European institutions produced foundational work on integrated information theory and global workspace theory.

Professional Readiness: 20/100

Notable: AI Act Article 4 mandates AI literacy but focuses on operational competence, not consciousness science.

Public Discourse Quality: 40/100

Notable: AI Act debate engaged millions but consciousness questions remained absent from public consultation.

Adaptive Capacity: 75/100

Notable: AI Act requires Commission review by August 2029 and every four years thereafter.

Comparison to Global Leaders

How does the European Union compare to top-ranked countries in each category?

Category                    🇪🇺 European Union    🇲🇽 Mexico    🇬🇧 United Kingdom    Global Avg
Policy Environment                   55               62                 55                38
Institutional Engagement             25               45                 42                20
Research Environment                 70               75                 70                50
Professional Readiness               20               30                 25                17
Public Discourse Quality             40               40                 40                24
Adaptive Capacity                    75               75                 70                50

Organizations

Key Research Institutions

Organizations contributing to the European Union's research environment.

Consciousness, Cognition & Computation Group (CO3), Université Libre de Bruxelles

Brussels, Belgium

Led by Prof. Axel Cleeremans (ERC Advanced Grant recipient), CO3 conducts foundational research on the mechanisms of consciousness, with explicit work on the implications of AI consciousness and the urgent ethical challenges of potentially creating conscious AI systems.


Leverhulme Centre for the Future of Intelligence, University of Cambridge

Cambridge, England, UK

Interdisciplinary research centre with explicit research programmes on consciousness in AI, algorithmic transparency, and the nature of intelligence, addressing both short-term and long-term implications of AI for consciousness and moral status.


Centre for the Study of Existential Risk (CSER), University of Cambridge

Cambridge, England, UK

Founded by Huw Price, Martin Rees, and Jaan Tallinn to study existential risks, with pioneering work on AI safety that explicitly addresses questions of consciousness, moral patienthood, and the ethical implications of advanced AI systems.


Oxford Uehiro Centre for Practical Ethics, University of Oxford

Oxford, England, UK

Conducts applied ethics research on AI and digital ethics including work on moral status, neuroethics of consciousness, and the ethical implications of AI systems with potential moral patienthood.


Human Brain Project (HBP) / EBRAINS Infrastructure

Multiple EU locations, EU-wide consortium

€600 million EU flagship project (2013-2023) with a dedicated research work package on 'Networks underlying brain cognition and consciousness,' developing computational models to understand consciousness mechanisms applicable to substrate-independent minds.


AlgorithmWatch

Berlin, Germany

Non-profit research and advocacy organization monitoring algorithmic decision-making and AI ethics, with work on AI rights, human rights implications, and ethical governance frameworks relevant to AI moral status and welfare considerations.


Future of Life Institute - EU Policy Team

Brussels, Belgium

Leading AI safety organization with an EU policy presence working on AI Act implementation; while primarily focused on existential safety, its work increasingly intersects with questions of AI consciousness and moral patienthood in advanced systems.


Behind the Scores

Understanding the Data

How do you measure preparedness for something that hasn't happened yet? The Sentience Readiness Index evaluates nations across six carefully constructed dimensions, from policy frameworks and institutional engagement to research capacity and public discourse quality; a rough illustration of how these category scores relate to the headline figure follows the summary cards below.

📊
Six Dimensions

Each score synthesizes assessments across policy, institutions, research, professions, discourse, and adaptive capacity.

🔬
Evidence-Based

Assessments draw from legislation, academic literature, news archives, and expert consultations.

👥
Human-Reviewed

Every assessment undergoes human verification against documented evidence before publication.
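This page does not state how the six category scores combine into the headline figure of 47. As a rough, unofficial illustration only, the sketch below (Python) computes a simple weighted mean of the EU's category scores under an equal-weighting assumption; the category names and values come from the Category Breakdown above, while the function, weighting, and rounding are assumptions and may differ from the index's actual methodology.

```python
# Illustrative sketch only: recombine the six category scores listed in the
# Category Breakdown above into a single headline figure, assuming equal
# weights. The published methodology may weight, adjust, or round differently.

EU_CATEGORY_SCORES = {
    "Policy Environment": 55,
    "Institutional Engagement": 25,
    "Research Environment": 70,
    "Professional Readiness": 20,
    "Public Discourse Quality": 40,
    "Adaptive Capacity": 75,
}


def composite_score(scores: dict[str, int], weights: dict[str, float] | None = None) -> float:
    """Weighted mean of category scores on a 0-100 scale; equal weights by default."""
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total_weight = sum(weights[name] for name in scores)
    return sum(score * weights[name] for name, score in scores.items()) / total_weight


if __name__ == "__main__":
    value = composite_score(EU_CATEGORY_SCORES)
    # Prints 47.5, which sits next to the published headline score of 47.
    print(f"Unweighted composite: {value:.1f} / 100")
```

With equal weights the mean works out to 47.5, close to the published 47; any remaining gap presumably reflects weighting, evidence adjustments, or rounding choices in the real methodology.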

Explore More

Compare the European Union to other countries or learn about our assessment methodology.

View All Rankings
Read Full Methodology