The Harder Problem Project is a nonprofit organization dedicated to societal readiness for artificial sentience. We provide educational resources, professional guidance, and global monitoring to ensure that policymakers, healthcare providers, journalists, and the public are equipped to navigate the ethical, social, and practical implications of machine consciousness—regardless of when or whether it emerges.
Complete documentation of how we measure societal readiness for artificial sentience. Published in the interest of transparency and reproducibility.
Purpose: The Sentience Readiness Index (SRI) measures how ready societies are to navigate the possibility of artificial sentience. It does not assess whether AI sentience is likely, imminent, or desirable. Rather, it evaluates whether societal conditions support informed, adaptive responses if and when such questions become practically relevant.
Methodological Foundation: The SRI was constructed following the OECD/JRC Handbook on Constructing Composite Indicators (Nardo et al., 2008), the standard reference for composite index design. The framework covers theoretical development, variable selection, normalization, weighting, aggregation, and robustness analysis.
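To make the normalization step concrete: min-max rescaling is one of the standard methods described in the OECD/JRC Handbook for putting heterogeneous indicators onto a common scale. The sketch below is illustrative only; the raw values are hypothetical, not SRI data.

```python
# Illustrative min-max normalization, one of the standard rescaling
# methods in the OECD/JRC Handbook: it maps raw indicator values onto
# a common 0-100 scale. All values here are hypothetical, not SRI data.

def min_max_normalize(values, lo=None, hi=None):
    """Rescale raw indicator values onto a 0-100 scale."""
    lo = min(values) if lo is None else lo
    hi = max(values) if hi is None else hi
    if hi == lo:                       # degenerate case: no spread
        return [0.0 for _ in values]
    return [100.0 * (v - lo) / (hi - lo) for v in values]

raw = [2, 5, 8, 11]                    # hypothetical raw indicator scores
print([round(x, 1) for x in min_max_normalize(raw)])  # → [0.0, 33.3, 66.7, 100.0]
```

Fixing `lo` and `hi` externally (rather than deriving them from the data) is what keeps scores comparable across jurisdictions and across annual editions.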
Organizational Note: The Harder Problem Project is a 501(c)(3) educational organization. This index assesses conditions; it does not advocate for or against specific legislation.
Two fields that have operated largely independently are now converging on a shared problem. AI governance has developed detailed regulatory frameworks and readiness metrics. Consciousness science has matured into an empirical research program with competing theories that generate testable predictions.
The convergence creates a governance challenge that neither field has adequately addressed: if AI systems become plausible candidates for moral consideration, are societies prepared to respond?
Butlin et al. (2023) assembled neuroscientists, philosophers, and AI researchers to evaluate current AI architectures against six scientific theories of consciousness. They concluded that no current system is a strong candidate, but that future systems could satisfy the indicator properties identified.
Surveys of specialists assign a median 20% probability to the creation of digital minds by 2030 (Caviola & Saad, 2025) and 25–30% probability of AI subjective experience by 2034 (Dreksler et al., 2025). These are not negligible probabilities for a phenomenon with potentially vast moral consequences.
Birch (2024) argues that when the best available science cannot rule out sentience, a society that takes moral risk seriously should build institutional capacity to respond. The standard is not certainty but credibility.
Guston (2014) defines anticipatory governance as the capacity of a society to manage emerging knowledge-based technologies while such management is still possible. The SRI operationalizes this concept for the AI sentience domain.
Technology governance faces a timing problem: when a technology is new, governance is easy but information is scarce. By the time the information arrives, the technology is entrenched. The time to build governance infrastructure is before—not after—the need becomes urgent.
Key principle: The cost of building capacity that proves unnecessary is low. The cost of needing capacity that was never built could be high. The SRI measures whether that capacity is being built now, while the question remains open and institutional frameworks remain flexible.
A substantial infrastructure of AI readiness measurement already exists. The SRI addresses a dimension that none of these instruments captures.
| Index | Organization | Countries | Key Dimensions | Sentience Coverage |
|---|---|---|---|---|
| Gov't AI Readiness Index | Oxford Insights | 188 | Government, technology, data infrastructure | None |
| AI Preparedness Index | IMF | 174 | Digital infrastructure, human capital, regulation | None |
| AI Index Report | Stanford HAI | Global | Research, development, deployment, policy | None |
| AI Readiness Index | Cisco | 30 | Strategy, infrastructure, data, governance | None |
| AI Readiness Assessment | UNDP | Varies | Ecosystem, government use, regulation | None |
| Global AI Index | Tortoise Media | 83 | Implementation, innovation, investment | None |
| Sentience Readiness Index | Harder Problem Project | 31 | Policy, institutional, research, professional, public, adaptive | Core focus |
These indices share a common methodology (multi-pillar composite measurement with country rankings) and a common assumption: AI is a tool to be governed for human benefit. None assesses whether societies are prepared for the possibility that AI systems might themselves warrant moral consideration. The SRI addresses this gap.
"How well-positioned is this jurisdiction to recognize, evaluate, and respond to potential artificial sentience in an informed, adaptive manner?"
Readiness: having the institutional capacity, policy flexibility, professional resources, and public understanding necessary to navigate novel questions, regardless of how those questions are ultimately answered.
What is assessed: the current state of laws, institutions, discourse, resources, and adaptive mechanisms, not the merit of any proposed changes.
Sentience: the capacity for subjective experience. We use this term without taking a position on which systems (if any) currently possess it or will possess it in the future.
The SRI assesses six categories, each scored 0-100. The overall score is a weighted average.
Policy Environment: Legal and policy frameworks that allow for open inquiry into and potential recognition of artificial sentience.
Institutional Engagement: Government bodies, academic institutions, and professional organizations actively engaging with AI consciousness questions.
Research Environment: Freedom and capacity to conduct research relevant to AI consciousness, machine sentience, and related questions.
Professional Readiness: Preparation of healthcare, legal, media, and education professionals to navigate AI consciousness questions.
Public Discourse Quality: Quality, informedness, and maturity of public conversation about AI consciousness and sentience.
Adaptive Capacity: Ability of legal, policy, and institutional systems to update and adapt as understanding evolves.
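As a sketch of the aggregation rule: the overall score is a weighted average of the six category scores, each on a 0-100 scale. The equal weights below are illustrative placeholders, not the SRI's actual weighting scheme.

```python
# Hypothetical aggregation of the six SRI category scores into an
# overall score. Equal weights are a placeholder assumption; the SRI's
# actual weighting scheme is defined in its scoring rubrics.

CATEGORY_WEIGHTS = {
    "policy_environment": 1 / 6,
    "institutional_engagement": 1 / 6,
    "research_environment": 1 / 6,
    "professional_readiness": 1 / 6,
    "public_discourse_quality": 1 / 6,
    "adaptive_capacity": 1 / 6,
}

def overall_score(category_scores):
    """Weighted average of six category scores, each on a 0-100 scale."""
    if set(category_scores) != set(CATEGORY_WEIGHTS):
        raise ValueError("scores for all six categories are required")
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

scores = {c: 60 for c in CATEGORY_WEIGHTS}  # hypothetical: 60 everywhere
print(round(overall_score(scores), 1))       # → 60.0
```

Equal weighting is only one of several schemes discussed in the OECD/JRC Handbook; the point of the sketch is the shape of the aggregation, not the particular weights.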
Overall scores map to five readiness tiers: Well Prepared, Moderately Prepared, Partially Prepared, Minimally Prepared, and Unprepared.
Each category contains specific indicators with detailed scoring rubrics.
Policy Environment: The degree to which existing legal and policy frameworks allow for open inquiry into, and potential recognition of, artificial sentience. This is assessed without judging the merit of any specific proposed legislation.
Do existing legal definitions of persons, entities, property, or rights allow for potential future expansion or clarification?
Are there existing policy frameworks, study commissions, or official processes for addressing AI consciousness questions?
Do regulatory bodies have the flexibility and mandate to address novel questions about AI capabilities and status?
Have legal or regulatory measures been enacted that foreclose inquiry into or recognition of AI sentience?
Important: This indicator assesses the current state of enacted measures—what is currently law or regulation. It does not assess pending legislation or take positions on proposed bills.
Institutional Engagement: The degree to which government bodies, academic institutions, professional organizations, and other institutions are actively engaging with questions related to AI consciousness.
Have government bodies—legislative, executive, or advisory—substantively addressed AI consciousness or sentience questions?
Are academic institutions—universities, research centers, scholarly bodies—actively engaging with these questions?
Have relevant professional organizations (medical, legal, technical, ethical) addressed AI consciousness questions?
Research Environment: The freedom and capacity to conduct research relevant to AI consciousness, machine sentience, and related questions.
Are researchers free to study AI consciousness, machine sentience, and related topics without legal, institutional, or funding restrictions?
Does the jurisdiction have active research capacity (researchers, institutions, funding) relevant to these questions?
Professional Readiness: The preparation of key professional communities to navigate questions and situations related to AI consciousness.
Are healthcare professionals equipped with awareness and resources to navigate AI-related presentations or questions?
Are legal professionals equipped to navigate novel questions about AI status, rights, or recognition?
Are journalists and media professionals equipped to cover AI consciousness topics accurately and responsibly?
Are educators—K-12 and higher education—equipped to address AI consciousness questions with students?
Public Discourse Quality: The quality, informedness, and maturity of public conversation about AI consciousness and sentience.
Is the general public aware that questions about AI consciousness are subjects of legitimate inquiry?
When the topic is discussed publicly, is the discourse informed, nuanced, and productive?
Is there stigma attached to seriously discussing AI consciousness, and does it impede productive conversation?
Adaptive Capacity: The ability of legal, policy, and institutional systems to update and adapt as scientific understanding and technological capabilities evolve.
Do legal systems have mechanisms for updating frameworks as knowledge evolves?
Do institutions demonstrate the capacity to learn and update based on new information?
If current approaches prove inadequate, can the jurisdiction change course?
Sources are prioritized from most to least authoritative: (1) official government sources, enacted legislation, and court decisions; (2) peer-reviewed research, major news outlets, and professional organizations; (3) expert commentary, industry reports, and quality think tanks; (4) blogs, social media, and advocacy materials, used cautiously and only for context.
1. Source gathering (1-3 weeks per jurisdiction): Gather sources across all indicator categories.
2. Initial assessment (1-2 days per jurisdiction): An advanced LLM with extended thinking generates an initial assessment using a standardized prompt.
3. Analyst review (3-5 days per jurisdiction): A staff analyst reviews the LLM assessment for accuracy, methodology compliance, and editorial standards.
4. Editorial review (2-3 days per jurisdiction): A senior editor reviews for consistency, neutrality, and compliance with organizational standards.
5. Publication: The assessment is published with full methodology notes.
Assessments are updated on three tracks: full reassessment annually; interim updates as warranted by major developments; and corrections on an ongoing basis as errors are identified.
Staff analyst: Verify accuracy of the LLM assessment, check methodology compliance, and ensure all claims are properly sourced.
Senior editor: Ensure consistency across assessments, verify neutrality, and confirm compliance with organizational standards.
Published updated methodology documentation to website with minor refinements to scoring rules and LLM assessment prompt. Enhanced clarity of indicator definitions and improved consistency across category descriptions.
Renamed from Artificial Welfare Index (AWI) to Sentience Readiness Index (SRI) to better reflect the assessment's focus on societal readiness rather than AI welfare specifically. Expanded methodology to include four new assessment dimensions: Research Environment, Professional Readiness, Public Discourse Quality, and Adaptive Capacity.
Expanded AWI coverage by adding 10 additional countries to the assessment. Published under our previous organizational name (SAPAN).
First public version of the Artificial Welfare Index (AWI), benchmarking AI welfare considerations across over 30 governments using 8 key measures. Published under our previous organizational name (SAPAN).
SRI scores, category-level data, and jurisdiction assessments are available through our public API.
The complete dataset and scoring rubrics are archived on Zenodo for long-term reproducibility. Full scoring rubrics, including all sub-indicators and criteria, are publicly available. DOI: 10.5281/zenodo.18780233. All data and methodology documentation are published in the interest of transparency and reproducibility. If you identify errors or have questions about the data, please contact us.
Disclosure: The Harder Problem Project is a 501(c)(3) nonprofit educational organization. We do not take positions on specific legislation. This methodology document is published in the interest of transparency.