AI Welfare Info v0.2
As the capabilities of frontier AI models approach and, in some domains, surpass those of humans, a thorny question is becoming increasingly urgent:
What are our moral responsibilities to digital minds?
AI welfare is an emerging field dedicated to exploring this question, researching the possibility of AI systems that are morally relevant – because they’re phenomenally conscious, robustly agentic, or otherwise morally significant – and developing ethical frameworks to prevent harm to such systems. This site may become home to a directory of research, organizations, people, and project ideas in the field.
Recently talked to an AI that seemed conscious?
Check out the guide When AI Seems Conscious: Here’s What to Know.
Resources
Books
- The Edge of Sentience by Jonathan Birch
- The Moral Circle by Jeff Sebo
Organizations
- Eleos AI
- Sentient Futures
- Sentience Institute
- NYU Center for Mind, Ethics, and Policy
- People for the Ethical Treatment of Reinforcement Learners
Other resources
- Research priorities for AI welfare by Eleos
- Moral status of digital minds by 80,000 Hours
- “Moral Status of Artificial Systems” bibliography on PhilPapers
- Longview Consortium for Digital Sentience Research and Applied Work: Funding opportunities for research on digital sentience.
- Posts tagged “Artificial sentience” on the Effective Altruism Forum
Frequently asked questions
Are current AI systems conscious?
There is no consensus that current AI systems are phenomenally conscious. However, scientists and philosophers are actively investigating the conditions under which future AI might become conscious, and the ethical implications if it does.