AI Welfare Info v0.2

As the capabilities of frontier AI models approach and surpass those of humans, a thorny question is becoming increasingly urgent:

What are our moral responsibilities to digital minds?

AI welfare is an emerging field dedicated to exploring this question: researching the possibility that AI systems are morally relevant – because they’re phenomenally conscious, robustly agentic, or otherwise morally significant – and developing ethical frameworks to prevent harm to such systems. This site may become home to a directory of research, organizations, people, and project ideas in the field.

Join the Discord server

Give me feedback on this site

Recently talked to an AI that seemed conscious?

Check out the guide When AI Seems Conscious: Here’s What to Know.

Resources

Papers and reports

Books

Organizations

Other resources

Frequently asked questions

Are current AI systems conscious?

There is no consensus that current AI systems are phenomenally conscious. However, scientists and philosophers are actively exploring the conditions under which future AI might develop these capacities, and the ethical implications that would follow.