- Episode Transcript
- “A Minimum Viable Metaphysics,” by Jim Rutt (Substack)
- Jim’s Substack
- JRS Currents 080: Joe Edelman and Ellie Hain on Rebuilding Meaning
- Meaning Alignment Institute
- If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, by Eliezer Yudkowsky and Nate Soares
- “Full Stack Alignment: Co-aligning AI and Institutions with Thick Models of Value,” by Joe Edelman et al.
- “What Are Human Values and How Do We Align AI to Them?” by Oliver Klingefjord, Ryan Lowe, and Joe Edelman
Joe Edelman has spent much of his life trying to understand how ML systems and markets could change so as to retain their many benefits while avoiding their characteristic problems: atomization, and the servicing of shallow desires over deeper needs. Along the way this led him to formulate theories of human meaning and values (https://arxiv.org/abs/2404.10636), to study models of societal transformation (https://www.full-stack-alignment.ai/paper), to invent the meaning-based metrics used at CouchSurfing, Facebook, and Apple, to co-found the Center for Humane Technology and the Meaning Alignment Institute, and to design new democratic systems (https://arxiv.org/abs/2404.10636). He is currently one of the principal investigators leading the Full-Stack Alignment program at the Meaning Alignment Institute, with a network of more than 50 researchers at universities and corporate labs working on these issues.