Jim continues his conversation with recurring guest
Forrest Landry on his arguments that continued AI development poses certain catastrophic risk to humanity. They discuss the liminal feeling of the current moment in AI, Rice’s theorem & the unknowability of alignment, the analogy & disanalogy of bridge-building, external ensemble testing, the emergence of a feedback curve, the danger of replacing human oversight with machine oversight, Eliezer Yudkowsky’s AI risk work, instrumental convergence risk, inequity issues, deepening multipolar traps, substrate needs convergence, environmental degradation, developing collective choice-making among humans, economic decoupling, the Luddite movement, fully automated luxury communism, the calculation problem, the principal-agent problem, corruption, agency through autonomous military devices, implicit agency, institutional design, the need for caring, hierarchy & transaction, care relationships at scale, using tech to correct the damages of tech, love as that which enables choice, institutions vs communities, techniques of discernment, enlivenment, empowering the periphery, and much more.
Forrest Landry is a philosopher, writer, researcher, scientist, engineer, craftsman, and teacher focused on metaphysics, the manner in which software applications, tools, and techniques influence the design and management of very large scale complex systems, and the thriving of all forms of life on this planet. Forrest is also the founder and CEO of Magic Flight. A third-generation master woodworker, he found that he had a unique set of skills in large-scale software systems design, which led him to work on the production of several federal classified and unclassified systems, including various FBI investigative projects, TSC, IDW, DARPA, the Library of Congress Congressional Records System, and many others.