EP 181 Forrest Landry Part 1: AI Risk

Jim talks with recurring guest Forrest Landry about his arguments that continued AI development poses certain catastrophic risk to humanity. They discuss AI versus advanced planning systems (APS), the release of GPT-4, emergent intelligence from modest components, whether deep learning alone will produce AGI, Rice’s theorem & the impossibility of predicting alignment, the likelihood that humans try to generalize AI, why the upside of AGI is an illusion, agency vs intelligence, instrumental convergence, implicit agency, deterministic chaos, theories of physics as theories of measurement, the relationship between human desire and AI tools, an analogy with human-animal relations, recognizing & avoiding multipolar traps, an environment increasingly hostile to humans, technology & toxicity, short-term vs long-term risks, why there’s so much disagreement about AI risk, the substrate needs hypothesis, an inexorable long-term convergence process, why the only solution is avoiding the cycle, a boiling frog scenario, the displacement of humans, the necessity of understanding evolution, economic decoupling, non-transactional choices, the Forward Great Filter answer to the Fermi paradox, and much more.

Forrest Landry is a philosopher, writer, researcher, scientist, engineer, craftsman, and teacher focused on metaphysics, the manner in which software applications, tools, and techniques influence the design and management of very large-scale complex systems, and the thriving of all forms of life on this planet. Forrest is also the founder and CEO of Magic Flight. A third-generation master woodworker, he found that he also had a unique set of skills in large-scale software systems design, which led him to work on the production of several federal classified and unclassified systems, including various FBI investigative projects, TSC, IDW, DARPA, the Library of Congress Congressional Records System, and many others.