Transcript of EP 238 – Sam Sammane on Humanity’s Role in an AI-Dominated Future

The following is a rough transcript which has not been revised by The Jim Rutt Show or Sam Sammane. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Sam Sammane. Sam is an entrepreneur, thinker, and writer focusing on artificial intelligence and dedicated to harnessing technological advancements for global posterity. Welcome, Sam.

Sam: Hello. Thank you for having me.

Jim: Yeah, this should be a very interesting conversation. You can learn more about Sam at his website, sammane.com. And as always, we’ll have other links available to other sources of his on the episode page at jimruttshow.com. Check it out after the show if you want to learn more about Sam and his work. Today, we’ll be talking about his book, The Singularity of Hope: Humanity’s Role in an AI Dominated Future. How did you come to be thinking about AI? It looks like you’ve done a bunch of other things in your career.

Sam: Yes, in fact, I started, when I did my PhD, I did it on verification of digital circuit design using technologies like automated reasoning, automated theorem proving, symbolic simulation and at that time, we didn’t have the generative AI of today. In fact, funny story, I checked the theories at that early level. I didn’t like them, so I stick to the old theory of that time and today, they proved me wrong and they did something impressive, what we call generative AI, of course. Not the AI of science fiction, which is completely different, but the generative AI is the new technology. When I have seen the advancement they have made and the hype in the media around us and how people are planning to use AI, I thought I have an obligation to write a book about it. For one reason, I have three girls and I want them to live in a world where AI is used for our advantage, not against us.

Jim: And that’s I think a noble goal and many of us that are working in and around AI share. I have recently became a grandfather, so that adds even more impetus to make sure we don’t screw the world up by some dumb-ass mistake. Now you mentioned generative AI and it is what’s getting a lot of the attention right now, but it’s not the only kind of AI there is. And in fact, many of the leading AI fathers, guys like Yann LeCun, Jeff Hinton, Gary Marcus, Melanie Mitchell, Joscha Bach, Ben Kurdziel, et cetera, all pretty much reject the idea that not only generative AI, but even the broader ideas of deep learning and transformer-based AI, while all the amazing things that they can do are really not the golden road to full AGI, et cetera. What do you think about that?

Sam: First of all, I don’t like the hype. When we talk about AGI, I will assure you, it’s a hype. We’ll never reach it soon for a simple reason. We are not doing serious research to achieve AGI. When I hear OpenAI talking about it, I have the feeling they are doing it for commercial reason, they are cooking something on, they will call it AGI, but it’ll be, let’s call it from the scientific perspective, generative AI++. I don’t believe it’s the real AGI defined as completely human-like artificial intelligence.

Jim: Yeah, with that, I’ll agree. I think that the hype that the current generative AI, deep learning-based AI is likely to lead to AGI seems to me unlikely though I will say, I think all of us were surprised by how much intelligent behavior could come out of these fairly simple-minded transformer models. But again, as I mentioned, that’s not the only road to AGI. One project I’ve followed for many years called OpenCog by Ben Goertzel, he’s actually the person that coined the term artificial general intelligence is taking a much broader approach, including both neural and symbolic approaches operating together synergistically. That might be a faster road to full AGI, though at the time being, our generative models continue to impress because they’re so simple. They’re so brute force. You apply computation, you apply data, you grind the shit out of it, you spend $60 million, and you have an artifact that can do some pretty amazing things

Sam: And you cheat because in reinforced learning, you use the human intuition and gut feeling that it’s some kind of superpower we have to do machine learning. So practically, it’s not really machine learning but that’s how they want to call it, and names matters. So when you call it machine learning and deep learning, people have some thought about it, but in reality there is a lot of cheating. Reinforced learning, it’s a very important part of that and it’s using human intuition to cheat, to tell the machine, “Oh, that’s good. That’s not good.”

Jim: Yeah, though, compared to the inputs on the deep learning side, the human reinforcement learning part is relatively modest. It’s 10 or 20,000 data points guiding a model of a hundred billion or a trillion parameters. One of the things you do talk about, and this is one of my pet interests, is that intuition and emotion are some of our human superpowers that may be difficult for, particularly the deep learning approaches to AGI to reach. Do you want to talk about that a little bit?

Sam: Yeah, we’ll talk about it in the meaning that we learn with love. Like when you have a baby, his attachment to his parents and the attachment of parents to him is the base of the learning because of this love and attachment we learn. And then another composition, I’m talking about intuition, it’s a very mysterious algorithm. We don’t know how it works for real. There is some tentative to do some, let’s call it heuristic or approaches to mimic it, but not really, not too much money spent on that. So I call it our superpower.

I think it’s still behind the algorithmic and it shows how human can learn from very small amount of data and how sometimes I will use intuition with education, and that’s what I call it in my book, educated guess. So educated guess, yet much more mysterious because you navigate a huge amount of data and your gut feeling tell you this is what to do practically in a fraction of seconds or what we call expert. I think we are still superior to the machine at this level. That doesn’t mean we don’t need machine because of course in terms of pattern matching and accessing a huge data, that’s an advantage. But in my book, I’m emphasizing how the real, let’s call it super intelligence, it’ll be human augmented with this generative and basic AI.

Jim: Yeah, it’s interesting you mentioned the heuristics because I’ve been saying for at least how many years now, 10 years, that something that most of the AI research is not doing is to attempt to see how we can do what humans do from small data sets. I’ve been calling this heuristic induction, which is to take from a relatively small data set combined with our whole lifetime of experience, our intuitive physics, all the things that our emotions, everything that makes us human and create simple rules of thumb. That’s why we’re able to navigate in the world with almost instant decision making and often without even any conscious processing at all, is that we’ve cooked down a whole series, probably thousands of rules of thumb that we can apply pattern match to a situation, say, let’s apply this rule, probably some algorithm for choosing what’s heuristic to use at what time, and there’s not much of that going on in AI research yet.

Sam: No, I agree with you 100%. There is nothing serious about mimicking human capacities. By the way, we had put some models for our brain, I don’t know who put them, honestly, I didn’t go deep enough that assumed that our brain would work the same way the digital microprocessor works, which is not true. There are some people who suggested that our brain is practically a quantum-computer approach. And if it’s the case, then good luck by trying to implement human thoughts using traditional silicon digital computers.

Because if really, and this is, I will cite to one of my heroes in physics, Roger Penrose who wrote a book about it, about the possibility that our brain is a quantum computer and if it’s the case, good luck in mimicking the thoughts a human using traditional algorithms because it’s not the same. But let’s step back and I will tell you, today what we have still impressive. I’m not criticizing generative AI. I like it a lot and I think it’ll be very good for humanity to augment their capacity using this generative AI. Even if we don’t need to mimic or redo by machines our capacity of intuition and gut feeling, I think we can combine both and this is what I argue in my book.

Jim: I will point out Penrose and his collaborator, Hameroff have pushed this idea of possible quantum computing in the brain. I hang out a fair bit in the cognitive science, cognitive neuroscience spaces, and the vast preponderance of people in those spaces find that to be an unlikely hypothesis on the grounds that the brain is too wet and too hot for any process that we can think of that would allow quantum processes to be coherent. The other one that I’ve been pushing a little bit to refute Penrose and Hameroff is the brain moves too slow. The brain operates at a click speed on the order of a millisecond. Nothing much happens, at least at the neuron level in less than a millisecond. While if something is happening in the quantum range, it’s going to be down there at the exasecond. It’s going to be way teeny, teeny timeframes.

It’s really hard to contemplate a quantum process that could be coherent for as long as a millisecond, particularly in a hot and wet environment. But anyway, that’s not really relevant to our main story, but whenever I hear anybody mention Penrose, I always have to say, all right, Penrose a mighty smart dude, but the preponderance of the experts disagree, and there’s some pretty good reasons to think that that’s probably not really what’s going on. So let’s start a little bit and talk about this distinction between what humans are good at and what AIs are good at and how we may be able to make more or do better as humans by using the two together.

Sam: Exactly. Here, I’m not trying to talk AI in general. I like the idea that we distinguished, I’m talking generative AI because if we talk AI in general, it’s a bigger scope. But today, the impressive tool that we have in hand is the generative AI. In my book, I argue about what we call human AI augmentation. Essentially it’s somehow, and this is double word, it’s an intuitive way to use generative AI and maybe we are all starting doing higher or human AI augmentation without knowing because I think it’s the intuitive way of doing it. And that always starts by human intuition or initiation, or an idea that you want to do. So in this case, the start come from human, which is essential.

Let’s take the example of a researcher who is trying to find information about a topic. The topic is picked by the researcher, not the machine, of course. And then we’ll use AI capacity to suggest places where to go when this research, several plans for example, and here I’m trying to be shallow so everyone can understand, but of course we can go deeper in more precise example later on. The idea here, then researcher would pick, and here he will use the educated guess, but the love as well, “I like this idea, let’s explore it more.” And then he explore it more with AI. And again, he will pick and pick and pick. It’s a loop.

During this loop, finally we’ll have an outcome that we used to do it in our own way without AI, it take us month. But with this approach we can reduce the month to days, three, four days, and you have a research output, if we go back to this researcher, that is much faster, much better than not using generative AI. By the way, this you can apply, I put in my book more complex examples when we can use it in education, when we can use it in art. So HIA for me is using the current generative AI and combining it with our educated guess and intuition.

Jim: Yes, I’ve been doing a fair bit of work in exactly that domain. I started a company last year called Fluent Muse in our first product is Script Helper, which is a human augmented writing of movie screenplays. How about that?

Sam: Wow.

Jim: You are absolutely right. It’s designed so it can write a total movie screenplay from scratch, including all the dialogue and everything. You can even just say, “Create a movie. Use your own guess. Just make something up,” and it will. But they suck. The ones that are totally AI driven are no good at all.

Sam: But if you make it in a loop, and this is what I’m arguing, if you make it one shot, I agree with you 100%, this is why we need the loop because it’s a tag.

Jim: Yeah, that’s my point. What we ended up creating is something with 40 different processes with humans at every stage in the process. The humans can choose to do either more or less at each stage, but they often have to make a decision, “Is this good? Is this bad?” Or “Give me a thought on how to steer it,” et cetera. I absolutely agree with you. But let’s step back a little bit further and look some at what people should be expecting at what the world of work is going to look like. You put together interesting series of scenarios on Sarah Romdell. I thought that was kind of cute.

Sam: It’s a real story. I use Sarah Romdell in my business of lab. I will not detail it, I don’t want them to close the account, but you can Google Sarah Romdell, she exists on LinkedIn. The name Sarah is the name that we choose randomly, but Romdell is the name of the computer. It was Dell the brand, and it had a ROM next to it, so we said Romdell. This is the mystery of the name. And then we give her a fake picture generated by AI. But it was a pretty picture. We provided a small script that was connected at that time to LinkedIn, to prospect people, but also answer and identify interested people and send them to us.

It was somehow not fully automatic but still automatic and the magic of that bot or LinkedIn automatic profile at every time of LinkedIn because now I think a lot of people are doing it anyway, it was generating a lot of leads, a lot of things. In the approach we used there was human supervision, so it didn’t look dumb. I wrote in the book, some people also called us on the phone and they wanted to meet Sarah very badly. We didn’t know what to do. And then one of the staff, “Okay, I’m Sarah. What do you want?” And he wanted to date her, that was the reason. He fell in love with a very insistent robot. That’s what happened.

For me it was a digital robot. That means it does not exist physically. But we created for Sarah Romdell are the fake trait of social media that people like. I think now we have even people proposing virtual influencer using Instagram and social media, and everything is created digitally. But our experience on Sarah Romdell show us how society will change because lead generation and cold calls, honestly suck. It’s a machine-like job. It’s something I saw I have seen in the book. I said, it’s not the problem that generative AI looks like human in terms of thinking. But the problem, we have formatting people and training them to act like machine in the last 50 years. So I think these jobs of machine -like will disappear first because nobody like to do them anyway. And secondly, the machine will do them much better.

Jim: Yeah, another one that every day experience convinces me of this is customer service is another one that ought to be laid out to generative AI. In the last couple of months, I’ve had three or four really bad customer service experiences with companies that used to be good and they keep getting stupider and stupider people, and give them more and more rigid scripts to follow. I’m absolutely convinced that a, let’s say Opus-3 top of the line language model, fine tuned on their stuff with a rag wrapped around it would do a far, far better job than the lame-ass people that they hire.

Sam: I agree with you to the point that my company feels, a first product is a customer service product. It’s answering the phone, talking with the client and answering their most important things because we hook it up to the database of that company. And it’s using as engine, of course, generative AI. I will not do an ad and tell which company. But at the same time it has voice, it has a very, let’s call it empathic-like behavior, but we know it’s not true. The idea, it will not replace a human, but it’ll take a lot of charge. So in this case, and of course everybody tried it, the bored person unfortunately that we trained him as the machine will disappear, but his supervisor will not because this machine will have limited capacity and intelligence, let’s say. But I agree with you, it’s one of the first applications that will change the world.

Jim: Now of course, a lot of people will lose their jobs. There’s lots and lots of people around the world who make their living doing customer service, in particular. That’s a gigantic business on a worldwide basis. Another one that’s coming slower than we thought, but coming nonetheless is the fully autonomous vehicle in the United States. I looked up the data, truck driver is the number one occupation in 48 out of 50 states.

Sam: Oh my God, yes.

Jim: Two million people make their living driving with a commercial driver’s license. And we’re already finding that self-driving vehicles are actually getting more penetration in the heavy truck, city to city where the routes are simple down the interstate than they are trying to solve all the problems of driving to the grocery store and dealing with drunks on the sidewalk and stuff. So customer service, huge group of people working, commercial drivers, really huge group of people working, how do we think about this with respect to its impact on society?

Sam: This is a big problem. The problem again is that we created in the past, jobs when we use people like machine. And now suddenly that the machine can do the job in terms of robotism, I will be honest, and I think it’ll take five to ten years at least before we see fully autonomous truck and automation. But it’ll happen. And there are a lot of solutions, but the real solution is to go back and behave like a human, that means have empathy with these people. I’m talking in my book about concrete solutions like UBI, universal basic income and also, it depends on the person. I call it. It needs to be compassionate and situation one by one because retraining is a great thing, because look at it from this perspective. Retraining is different in the age of AI than retraining in the past because practically, you will train the people to use robots. You will train the people to use AI.

So finally, even if his initial education was not giving him the possibility of having complex jobs, with the arrival of robots and machine, his job will be different. But it demands that we take care of that as soon as possible. One thing that I’m talking about in the book, we don’t want the problem to happen and then, “Ah, we need to do it.” Now we need to think and plan the society for the upcoming singularity where things will change radically, where suddenly we need very little amount of work to do what we’re already doing to sustain the humanity. You are talking about two million jobs in that truck industry. Of course, we’ll still need a human. It will not be fully automatic 100%. But instead of two million, we’ll have maybe 200,000 needs. So all this gap, you need to retrain the people if they are in the age of retraining and give them universal basic income or universal compassionate income.

I will call it this, but of course the amount need to be studied because it’s dumb to give everyone the same amount. Some people are richer than others, some people can retrain, some others are not. But it’s a deep change in the society. I talk in my book about wealth paradox because with the arrival of robots and AI, some people can get a lot of money with very or little human power. You can see very soon a company that generate billion with three or four employees. So this wealth paradox need to be treated. I’m not pretending I have the monopoly of solutions, but there are solutions and we need to be bold and talk about them and take them.

And what’s scaring me today, we have too much polarization in politics. If you propose a solution, the other side will refuse it just because it does not come from them. But at the end, we are all sharing the same problem and it’s not a new problem because with technology, you have the changes in job and society, but the new thing that suddenly it’ll be massive and quick, that’s we are not used to it. No. I think, but I will give it the time. I will not be also exaggerating and I will tell you it’ll happen five years, not true. But 15 years, yes. Most of the jobs that we call today, blue collar jobs may disappear for robots and AI and combination of both.

Jim: It’s interesting, the combination of both and as you alluded to the training to use AI in your work will be different than learning to do the work yourself. I can give you an example. When I was writing my script helper program, I haven’t written any serious software code in about four or five years, and so my fingers were rusty. I’d never would’ve been able to write this amazingly advanced program without using AI to write it. I learned fairly quickly, it’s not good to write the framework of the program or figure out the macro logic, but man, is it good at writing functions, right?

If you can specify a 10 to 30 line function, it is, especially if you’re doing something funky like using Regex or something, it’s amazingly good. Or using an API to some oddball library, it knows all those libraries and how to use the APIs et cetera. And so to retrain old Jim Rutt to be able to program fast would’ve been a big job at my old age, but rather reprogramming myself to learn how to use open Ais, LLMs to write code, it took me a week at most to figure out how to do that. So that’s a very interesting point that you raised, that it may be easier to retrain people to be partners with AIs than it would be to train them to be experts at the domain itself. I think that’s a very interesting point.

Sam: I think it’s intuitive somehow, but we are not used to it. We are used to retrain. That means you need to learn a new job. Not necessarily. But I like your approach and I think you are already a higher champion. That means you are using AI to augment your own capacity in a formidable way. I love that. It’s how I think should be for all level and all jobs, and we need to take care of it as soon as possible. This is why I wrote the book now because I think the problems will come, it’s inevitable. Progress is inevitable. Unfortunately, if someone will try to protect other country or other company will not respect that anyway. So progress inevitable and we need to take care of that as soon as possible.

Jim: Yes, you made another very interesting point, which is that as this wave progresses, at some point, we’ll see the emergence of a digital proletariat and we’ll make the whole idea of Marxism obsolete. I never heard anybody say that idea before. That’s a pretty clever idea. Why don’t you unpack that a little bit for us?

Sam: Yeah. The digital proletariat is just a term. But for me, the capitalism and innovation work together very well in the last 50 years. That means to do innovation, you need capital, you need venture capital, you need capital that’s ready to lose money, and that comes from the rich and comes from capitalistic people. This part and area, I don’t think it was a plan. I’m not trying to call it an evil plan, but somehow suddenly in the context of let’s call it, labor against capital, suddenly the capital can create digital labor that will replace our labor as a human. And then the power is shifted suddenly. And I call it the rise of digital proletariat, borrowing a little bit from Karl Marx the idea of revolution. Only that this time revolution will be in the side of capitalistic people because this digital proletariat will favor capitalism and forever things will be irreversible with a rival amount.

Of course, I’m not talking about today. But 15 years from now, robots will be everywhere, digital bots everywhere or digital robots, whatever we’re going to call them. And suddenly, the human labor value will not disappear, but it’ll take a stand behind and it’s time to rethink things radically and differently in a different perspective. Here, I’m talking about being a human, using empathy, love to each other rather than the traditional Marxism that unfortunately lost forever for them, of course. And now we need to take another approach which is empathetic approach, compassionate approach to take care of each other. Also I am against being rigid, like the ideas of everything need to balance in terms of the economy. It needs to balance for human advantage, for human nature, which is empathy and love and compassion between each other.

Jim: I love that thought, but I will say the things you actually suggested did not go far enough. I think you were talking about UBI and quantitative easing and dynamic management of the money supply, et cetera. But I’d suggest that’s not going to be enough because as you point out, AI is going to produce massively crazed winners with small amounts of people and assets employed. And perhaps a whole notion of individual ownership of investable capital might be a bad one. Maybe we need to reform that and come up with some fractal organization where communities have some investable capital, larger aggregations have investable capital, guilds have investable capital. And then one of my favorites I came up with fairly recently is perhaps instead of money being invested by money managers who are caught in an arms race around rate of return, maybe investments should only be done by a priesthood.

And this is a priesthood like a Catholic priest or an imam in Islam. They go undergo a lot of training, they learn a lot about ethics, they learn a lot about the history of ideas, et cetera. Imagine having a priesthood for investing, and further like the Catholics, they have to take a vow of poverty and celibacy so that they are acting on the benefit of the human race rather than in a arms race around maximizing rate of return irrespective of human wellbeing because the point you made, I think is the fundamental one. The purpose of the economy is not money on money return, even though that’s how it works today, the real purpose of the economy is to increase human wellbeing. And this sea change in the means of production and the creation of a digital proletariat and highly leveraged returns on investments potential may be just the time that we should get out of this rat race of the system we currently have.

Sam: I agree with you. In fact, when I started writing my book, most of it, it was about the new ideas for the economy. Here I made a decision that this book would not talk in details about it because first of all, I’m not an economist and it needs much more, more further, let’s call it investigation to be ready. But I agree with you, and my upcoming book, it’ll not be now, it’ll be called The Economy of Post-Singularity and I will try to explore with a lot of economic people ideas because I don’t believe we’ll catch it from the first time, but we need to explore innovative idea. I like your idea of priesthood, it’s good, but also it goes in tied with what I’m thinking. You need to change radically how we treat money, how we generate money.

And in physics, I come from a background of physics, we define work as displacement and energy, and it’s not practically far from today. I have seen some attempted do that. But to go back, the radical change will happen because if robots are working and you have very little people maybe generating money out of robots, but you still need a human, you don’t want the society to collapse. But if people are not working, that means no taxes to collect anyway. That means the foundation of our current economic models is at risk and we need to do something about it. It’ll be very dumb to wait until that happens because crisis will lead to extremism at all directions and this is dangerous for us.

UBI is an idea, and I used it in this version. It’s called in of the book as a base to explore other ideas. I talked about quantitative easing as an example, how people sometimes, let’s call it change the rules and use money printing, call it quantitative easing to solve a problem. But it works. At a lot of crises, we run out of them by using these things to tell at the end that yes, we need to come with innovative idea about money, about production. In the singularity concept, you will need very little human energy to run humanity. And that means what are you going to do? That people suffer? No. We need to take care of each other.

Jim: Gotcha. That’s quite interesting. All right. Let’s go on to your next topic, which is a way to think about how we work together with AI, which you called HAIA for human AI augmentation, the fusion of the mind of the machine. Tell us how that is a somewhat different perspective than the way people are thinking about AI now.

Sam: It’s an improvement, and I told you in the beginning it’s an intuitive way somehow. First of all, why it’s different. It’s because it’s a loop. People trying to create AGI from generative AI. That means I tell the machine what to do, it will do it. That’s very unfortunately wishful thinking. Generative AI is not that good. It will give you ideas, but it will generate somehow, spam. I have seen people who got impressed by articles wrote 100% by AI. But let’s be accepting this, it’s a spam. You would be impressed because the machine generated it, but nobody reasonable would read it. Only the marketing people.

Or talk about the music by AI, for example. I have seen that, too. I have seen software that generate full song using AI. I will tell you, when you listen to it, “Wow, this is written by AI.” But let’s be honest, you will not get this song and listen to it in your car or anywhere. It’s impressive because it generated by a computer, but it’s fake. Going back to HAIA. HAIA, it’s a way where we do iteration using a human capacity. Essentially, I’m talking about using our love, our empathy inside the loop.

Essentially, if we take an artist, if he asks for example to generate a poster or something using AI, it will. But it’ll not be interesting. And here comes the expertise or the augmentation part where instead of being satisfied with the first round of AI, the human will add his creativity, his abilities, and made a loop and refine, refine, refine. And this refinement could be also using multiple humans. It doesn’t need to be limited by one person. One person, this is one level.

But imagine it also having each of us having his bot or a ChatGPT-like companion, and then we do our thinking, our loop, we send it to another human that will give his loop as well, and go on. And then it will be an augmentation for our collective intelligence. I think this is a game changer. Here when I think the super intelligence will emerge with this new combination of a human intelligence and machine intelligence. And human intelligence, I am precise about it. It’s the educated guess, it’s the love, the empathy, but also ethics because machine has no ethics.

You mentioned the ethics. It’s part of the loop in human augmentation approach that I’m describing in my book. And also, because I’m talking about group, it could be committees, it could be ethical committees that they’re called investment, that’s why, why not? Why not including them in the loop? And I think we’ll have tools emerging that will automate this paradigm of thinking inside the tools. So it’ll be already human touch. And also who did that at the end as a human.

Something that I find funny sometimes, like they will tell you, “This committee show is wrote by AI. It was written by AI.” Not true. There is a human who used AI to generate that. And this is, we did it in the higher augmentation, higher approach to say who is the human behind it. Not only what’s the AI tool, because it’s funny how it’s like saying, “Hey, this document is wrote by computers in Word.” Not true. There is a human behind it. And with a little bit of abstraction, AI, the same thing. There is human who is using his ability as a human, but augmented with AI.

Jim: Gotcha. Yeah, we did put several case studies in the book, which I thought were quite illuminating. My tongue, get out of the way there boys. One I thought that was particularly good was, I don’t know if you’re ready to provide one of these, but the AI augmentation and drug discovery. Can you just give that as an example?

Sam: Yeah. And drug discovery. I remember I called the researcher, Emily, if I’m not …

Jim: Yep. Dr. Emily. Dr. Emily.

Sam: Yes, yes. In the example of Dr. Emily, for example, she’s working on a drug, and I can be a little bit specific, let’s say she’s working on a heart that will improve the quality or reduce the possibility of heart attacks. In this case, she has a composition in mind for different molecules that she want to use. Of course, we are talking here about special AI that developed for her case, who will work hand in hand with Emily. Emily will say, “Okay, I wanted to do this and that and that.” And then the AI is trained on a big database of possibilities. He will suggest the possibilities, and of course if we rely only on the AI, there is no augmentation here.

What she will does, she will use her educated guess to say, “Ah, this molecule I have seen in other stuff, it got this problems.” “Ah, this one I like. This one is new. Let’s explore it more.” And this will be the second phase of the loop. When she used, again, her educated guess and experience to refine what kind of molecule we’ll use for this kind of drug. And then comes the phase of for example, clinical trials. The machine would suggest or the AI from the huge data set of learning of clinical trials that we used to train this model some, let’s call it plans for that. Here comes Emily. She says, “Oh, my God. No, this plan will not work.” Nobody will be volunteering.

Here comes the empathy part, because for the machine it’s the perfect one, but for the doctor who knows how people behave, he will know he will get no volunteer at all in this plan. Maybe this plan will be dangerous as well, but it’s the most optimum for the machine. So here comes the human part, augmentation, getting out all the things that think it’s unethical or inhuman because there is no empathy. Again, refine the plan, refine the plan, and go on. I don’t want to go into details of clinical trials level one, two, three, four. But at the end, we obtain a plan that maybe in the past, instead of just Emily working on it, it will need two or 300 people to define the full plan for research.

Here it’s an example of augmentation not only of the intelligence of Emily, but the research of humanity at all. Suddenly with three, four researchers, we can suggest new drugs, new drug discovery. And against the idea, because I have seen it also, people think the AI will suggest the drug and boom! No, that will not happen. Generative AI is not that good. But combining both, yes. Suddenly complex problems that needed 10 years of research will be solved in two or three months. And this is the singularity. This is the singularity of hope where a human will become super intelligent and lead the humanity to another level in discovery and improvement, not the machine will outsmart us.

Jim: Another area that you talk about in some detail in this section of the book, and interestingly I think it’s very important as we talked about earlier, kind of a new kind of education and new kinds of retraining and how HAIA systems might well be able to produce a major uplift in education.

Sam: I believe in that very strongly. I think education is still done unfortunately in the 19th century way. I have suffered from that myself when I was young. Like some teachers will tell you, “What if you don’t have a calculator? What are you going to do?” Or “What if you don’t have access to computer? What are you going to do?” And I will not shock anyone if I tell you, hey, if we don’t have access to technology, we cannot make survive eight billion people on this Earth. We need the technology. I will not think if I don’t have it, what I will do. No. I need to train my kids from early day to use the technology.

The technology uses with the HAIA augmentation is to treat kids as a human again. I don’t want them to be formatted in the 19th century dogma where they used to be another machine in the manufacturing facility. No, I want them to learn at two levels. The HAIA in education is a little bit more complex. We’ll have two level, the higher for teachers and higher for learners or kids. Of course you can apply it to higher level, but I’m insisting on kids because I think we need to start very early in using AI to have the super intelligent generation, let’s call it, because they were born with AI. Not like us, we are discovering it at late stages somehow. For them they are born with it.

Let’s start with the teacher, so the teacher will have a system that will help him draft his courses, a system of AI, especially for teachers. But the AI will not teach the kid because it has no empathy. So the teacher, when he drafts his lessons will use the same loop that we discussed with Emily, but this time to draft the courses the way to explain to kids. And this time, he will assist with his empathy and experience how to draft the lessons so everybody will understand it easily.

We need to develop not knowledge anymore. We need to develop the educated guess, the critical thinking of our kids from very every stage. That doesn’t mean they don’t need knowledge. They need knowledge. But they don’t need to memorize things when we have AI and generative AI anymore. They don’t need to write by hand honestly long pages when they can type or even dictate. They need the basics. I’m not against of course teaching them legacy things because they will need it as a basic for their education to build the educated guess.

But then the kid himself will use a special kind of AI system in doing his education, a system where the kid’s love of what he is learning will be the engine for his learning. We know that we cannot have the same thing for all of us. Some people are more intelligent in other and stuff than others. Some people are linguistic, other are mathematical. Using AI, we can identify that quickly, so there are two systems, one for the teacher and one for the kids. The kids, it will be practically a system where the kids learn the things that they love so they will be excelling in it. It’s a new kind of education. It’s time to use the technology full power to get the new super intelligent human.

Jim: Indeed. I find myself using the generative AIs, the chatbots for lifetime learning all the time in a way that’s analogous to what you described. Let me tell you one domain. For whatever reason, I don’t know how I fell into this. I deal with a bunch of philosophers, right? Formal philosophers, Hegelians, process philosophers, even more out there things. I’m no philosopher. My brain does not work philosophically for some reason, but I want to be able to engage in conversations with these people about things that I am interested in.

And so I find the LLMs, particularly ChatGPT-4 and the new Claude 3 Opus to be actually really good at teaching me, okay, I want to learn about something. Let’s say Hegelian dialectic, right? Well, to your point, you can tune it really easily to have it give you different levels of explanation. For instance, I might start off with, “Please explain Hegelian dialectic as you would explain it to a reasonably intelligent 13-year-old,” and it’ll give you two paragraphs, small words, real simple. Then, “Okay, okay. I got that. Now please tell it again this time at the level of a freshman at a good university.” And you’ll get five or six paragraphs, longer words, more detail.

And then you could say, “All right. Dig into paragraph four and explain it to me as you would to a recently graduated college philosophy major.” And then it’ll give you some more. Then you can say, “Do it again as if it were a lecture given by a world famous philosopher.” And amazingly, those little cues actually work with even today’s generative AI. And we know today’s generative AI is very, very early. I find myself doing those kinds of things all the time in areas where I may have relatively little knowledge, but you can come up to speed as deeply as you want within an hour. It’s really remarkable. It scares me to think what it had been like to have been a reasonably smart 13-year-old with access to one of these large language models. And then, oh, by the way, Google Scholar to look things up.

Sam: Agree. And imagine the teacher will teach the kids what you have just said instead of going into the past methods. So instead of teacher who is residing in the past, his courses is oriented on how to use technology to learn yourself. This combination will create the super intelligent generation. And I will tell you, it’s inevitable and it’s coming.

Jim: It’s probably already happening, right?

Sam: Exactly.

Jim: You know that smart 13-year-old has got up to his elbows in Opus 3 and maybe he’s learning physics. I know a fair bit of physics, so I don’t really need to have it give me the baby version. But just as an experiment, I did the similar kind of thing on particle physics. And again, it was very, very, very good.

Sam: What’s dangerous, I think, and here I will mention it, is not that. If that happened, great. But what would happen, honestly, and this is why I wrote it in my book, is people will try to consider using AI in education as cheating. And that’s dangerous. Hey guys, everybody is using it already. They tell in reality or not, everybody is using it. Considering it as a cheating in assignment is a way of avoiding technology and it’s dangerous. No. I think the type of assignment need to change, the requested knowledge from people need to change, not tell [inaudible 00:47:15], “Don’t use ChatGPT because it’s cheating.” That’s dangerous. No, use it. But instead of asking people to write basic stuff now, ask them to use their critical thinking, their knowledge to do complex and more elaborate things.

Jim: Yep. I think it’s going to put a bit of a burden for a while on the teachers to get them to be able to explain these kind of meta methodologies where truthfully, a smart high school senior can do a hell of a lot more in terms of writing than they could before these things came around, and they’ll have to get their heads deeply into the content that they’re creating. I think it’ll be better for the kids. But as you say, it would be the worst possible reaction to just say, “No, that’s cheating.” That would be very bad. But I think in some places, that’s actually happening.

Sam: What’s also worrying me, it’s happening at the university level where people need to be much more open. If it’s happened at the high school and let’s call it, elementary school, I maybe think yes because teachers are not ready and they don’t know how to behave with it. But in universities, when you have professors who used to do research are telling their students, “Don’t use it. It’s cheating,” for me, it’s alarming. It’s alarming. No, they need to update quickly. They are professors.

Jim: I can give you a good data point. I was actually up at my alma mater, MIT last week and I’m on one of the governance boards there that evaluates academic departments on what they’re doing. One of the questions we asked is, how are you dealing with generative AI in the classroom? And they said, “Oh, we’re totally over worrying about it. In fact, we’re now teaching people how to use it as a tool.” And so at least at one university, they’ve already gotten past the worrying about it as cheating and they’re now considering it as a tool.

I have to say though, it does require more work for a while on teacher’s part. When I first got access to ChatGPT, this was just ChatGPT 3.5, I quickly found out it would do computer programming for you. And so I had it replicate the first three or four assignments in the first programming course I took, in fact, the only programming course I ever took. And it sure enough knocked the answers out totally easily. So it means that if you’re going to do that, you better have a lot more interesting and difficult problems, which is kind of interesting.

Sam: But also it’s what we need. It’s time we stop teaching our student to become a machine. It’s time to act as a human and tell them, “Hey, we need to solve complex problems,” not “Let’s redo this problem again to see. Huh? Now you know it. Okay, great.” No. Give them challenging things. That’s the education change and be, let’s call it, worthy of the singularity era.

Jim: Yeah, great. Agree. I think we’re on the same page there. Now, one thing you didn’t talk about, which is one of my pet hobby horses as regular listeners to the Jim Rutt Show know is I believe, and somebody out there should go make a trillion dollars. This is the next trillion dollar opportunity, people. If I wasn’t so old, so lazy and so rich, I might go do it, but I’m not going to. So I’m just giving this one away to the world. And that is, we can use the large language models, semantic vector databases, other technologies to build personal info agents for ourselves where we never deal with something like Facebook or Twitter or The New York Times.

Rather, our info agents deal with them for us and they learn on what we like and they network with people that we know, and they collectively up regulate our ability to cut through the sea of shit that our infosphere has become. What do you think about that? Do you think that the large language models and closely related technologies have the capacity with some work, obviously, quite a lot of work and such, to become trusted info agents for us?

Sam: I think the technology has the capacity, and I like your idea. It should happen. But also, I’m worried about algorithmic bias and training data sets. Unfortunately in our world today, it happens. If you take your large language model and throw it on the internet, by default, it will be biased because the content are biased. Most of the content you have, it’s really our humanity and human heritage, which is fine, but it’s bias. It’s not like 100% independent. So the idea I like and I agree with you 100%, but I don’t think we have the large language model ready to do it yet.

But it’ll happen. I think it’ll happen. We’ll get better. I’m very optimistic about it. I like this idea of personal information agent. Anyway, it’s the robot and the agent feature. It’ll be customized easily. We are still in the infancy of large language models. I think with time, not very far time, maybe in three, four years, you will have that capacity to program your own large language model without programming it like by seeing what you want. I think this is what will happen. It’s getting already better, I think, getting large language model trained.

Jim: Doing fine tuning is now trivial on OpenAI’s platform. You can use their assistant architecture, that’s basically kind of building a rag, a retrieval augmented generation wrapper around it. It’s also very easy now to do fine tuning of an existing model. And those two things go a long way to personalizing and getting rid of some of the biases. Or you can substantially get rid of bias.

Sam: Yes. And the upcoming platform is not only chatbot, it’ll be agent platform, that means it’ll do what you are suggesting, that not only you will be able to fine tune the model, but also to tell the model how to fetch information, where to look for it in terms of something like combining Zapier and ChatGPT right now, but much more elaborate and stronger. And it’s happening. But still, I think the algorithmic bias is here unfortunately, and it’ll take us a while to get better models that doesn’t have this shortage.

Jim: And of course to some degree want humans still in the loop, because this thing got through the agent, is this good or bad? So you train the agent in some sense by human interaction. And then one of the things that I see would be particularly useful is for me to build a network of my friends and associates whose info agents talk to each other so that we can together learn the best way to approach the infosphere.

Sam: In fact, in Theosym, we are building a tool called symprise and we are building what you’re exactly saying, it’s a collaboration tool, it’s a higher tool when you have employee and symployee. And practically, symployee is the agent that you train yourself on what you want. And combining them, it’s amazing.

Jim: That sounds very much on the cutting edge of what’s happening now. All right, onto the next topic. I don’t recall you writing about this, but I actually had a guy on my podcast recently, Trent McConaghy, one of the smartest guys I know back in EP 222, where we talked about AI and brain-computer interfaces. We talk about in your HAIA idea, we’re basically dealing with the computers, and humans and AIs are dealing with each other over the keyboard or voice or something. What happens if anything qualitatively different when we go to a brain-computer interface?

Sam: I didn’t talk about it in my book, but I had that chance to talk about it in my YouTube channel. I think it goes hand in hand with the ideas that I would promote. That means in terms of augmentation, I’m talking about most in the book about a little bit chatbot because this is what people are examining right now. But you can extend it to adding a thumb. I have seen this video when we add the thumb that we can control using a human brain interface. That means with our thought, having a sixth finger, an additional thumb to your hand. This is an augmentation for your sensors, your feeling. And then it means automatically an augmentation for human intelligence because our human intelligence is based a lot of sensory and what we can do, what we can imagine using our sensors, our capacity of emotions and feelings.

This is I think our capacity. Amplifying what we can do using a human computer interface, putting us in the loop with the chatbot and everything, it goes hand in hand with the augmentation, our intelligence and achieving super intelligent human. I love that. It’s one of the fascinating research and it’s crazy how it goes fast. I think it will take some time, because honestly I think we have today, not in the near future, we have much more research about semantics and text base, and even image processing than let’s call it a human brain interface and these kind of things. But we’ll catch up. We have AI now to accelerate our research. I’m very, very optimistic about it and it’s one of the domain that will change the world quickly.

Jim: Yeah, Trent talked about two classes. One are the invasive ones like Neuralink where they actually stick wires in your brain and stuff. And probably those of us who are physically healthy aren’t going to go for that too easily or too early. But he also talked about non-invasive ones using EEG, and eye reading, and some other interesting things. There’s quite a number of ways that we can get started with brain computer interfaces that aren’t quite so scary, and they may, as you point out, accelerate our learning and get us to the stronger forms of BCI faster than we might’ve thought.

Sam: Exactly. The non-invasive one I think would go very fast because the invasive one, I like it, but only as you said, if there is someone who need it badly. But in case you don’t need it badly and you want to just increase your or augment your physical abilities, this is where non-invasive is important and it will accelerate fast. It’s fascinating how these things worked quicker than we thought, like the idea of controlling machine by thought is not new, but it’s how impressive how we got there quicker than expected.

Jim: Indeed, indeed. All right. Talking about quicker than expected, later in the book, just again, so far you’ve been focusing mostly on generative AI, basically feed forward networks with transformers, et cetera. Pretty simple technology, actually. But at some point, either they’ll figure out how to approach AGI with that technology. I doubt it, but they might. Some very smart people think that they can, or people working on hybrid architectures and other architectures will start approaching artificial general intelligence. First, tell us what you think artificial intelligence is and what you think the road towards it might be, and maybe even how long it’ll take and what will happen as we approach the singularity.

Sam: Okay, that’s a lot of questions. But let me start with super intelligence. There are a lot of levels of AI. The current one, and I want to be here closer to people, I don’t want to be very complex, but the current one using transformers and text based, I call it in my book, giant spellchecker. It’s the same technology but improved a lot. This technology is very impressive. But for me it’s very far from intelligence as we define it, as we understand intuitively and it’s casting an illusion of intelligence. I agree with you, I don’t think we’ll achieve AGI using this technology. I think hybrid is the way. There is a way, but I don’t see, maybe I’m mistaken, enough research and investment in that domain. It’s not zero. There are people who are trying, as you said, the hybrid approach and it could happen.

But lets me give you a timeline. I don’t think it will happen the coming 20 years. No. I don’t think we need necessarily to do it. Now, there are the higher level of intelligence, what we call also super intelligence. That means an intelligence beyond our current capacity of understanding things. Of course, if it’s behind our capacity by definition, I cannot imagine it. You cannot imagine it. But I think super intelligence, and for me it’s the singularity of hope will be achieved by combining the human and the current generative AI or hybrid if you want AI approaches and will have a new generation that has human brain interface with machine, and also that we’ll have super capacity to achieve complex project in a fraction of the time right now. This is when we achieve the singularity, that means singularity of hope, not the singularity and the meaning that the machine are super intelligence, and they are doing things beyond our capacity.

I think just the new generation will be very super intelligent and will do things that we never thought it could be happening. So this is what I think. Now there are the cosmic or God-like AI, I don’t think we need to create that. I think it’s dumb. It will be creating a new species. Could we be doing that soon? I don’t think so. Let’s not be elusive. But I don’t think we need even to investigate that. We don’t need it for our human species. I argued a little bit about it in my last chapter. I don’t think we need it. I don’t think ethically, it’s a good idea to do it.

And the next step like super intelligence, we’ll achieve it in 20 years from now maximum, in my opinion. We’ll go to the singularity of hope, very optimistic vision where we’ll be hand in hand with our new robot, augmented human that will achieve solutions for very complex problems in a fraction of time. We have shadows of this singularity of hope. That means we have the social problems, the income problems, the wealth paradox to take care of. And I hope we have the courage to take that quickly. But anyway, super intelligent human are coming and they will figure it out.

Jim: Now, one thing you didn’t really directly address is the Vernor Vinge original thought of the singularity, which was picked up by Kurzweil, but he kind of frankly made it sound less scary, I think, unintentionally. And this is the original Vernor Vinge hypothesis, that once we get AGIs that are at let’s say, 110% of human capacity, they’re just a bit smarter than we are and we give them their job to invent their successor and they are better at it than we are, so they build something that’s 130% of humans. And you tell that one to invent its successor and you end up at 200% of a human, and you then keep going.

And they very rapidly take off going from 200% of a human to a 1000% of a human, to a million percent of a human, to a million times human power. And people of course argue how quickly that will take off. I know some fairly serious people to say it’ll happen in hours, right? Or it will happen in days or it’ll happen in months. The other serious people who say, “No, it’ll take years or decades or centuries.” What do you think of that classic argument of the non-hybrid human AI, but the pure AI singularity that could take off by just iterating upon itself to redesign itself at higher and higher levels of capacity?

Sam: First of all, it’s fascinating. I grew up with this idea and at my early age, I thought it will happen. But more and more, I don’t think it will happen soon, not in a hundred years. This is my prognostic because I think we still don’t know a lot about how to replicate intelligence. I think we are overestimating our understanding of our self and intelligence. Until now, we don’t have algorithm for love. We don’t know how to replicate it.

Not we don’t have computer big enough to replicate it, we don’t know how to write it. And this is one aspect of our intelligence. There is a lot of aspect. I’m not here trying to tell you it’s impossible. No. This vision was fascinating to me. When I was very young, I thought that’s what will happen. And I wanted this to happen because I was a geek person who likes science fiction and it’s fascinating.

But now with a little bit lived in this world and see what we can do for real and how sometimes, let me use the term, we are not humble enough. We are so arrogant about our knowledge and sometimes, we think we know everything. It happens a lot in the history of humanity when we think we finished the science, we learned everything. So my own perspective, I don’t think this will happen soon, not in 100 years.

And here I’m optimistic if I tell you it will happen, maybe it will start happening in 200 years. But honestly, if we don’t change radically how we look at our intelligence and the human ability, and think about it in a different way, I don’t think it will happen. We had a small discussion in the beginning about how human brain, maybe a quantum computers, I will not use the term quantum computers because today unfortunately, it’s used in a very specific meaning.

But I will tell you that we don’t know how our brain work. You told me that we have very low frequency, true. But in this low frequency, we are achieving miracles. I don’t know how, honestly, and I’m not the only one. I’m not trying to attack other people because I once was these people. I once was this person who was dreaming of the singularity Kurzweil style, if you want, and I love this guy. He is amazing. I listen always to what he say, even if I don’t agree that it will happen the fast way he’s explaining it, I think it will take a humanity maybe thousands of years to achieve that.

It’ll happen because we want it to happen. I believe in our capacity of achieving great things. But at the same time, I want to be humble and tell you, hey, we are very far from achieving AGI, not in 100 years, but we need to keep trying. I’m not trying to tell you people who are trying are crazy or liars. They are the people we need because without them failing, and that’s what I think would happen soon, we’ll not learn how to do it maybe in 100 years.

Jim: Interesting. Well, that’s hopeful, I suppose. I follow this area and I hear people on all sides of it. And again, there are people who think it’ll happen by 2029, right, and we won’t be ready for it. What I like about your scenario is that it will give us a long period of time to up-regulate human capacity and produce this collective intelligence, and maybe even some collective wisdom so that we can make better decisions than maybe we’ve made in the past about some other technologies.

Sam: I said it in my book, I think we will learn. We and our machine will learn. They will collect data, but we will collect wisdom. The super intelligence is coming in. My vision of singularity of hope, I’m not telling you, will not achieve great things, will not achieve the singularity. I’m telling you it’ll happen, but in a different way. It’ll be hybrid. It will be human and machine working together for the good of humanity. I don’t want to create a new species or an AGI purely machine. I think it’s useless and ethically not correct. But still, I think we don’t know how to do it, anyway. It will take us so long time, but I’m optimistic about the future. We’ll create this singularity in a different way, and let’s call it the singularity of hope.

Jim: All right. This has been an extremely interesting conversation, Sam. That’s Sam Sammane at sammane.com. As always, you can get the links at the Jim Rutt Show, at the episode page at jimruttshow.com.

Sam: Thank you. Thank you for having me. I really enjoyed the discussion. I appreciate everything in it and I’m looking forward to talk again maybe in the future about other ideas. You mentioned a lot of great ideas.

Jim: Yeah, I love the chat with you about any work you’re doing about agents, particularly info agents. I should make a note. I usually make a note about this upfront. Sam’s book is actually very well written, it’s very clear, lots of examples. You don’t have to be an expert in the domain. So if you want to learn more about Sam’s vision, I could tell you that any educated person can read the book and make sense out of it.

Sam: Thank you.

Jim: All right.