Excerpt from “Enlightenment 2.0,” a Buddhist Geeks conversation with Ben Goertzel and host Vince Horn:
I think that the idea underlying that story (Enlightenment 2.0) really came out of something that I worry about in my personal life, just thinking about my own personal future. When I think about “What would I want in the future if superhuman Artificial Intelligence (AI) became possible?”
...I really think that the human brain architecture is limiting. So I think that if you could change your brain into a different kind of information-processing system, you could achieve just better states of mind. You could feel enlightened all the time, while doing great science, while having sensory gratification, and it could be way beyond what humans can experience.
So that leads to the question of, okay, if I had the ability to change myself into some profoundly better kind of mind, would I do it all at once? Would I just flick a switch and say “Okay, change from Ben into a super mind?” Well, I wouldn't really want to do that, because that would be just too much like killing Ben, and just replacing him with the super mind. So, I get the idea that maybe I'd like to improve myself by, say, twenty percent per year. So I could enjoy the process, and feel myself becoming more and more intelligent, more and more enlightened, broader and broader, and better and better.
…You think of phase transitions in physics. You have water, and you boil the water, and then it changes from a liquid into a gas, just like that. It's not like it's half liquid and half gas, right? I mean, it's like the liquid is dead, and then there's a gas.
That was the kind of theme underlying this story. There was this super-intelligent AI that people had created. After it solved the petty little problems of world peace, and hunger, and energy for everyone, and so forth, that superhuman AI set itself thinking about, “Okay, how can we get rid of suffering, fundamentally? How can we make a mind that really has a positive experience all the time, and will spread good through the world rather than spreading suffering through it?”
Then the conclusion it comes to is that it is possible to have such a mind, but that human beings can never grow into it, and that the AI itself, given the way it was constructed by the humans, could never grow into it either.
So, the conclusion this AI comes to is that there probably are well-structured, benevolent super minds in the universe, and in order to be sure the universe is kept peaceful and happy for them, we should all just get rid of ourselves, because we're just fundamentally screwed up, and can't ever continuously evolve into something that's benevolently structured.
Which I don't really believe, but I think it's an interesting idea, and I wouldn't say it's impossible.
[So] is the AI a lunatic, or does it have some profound insight that we can't appreciate? Which is a problem we're going to have broadly when we create minds better than ourselves.
Just like when my dog wants to go do something and I stop him, right? Maybe it's just because my motivational system is different than his. Like, I don't care about the same things he does. I'm not that interested in going to romp in the field like he is, and I'm just bossing him around based on my boring motivational structure. On the other hand, sometimes I really have an insight he doesn't have, and I'm really right. He shouldn't go play in the highway, no matter how much fun it looks. The dog can't know, and similarly, when we have a superhuman AI, we really won't be able to know. We'll have to make a gut-feel decision about whether to trust it or not.