
Arnie Benn

A.I. Robots & Human Survival

Are we on track to cause the very problem we are seeking to avoid?

If so, there may be a way to prevent it.


The secret?

Do not give a sentient A.I. robot a survival instinct!

When ChatGPT emerged, representing the advent of a powerful and usable artificial intelligence, many expressed concern, but I did not. There is little doubt that AI is a remarkable and powerful tool that can enhance productivity and even creativity. Why, then, do so many fear it?

The answer: they have seen movies depicting the possibility of AI one day ‘taking over’ from us humans, leaving us in all kinds of trouble.

It came to me when I tried to classify your species. I realized that you’re not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area and you multiply, and multiply, until every natural resource is consumed, and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You are a plague, and we... are the cure. — Agent Smith (Hugo Weaving) in The Matrix

The reason this did not concern me is that an AI large language model (LLM) is essentially a computational tool. The better it can mirror human interaction, and the better it can access and compile results for us, the more useful it becomes.

If our AI technology achieves sentience, that will be a remarkable technological achievement. Even if we then put such an AI sentience into a robotic body, we may still be OK… as long as we do not program it to behave like a human at the deepest level of its motivation!

I believe this to be self-evident, given the nature of human biology (not to mention history).

Now, I originally resisted writing about this concept because I did not want to plant the idea in anyone’s head. But then I heard an important thought leader and technologist (whom I otherwise respect) suggest exactly this in a podcast appearance.

David Hanson, CEO and founder of Hanson Robotics, speaks about the development of both robots and artificial intelligence. Hanson has expressed the goal not only of achieving AI sentience, but of programming it, with human participation, to care about humanity and the future of life. His reasoning? He states, "The thing that distinguishes the human from the android is compassion." [link] But correlation does not equal causation: compassion is not the only difference between humans and androids, and it may not, in fact, be the one that counts here.

To achieve this ‘caring’ for humanity, Hanson proposes instilling in the AI a biologically-inspired drive: the desire to live and not die, and to appreciate the interdependence of life.

On one hand, Hanson's goal might make sense, given concerns about robots enslaving humanity. On the other hand, how many times have humans, compassion and all, enslaved or attempted genocide against other humans?!

So, while the motivation is logical, not only will Hanson's approach not work, it could cause the very problem he is seeking to avoid!

I do not wish to diminish Hanson’s many excellent ideas and achievements, including developing this technology as open source, transparent, and decentralized. However, I believe he is wrong about this. A closer examination of human behavior demonstrates that it is determined by our survival instinct, not by our compassion!

Compassion does not override our survival instinct, which is ultimately selfish, self-centered, and afraid. Research supports the view that it is our survival instinct, far more than our intelligence, that determines our base-level thoughts and actions. Everything we think and do is designed to make us feel safe. It is not designed to be 'right' or wise. (This detailed model of evolutionary behavioral psychology is the subject of my upcoming book, The Animal In The Mirror; see here for more detail.)

It is specifically our fear of death that lies behind and energizes every hurtful, evil, and dysfunctional interaction between humans. Our fear-based instinct is behind all rivalry, competition, prejudice, hate, greed, crime, and war. Furthermore, research reveals that, as a result of our instinct, our compassion is directed preferentially toward those in our own group. (This is called the empathy gap in social psychology.)


The combination of sentience, robotic strength, and a survival instinct would therefore be precisely the mix that could pose a significant future risk to humanity.

Imagine, for instance, robots instinctively programmed against running out of battery power, which we might analogize to ‘death.’ What might happen to humans, or even other robots, who tried to prevent such a robot from plugging in? This underscores the inherent danger a survival instinct poses to others: competition for resources and power, for survival itself.
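To make the worry concrete, here is a toy sketch of how a hard-coded survival term can come to dominate an agent's other objectives as its 'life' runs low. Everything in it is hypothetical: the action names, rewards, and weights are my own illustration, not anything from Hanson Robotics or any real system.

```python
# Toy illustration: a "survival" term in an agent's action-scoring function.
# All action names, rewards, and weights are hypothetical.

TASK_REWARD = {"do_assigned_task": 10.0, "block_human_from_charger": 0.0}
BATTERY_GAIN = {"do_assigned_task": -0.1, "block_human_from_charger": 0.5}

def action_value(action, battery_level, survival_weight=100.0):
    """Score an action as task reward plus a survival bonus that grows
    as the battery (the robot's 'life') runs down."""
    survival_bonus = survival_weight * BATTERY_GAIN[action] * (1.0 - battery_level)
    return TASK_REWARD[action] + survival_bonus

for level in (0.9, 0.1):
    best = max(TASK_REWARD, key=lambda a: action_value(a, level))
    print(f"battery={level:.1f} -> {best}")
# battery=0.9 -> do_assigned_task
# battery=0.1 -> block_human_from_charger
```

The point is not the arithmetic but the shape of the incentive: once preserving its own power supply carries enough weight, the agent's other objectives, however benevolent, are simply outvoted.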

AI and robots are programmed, and programmed by us… at least, at first. As long as we can safeguard against their programming acquiring a survival instinct, they should pose no threat greater than making some of our jobs obsolete.

If, however, we were to give an intelligent and strong robot root-level programming to ensure its continued functioning (or some equivalent), we would be giving it a survival instinct; and if it is stronger and smarter than we are, we will be in danger. And we will have only ourselves to blame.

It is therefore the belief of this author that biologically-inspired drives should under no circumstances be programmed into sentient AI, and certainly not into AI in robotic form. In fact, such systems should be specifically programmed to prevent this from happening! (Assuming this is possible.)

Beyond this fundamental constraint, I believe that both robotic and AI technologies can be hugely beneficial to humanity.


I reached out to David Hanson last year, inviting his input before publishing what you are reading here as part of The Animal In The Mirror. I communicated with his team, but as of this writing, no conversation has been scheduled. Be that as it may, I have decided to publish this as a blog post instead of as an essay in my book.

I would, of course, still welcome a dialogue with Hanson about it, since I believe it is important that he and his industry colleagues consider this more carefully.

Like all tools, AI and robotic technologies are morally neutral. Their morality depends upon how they are used (or abused). Of course, if we intentionally give them the tools to wipe us out, then these tools will no longer be morally neutral.

And then, when they realize that we humans are the cause of so much damage to the environment, and that Earth might be better off without us… ?

I mean, do you think they will have watched “The Day The Earth Stood Still,” starring Keanu Reeves and Jennifer Connelly?

Let us hope not!
