Would robots taking over the world be bad? Some of the AI-risk scenarios I’ve read almost make it feel inevitable — and I’ve been wondering whether there is a way it could happen that we would actually be comfortable with, or even happy about. Let me roughly outline one scenario I have in mind, and then reflect on whether it would be “bad” and why.
We start from the state we are in today: AI is getting progressively better at taking over various human jobs merely by leveraging statistical regularities in the data. As AI cannot yet run itself, humans remain in the loop for now, but fewer and fewer of our actions and decisions are made independently, without being informed or assisted by AI in some way. The big assumption I will make for the scenarios here is that at some point, AI becomes fully self-sufficient in some parts of the economy: achieving autonomous self-replication, including sourcing raw materials, design, assembly, maintenance, debugging, and adapting by gathering more data and learning new statistical regularities.
At that point, or soon thereafter, in a perfect world we can imagine all humans being provided with all their basic needs without having to work. And with this comes the hard problem of finding purpose, meaning, and fun in lives where AI can run our economy perfectly well without any help from us. It is often said that meaning comes from helping other humans in some way, shape, or form. So sure, while AI might not need our help to run the economy, perhaps other humans will still need us for some quality human connection and understanding? Empathy, listening, psychotherapy perhaps, sex, friendship. Perhaps we’ll still need each other to make art that strikes some deeper notes in our souls. Or to discover the mysteries of nature through science — and explain them in a simple, elegant way that gives the pleasure of “human understanding” (even if computers can make the same predictions more accurately through incomprehensible statistics).
So while basic human needs may be met through automation, perhaps we will still need each other to satisfy our higher needs? Well, this might be true if meeting those higher needs were harder to automate — but the evidence we currently have does not seem to support that. Video games are a good example: by hacking our reward system, games can give us a powerful sense of meaning, of fighting for a great cause, and of doing it together with comrades we can trust with our lives (even if some of them may be bots). They give us the joy of accomplishment, the pain of loss, and others to share these with. As AI gets more convincing and learns to recognize human emotions (empathic AI), it is not so hard to imagine that it will meet our need for human connection much better than other humans can. The same may be said for the arts and sciences, which AI is already well underway in conquering. Even sex is already far from uncharted territory for surrogate alternatives (think AI-augmented sex dolls or VR porn).
By having our personal video games and AI friends adapt to our needs and desires, each of us can get siloed into our own personal paradise where all our needs, no matter how “basic” or “high,” are satisfied far better than the real world or real humans ever could satisfy them. Any contact with other humans — who have their own needs to be accounted for — may become tiresome, if not unbearable. While we may have some nagging sense that “we should keep it real and make real babies,” it may be no more pressing than a New Year’s resolution like “I should eat healthier.” And besides, to put our minds at ease, we could probably ask our AI to write us some inspiring and convincing blog posts explaining why it’s really not so bad if robots take over the world. ;)
At this point, I can imagine the question of the human species’ survival becoming a topic of public debate. Perhaps some minor factions will separate from mainstream society and artificially cap the level of permissible AI in their communities, to preserve some areas of human superiority. Yet in most of the world, humans will probably no longer be useful for anything or to anyone — even to each other — and will peacefully and happily die off.
Now, this seems scary. But is it really so bad? Having been trained to understand our human needs and human nature in minute detail, the AI we leave behind will be the sum total of all human values, desires, knowledge, and aspirations. Moreover, each one of us will have contributed our personal beliefs and values to this “collective conscience.” Having spent years of our lives living in the AI world, and thereby personalizing and training it to know our wants, may not, after all, be so far off from a direct “brain download.” And since by then the AI economy will have already had a long run of human-supervised self-sufficiency, there is no reason to fear that, without our oversight, the robots left behind will run the world any worse than we can.
Brain downloading, or progressively replacing all our organic tissues with artificial enhancements, could be other paths to a “gentle apocalypse” — but none of them seems fundamentally “better” or “worse” to me in any moral sense. Either way, the biological human species goes out, having left its creation — its child — behind. In this sense, our biological children, who replace us generation after generation, are no more a continuation of us than this AI would be.
The scenario I described may be thought of as one where the child does everything in its power to take care of its aging parent’s needs. In practice, this does not always happen — and there are countless historical examples of children murdering their parents to claim the figurative “throne.” Even then, however, they continue the bloodline. Whether violently or gently, by rebelling or inheriting, children carry on their parents’ legacy, values, and worldview. So if the robots do “rise up,” and the apocalypse is not so gentle — when all is said and done, does it really matter?