Artificial intelligence expert David Levy says relationships with robots might be even more complicated than Ayesha and Parag Khanna assume.
In their lucid and convincing piece, Ayesha and Parag Khanna (“Technology Will Take on a Life of Its Own,” September/October 2011) use the example of a young man in Japan who recently married a video-game character, pointing to a future in which human-robot marriages will be commonplace. Yes, we humans will be forming emotional attachments with machines, falling in love with them, having sex with them, and marrying them, thereby contributing significantly to a worldwide reduction in loneliness and the unhappiness that the lack of a loved one so often brings. But there are dangers as well.
The article also touches on what will be one of humankind’s greatest challenges in the future — cyberhacking. The Khannas ask whether it is only human cyberhackers whom we have to fear, “or perhaps also artificially intelligent software programs.” I would answer both, and their threat to just about every facet of our lives will be awesome.
In the case of robots built for love and sex, one thing we have to fear from cyberhackers is robots that toy with our emotions because their software has been hacked to make them do so. The warped thinking that leads hackers to wreak havoc with computer systems just for the “fun” of it is likely to produce, for example, a type of virus that manipulates human emotions enough to make someone fall in love, then dashes that bliss by dumping them. This and other emotional “games” could cause misery to the point of inflicting extreme psychological trauma on some who love robots.
We need Alvin Toffler-like thinking to prepare us for these threats and point the way for dealing with them.
CEO, Intelligent Toys Ltd.
Author, Love and Sex With Robots
Ayesha and Parag Khanna reply:
We welcome David Levy’s insight, based on his own intellectually stimulating research. He rightly points to the unintended consequences of our growing emotional attachment to machines. While some maladies such as loneliness are addressed, other vulnerabilities are created. As pervasive networks expand — whether among humans or among machines and humans — there is a window or lag time in which users can be manipulated. Hybrid Reality Institute fellow Marc Goodman has conjured very specific scenarios around the possibility of financial algorithms’ gaining the autonomous capacity to divert capital toward shell companies, effectively stealing from the markets.
Stanford University’s Jeremy Bailenson, an expert on virtual-human interactions, predicts that anti-malware software will eventually be developed to alert us to the kinds of intrusions Levy rightly fears. But with the stakes and potential profits from online emotional extortion rising (witness the many fraudulent schemes that have pervaded Second Life), cyberhackers will no doubt persist in their efforts, exploiting not just individual consumers but users en masse. In a related vein, recent reports have documented the rise of “virtual slavery,” in which Chinese prison inmates are forced to play online games for up to 12 hours per day (in addition to hard manual labor) in order to “gold-farm” — build stockpiles of virtual currencies that prison guards then spend. We know all too well how regulations and education frequently lag one or more steps behind such crafty and malicious purveyors of cyberexploitation. It is therefore a task of foresight to anticipate these possibilities.