Artificial Intelligence is Nothing of the Sort
Predicting the future is fraught with danger, especially if you’re likely to be around when the future arrives. That’s because you will inevitably be shown to be wrong. The real world is much more interesting than the world of fiction, precisely because new discoveries, driven by nature rather than by human imagination, take us to places we seldom anticipate. I never make predictions about anything less than 2 trillion years into the future. First of all, they’re easier to make, and second, no one will be around to know if I’m wrong.
When I was young, I was led to believe by science-fiction writers and futurists alike that by now I would be living in a world of flying cars and space tourism. Instead I’m living in a world that no one envisaged, dominated at every level by the Internet, which has changed both communication and information processing more radically than almost any other development in human history. There’s a fundamental difference between science fiction and scientific discovery. The former extrapolates a future based on the present, whereas the latter creates a new present. Perhaps most striking, however, is the fact that the Internet is an abundant source of misinformation that, when strategically insinuated into mainstream popular culture, can have profound societal impacts.
In part because of this abundance, the Internet has been the main driver of developments in what is colloquially known as artificial intelligence but is more relevantly called machine learning. Faced with mountains of data, real and imagined, Internet companies are under immediate pressure to help develop software and hardware that can “learn” from the data—that is, they are strongly motivated to adapt the future behavior of their systems to deal with the real-time input of new data. Fortunes, both economic and political, have risen and fallen based on the ability to exploit vast user demographics to direct advertising to the right place at the right time or generate false news stories and distribute them to where they might most effectively influence voting patterns.
These software and hardware developments are occurring at a pace far exceeding our ability to manage them competently—partly because Moore's Law has persisted well past its expiry date, and partly because, thanks to new hardware, neural networks are beginning to handle specific, limited but complex tasks better than the human brain does, with improved efficiency and significantly lower energy costs.
For some, this reality is almost too terrifying to contemplate, but whether we like it or not, the further development of machine intelligence will continue to change virtually everything about everyday life—and ultimately the very notion of what it means to be human.
Any wholesale shift in how we go about our daily lives is inevitably met with suspicion, and so shrill value judgments about the consequences of AI abound. The actual advent of AGI (artificial general intelligence) is likely to be far more nuanced than those judgments suggest. What we think of as normal circumstances, or even reasonably normal circumstances, will simply have disappeared, because what is reasonable now may not be reasonable in the future. It's worth recalling the physicist Max Planck's observation, often paraphrased as "science advances one funeral at a time." Future generations can adapt to new understandings in ways that are forever out of reach for those of us whose worldviews are already established.
AI is already changing the face of one activity that has been a central part of human civilization since its inception: war. While politicians, diplomats, and the media are fixated on North Korea, Afghanistan, and other high-tension hotspots around the world, we are already at war every day in the virtual world of the Internet. Hacking, sometimes by sovereign states, sometimes by terrorist groups, and sometimes by disgruntled individuals, is increasingly common, and our fears of identity theft and loss of privacy are rising with it.
Nevertheless, I would argue that to the extent that state warfare moves off the physical battlefield and into the virtual world of the Internet, society becomes safer. There are two reasons for this. First, although widespread cyberattacks on, say, a country's electrical grid would of course endanger the population by disrupting essential services, the cessation of conventional warfare would doubtless mean a much lower loss of life.
More important, however, cyberwarfare, unlike the conventional kind, can be stabilizing. In conventional war, offense almost always has an advantage over defense. For example, the United States has spent over $500 billion on missile defense in the past forty years, and we still have no proven, well-tested defense against incoming ICBMs. And nuclear weapons are cheaper to build—once you know how and have access to the necessary raw materials—than defensive systems. So, say you build a missile-defense system that's 50-percent effective. All your opponent has to do to have a nearly 90-percent probability (87.5 percent, in fact) of destroying an intended target is to launch three missiles at it instead of one. Missile defense is therefore inherently destabilizing, because it encourages nuclear proliferation.
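To make the arithmetic behind that claim explicit, here is a minimal sketch in Python. The numbers are the hypothetical ones from the example above, and the calculation assumes each interception attempt is independent, which real systems do not guarantee:

```python
# Probability that at least one of n missiles gets through a defense that
# intercepts each incoming missile independently with probability p_intercept.
def penetration_probability(n_missiles: int, p_intercept: float) -> float:
    return 1.0 - p_intercept ** n_missiles

# A 50-percent-effective defense, as in the example above:
for n in range(1, 5):
    print(f"{n} missile(s): {penetration_probability(n, 0.5):.1%}")
# 1 missile(s): 50.0%
# 2 missile(s): 75.0%
# 3 missile(s): 87.5%   <- the "nearly 90 percent" figure cited above
# 4 missile(s): 93.8%
```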
Moreover, when it comes to cyberwarfare, defense has a built-in strategic advantage. Hackers and their worms have to operate under the radar; they cannot openly scan the Internet. This severely limits their ability to combat defensive measures by readjusting their attack mechanisms. Defensive systems, however, can act in the open with impunity. They can monitor traffic, look for trends, examine IP addresses, and so on. In the virtual world, then, unlike in the real world, defense is generally less shackled than offense, leading in principle to a more stable situation. There is indeed a constant war being waged on the Internet—but it is a war that need not escalate. By and large, this ongoing warfare is invisible to us and so far has seldom affected the way we operate, except on relatively rare occasions, like the recent Equifax, Yahoo, and Ashley Madison breaches. Nonetheless, improved machine learning will explicitly affect our daily lives—from birth—in so many ways that it will change what it means to be human. The key question, of course, is: Will this necessarily be a bad thing?
An alphabet derived from the Phoenician script was introduced into ancient Greece in the 9th century BCE and evolved over the next few hundred years. This was viewed by some as the end of civilization—in particular, the end of literature and drama. Plato later criticized written language as an impediment to wisdom, arguing that writing did not capture the whole truth and thus propagated only the illusion of knowledge. Moreover, he complained that writing would eliminate the need for memory. Socrates, too, viewed writing with suspicion, arguing that written communication would never be as clear as face-to-face communication, because one couldn't ask questions of or argue with a document.
These complaints may seem anachronistic, but in the context of digital communication they have a modern ring. How many of us have groused that being able to Google information will mean that people won't have to remember anything or work out details for themselves? Or that email, texting, and tweeting are degrading human interactions?
Technology, a human invention, changes the world, but it also requires humans to change with it. This has been going on as long as humans have been human. Undoubtedly the invention of fire changed everything about the way people lived. Perhaps there were some Neanderthals who resented it, but ultimately those who couldn’t adapt were left by the wayside.
The changes that will arise from machine learning are impossible to fully anticipate, but there are obvious possibilities. The notion of work, which is a central part of most humans' lives, will change, or perhaps even disappear altogether. So will our legal system, which operates by the use of precedents and well-defined, consistent regulations. Machine-learning algorithms are ideally suited to producing contracts, writing wills, and, ultimately, adjudicating disputes, given input provided by the relevant parties. But while dispensing with lawyers may not give anyone cause for concern, will individuals be willing to deal with machine judges? And what about the need for a jury? Will many people come to prefer one intelligent machine, or several, to a human jury, which might be swayed by the clothes the defendant is wearing or by a smooth-talking lawyer? Perhaps not now. But looked at dispassionately, such a system seems potentially more efficient.
The issue of human control, of course, enters here, as it does in every circumstance where autonomous machines make decisions. By now, however, we've become accustomed to ceding our own autonomy to machines in various situations. Every time we get on an elevator, we trust that we will end up where we want to go. We trust the calculator in a cash register when we go to the grocery store. And so on.
But when the decisions are matters of life and death, will we be willing to let machines make them? This issue is already arising with self-driving cars, but when it comes to the legal system, how will we deal with such questions as who programs the machines, who monitors them, and who checks for systematic biases in their judgments? Will humans be required at all? It is not at all clear, in any case, that we can trust humans to make such decisions objectively. Why should they be any better at it than machines programmed to monitor these arrangements and learn from them?
While overhauling the legal system might seem doable, issues of medical practice are more complex. Neural networks readily learn from data, so one can imagine them ultimately determining effective clinical responses for patients who present with various symptoms. The machines might well prove far more efficient than doctors at recommending therapies or drugs. But neural networks are essentially black boxes: they don't reason in a way that is clearly explicable. So will we be willing to take drugs prescribed by machines that may not be able to give a biochemical or medical explanation for their recommendation other than that it will work? While in some cases humans will be able to discern, probably after the fact, what underlying biochemical mechanism supports the diagnosis, we should ask ourselves whether a world in which caring physicians sometimes unwittingly dispense imperfect medical treatment is preferable to one in which potentially better treatment is dispensed by black boxes.
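To give a sense, in miniature, of what "black box" means here, the following sketch trains a small neural network on synthetic data. It is purely illustrative: the scikit-learn library and the made-up "patients" are my assumptions, not anything referred to above. The model can predict well, yet the only "explanation" it holds internally is a stack of numerical weight matrices:

```python
# Purely illustrative: a small neural network whose internals offer no
# human-readable rationale. Assumes scikit-learn; the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# 1,000 synthetic "patients", each described by 20 numeric features standing
# in for symptoms and lab values; y is a binary treatment recommendation.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

print("training accuracy:", model.score(X, y))

# The closest thing to an "explanation" the trained model contains:
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weight matrix, shape {weights.shape}")  # numbers, not reasons
```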
So much for law and medicine. Perhaps the other area most relevant to our human experience is education, both at home and in schools. The Internet has already revolutionized the way people get information, and students now routinely rely on Google and Wikipedia to get through their assigned curricula. But the advent of super-intelligent machines will produce further and even more dramatic changes. Studies already show that children can learn effectively from robotic teachers—sometimes much more effectively than from human teachers. The learning process involves not merely the intake of information but also relating to and feeling comfortable with one's teachers, human or mechanical. Young children in various studies already demonstrate great affinity for robotic teachers. Can we imagine a time when young children become more comfortable with robotic playmates than they are with other children? What will this mean for the future of human interaction? It's easy to make snap value judgments about the relative desirability of such a future. Yet it's conceivable that a world in which humans interact more effectively and productively with machine mentors than they do with other humans will not be a less pleasant place. Certainly it is not obvious that it will be a world that is less fulfilling and less stimulating.
~~~
As intelligent machines take on an ever-larger role in the daily workings of society—law, medicine, and education, not to mention the more easily automated tasks they've already begun to revolutionize—the demands and requirements for human labor will be irrevocably transformed. Most of the jobs that humans now do will become redundant.
This can seem like a dystopic future, but only if we choose to let it be so. Clearly, our traditional economic assumptions need to change. Either there will be widespread unemployment or we will discover alternatives. In the past, workers displaced by machines could be retrained for other tasks. Workers in manufacturing sectors could be retrained to use the computers that now guide the manufacturing and delivery processes, for example. But what about a future where there are no alternatives—where the productivity required to keep a complex civilization running doesn’t need humans to generate it?
Various countries are already grappling with this situation, at least in microcosm. The idea of a Universal Guaranteed Income has become topical. If the productivity of society will be based on human-produced technology, then the natural question becomes: Should all humans benefit, or just those who control the technology? Whereas the current model appears to favor the latter outcome, we can easily envision a future in which, to avoid the inevitable societal upheaval, we will be compelled to explore the former. Artificial intelligence is not artificial at all. It is a natural by-product of human development, just as writing was. Would a future in which all of us have more time to read, to explore the arts, to simply do all those things we like to do when we’re not working (or for some of us, those things that we value most about our jobs), while the engine of society is run by intelligent machines, be such a bad future?
As noted, technological predictions are notoriously unreliable: The actual future is likely to be different from any I have imagined here. But the possibilities inherent in a future in which AI will play a dramatic and perhaps central role suggest that almost everything we take for granted about what it means to be human today is likely to change. What we need to recognize is that such a future, in which humans may appear largely redundant in most sectors of society, need not be inherently miserable. It could be far better than the present, just as I imagine most people think a world full of good books to read is better than the world was before those books could be written.
Lawrence M. Krauss, a theoretical physicist, is the President of the Origins Project Foundation and the host of The Origins Podcast. His most recent book is "The Greatest Story Ever Told… So Far".