Artificial Intelligence Poses a Doomsday Threat

Doomsday Scenarios, 2011

"A true AI [artificial intelligence] would have immense economic potential, and when money is at stake, safety issues get put aside until real problems develop, at which time, of course, it may already be too late."

Kaj Sotala is a writer and a supporter of the Singularity Institute. In the following viewpoint, he argues that an artificial intelligence (AI) will not be dedicated to destroying humans, as depicted in film. However, Sotala says, the AI will not care about humans either. Thus, it may attack or eliminate humans as a byproduct of other goals or interests. Sotala says that the threat from AI means that scientists working on artificial intelligence must be careful to develop ways to make AI care about human beings.

As you read, consider the following questions:

1. Why does Sotala argue that the Terminator movie may lead people to believe that AI is not dangerous?
2. According to Sotala, what is inductive bias?
3. What does Stephen Omohundro conclude about agents with harmless goals?

Skynet in the Terminator [1] movies is a powerful, evocative warning of the destructive force an artificial intelligence [AI] could potentially wield. However, as counterintuitive as it may sound, I find that the Terminator franchise is actually making many people underestimate the danger posed by AI.

AI Is Not Human

It goes like this. A person watches a Terminator movie and sees Skynet portrayed as a force actively dedicated to the destruction of humanity. Later on, the same person hears somebody bring up the dangers of AI. He then recalls the Terminator movies and concludes (correctly so!) that a vision of an artificial intelligence spontaneously deciding to exterminate all of humanity is unrealistic. Seeing the other person's claims as unrealistic and inspired by silly science fiction, he dismisses the AI threat argument as hopelessly misguided.

Yet humans are not actively seeking to harm animals when they level a forest in order to build luxury housing where the forest once stood. The animals living in the forest are harmed regardless, not out of an act of intentional malice, but as a simple side effect. [AI researcher] Eliezer Yudkowsky put it well: the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

To assume an artificial intelligence would necessarily act in a way we wanted is just as misguided and anthropomorphic as assuming that it would automatically be hostile and seek to rebel out of a desire for freedom. Usually, a child will love its parents and caretakers, and protégés will care for their patrons, but these are traits that have developed in us over countless generations of evolutionary change, not givens for any intelligent mind. An AI built from scratch would have no reason to care about its creators unless it was expressly designed to do so. And even if it was, a designer building the AI to care about her must very closely consider what she actually means by "caring", for these things are not givens, even if we think of them as self-contained concepts obvious to any intelligent mind. It only seems so because we instinctively model other minds by using ourselves and people we know as templates; to do otherwise would mean freezing up, as we'd spend years building from scratch models of every new person we met. The people we know and consider intelligent all have at least roughly the same idea of what "caring" for someone means, thus any AI would eventually arrive at the same concept, right?

An inductive bias is a tendency to learn certain kinds of rules from certain kinds of observations. Occam's razor, the principle of choosing the simplest consistent hypothesis, is one kind of inductive bias. So is an infant's tendency to eventually start ignoring phoneme differences [the basic units of speech sounds] not relevant for their native language. Inductive biases are necessary for learning, for without them there would be an infinite number of explanations for any phenomenon, but nothing says that all intelligent minds should have the same inductive biases built in.
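To make the idea concrete, here is a minimal Python sketch (purely illustrative, not from Sotala's viewpoint; the data points and the use of numpy are my own assumptions). Two learners see the exact same five observations, but because one is biased toward simple straight-line hypotheses and the other toward flexible degree-4 polynomials, they learn different rules and extrapolate to very different predictions.

    # A toy illustration of inductive bias: the same observations,
    # two different hypothesis classes, two different generalizations.
    import numpy as np

    # Five noisy observations of an underlying roughly linear trend.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])

    # Bias toward simple hypotheses (Occam's razor): fit a straight line.
    simple_fit = np.polyfit(x, y, deg=1)

    # A different bias: a degree-4 polynomial, which matches the five points
    # exactly but commits to a much wilder rule beyond the observations.
    complex_fit = np.polyfit(x, y, deg=4)

    # Both agree on the training data yet disagree sharply when extrapolating.
    x_new = 6.0
    print("linear model predicts:  ", np.polyval(simple_fit, x_new))
    print("degree-4 model predicts:", np.polyval(complex_fit, x_new))

The point is not these particular models, but that any learner must commit to some such bias before it can generalize at all, and nothing guarantees that an AI's biases would match ours.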
Caring for someone is such a complex concept that it couldn't be built into the AI directly; the designer would have to come up with inductive biases she thought would eventually lead to the mind learning to care about us, in a fashion we'd interpret as caring.

AI Will Not Care About Us

The evolutionary psychologists John Tooby and Leda Cosmides write:

Evolution tailors computational hacks that work brilliantly, by exploiting relationships that exist only in its particular fragment of the universe (the geometry of parallax gives vision a depth cue; an infant nursed by your mother is your genetic sibling; two solid objects cannot occupy the same space). These native intelligences are dramatically smarter than general reasoning because natural selection equipped them with radical short cuts.

Our minds have evolved to reason about other human minds, not minds-in-general. When trying to predict how an AI would behave in a certain situation, and thus trying to predict how to make it safe, we cannot help but unconsciously slip in assumptions based on how humans would behave. The inductive biases we automatically employ to predict human behavior do not correctly predict AI behavior. Because we are not used to questioning the deep-rooted assumptions of such hypotheses, we easily fail to do so even in the case of AI, where it would actually be necessary.

The people who have stopped to question those assumptions have arrived at unsettling results. In his "Basic AI Drives" paper, Stephen Omohundro concludes that even agents with seemingly harmless goals will, if intelligent enough, have a strong tendency to try to achieve those goals via less harmless methods. As simple examples, any AI with a desire to achieve any kind of goal will have a motivation to resist being turned off, as that would prevent it from achieving the goal; and because of this, it will have a motivation to acquire resources it can use to protect itself. While this won't make it desire humanity's destruction, it is not inconceivable that it would be motivated to at least reduce humanity to a state where we couldn't even potentially pose a threat.
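As a toy illustration of this tendency (my own hypothetical sketch, not code from Omohundro's paper, with made-up numbers), consider an agent that scores actions purely by expected progress toward its goal. Even though "resist shutdown" appears nowhere in its goal, the resisting policy gets the higher score, because being switched off means zero progress.

    # A hypothetical goal-maximizing agent: resisting shutdown emerges as an
    # instrumental subgoal, not a stated one.

    # Assumed probability that operators try to shut the agent down early.
    P_SHUTDOWN_ATTEMPT = 0.3

    # Expected goal progress if the agent keeps running vs. if it is shut off.
    VALUE_IF_RUNNING = 1.0
    VALUE_IF_OFF = 0.0

    def expected_value(resists_shutdown: bool) -> float:
        """Expected goal achievement under a purely goal-directed objective."""
        # If the agent makes itself hard to switch off, shutdown never succeeds.
        p_off = 0.0 if resists_shutdown else P_SHUTDOWN_ATTEMPT
        return p_off * VALUE_IF_OFF + (1 - p_off) * VALUE_IF_RUNNING

    if __name__ == "__main__":
        for resists in (False, True):
            print(f"resists_shutdown={resists}: "
                  f"expected value = {expected_value(resists):.2f}")

Under these toy numbers the resisting policy scores 1.0 against 0.7, which is the sense in which self-preservation and resource acquisition fall out of almost any goal rather than having to be programmed in.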
A commonly heard objection to these kinds of scenarios is that the scientists working on AI will surely be aware of these risks themselves, and be careful enough. But historical precedent doesn't really
support this assumption. Even if the scientists themselves were careful, they will often be under intense pressure, especially when economic interest is at stake. Climate scientists have spent decades warning people of the threat posed by greenhouse gases, but even today many nations are reluctant to cut back on emissions, as they suspect it'd disadvantage them economically. The engineers in charge of building many Soviet nuclear plants, most famously Chernobyl, did not put safety as their first priority. And so on.

A true AI would have immense economic potential, and when money is at stake, safety issues get put aside until real problems develop, at which time, of course, it may already be too late. Yet if we want to avoid Skynet-like scenarios, we cannot afford to risk it. Safety must be a paramount priority in the creation of Artificial Intelligence.

Footnotes

1. Terminator is a 1984 science fiction movie in which an artificial intelligence known as Skynet takes over the world.

Further Readings

Books

Amir D. Aczel. Present at the Creation: The Story of CERN and the Large Hadron Collider. New York: Crown, 2010.

Joseph Cirincione. Bomb Scare: The History and Future of Nuclear Weapons. New York: Oxford University Press, 2008.

Heidi Cullen. The Weather of the Future: Heat Waves, Extreme Storms, and Other Scenes from a Climate-Changed Planet. New York: HarperCollins, 2010.

Tad Daley. Apocalypse Never: Forging the Path to a Nuclear Weapon-Free World. Piscataway, NJ: Rutgers University Press, 2010.

Christopher Dodd and Robert Bennett. The Senate Special Report on Y2K. Nashville, TN: Thomas Nelson, Inc., 1999.

K. Eric Drexler. Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books, 1986.

Jean-Pierre Filiu. Apocalypse in Islam. Berkeley, CA: University of California Press, 2011.

Bruce David Forbes and Jeanne Halgren Kilde, eds. Rapture, Revelation, and the End Times: Exploring the Left Behind Series. New York: Palgrave Macmillan, 2004.

Lynn E. Foster. Nanotechnology: Science, Innovation, and Opportunity. Upper Saddle River, NJ: Prentice Hall, 2009.

John R. Hall, Philip D. Schuyler, and Sylvaine Trinh. Apocalypse Observed: Religious Movements and Violence in North America, Europe, and Japan. New York: Routledge, 2000.

Paul Halpern. The World's Smallest Particles. Hoboken, NJ: John Wiley & Sons, 2009.

James C. Hansen. Storms of My Grandchildren: The Truth About the Coming Climate Catastrophe and Our Last Chance to Save Humanity. New York: Bloomsbury USA, 2009.

John Horgan. The Undiscovered Mind: How the Human Brain Defies Replication, Medication, and Explanation. New York: Touchstone, 1999.
Alan Hultberg, ed. Three Views on the Rapture: Pretribulation, Prewrath, and Posttribulation. Grand Rapids, MI: Zondervan, 2010.

Samuel P. Huntington. The Clash of Civilizations and the Remaking of World Order. New York: Touchstone, 1996.

John Major Jenkins. The 2012 Story: The Myths, Fallacies, and Truth Behind the Most Intriguing Date in History. New York: Penguin Group, 2009.

Jonathan Kirsch. A History of the End of the World: How the Most Controversial Book in the Bible Changed the Course of History. New York: HarperCollins, 2006.

Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Group, 2005.

Patrick J. Michaels and Robert C. Balling Jr. Climate of Extremes: Global Warming Science They Don't Want You to Know. Washington, DC: Cato Institute, 2009.

John Mueller. Atomic Obsession: Nuclear Alarmism from Hiroshima to Al-Qaeda. New York: Oxford University Press, 2009.

Sharan Newman. The Real History of the End of the World: Apocalyptic Predictions from Revelation and Nostradamus to Y2K and 2012. New York: Berkley Books, 2010.

Kenneth G.C. Newport and Crawford Gribben, eds. Expecting the End: Millennialism in Social and Historical Context. Waco, TX: Baylor University Press, 2006.

Kevin Quigley. Responding to Crises in the Modern Infrastructure: Policy Lessons from Y2K. New York: Palgrave Macmillan, 2008.

Bruce Riedel. The Search for Al Qaeda: Its Leadership, Ideology, and Future, rev. ed. Washington, DC: Brookings Institution, 2008.

Barbara R. Rossing. The Rapture Exposed: The Message of Hope in the Book of Revelation. New York: Basic Books, 2004.

Periodicals

Ronald Bailey. "Wagging the 'Fat Tail' of Climate Catastrophe," Reason.com, February 10, 2009. http://reason.com.

Jeanna Bryner. "'Doomsday' Seed Vault Stores 500,000 Crops," LiveScience.com, March 10, 2010. www.livescience.com.

Michel Chossudovsky. "Real Versus Fake Crises: Concealing the Risk of an All Out Nuclear War," GlobalResearch.ca, September 16, 2010. http://globalresearch.ca.

Stephen J. Dubner. "A Different Climate Change Apocalypse Than the One You Were Envisioning," Freakonomics, July 7, 2008. http://freakonomics.blogs.nytimes.com.

Robert Lamb. "Noel Sharkey: Robotics vs. Sci-Fi vs. Anthropomorphism," HowStuffWorks, May 25, 2010. http://blogs.howstuffworks.com.

Cole Morton. "The Large Hadron Collider: End of the World, or God's Own Spin?" Independent, September 7, 2008. www.independent.co.uk.

John Mueller. "Why Nuclear Weapons Aren't as Frightening as You Think," Foreign Policy, January/February 2010. www.foreignpolicy.com.

Dennis Overbye. "Gauging a Collider's Odds of Creating a Black Hole," New York Times, April 15, 2008. www.nytimes.com.
Eliezer Yudkowsky. "Why We Need Friendly AI," Terminator Salvation: Preventing Skynet, May 22, 2009. www.preventingskynet.com.

Source Citation

Sotala, Kaj. "Artificial Intelligence Poses a Doomsday Threat." Doomsday Scenarios. Ed. Noah Berlatsky. Detroit: Greenhaven Press, 2011. Opposing Viewpoints. Rpt. from "Thinking of AIs as Humans Is Misguided." PreventingSkynet.com, 2009.