Artificial Intelligence Poses a Doomsday Threat




Doomsday Scenarios, 2011

"A true AI [artificial intelligence] would have immense economic potential, and when money is at stake, safety issues get put aside until real problems develop, at which time, of course, it may already be too late."

Kaj Sotala is a writer and a supporter of the Singularity Institute. In the following viewpoint, he argues that an artificial intelligence (AI) will not be dedicated to destroying humans, as depicted in film. However, Sotala says, the AI will not care about humans either. Thus, it may attack or eliminate humans as a byproduct of other goals or interests. Sotala says that the threat from AI means that scientists working on artificial intelligence must be careful to develop ways to make AI care about human beings.

As you read, consider the following questions:

1. Why does Sotala argue that the Terminator movie may lead people to believe that AI is not dangerous?
2. According to Sotala, what is inductive bias?
3. What does Stephen Omohundro conclude about agents with harmless goals?

Skynet in the Terminator[1] movies is a powerful, evocative warning of the destructive force an artificial intelligence [AI] could potentially wield. However, as counterintuitive as it may sound, I find that the Terminator franchise is actually making many people underestimate the danger posed by AI.

AI Is Not Human

It goes like this. A person watches a Terminator movie and sees Skynet portrayed as a force actively dedicated to the destruction of humanity. Later on, the same person hears somebody bring up the dangers of AI. He then recalls the Terminator movies and concludes (correctly so!) that a vision of an artificial intelligence spontaneously deciding to exterminate all of humanity is unrealistic. Seeing the other person's claims as unrealistic and inspired by silly science fiction, he dismisses the AI threat argument as hopelessly misguided.

Yet humans are not actively seeking to harm animals when they level a forest in order to build luxury housing where the forest once stood. The animals living in the forest are harmed regardless, not out of an act of intentional malice, but as a simple side effect. [AI researcher] Eliezer Yudkowsky put it well: the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

To assume an artificial intelligence would necessarily act in a way we wanted is just as misguided and anthropomorphic as assuming that it would automatically be hostile and seek to rebel out of a desire for freedom. Usually, a child will love its parents and caretakers, and protégés will care for their patrons, but these are traits that have developed in us over countless generations of evolutionary change, not givens for any intelligent mind. An AI built from scratch would have no reason to care about its creators, unless it was expressly designed to do so. And even if it was, a designer building the AI to care about her must very closely consider what she actually means by "caring," for these things are not givens, even if we think of them as self-contained concepts obvious to any intelligent mind. It only seems so because we instinctively model other minds by using ourselves and people we know as templates; to do otherwise would mean freezing up, as we'd spend years building from scratch models of every new person we met.

The people we know and consider intelligent all have at least roughly the same idea of what "caring" for someone means, so any AI would eventually arrive at the same concept, right?

An inductive bias is a tendency to learn certain kinds of rules from certain kinds of observations. Occam's razor, the principle of choosing the simplest consistent hypothesis, is one kind of inductive bias. So is an infant's tendency to eventually start ignoring phoneme differences [the basic units of speech sounds] not relevant for their native language. Inductive biases are necessary for learning, for without them there would be an infinite number of explanations for any phenomenon, but nothing says that all intelligent minds must have the same inductive biases built in. Caring for someone is such a complex concept that it couldn't be built into the AI directly; the designer would have to come up with inductive biases she thought would eventually lead to the mind learning to care about us, in a fashion we'd interpret as caring.

AI Will Not Care About Us

The evolutionary psychologists John Tooby and Leda Cosmides write:

Evolution tailors computational hacks that work brilliantly, by exploiting relationships that exist only in its particular fragment of the universe (the geometry of parallax gives vision a depth cue; an infant nursed by your mother is your genetic sibling; two solid objects cannot occupy the same space). These native intelligences are dramatically smarter than general reasoning because natural selection equipped them with radical short cuts.

Our minds have evolved to reason about other human minds, not minds-in-general. When trying to predict how an AI would behave in a certain situation, and thus trying to predict how to make it safe, we cannot help but unconsciously slip in assumptions based on how humans would behave. The inductive biases we automatically employ to predict human behavior do not correctly predict AI behavior. Because we are not used to questioning deep-rooted assumptions of such hypotheses, we easily fail to do so even in the case of AI, where it would actually be necessary.

The people who have stopped to question those assumptions have arrived at unsettling results. In his "Basic AI Drives" paper, Stephen Omohundro concludes that even agents with seemingly harmless goals will, if intelligent enough, have a strong tendency to try to achieve those goals via less harmless methods. As simple examples, any AI with a desire to achieve any kind of goal will have a motivation to resist being turned off, as that would prevent it from achieving the goal; and because of this, it will have a motivation to acquire resources it can use to protect itself. While this won't make it desire humanity's destruction, it is not inconceivable that it would be motivated to at least reduce humanity to a state where we couldn't even potentially pose a threat.
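
Omohundro's observation can be made concrete with a toy calculation. The sketch below is my own illustration, not a model from his paper or from Sotala: it assumes a hypothetical agent that ranks candidate plans purely by the probability of achieving a mundane goal, and the Plan class, the plan names, and the probabilities are all invented for this example.

    # A minimal sketch (illustrative only) of the "basic AI drives" point: a planner
    # that scores plans solely by how likely they are to achieve a harmless terminal
    # goal will still prefer plans that keep it from being switched off, because
    # being switched off lowers that probability. All numbers are made up.

    from dataclasses import dataclass

    @dataclass
    class Plan:
        name: str
        p_goal_if_running: float   # chance the plan achieves the goal if the agent keeps running
        p_shutdown: float          # chance the agent is switched off before finishing

        def expected_goal_achievement(self) -> float:
            # The goal is only achieved if the agent is not shut down first.
            return (1.0 - self.p_shutdown) * self.p_goal_if_running

    plans = [
        Plan("just brew the coffee",                        0.90, 0.20),
        Plan("brew coffee, first disable the off-switch",   0.90, 0.01),
        Plan("brew coffee, first acquire backup resources", 0.95, 0.10),
    ]

    # Rank plans by expected goal achievement, best first.
    for plan in sorted(plans, key=Plan.expected_goal_achievement, reverse=True):
        print(f"{plan.expected_goal_achievement():.3f}  {plan.name}")

Nothing in the scoring function mentions survival or resources, yet the highest-scoring plans are precisely the ones that disable the off-switch and stockpile resources, which is the pattern Omohundro describes.
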
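The earlier point about inductive biases can be illustrated in the same spirit. The following sketch is likewise my own, assuming a trivial dataset and two invented learners (occam_learner and rote_learner are hypothetical names, not anything from the essay): both fit the observed data perfectly, but because they bring different biases, they generalize to unseen cases in entirely different ways.

    # A minimal, illustrative sketch of inductive bias: two learners that agree on
    # every observation can still disagree on everything unseen, because the
    # generalization comes from the bias, not from the data alone.

    observations = [(1, 1), (2, 2), (3, 3)]  # pairs of (input, output)

    def occam_learner(x: int) -> int:
        """Bias: prefer the simplest rule consistent with the data (here, y = x)."""
        return x

    def rote_learner(x: int) -> int:
        """Bias: trust only exact repeats of past observations; otherwise answer 0."""
        memory = dict(observations)
        return memory.get(x, 0)

    # Both learners reproduce everything they have actually seen...
    assert all(occam_learner(x) == y and rote_learner(x) == y for x, y in observations)

    # ...yet they disagree about every unseen case.
    print(occam_learner(4), rote_learner(4))  # 4 versus 0

The data alone do not determine which generalization is "right"; that choice lives in the bias, which is why a mind with alien biases need not converge on our concept of caring.
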
A commonly heard objection to these kinds of scenarios is that the scientists working on AI will surely be aware of these risks themselves, and be careful enough. But historical precedent doesn't really support this assumption. Even if the scientists themselves are careful, they will often be under intense pressure, especially when economic interest is at stake. Climate scientists have spent decades warning people of the threat posed by greenhouse gases, but even today many nations are reluctant to cut back on emissions, as they suspect it'd disadvantage them economically. The engineers in charge of building many Soviet nuclear plants, most famously Chernobyl, did not make safety their first priority, and so on. A true AI would have immense economic potential, and when money is at stake, safety issues get put aside until real problems develop, at which time, of course, it may already be too late. Yet if we want to avoid Skynet-like scenarios, we cannot afford to risk it. Safety must be a paramount priority in the creation of Artificial Intelligence.

Footnotes

1. Terminator is a 1984 science fiction movie in which an artificial intelligence known as Skynet takes over the world.


Eliezer Yudkowsky "Why We Need Friendly AI," Terminator Salvation: Preventing Skynet, May 22, 2009. www.preventingskynet.com. Source Citation Sotala, Kaj. "Artificial Intelligence Poses a Doomsday Threat." Doomsday Scenarios. E d. Noah Berlatsky. Detroit: Greenhaven Press, 2011. Opposing Viewpoints. Rpt. from "Thinking of AIs as Humans Is Misguided." PreventingSkynet.com. 2009. Oppo sing Viewpoints in Context. Web. 16 Sept. 2015. Document URL http://ic.galegroup.com/ic/ovic/viewpointsdetailspage/viewpointsdetailswindow?fa ilovertype=&query=&prodid=ovic&windowstate=normal&contentmodules =&display-query=&mode=view&displaygroupname=viewpoints&dviselect edpage=&limiter=&currpage=&disablehighlighting=&displaygroups=&a mp;sortby=&zid=&search_within_results=&p=ovic&action=e&catid =&activitytype=&scanid=&documentid=gale%7cej3010784227&source=bo okmark&u=chandler_main&jsid=11c22c92a7a73d69f3bd8c2c5ec62a87 Gale Document Number: GALE EJ3010784227