1 Functionalism
Some context, from the Stanford Encyclopedia of Philosophy: "Behaviorism ... attempts to explain behavior without any reference whatsoever to mental states and processes." "Functionalism in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part."
2 Functionalism
Things are defined by their functions. Two ways to define "function":
1) Function = inputs and outputs (machine functionalism), e.g. a mathematical function such as +, -, ×, /: 2 × 3 = 6; when the inputs are 2 and 3, the output is 6.
Multiple realizability: a function can be realized in different materials or through different processes.
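Multiple realizability can be sketched in code. The following toy example (ours, not from the original slides) shows one and the same input-output function, multiplication, realized by two entirely different internal processes:

```python
# Multiple realizability, sketched in code: the same input-output
# function (multiplication) realized by two different processes.

def multiply_directly(a, b):
    # Realization 1: the hardware's built-in arithmetic.
    return a * b

def multiply_by_repeated_addition(a, b):
    # Realization 2: a completely different internal process
    # (repeated addition), yet the same input-output behavior.
    total = 0
    for _ in range(b):
        total += a
    return total

# Functionally identical: for inputs 2 and 3, both output 6.
print(multiply_directly(2, 3))              # 6
print(multiply_by_repeated_addition(2, 3))  # 6
```

On the machine-functionalist definition, the two realizations count as the same function, since only the input-output relation matters.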
3 Functionalism defined as inputs and outputs, continued
e.g. beliefs, desires. "I am thirsty" (i.e. I desire water) is defined in terms of inputs and outputs. When there are inputs x and y, there is output z:
Input (x): Water is available.
Input (y): There is no reason not to drink the water.
Output (z): I drink the water.
4 2) Function = use (teleological functionalism)
Function is defined by what something does, e.g. a heart pumps blood; e.g. a belief plays a role in reasoning, as a premise in a practical syllogism:
Premise 1: I believe x is water.
Premise 2: I desire water.
Premise 3: There is no reason not to drink x.
Conclusion: I drink x.
5 Whether you interpret "function" as an input-output relation (machine functionalism) or as use (teleological functionalism), mental states such as thirst are multiply realizable. A waiter can conduct addition. A computer can conduct addition. An alien can have thirst, pain, etc. A chimpanzee can have thirst, pain, etc.
6 Functional definition of mind
If x acts like a mind, it is a mind. If, when compared to a mind, x gives similar outputs for similar inputs, x is a mind. If a computer can converse (take part in linguistic input-output exchanges / play the role of an intelligent conversational partner) just like a person, the computer is as intelligent as a person: it has a mind.
7 The Chinese Room Argument
8 Thought Experiments: Background
Instead of empirical experiments, philosophers and logicians can conduct thought experiments. Thought experiments may be carried out using natural languages, graphic visualizations, and/or formalized versions of their relevant aspects. They test concepts and theories for consistency, completeness, etc., using critical intuition aided by logic tools (e.g., reasoners) for evaluation.
9 The Turing Test
In 1950, the computer scientist Alan Turing wanted to provide a practical test to answer the question "Can a machine think?" His solution, the Turing Test: if a machine can conduct a conversation so well that people cannot tell whether they are talking with a person or with a computer, then the computer can think; it passes the Turing Test. In other words, he proposed a functional answer to the question whether a computer can think.
10 There are many modern attempts to produce computer programs (chatterbots) that pass the Turing Test. In 1991, Dr. Hugh Loebner started the annual Loebner Prize competition, with prize money offered to the author of the computer program that performs best on a Turing Test. You can track (and perhaps try) the annual winners. But the Turing Test has been objected to on several grounds:
11 Searle's Chinese Room Argument
John Searle: famous philosopher at the University of California, Berkeley; most well known in philosophy of language, philosophy of mind, and consciousness studies. Wrote "Minds, Brains, and Programs" in 1980, which described the Chinese Room Argument: "... whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything."
12 Searle's Chinese Room Argument
The Chinese Room argument is one kind of objection to functionalism, specifically to the Turing Test. Searle makes a distinction between strong AI and weak AI, objecting (only) to strong AI:
Strong AI: the appropriately programmed computer really is a mind, in the sense that computers, given the right programs, can be literally said to understand.
Weak AI: computers can simulate thinking and help us to learn about how humans think.
NB: Searle knows that he understands English and, by contrast, that he does not understand any Chinese.
13 Summary of Searle's Chinese Room Thought Experiment
Searle is in a room with input and output windows, and a list of rules, in English, for manipulating Chinese characters. The characters are all meaningless squiggles and squoggles to him. Chinese texts and questions come in through the input window. Following the rules, he manipulates the characters and produces each reply, which he pushes through the output window.
14 The answers in Chinese that Searle produces are very good; in fact, so good that no one can tell he is not a native Chinese speaker! Searle's Chinese Room passes the Turing Test. In other words, it functions like an intelligent person. Searle has only conducted symbol manipulation, with no understanding, yet he passes the Turing Test in Chinese. Therefore, passing the Turing Test does not ensure understanding. In other words, although Searle's Chinese Room functions like a mind, he knows (and we, in an analogous foreign-language room experiment, would know) it is not a mind, and therefore functionalism is wrong.
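The rule-following in the room can be caricatured as a lookup table: a program that maps input symbol strings to canned output symbol strings without representing what either side means. This is only an illustrative sketch of ours (the Chinese phrases and replies are invented placeholders, not Searle's actual rules):

```python
# An illustrative caricature of the Chinese Room: purely syntactic
# rules mapping input symbol strings to output symbol strings.
# The "operator" (this program) attaches no meaning to either side.

RULES = {
    "你好吗？": "我很好，谢谢。",     # rule: seeing this squiggle, emit that one
    "你会说中文吗？": "当然会。",
}

def room_reply(input_symbols: str) -> str:
    # Pure symbol manipulation: match the input shape, emit the
    # associated output shape. No understanding anywhere.
    return RULES.get(input_symbols, "请再说一遍。")  # default reply

print(room_reply("你好吗？"))  # 我很好，谢谢。
```

A real room would of course need vastly more rules (see the Complexity Reply below on slide 24), but the point stands: nothing in the mapping itself requires understanding.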
15 Grailog: Classes, Instances, Relations
[Grailog diagram, flattened here. Classes with relations: Language with subclassOf/instanceOf links to English and Chinese; understand holds for English and is negated for Chinese; the relation apply ... to/with/for connects rules, texts, questions, and replies; plus use and haveLanguage. Instances with relations: Searle and Wang, whose replies Searle-reply_i and Wang-reply_i are distinguishable.]
16 Syntax vs. semantics
Searle argues that computers can never understand because computer programs (and he, in the Chinese Room) are purely syntactical, with no semantics.
Syntax: the rules for symbol manipulation, e.g. grammar.
Semantics: understanding what the symbols (e.g. words) mean.
Syntax without semantics: The bliggedly blogs browl aborigously.
Semantics without syntax: Milk want now me.
17 Searle's conclusion:
Premise 1: Symbol manipulation alone can never produce understanding.
Premise 2: Computer programming is only symbol manipulation.
Conclusion: Computer programming can never produce understanding. Strong AI is false and functionalism is wrong.
18 What could produce real understanding? Searle: it is a biological phenomenon, and "only something with the same causal powers as brains can have [understanding]."
19 Objections: The Systems Reply
Searle is part of a larger system. Searle doesn't understand Chinese, but the whole system (Searle + room + rules) does understand Chinese. The knowledge of Chinese is in the rules contained in the room; the ability to implement that knowledge is in Searle. The whole system understands Chinese.
20 Searle's Response to the Systems Reply
1) It's absurd to say that the room and the rules can provide understanding.
2) What if I memorized all the rules and internalized the whole system? Then there would just be me, and I still wouldn't understand Chinese.
Counter-response to Searle's response: if Searle could internalize the rules, part of his brain would understand Chinese. Searle's brain would house two personalities: the English-speaking Searle and the Chinese-speaking system.
21 The Robot Reply What if the whole system was put inside a robot? Then the system would interact with the world. That would create understanding.
22 Searle inside the robot
23 Searle's Response to the Robot Reply
1) The robot reply admits that there is more to understanding than mere symbol manipulation.
2) The robot reply still doesn't work: imagine that I am in the head of the robot. I have no contact with the perceptions or actions of the robot. I still only manipulate symbols; I still have no understanding.
Counter-response to Searle's response: combine the robot reply with the systems reply. The robot as a whole understands Chinese, even though Searle does not.
24 The Complexity Reply
Really a type of systems reply. Searle's thought experiment is deceptive: it makes it seem as if a room, a man with no understanding of Chinese, and a few slips of paper could pass for a native Chinese speaker. In fact, it would be incredibly difficult to simulate a Chinese speaker's conversation. You would need to program in knowledge of the world, an individual personality with a simulated life history to draw on, and the ability to be creative and flexible in conversation. Basically, you would need to simulate the complexity of an adult human brain, which is composed of billions of neurons and trillions of connections between neurons.
25 Complexity changes everything. Our intuitions about what a complex system can do are highly unreliable. Tiny ants with tiny brains can produce complex ant colonies. Computers that at the most basic level are just binary switches flipping between 1 and 0 can play chess and beat the world's best human player. If you didn't know it could be done, you would not believe it. Maybe symbol manipulation of sufficient complexity can create semantics, i.e. can produce understanding.
26 Possible Response to the Complexity Reply
1) See Searle's response to the Systems Reply.
2) Where would the quantitative-qualitative transition be?
Counter-response to that response: what would happen if Searle's Chinese-speaking subsystem became as complex as the English-speaking rest of his linguistic mind?
27 Searle's criticism of strong AI's mind-program analogy
Searle's criticism of strong AI's analogy "mind is to brain as program is to computer" seems justified, since mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.
28 Classes and relations
[Diagram: the animate, tangible class brain stands in the produce relation to the intangible class mind; the inanimate, tangible class computer stands in the run relation to the intangible class program.]
Classes: brain, mind, computer, program. Binary relations: produce, run.
29 Instances
[Diagram: tangible, animate brains b1 and b2 produce intangible minds m1 and m2, respectively; tangible, inanimate computers c1 and c2 both run the intangible program p.]
Classified instances: brains b1, b2; minds m1, m2; computers c1, c2; program p.
30 A theory claiming two assertions over the classes and relations
In English: Different brains (will) produce different minds. Different computers (can) run the same program.
In Controlled English, equivalent to first-order logic with (negated) equality:
For all brains B1, B2 and minds M1, M2 it holds that if B1 ≠ B2 and B1 produces M1 and B2 produces M2, then M1 ≠ M2.
There exist computers C1, C2 and a program P such that C1 ≠ C2 and C1 runs P and C2 runs P.
31 A theory claiming two assertions over the classes and relations
If produce and run were the same relation (produce = run), and brain and computer were the same class (brain = computer), and mind and program were the same class (mind = program), then the two assertions would be inconsistent: the first would forbid, and the second would assert, two distinct instances standing in the relation to one and the same instance. Hence, according to the theory, the relations or one of the pairs of classes must be different.
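The consistency check can be sketched over finite interpretations. This is our illustration (the helper names are invented; the instances b1, b2, m1, m2, c1, c2, p follow the earlier instance diagram):

```python
# Checking the two assertions over finite relations, where a relation
# is a set of (subject, object) pairs.

def different_sources_different_targets(relation):
    """Assertion 1 pattern: distinct subjects never share a target."""
    return all(x1 == x2 or y1 != y2
               for (x1, y1) in relation
               for (x2, y2) in relation)

def some_shared_target(relation):
    """Assertion 2 pattern: two distinct subjects share a target."""
    return any(x1 != x2 and y1 == y2
               for (x1, y1) in relation
               for (x2, y2) in relation)

produce = {("b1", "m1"), ("b2", "m2")}   # brains produce distinct minds
run     = {("c1", "p"), ("c2", "p")}     # two computers run one program

# Both assertions hold while produce and run are distinct relations:
print(different_sources_different_targets(produce))  # True
print(some_shared_target(run))                       # True

# If the relations were identified (produce = run), one relation would
# have to satisfy both patterns at once, which is impossible: a shared
# target is exactly what the first pattern rules out.
print(different_sources_different_targets(run))      # False
```

The two patterns are mutually exclusive on any relation containing a shared-target witness, which is the inconsistency the slide describes.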
32 Conclusion 1) The Turing Test: Searle is probably right about the Turing Test. Simulating a human-like conversation probably does not guarantee real human-like understanding. Certainly, it appears that simulating conversation to some degree does not require a similar degree of understanding. Programs like the 2008 chatterbots presumably have no understanding at all.
33 2) Functionalism
Functionalists can respond that the functionalist identification of the room/computer with a mind is carried out at the wrong level. The computer as a whole is a thinking machine, like a brain is a thinking machine, but the computer's mental states may not be equivalent to the brain's mental states. If the computer is organized as nothing but one long list of questions with canned answers, the computer does not have mental states such as belief or desire. But if the computer is organized like a human mind, e.g. with learnable, interlinked, modularized concepts, facts, and rules, the computer could have beliefs, desires, etc.
34 3) Strong AI: Could an appropriately programmed computer have real understanding? It is too early to say. We might not be convinced by Searle's argument that it is impossible; the right kind of programming with the right sort of complexity may yield true understanding. Still, Searle's criticism of strong AI's mind-program analogy seems justified.
35 4) Syntax vs. Semantics
How can semantics (meaning) come out of symbol manipulation? How can 1s and 0s result in real meaning? It's mysterious. But then, how can the firing of neurons result in real meaning? Also mysterious. One possible reply: meaning is use (Wittgenstein). Semantics is syntax at use in the world.
36 5) Qualia
Qualia = raw feels = phenomenal experience = what it is like to be something. Can a computer have qualia? Again, it is hard to tell if/how silicon and metal can have feelings. But it is no easier to explain how meat can have feelings. If a computer could talk intelligently and convincingly about its feelings, we would probably ascribe feelings to it. But would we be right?
37 6) Searle claims that only biological brains have causal relations with the outside world such as perception, action, understanding, learning, and other intentional phenomena. ( Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. ) However, an AI embodied in a robot that puts syntax at use in the world as in 4) may not need (subjective) Qualia as in 5) to permit it perception, action, understanding, and learning in the objective world.
38 Optional Readings for next week
Sterelny, Kim, The Representational Theory of Mind, Section 1.3, pgs.
Sterelny, Kim, The Representational Theory of Mind, Section , pgs.
The Representational Theory of Mind, book review by Paul Noordhof, Mind, July.
39 More optional readings
On the Chinese Room:
Searle, John R. (1990), "Is the Brain's Mind a Computer Program?", Scientific American, 262, pgs.
Churchland, Paul M., and Patricia Smith Churchland (1990), "Could a Machine Think?", Scientific American, 262, pgs.
On modularity of mind:
Fodor, Jerry A. (1983), The Modularity of Mind, pgs., at:
Pinker, Steven (1999), How the Mind Works, William James Book Prize Lecture, at: HowTheMindWorks