Tuesday, August 20, 2019
Chinese Room Argument
Searle's Chinese Room argument fails because the room proves nothing

Abstract

Searle argues that without understanding, computers can never really have mental states. Searle's argument that computers can never have understanding depends on how he portrays the Chinese room. If we pick apart the room's imitation process, we find that there is a computer-simulation defect, and as a result the room would never pass the Turing test. We could, of course, let the man fix the defect. He would need to remember and change what he does as a result of what he experiences, and this, I claim, is precisely what is needed to achieve intentionality. Intentionality, as Searle states, is what distinguishes mental states from physical ones. Given that there is intentionality in the room, it then becomes clear that understanding appears. Searle may counter-claim that the room itself can fix its own defects; but as the room has no semantic understanding, only syntactic translation, we can infer that the room must have anticipated every question with a predetermined instruction. If a finite room has the capacity to predict every possible question in the universe, as well as to know the events of the future, then the room is ineffable. If there is understanding, or the room is simply ineffable, then the room proves nothing and Searle's argument fails.

Essay

Searle's famous Chinese Room Argument has been the target of great interest and debate in the philosophy of mind, artificial intelligence and cognitive science since its introduction in Searle's 1980 article 'Minds, Brains and Programs'. It is no overstatement to assert that the article has been a centre of attention for philosophers and computer scientists for quite some time. Preston and Bishop (2002), a volume devoted entirely to the ongoing debate over the Chinese Room, is a perfect example: the significance and importance of the Chinese Room is taken to be obvious.
The Chinese Room is supposed to scuttle the thought of strong AI, which implies that computers have mental states. The Chinese Room arises out of the following, now familiar, story: Searle asks us to imagine a man seated in a sealed room with two doors: one allowing input from a source outside the room (in the form of a slot) and one allowing output to that source (also in the form of a slot). The input from the outside source consists of Chinese squiggles printed on card, but to the man in the room they are nothing more than incomprehensible gibberish (since he does not know the first thing about Chinese). The man is told that upon receiving the input squiggles, he must open a heavily-indexed reference book, wherein he must scrupulously track down the squiggle he received and find the matching squiggle of another sort. Once the man finds the matching squiggle, he must record it on an output piece of card and send it back through the output door's slot. Unknowingly, the man has just performed a sort of translation that is altogether opaque to his understanding. To the outside source, the Chinese room as a whole is a sort of system, and it is being treated as the subject of a Turing test. The interested parties outside are typing in questions in Chinese and receiving answers in Chinese. If the Chinese room is of good quality, then it should be possible to convince the interested parties that the room, or something inside it, is intelligent, thus suggesting that the room, or something inside it, could pass the Turing test. Searle suggests that this is an error, as the man in the room does not have any conscious states that exhibit any sort of understanding of the questions he receives. To him it is all just squiggles.
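The procedure just described is, in computational terms, a pure lookup table: shape-matching with no grasp of meaning on the operator's side. A minimal sketch, where the phrases and the `RULE_BOOK` mapping are invented purely for illustration:

```python
# A pure lookup table: the man's heavily-indexed reference book.
# The operator matches input squiggles to output squiggles by shape;
# the mapping carries no meaning for him. Entries are invented examples.
RULE_BOOK = {
    "你好吗": "我很好,谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def room(squiggle: str) -> str:
    # Track down the squiggle and return its match; fall back to a
    # stock reply ("Sorry, I don't understand.") if it isn't indexed.
    return RULE_BOOK.get(squiggle, "对不起,我不明白。")

print(room("你好吗"))  # -> 我很好,谢谢。
```

Nothing in this sketch represents meaning; the dictionary relates symbol shapes to symbol shapes, which is exactly the point Searle presses.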
It seems, therefore, that the Turing test is not a reliable way of ascertaining true thought, and moreover that any machine exhibiting such a formal architecture, no matter how complex, could never be called intelligent in the way that we mean. Certainly it might simulate intelligence impressively, but Searle suggests that this is precisely the problem, since it means only that we have an automaton that is extremely good at fooling our test. Therefore, the Chinese Room argument appears to contain the following argument:

1. The room occupant knows no Chinese.
2. The room occupant knows English.
3. The room occupant is given sets of written strings of Chinese, {Ci, Cj, ..., Cn}.
4. The room occupant is given formal instructions in English that correlate pairs of sets of Chinese strings, ⟨Ci, Cj⟩.
5. The room occupant is given formal instructions in English to output some particular Ci given a particular Cj.
6. The room occupant's skill at syntactically manipulating the strings of Chinese is behaviourally indistinguishable from that of a fully competent speaker of Chinese.
7. If 1-6 are jointly possible, then syntax is not sufficient for mental content.
8. 1-6 are jointly possible.
9. Therefore, syntax is not sufficient for mental content.

Searle's contention is that no matter what may happen, the man in the room will never understand any of the Chinese. Searle takes this to broadly mean that formal architectures, such as our great look-up book, can never produce understanding, because real thought requires semantics (meaning), whereas the book gives us only syntax, or relation. Unfortunately, what the Chinese Room argument really implies about mental states and strong AI has always been a matter of great controversy. Much of the controversy and debate today comes from how Searle is challenged. The two most obvious ways to challenge Searle can be understood as versions of what is known as the systems reply to the Chinese Room argument.
The first is to challenge premise (8) of Searle's argument by asserting that (1-6) are inconsistent because premise (1) is incorrect: that is, when we carefully consider all the details of Searle's argument, the man in the room actually knows Chinese in some important sense. The second is to challenge premise (7) of Searle's argument by asserting that (1-6) are consistent but that the room understands Chinese even if the occupant does not. Searle intelligently built the Chinese Room so that those who try to pick apart his argument with a systems response get tangled up in questions about strong AI or, more specifically, about what understanding is. A systems response simply asserts that the man in the room knows Chinese because the man's formal manipulations, or the operations of the man and the room as a whole, are structurally identical to a native Chinese speaker's formal manipulations. Searle's counter-argument is that if the man memorized the program, then the program has become part of the man; but for the program, which supposedly understands Chinese, the man is still simply providing the hardware on which it runs. One might attempt a subtler version of the systems reply, commonly called the virtual mind reply. Yet virtual mind replies, like systems replies, do not prove that strong AI is true either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing test. Searle's argument remains, for neither the systems reply nor the virtual mind reply succeeds in challenging it. That is because both replies have tried to find understanding in the room. That is a mistake, and it plays into Searle's hands, because the understanding simply isn't there. Understanding is not missing because computers can't have it. It is missing because Searle's claim that the Chinese room can simulate what computers can do is false.
The room's computer-imitation is so flawed that the claim that the Chinese room can produce the appearance of understanding Chinese is also false. We can easily show that there is a defect in the room when we pick apart the computer-imitation (or the room's process) with a conversation that might take place:

Dominic: Hello there. Before we begin our conversation, I'd just like to point out that from here on in I'm going to use the word 'hot' to mean good looking.
Chinese Room: No problem, I speak slang now and then too.
Dominic: I heard your car's cooling system was overheating. Did you think that your car's engine was getting too hot?
Chinese Room: No, the temperature was fine.
Dominic: Talking about cars, did you see the yellow Ferrari parked outside your house yesterday? Don't you think Ferraris are hot cars?
Chinese Room: Yes, Ferraris are commonly hot due to their high-performance engine components.

The reason the room can't handle this sort of thing is that it cannot write anything that the man in the room can read: according to Searle, it can only write Chinese characters, which the man cannot read. That is why it cannot remember things like the "hot" car. If we gave the room the right machinery, so that the man in the room had the ability to change the script (much as a computer can change its own program), then the man would, essentially, be changing the room's behaviour in response to events. Admittedly, giving the room the right machinery for this is more complicated than having a giant heavily-indexed book do all the processing, but it would remove the computer-simulation defect. Furthermore, it certainly would make intentionality possible. And it is intentionality that, according to Searle (1980) and Brentano (1874/1973), distinguishes mental states from physical ones. And if the room had the machinery, or the fundamentals, to produce intentionality, then the room could be made to understand.
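The difference between the defective room and one with the right machinery can be sketched as the difference between a fixed lookup and a script the operator is allowed to rewrite. Everything in this sketch, the rules, the phrasings, the `StatefulRoom` class, is invented for illustration; it is not Searle's own specification:

```python
# Hypothetical sketch: a room whose operator may rewrite the rule book
# in response to what comes in. A plain fixed lookup table has no such
# state, so it could never honour Dominic's redefinition of "hot".
class StatefulRoom:
    def __init__(self):
        # The script starts with the default reading of "hot".
        self.meanings = {"hot": "high temperature"}

    def answer(self, message: str) -> str:
        # Rule 1: if the speaker redefines "hot", rewrite the script.
        if "use the word 'hot' to mean good looking" in message:
            self.meanings["hot"] = "good looking"
            return "No problem."
        # Rule 2: answer questions about "hot" using the current reading.
        if "hot" in message:
            if self.meanings["hot"] == "good looking":
                return "Yes, Ferraris are good-looking cars."
            return "Yes, their engines run at high temperature."
        return "I see."

room = StatefulRoom()
room.answer("From here on I'm going to use the word 'hot' to mean good looking.")
print(room.answer("Don't you think Ferraris are hot cars?"))
# -> Yes, Ferraris are good-looking cars.
```

A fixed rule book, by contrast, would give the engine-temperature answer no matter what Dominic said first, which is exactly the defect the dialogue above exposes.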
According to Searle (1980), intentionality exists in internal states if they are "directed at or about objects and states of affairs in the world". This means, to me, that internal states can change appropriately when what they are "directed at" changes. For example, if I always thought that the Chinese room was painted "green" and I found out that the room was actually painted "white", then my intentionality would show itself, because my "thoughts of the room" change upon learning of the colour change. Yet the room's "thoughts about me" lack intentionality, because they cannot change when I tell the room that I'm temporarily using "hot" differently. There are other mental states that have intentionality for similar reasons. For example, what gives my belief that "All elephants are grey" intentionality is that, after I see a few black elephants, my belief can change appropriately, to maybe "All elephants are grey or black". Yet not all changes produced by experience are sufficiently complex or flexible to count toward intentionality.

http://degreesofclarity.com/writing/chineseroom/
http://plato.stanford.edu/entries/chinese-room/
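The elephant example amounts to a belief that widens appropriately with experience. A tiny sketch, where representing the belief as a set of permitted colours is an invented simplification:

```python
# A belief that changes appropriately with experience -- the kind of
# updating treated above as a mark of intentionality. Representing the
# belief as a set of permitted colours is invented for illustration.
belief = {"grey"}                      # "All elephants are grey"

def observe(colour: str) -> None:
    # Widen the belief when an observation contradicts it.
    belief.add(colour)

observe("black")                       # see a few black elephants
print(sorted(belief))                  # -> ['black', 'grey']
```

The fixed-script room, lacking any such updatable state, can never perform even this minimal kind of revision.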