Tuesday, September 11, 2012

Book Response #2: Chinese Room Thought Experiment


Minds, Brains, and Programs
     By:  John R. Searle

Response to Published Article:

     Searle argues that instantiating a program (running one to accomplish a specific task) does not produce a computer that 'understands' the information it is processing.  He uses the specific example of a Turing Test built around a Chinese story, where the 'being' inside the room answers questions about the story.  He states that an English-speaking man in the room, using a set of rules to transcribe the Chinese characters he receives as input into appropriate Chinese characters as output, would not actually understand Chinese.  This goes directly against the views of functionalism and computationalism, which hold that the mind is an information-processing system operating on formal symbols.  Searle approaches this argument by clarifying that processing information does not mean one understands it.  To demonstrate this, I would point to classes I have taken where it was easy to deduce the answer to a question from another question of the same format, yet I occasionally found myself struggling to explain why that answer was right, which is to say I didn't truly understand the material.
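
     To make the symbol-manipulation point concrete, here is a minimal sketch of the kind of rule-following Searle describes.  This is my own illustration, not anything from Searle's paper: the rule table, the example sentences, and the chinese_room function are all invented for demonstration.  The program returns appropriate-looking Chinese answers by pure pattern matching, with no notion of what any symbol means.

# Toy illustration of the Chinese Room (hypothetical rule book and sentences).
# The "rules" are just a lookup table mapping input symbol strings to output
# symbol strings; nothing here represents meaning.

RULE_BOOK = {
    "他吃了汉堡吗？": "是的，他吃了汉堡。",   # "Did he eat the hamburger?" -> "Yes, he ate it."
    "故事发生在哪里？": "在一家餐馆。",       # "Where does the story take place?" -> "In a restaurant."
}

def chinese_room(question: str) -> str:
    """Return the scripted answer for a question, purely by symbol matching."""
    return RULE_BOOK.get(question, "我不知道。")  # default: "I don't know."

if __name__ == "__main__":
    print(chinese_room("他吃了汉堡吗？"))  # prints a fluent-looking answer with zero understanding

     From the outside, the answers might pass for those of a Chinese speaker, yet nothing in the lookup ever touches the story's content, which is exactly the gap between processing and understanding that Searle is pointing at.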

     This brings up another point by Searle, which is that simulation shouldn't be confused with duplication.  Behaviorism and operationalism classify things by how they appear or act, but Searle points out that you wouldn't confuse a human with a dog just because they both eat food.  He argues that creating a strong AI would have to be viewed as creating some sort of meta-program that functions like a mind within the framework of a brain.  Since strong AI implies understanding and intentionality, a strong AI cannot arise from simulating a single instantiation of understanding; it would instead have to arise from creating another instantiation of the mind, just not in the construct of the brain.

     So this leads back to the Chinese Room example, which Searle boils down to the claim that if the English speaker in the room actually understood the Chinese story the way a native Chinese speaker would, then the English speaker would also have to be a Chinese speaker.  Searle chose Chinese and English because the two languages are so dramatically different, but for the argument at hand I'd prefer to call it the Language Room.  The point is that if the machine in the room is seen as understanding a language not native to itself, then it must have been able to learn that language.  It also means we will never get a strong AI just by mimicking understanding of something in particular; a strong AI can only be developed by creating something that understands in general and can therefore be instantiated to understand something in particular.