Thursday, January 20, 2011

Reading #1: The Chinese Room

Discussion: Adam Friedli, Alex Cardenas
Reference Information:
Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (1980): 417-424.
Wikipedia: "Chinese room."


Summary: The Chinese Room is a thought experiment about whether so-called "strong" artificial intelligence is actually capable of understanding things, or whether it simply *appears* to understand them. The setup is relatively simple: imagine a program written to "talk" to someone in Chinese, but instead of running it on a computer, you carry out its instructions yourself by hand. The "program" you're running passes the Turing test, so the Chinese speaker you're talking to believes you're fluent in the language. In reality, however, you have no idea what the conversation means. From this basic premise Searle argues that "strong" AI can never truly understand things; it can only give the appearance of knowing what is going on.
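To make the point concrete, here's a minimal sketch of the kind of mechanism Searle has in mind. This is not from his article; the rulebook entries and phrases are made-up placeholders, and a real conversational program would be far larger, but the structure is the same: replies come from matching symbol shapes, not from understanding.

    # A toy "Chinese Room": replies are produced by pure symbol matching.
    # The rulebook below is a hypothetical placeholder, not Searle's.
    RULEBOOK = {
        "你好": "你好！",              # a greeting mapped to a greeting
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def chinese_room(symbols):
        # The "operator" only matches shapes against the rulebook;
        # nothing here interprets what the symbols mean.
        return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你会说中文吗？"))  # looks fluent; understands nothing

To an outside observer the replies can look competent, even though neither the lookup table nor the person executing it understands a word of Chinese. That gap between producing the right symbols and understanding them is exactly what Searle is pointing at.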

Searle spends the majority of his article refuting replies to his claim. The one he spends the most time on is the Systems Reply, which holds that while the person running the "program" would be clueless about what the conversation was about, the system as a whole (person, rulebook, and scratch paper together) could still understand what was going on. Searle rejects this by noting that the person could internalize all the parts of the system, memorizing the rules and doing the work in their head, making the person and the system the same entity, and still no understanding would appear.

The rest of the replies are some variation of, "What if the computer mimicked the human mind?" Searle refutes these as well, saying that once you've completely and fully duplicated the human brain, you're no longer talking about an artificial computer system; you're just discussing another form of the mind.

Discussion: I'm not sure I completely agree with Searle's claim. While it has a nice simplicity that makes sense at first, it seems a bit short-sighted given how fast technology is advancing. It doesn't seem fair to argue from "thermostats don't have beliefs" to the conclusion that machines can never be "strong" AI.

I think the Systems Reply is the rebuttal to his argument that makes the most sense. It points out the flaws in his theory without relying on the assumption that someday we'll be able to recreate the human mind in software.
