cold & chinese robots
i am wearing multiple layers including hat and scarf and drinking cheap but tasty red wine from a small jelly jar, in order to stay warm in my room. sad. all i really want my home to be is warm, clean, and quiet. at least i (usually) have one of three, but still, this living situation is not working for me and i think a lot about changing it. i am past the point where this dickensian routine is romantic.
***
some thoughts on mind and meaning
(inspired by yesterday's link #2)
the thing about the claim that as computers become more and more powerful we get closer and closer to the computational power of the human brain, and thus closer and closer to achieving humanlike AI, is that it fails to acknowledge the difference between effectively modeling understanding and actually *understanding*. even if kurzweil's projections are correct, it will take much more than sheer computational power to create a mind. for any system (biological or computational) to have real understanding, it has to do more than manipulate formal symbols...it has to have knowledge/experience of their real-world referents. this is why computer programs can't tell whether someone in a picture is eating pizza. they don't have any way to know what pizza is or what eating looks like, let alone what tomato sauce tastes like, or how it feels to burn the roof of your mouth because you didn't wait until the cheese was cool enough.
deep blue is a highly effective model of (one metric of) human intelligence, a remarkable achievement, but it's tough to say that it *is* intelligent, or that it *is* intelligence. the machine doesn't understand what it is doing; it doesn't even understand what chess is. it just has a complicated algorithm for calculating the best move. it doesn't feel satisfied when it wins; it doesn't even "want" to win, as such. it does not have beliefs or desires or any mental states at all, and these are indispensable notions in any theory of consciousness and intentionality. thus, any drive or adaptability that deep blue has is secondary, derived; it is the intentionality of the designers and programmers that actually drives the system.
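if you want a feel for how little is going on "inside," here is a toy sketch in python of the general kind of procedure involved (plain minimax; deep blue's real search was vastly more elaborate, and the function names and the evaluation hook here are just my own illustration):

# toy minimax: picks the "best" move by pure symbol shuffling.
# nothing below refers to chess, winning, or wanting; it just maximizes a number.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """return (score, move) for the given position.

    legal_moves(state) -> list of possible moves
    apply_move(state, move) -> resulting state
    evaluate(state) -> a number; the only "goal" the system has
    """
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, False,
                               legal_moves, apply_move, evaluate)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, True,
                               legal_moves, apply_move, evaluate)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move

notice that whatever "wanting to win" there is lives entirely in the evaluate() function, and somebody else wrote that. which is exactly the derived-intentionality point.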
searle uses a thought experiment called "the chinese room" to illustrate the difference between modeling understanding and actual understanding. he asks us to imagine that AI efforts have succeeded in creating a program that can answer questions about a given story so accurately that the program passes a Turing test. imagine that the program does all of this in chinese. let's say the program knows about what kinds of entities and events are found in restaurants. so you give it a story like this: "a man walked into a restaurant and ordered a hamburger. it was perfectly cooked and tasty, so he left a large tip and walked out satisfied." now you ask it, "did the man eat the hamburger?" the system tells you: "yes, he ate the hamburger." you are tempted to ascribe understanding to this system; it knows something that you did not explicitly tell it. it has made a logical leap!
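for concreteness, here's a toy sketch in python of the sort of script-based trick such a program might use (my own cartoon, not anyone's real system): it fills in the unstated steps from a canned restaurant script.

# toy "restaurant script": a canned sequence of events, used to fill in
# what the story never states. pure pattern completion, no understanding.

RESTAURANT_SCRIPT = ["enter", "order", "eat", "pay", "leave"]

def answer_did_he_eat(story_events):
    """if the story mentions script events both before and after 'eat',
    assume the skipped steps (including 'eat') happened too."""
    positions = [RESTAURANT_SCRIPT.index(e) for e in story_events
                 if e in RESTAURANT_SCRIPT]
    eat_pos = RESTAURANT_SCRIPT.index("eat")
    if positions and min(positions) < eat_pos < max(positions):
        return "yes, he ate the hamburger."
    return "the story doesn't say."

print(answer_did_he_eat(["enter", "order", "pay", "leave"]))
# -> yes, he ate the hamburger.

the "logical leap" is just an index comparison over a list of tokens.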
to convince ourselves that this is not actual understanding, we can imagine searle (who does not speak chinese) in a room with no windows, etc. he has piles of books that contain complicated tables that map certain chinese characters to other chinese characters. someone hands sheets full of chinese characters through a slot in the wall, and searle looks up the characters in various tables, correlating them with symbols in other tables, eventually ending up with a new sheet full of chinese characters, which he hands back out through the slot. the people outside the room are convinced that searle understands chinese, because he always answers the questions correctly. but searle does not understand chinese! and, crucially, since the computer does not have anything that searle does not have, the computer does not understand chinese either.
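in code, the whole room collapses to something like this (again my own cartoon; the one-entry rule book is a stand-in for searle's piles of books, and real programs are fancier, but the claim is that they still bottom out in this kind of formal mapping):

# the chinese room as code: input symbols map to output symbols via a table.
# no step in the lookup involves knowing what any of the symbols mean.

RULE_BOOK = {
    "他吃了汉堡吗？": "吃了。",  # "did he eat the hamburger?" -> "he ate it."
}

def searle_in_the_room(input_sheet):
    """look the squiggles up in the rule book and hand back the matching
    squoggles."""
    return RULE_BOOK.get(input_sheet, "？")

print(searle_in_the_room("他吃了汉堡吗？"))  # prints 吃了。 -- and "understands" nothing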
now, there are many objections to this line of reasoning. in fact, i wrote a short paper for searle's class, aiming to critique the argument. i didn't get very far with it, though, because i kept coming back to the point that meaning really does require real-world reference and context, something that a computer program (which is made up entirely of formal symbols) lacks. yes, these formal symbols have semantic content too, but it is only a system-internal semantics. you can try to make the system as big or as multimodal as you like; you can even try to stick it in the head-box of a big robot that can lumber around and interact with the world. still, it all comes back to the same problem: if you look inside the robot's head, there is searle, still not understanding chinese.
if you buy this (and if you are not a functionalist, which i hope you are not), then it follows that no matter how much computational power a system has (even if it exceeds that of all of the brains in all of humanity, as kurzweil breathlessly predicts), it is not really intelligent, even if it behaves in ways that we are tempted to call intelligent, because it does not understand anything. computational power is thus a necessary, but not a sufficient, condition for intelligence.
perhaps when we better understand what meaning is and how it connects to linguistic form (one of my major interests, by the way), and when we have technology that can duplicate the neurobiological processes that give rise to human consciousness, it will be possible to create a system that can rightly be called intelligent. until then, the best we can hope for is a clever simulation, and there will always be tasks that require "Human Inside" architecture.
okay, there's a lot more to say about this, but i have to go finish my pragmatics paper outline now. and then to sleep, perchance to dream of giant robots spitting out pages full of chinese characters. it's a nice thought.