April 16, 2012

  • What is “intelligence”? Might computers already qualify?

    A review of Godel, Escher, Bach by Douglas Hofstadter

     

    Cross-post: Original JDN 2456008 EDT 12:09. Current JDN 2456034 EDT 14:07.

     

    Like I Am A Strange Loop, only more so, Godel, Escher, Bach is a very uneven work.

     

    On the one hand, Hofstadter is a very brilliant man, and he makes connections between formal logic, artificial intelligence, cognitive science, and even genetics that are at once ground-breaking and (in hindsight) obviously correct. GEB makes you realize that it may not be a coincidence that DNA, Godel’s theorems, and the Turing test were discovered in the same generation—indeed, it may not simply be that technology had reached a critical point, but rather that there is a fundamental unity between formal logic, computers, and self-replication, such that you will either understand them all or understand none of them.

    On the other hand, GEB is filled with idiotic puns and wordplay that build on each other and get more and more grating as the book goes on (“strand” backwards becomes “DNA rapid-transit system”, etc.), and it often digresses into fuzzy-headed Zen mysticism (the two are combined when “MU-system monstrosity” becomes “MUMON”). Worst of all, between each chapter and the next there is a long, blathering dialogue between absurd, anachronistic characters that is apparently supposed to illuminate the topics of the next chapter, but in my experience only served to bore and frustrate. (Achilles is at one point kidnapped by a helicopter; that should give you a sense of how bizarre these dialogues become.) Hofstadter loves to draw diagrams, and while a few of them are genuinely helpful, most of them largely serve to fill space. He loves to talk about different levels of analysis, different scales of reduction (and so do I); but then in several of his diagrams he “illustrates” this by making larger words out of collections of smaller words. If he did this once, I could accept it; twice, I could forgive. But this happens at least five times over the course of the book, and by then it’s simply annoying.

     

    Much of what Hofstadter is getting at can be summarized in a little fable, one which has the rare feature among fables of being actually true.

    There was a time, not so long ago, when it was argued that no machine could ever be alive, because life reproduces itself. Machines, it was said, could not do this, because in order to make a copy of yourself, you must contain a copy of yourself, which requires you to be larger than yourself. A mysterious élan vital was postulated to explain how life can get around this problem.

    Yet in fact, life’s solution was much simpler—and also much more profound. Compress the data. To copy a mouse, devise a system of instructions for assembling a mouse, and then store that inside the mouse—don’t try to store a whole mouse! And indeed this system of instructions is what we call DNA. Once you realize this, making a self-replicating computer program is a trivial task. (Indeed, in UNIX bash I can write one in a single line: make an executable script called copyme that contains the single command cp copyme copyme$$, where the $$ appends the ID of the current process, making each copy unique.) Making a self-replicating robot isn’t much harder, given the appropriate resources. These days, hardly anyone believes in élan vital, and if we don’t think that computers are literally “alive”, it’s only because we’ve tightened the definition of “life” to limit it to evolved organics.
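    Spelled out as a file, the whole script is just this (a minimal sketch; the copyme name comes from the description above, and copying “$0”, the script’s own path, is an optional refinement so it also works when run from another directory):

        #!/bin/sh
        # copyme: a self-replicating program.
        # $$ expands to the current process ID, so each run
        # leaves behind a uniquely named copy (e.g. copyme.12345).
        cp "$0" "$0.$$"

    Each copy can itself be run to make further copies, which is all “self-replication” amounts to once it is the instructions, rather than the whole organism, that get stored.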

    Hofstadter also points out that we often tighten the definition of “intelligence” in a similar way. We used to think that any computer which could beat a competent chess player would have to be of human-level intelligence, but now that computers regularly beat us all at chess, we don’t say that anymore. We used to say that computers could do arithmetic, but only a truly intelligent being could function as a mathematician; and then we invented automated theorem-proving. In this sense, we might have to admit that our computers are already intelligent, indeed for some purposes more intelligent than we are. To perform a 10-digit multiplication problem, I would never dream of using my own abilities; computers can do it a hundred times faster and be ten times as reliable. (For 2 digits, I might well do it in my head; but even then the computer is still a bit better.) Alternatively, we could insist that a robot be able to do everything a human can do, which is presumably only a matter of time.

    Yet even then, it seems to me that there is still one critical piece missing, one thing that really is essential to what I mean by “consciousness” (whether it’s included in “intelligence” is less clear; I’m not sure it even matters). This is what we call sentience, the capacity for first-person qualitative experiences of the world. Many people would say that computers will never have this capacity (e.g. Chalmers, Searle); but I wouldn’t go so far as that. I think they very well might have this capacity one day—but I don’t think they do yet, and I have no idea how to give it to them.

    Yet one thing troubles me: I also have no idea how to prove that they don’t already have it. How do I know, really, that a webcam does not experience redness? How do I know that a microphone does not hear loudness? Certainly the webcam is capable of distinguishing red from green, no one disputes that. And clearly the microphone can distinguish different decibel levels. So what do I mean, really, when I say that the webcam doesn’t see redness? What is it that I think I can do that the webcam cannot?

    Hofstadter continually speaks, in GEB and in Strange Loop, as if he is trying to uncover such deep mysteries—but then he always stops short, exchanging the deep question for a simpler one. “How does a physical system achieve consciousness?” becomes “How does a program reference itself?”; this is surely an interesting question in its own right—but it’s just not what we were asking. Of course a computer can attain “self-awareness”, if self-awareness means simply the ability to use first-person pronouns correctly and refer meaningfully to one’s internal state—indeed, such abilities can be achieved with currently-existing software. And we could certainly make a computer that would speak as if it had qualia; we can write a program that responds to red light by printing out statements like “Behold the ineffable redness of red.” But does it really have qualia? Does it really experience red?
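    Such a program is almost embarrassingly easy to write. Here is a toy of my own devising (purely illustrative, nothing from GEB): it reads a hex color code and dutifully reports on “redness” whenever the red channel dominates, and nowhere in it is there any obvious place where an experience could be happening.

        #!/bin/bash
        # Reads a hex color like ff0000 (a leading '#' is stripped)
        # and issues a first-person report about it.
        read -r hex
        hex=${hex#'#'}
        r=$((16#${hex:0:2}))   # red channel, 0-255
        g=$((16#${hex:2:2}))   # green channel
        b=$((16#${hex:4:2}))   # blue channel
        if (( r > g && r > b )); then
            echo "Behold the ineffable redness of red."
        else
            echo "I see nothing especially red here."
        fi

    Feed it ff0000 and it waxes poetic; feed it 00ff00 and it does not. Whether anything between those two lines experiences anything is exactly the question the program itself cannot settle.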

    If you point out I haven’t clearly defined what I mean by that, I don’t disagree. But that’s precisely the problem: if I knew what I was talking about, I would have a much easier time saying whether or not a computer is capable of it. Yet one thing is clear to me, and I think it should be clear to you: I’m not talking about nothing. There is this experience we have of the world, and it is of utmost importance; the fact that I can’t put it into words really is so much the worse for words.

    In fact, if you’re in the Less Wrong frame of mind and you really insist upon dissolving questions into operationalizations, I can offer you one: Are computers moral agents? Can a piece of binary software be held morally responsible for its actions? Should we take the interests of computers into account when deciding whether an action is moral? Can we reward and punish computers for their behavior—and if we can, should we?

    This latter question might be a little easier to answer, though we still don’t have a very good answer, and even if we did, it doesn’t quite capture everything I mean to ask in the Hard Problem. It does seem like we could make a robot that would respond to reward and punishment, would even emulate the behaviors and facial expressions of someone experiencing emotions like pride and guilt; but would it really feel pride and guilt? My first intuition is that it would not—but then my second intuition is that if my standards are that harsh, I can’t really tell if other people really feel either. This in turn renormalizes into a third intuition: I simply don’t know whether a robot programmed to simulate all the expressions of guilt would actually be feeling it. I don’t know whether it’s possible to make a software system that can emulate human behavior in detail without actually having sentient experiences.

    These are the kinds of questions Hofstadter always veers away from at the last second, and it’s for that reason that I find his work ultimately disappointing. I have gotten a better sense of what Godel’s theorems are really about—and why, quite frankly, they aren’t important. (The fact that we can construct within a formal system X the sentence “this sentence is not a theorem of X” is really not much different from the fact that I myself cannot assert “It’s raining but Patrick Julius doesn’t know that” even though you can assert it and it might well be true.) I have even learned a little about the history of artificial intelligence—where it was before I was born, compared to where it is now and where it needs to go. But what I haven’t learned from Hofstadter is what he promised to tell me—namely, how consciousness arises from the functioning of matter. It’s rather like my favorite review of Dennett’s Consciousness Explained: “It explained a lot of things, but consciousness wasn’t one of them!”
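    (For the record, the self-referential sentence mentioned above has a precise shape. Godel’s construction produces, for a suitable formal system X, a sentence G such that X itself proves

        G \leftrightarrow \neg \mathrm{Prov}_X(\ulcorner G \urcorner)

    that is, G is equivalent to the claim that G is not provable in X. The everyday analogue is the Moore-style sentence p \wedge \neg K_{\text{me}}(p), “p is true but I don’t know that p”, which can perfectly well be true and yet cannot be coherently asserted by me. This is a standard gloss, not something specific to GEB.)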

    Godel, Escher, Bach is an interesting book, one probably worth reading despite its unevenness. But one thing I can’t quite figure out: why was it this book that became the Pulitzer Prize-winning, bestselling magnum opus?

    I guess that’s just another mystery to solve.

Comments (18)

  • Interesting blog. Basically the crux of it is we don’t understand the mechanism of consciousness. But I doubt my computer is experiencing anything, simply because we don’t know how to make a computer do that, so it’s unlikely it’s an accidental byproduct of our design.

  • Whoa! You’re still on Xanga. Yippee!

  • It’s interesting stuff. I wanted to read Hofstadter, but then I read that he follows closely after Dennett, so I didn’t.

    Much of the problem pivots on our definitions of consciousness. Dennett and others sometimes suggest that if an average person thinks that he is speaking with another person, then even if he is talking to a computer, he is talking to another person. In other words, intelligence is relative to functional or “as if” intelligence.

    Intelligence, maybe. But certainly not consciousness. I tend to side with Searle and Chalmers that consciousness is an irreducibly first-person phenomenon. The causal connections that occur in a machine may be analyzed solely from a third-person point of view, which means that no subjective internal perspective is needed to explain anything about the machine’s behavior.

  • I would argue that a computer as it stands absolutely does not “experience” red or green. You can see this in the algorithms. Explanatorily, that’s the difference between us and computers: you can look at a computer’s code and see for yourself what it is doing, follow the algorithms along and find that there is no experience process, only a transduction of light information into a set of pixels stored as binary information and nothing at all more mystical. You can teach it to recognise “red” or “green” but these instructions too will be entirely explicable in terms of hard numbers and code. So the computer itself is simple, and easily understood. But as has been mentioned, we don’t understand how human consciousness works in the slightest. We can’t crack ourselves open and look at how we’re programmed. The best we can do is design various experiments to test specific aspects of functioning and conscious awareness, the results of which will add to a growing picture that really is fuzzy at best and seems to discredit itself on a bi-weekly basis.

    As for whether computers will ever experience anything, the question is open-ended and ends with a big “maybe, I dunno”. I believe you would have to program in phenomenology, or the ability to have specific feelings as arising from perception. So you would have to find a way to make the machine “feel” outside of its hard programming. I don’t know if you’re familiar with the work of Rodney Brooks in the early nineties but I believe AI is going in this direction – embodied robotics that interact with the world, rather than as simple code on a disembodied binary platform. If you look at a robot like “Big Dog” (you can youtube that) it seems to me much more intelligent than the chess-playing instruction machine. But still I doubt that it is conscious in the slightest. It’s just responding to its algorithms and blindly doing what it’s told, though that’s a fairly anthropomorphic way of talking about it, and anthropomorphism can be dangerous in this area. I reckon there will always be a problem with computers being too rational to truly think, and inherently unable to behave irrationally; the idea that thought is a rational process is a hangover from Descartes that needs to be eliminated if we’re going to truly understand how consciousness works. Humans can be remarkably rational in terms of abstract thought, but we can also be incredibly variant and unpredictable in our reactions to information, being largely led by emotionality and bizarre reasoning processes that we may not even disclose to ourselves. A lot of our cognitive functioning happens on a much more immediate, bodily basis than is analogous to algorithmic software functioning on a hardware platform, so I would argue that even the chess grand master AI isn’t intelligent at all. It’s not really doing anything more than a Heath Robinson machine. Input goes in, steps happen, outcome emerges and we anthropomorphise intelligence on to the results, but really it’s just a sophisticated system of whirring cogs.

    This is a very good review. I’ve considered reading the book several times but I feel that work happens so quickly in this field that in the past thirty years much of it will have been outdated. I will read Strange Loop over the summer, though.

    And to finish, Daniel Dennett is a douchebag who is far too cocksure to allow for being wrong about so many things.

  • The bottom line is that technology amplifies human capability. AI is going to unleash a hyper wave of human talent and accomplishment.

  • good post. i read Godel, Escher, Bach when it came out in 1979. there were no personal computers yet at the time and a decade or less earlier computers were fed punch-cards. i know because my older sister was a computer programmer at that time. computers as you know have developed exponentially since that time. i don’t recall many particulars of the book. in fact the only particular i remember is the story of Bach writing a 6-part fugue and sending it by messenger to the organist with instructions to play the fugue both forwards and backwards. the music produced was the same played both ways which is nothing short of amazing when considering the intrinsic complexity of a 6-part fugue. (i hope that story was in the book. it’s possible i read it in another book around the same time haha)

    i do remember something you pointed out and i also found parts of the book boring. i think for its time the book provoked thought but so much has developed since then. because of computers and technology we are moving at breakneck speed compared to say, the dark ages. still, the question “What is sentience” produces different answers depending on who you ask. animals some people call loving household pets are by others called dinner. some people, while by appearances they look like everyone else, feel no remorse or empathy. i believe there are degrees to sentience.

    in the 1980s i read it would take a computer the size of which would cover the face of the Earth to perform all the tasks the human brain performs at each moment and even then it might fall short. today i’m guessing the same computer could be considerably smaller.

  • i meant to say, i believe there are degrees to which sentience displays.

    more food for thought. we don’t see objects. what we see is light reflected from an object that reaches our eyes. from there our brains interpret the information and make determinations about the object. as we walk outdoors our brains are constantly interpreting objects ahead of us and within our peripheral range of vision. whether or not we become cognizant of an object is a matter of the relative importance an object has to us. for example boulders we pass may go unnoticed but we become cognizant of one blocking our path.

    anyway, i enjoyed your blog. thanks for posting it.

  • @sometimerainbow - 

    It’s clearly possible for a robot to behave irrationally. All you have to do is throw in algorithms that are defective, or unnecessarily introduce randomness into the output. I guess this isn’t quite the same thing as how we behave irrationally… but still, it’s a robot not being rational.

    Be careful with arguments like “we can’t look at our own programming”. That’s an argument from ignorance; it would imply that if we ever do understand cognitive science in detail, consciousness will somehow stop existing!

    I think you’re right that there is an important difference between the way I experience red and the way a computer processes #ff0000; but I can’t put my finger on exactly what it is. What do you think it would mean to “program in phenomenology”?

  • @TheSutraDude - 

    All of that is true, but it’s equally true of a robot with a camera (it doesn’t literally see the objects, it processes data from light-sensitive circuits in the camera, etc.). Indeed some of the cutting-edge research right now in AI is in making robots that don’t try to pay attention to everything at once, but instead optimize their attention to what’s important.

  • @pnrj - yes. i mentioned how we see the world in reference to someone you mentioned in your post who said something about our connection to the world. i can’t find it now probably because i just woke up haha.

  • @pnrj - 

    “It’s clearly possible for a robot to behave irrationally. All you have to do is throw in algorithms that are defective, or unnecessarily introduce randomness into the output. I guess this isn’t quite the same thing as how we behave irrationally… but still, it’s a robot not being rational.”

    Not quite – a robot following a defective or random algorithm isn’t behaving irrationally, it is merely following instructions that aren’t particularly suited to any purpose. The perceived “irrationality” doesn’t have a point of origin in the robot itself – the thing that is not being rational is the programmer, whether by choice or accident. So the robot may appear to be acting irrationally, but really it’s not doing much of anything, except the only things its inputs allow it to do in any situation. You could argue that the same applies to humans, but humans are essentially self-organising systems in that there’s a way in which we at the very least appear to perpetuate our own functions, and we at the very least have the illusion of “choice”.

    “Be careful with arguments like “we can’t look at our own programming”. That’s an argument from ignorance; it would imply that if we ever do understand cognitive science in detail, consciousness will somehow stop existing!”

    We will never be able to understand cognitive science in the kind of detail you are talking about, and even if we did it wouldn’t make all aspects of experience intelligible. This is pretty much the only thing in this entire field I am happy to stake a claim on. For one thing, a computer that could successfully perform the operations required for a “completed neuroscience” in which you could predict future activity from someone’s current brain state would probably have to be bigger than the universe. Never underestimate the wild complexity of the human brain. I should have worded that differently and said “we will never look at our own programming”. Our own programming (if we can even be understood as computational entities, which I doubt fairly highly) doesn’t happen in our own language; neural events don’t present themselves as neural events but as experiences, and so there is an explanatory distance in terms of the kinds of language we can use to describe them. Computers are different, in that we know what each item of programming language refers to, and the explicit instructions are commensurable and interchangeable with the actions of the binary. The relationship is direct and obvious. Humans, on the other hand, are a lot more messy, partly because this thing called “consciousness”, whatever it may be, prevents us from experiencing things the way they occur; we think psychologically in the language of things like desire, choice, memory and happiness, and not in the physiological language of the underlying processes. This obscuring isn’t necessarily an ignorance. It just means that our own programming (if it really exists) is at base largely irrelevant to the way the world is actually disclosed to us.

    “I think you’re right that there is an important difference between the way I experience red and the way a computer processes #ff0000; but I can’t put my finger on exactly what it is. What do you think it would mean to “program in phenomenology”?”

    By phenomenology I mean the ability to have perceptions from the position of subjective experience. Pain, for instance, is phenomenological; when I cut my finger, it affects my internal biology, but the “pain” itself exists only within my experience. There is no such thing as the pain itself in objective terms, only the underlying physiological event. This is similar to sight. What I actually “see” as such isn’t the photons that hit my retina and get encoded into neural instructions – I experience a picture of my laptop screen in front of me. Computer vision functions without this kind of “sight” arising from the transduction of light into a pixel datum. #ff0000 is just #ff0000 before and after the algorithm recognises it, whereas when I look at something red I have an experience of seeing red because my systems allow for subjective perception. So programming in phenomenology would entail getting the computer to essentially step out of pure instructional mechanism and do something else in parallel, something that allowed it to “see” red as well as just process it. I don’t really know how this would work and should probably look into it more before I make a judgement about whether or not it’s possible. One of my tutors claims he writes phenomenological programs, but I’ve yet to see anything in action. I feel like a pain mechanism would be a good place to start – write a program that is sensitive to damage and recognises something is wrong when you poke holes in it, thus doing something analogous to “feeling pain”, and tries to self-correct and recover, perhaps.
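    Just to show the bare shape of what I mean (a throwaway sketch, with made-up file names, and obviously nothing in it feels anything): a script that watches a “body” file, notices when holes have been poked in it, complains, and tries to restore itself from a backup.

        #!/bin/bash
        # Toy "pain" mechanism: detect damage to body.dat and self-repair.
        # (Assumes body.dat already exists.)
        BODY="body.dat"
        BACKUP="body.bak"
        SUM="body.sha256"

        if [ ! -f "$SUM" ]; then
            # First run: remember what an undamaged body looks like.
            cp "$BODY" "$BACKUP"
            sha256sum "$BODY" > "$SUM"
            echo "Feeling fine."
        elif sha256sum --status -c "$SUM"; then
            echo "Feeling fine."
        else
            echo "Ouch! Something is wrong with me."   # the 'pain report'
            cp "$BACKUP" "$BODY"                       # attempt to recover
        fi

    Whether that has anything to do with feeling pain is, of course, exactly the question at issue.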

  • @sometimerainbow - 

    There definitely are AI researchers working on making robots that respond to damage by pulling away, making shrieking noises, and avoiding the stimulus in the future. We’re almost at the point where we can do this, and behaviorally, it looks a lot like pain. But it still has me scratching my head: Does the robot really feel pain, or just simulate it? Should we feel bad about hurting the robot?

  • @agnophilo - 

    I just thought of a rather serious flaw in this argument.

    Evolution didn’t know how to make consciousness either… yet apparently it was an accident of designing an animal that’s really good at surviving.

  • @pnrj - Actually it was many many accidents, some of which were useful which accumulated over a few hundred million years. Most of the mechanisms in our brain evolved in the development of the unconscious mind.

