JDN 2456291 EDT 12:13.
I realized today why so many people have trouble understanding concepts in cognitive science that to me feel so elegant and intuitive. I had been assuming that other people think much the same way I do (Mind Projection Fallacy), and I hadn't been correcting for my highly unusual personality traits.
Specifically, I am an empathizer-systematizer. Most people are either empathizers or systematizers, not both. Empathizers conceptualize the world in terms of conscious beings with thoughts and feelings; they relate to others on a personal level and try to view the world from their perspective. Systematizers conceptualize the world in terms of physical phenomena obeying natural laws, organized into structures made of parts. While most people have some degree of each trait, the majority are much better at one than the other. Also, statistically women are more likely to be empathizers and men systematizers, but the difference is not as large as stereotypes would have you believe (Marie Curie and Rosalind Franklin were classic systematizers, and Martin Luther King and Mahatma Gandhi were textbook empathizers).
You need to be at least part systematizer to be any good at science. If you're a pure empathizer, you'd make a good social worker or psychotherapist, but scientific research is always going to be hard for you. If you're an extreme systematizer but a very low empathizer, you can end up with a "mad scientist" sort of attitude, even bordering on psychopathic--B.F. Skinner is a great example of this. The things he did to animals were unconscionably cruel, but he didn't think of them as animals; he thought of them as complex systems of interacting parts--so he took them apart and studied the pieces!
I am a bizarre case, an empathizer-systematizer, someone who is good at seeing the world from both of these perspectives simultaneously. And this is what you need to be able to do in order to make sense of cognitive science. You need to be able to look at a brain and understand the way it can be broken down into lobes, sulci, neural clusters, neurons, synapses; but you also need to understand how that brain houses a mind, with thoughts, beliefs, memories, hopes, feelings, desires. You need to be able to appreciate the marvelous fact that minds are made of parts.
Steven Pinker is also an empathizer-systematizer, which is why he makes a good cognitive scientist. Noam Chomsky is as well, though he lets his empathy cloud his reason when it comes to political issues. Steven Pinker's politics make a great deal more sense, though he occasionally ventures a bit too far to the right economically. Still, center-right libertarian is far more sensible than far-left anarcho-syndicalist. In any case, I think you'll find that most of the top cognitive scientists are empathizer-systematizers.
Also, my boyfriend is an empathizer-systematizer, which is probably why we get along so well. Kittens and tabletop RPGs are two of his favorite things. No doubt it also helps that he is sensitive, highly intelligent, introverted, and the nerdiest person I've ever met.
I know a lot of systematizers who aren't empathizers, and conversations with them about cognitive science often lead to them making really bizarre greedy-reductionist claims. They can see the parts, but they can't see the whole! "Cognitive science will be ultimately reduced to physics." No, except in the sense that thermodynamics is reduced to statistical mechanics, which is pretty much already the case. (I don't know of any serious cognitive scientist who thinks that the brain operates on non-physical principles! Even Penrose, who is on the fringe, thinks the brain operates by physics we don't yet understand.) "There is no fundamental difference between a human being and an asteroid." Funny, I don't seem to be able to have this conversation with an asteroid. (I suppose it could hinge on what we mean by fundamental, but I never disputed that we're made of atoms. I merely point out that whole humans and whole asteroids can and should be treated differently.)
It's not just personality of course. Computer scientists tend to understand better what I'm getting at, even though most of them are systematizers and not empathizers. I think it's because computers and minds really are so similar that the two can be used to understand one another. The sorts of things that physicists and biologists say to me about cognitive science would be translated into computer science as something like, "It's all made of zeroes and ones!" or "It's all electrical circuits!" Yeah, so what? That doesn't help me optimize heuristics for the traveling-salesman problem.
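To make that concrete, here is a toy sketch of the kind of heuristic I mean: the nearest-neighbor rule for the traveling salesman, written in Python with made-up city coordinates. Notice that everything interesting happens at the level of cities, distances, and tours; knowing that it all bottoms out in zeroes and ones contributes nothing.

```python
import math

# Hypothetical cities, just for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6), "E": (2, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_tour(cities, start="A"):
    """Greedy heuristic: always visit the closest unvisited city next.
    Fast, but it can return a noticeably suboptimal tour; that trade-off
    is the whole point of a heuristic."""
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: dist(here, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour(cities)
length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:] + tour[:1]))
print(tour, round(length, 2))
```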
Economists are better about it too, even though economics clearly is in desperate need of more empathizers. (Michigan's own Frank Thompson is one obvious exception, a clear empathizer-systematizer; he's a big part of why I got into economics in the first place.) The condescension of economists comes from a different direction; they think psychology is "soft", "fuzzy", too much talk of emotions and personalities instead of rigorously defined behavior patterns. But they would never claim that it can be reduced entirely to quantum physics; they know how hard it is to model the behavior of a small number of rational individuals, much less a large number of irrational ones.
But personality clearly does matter. Even though our training is quite similar (classes across the hall from each other), social psychologists are almost all empathizers, and they tend to view us cognitive scientists with a sense of awe. They struggle with t-tests and ANOVA while we solve differential equations and train Bayesian neural networks. They are probably what economists are thinking of when they think of psychologists (though even then, I think the economists underestimate the value of social psychology); but the social/cognitive divide runs much deeper than people outside psychology realize. And only we psychologists understand why: it has to do with the personality types of people who choose these different specializations.
What I need to do now is find a way to explain cognitive science to people who don't have the requisite personality, find a way to express the deep insights in terms that pure empathizers or pure systematizers can understand. Thus far, I've found empathizers a bit easier to work with; while they clearly don't understand the details of what I'm saying, they rarely dismiss it outright and often remark that it sounds fascinating. It's the systematizers who are most frustrating; they simply dismiss the empathic perspective as naive and fuzzy-headed, so they feel that once they've understood each part by itself, obviously the whole will just magically fall into place. This is how we got into the pathetic muddle of behaviorism, eliminativism, and epiphenomenalism. From the other end, pure empathizers gave us mysterianism.
To see just how stupid these theories really are, allow me to make an analogy to physics. Suppose we did the two-slit experiment and got this crazy weird result that we couldn't understand; here's how each school would have reacted:
Behaviorist: Ignore that, we don't understand it so it must not be important.
Eliminativist: That obviously didn't happen, you're just naively accepting the folk notions.
Epiphenomenalist: Well, maybe it happened, but don't worry, it doesn't affect anything. It's just sort of added on to physics, and doesn't do anything, I'm sure.
Mysterian: You see? Physics is a failure! We will never understand the mysteries of the atom!
I don't think I'm going out on a very long limb when I say that we would not have invented lasers or microprocessors from any of those lines of research. And likewise, it will be cognitivists, not epiphenomenalists or mysterians, who solve the problems of AI.
Because computer scientists are systematizers who nevertheless understand what I'm getting at, I'm thinking I'll need to express what I'm saying in computational terms, using words like "hardware" and "software" and "algorithm"; but even when I do this, systematizers often still have a dismissive attitude. Maybe it would help if they actually studied some computer science and learned just how absurdly difficult some computational problems really are. Many physicists and biologists seem to think that computers can just improve in complexity indefinitely and become capable of solving any imaginable problem by brute force, thus rendering algorithms and heuristics irrelevant. If they actually read some computer science, they'd realize that the game of Go is too complex to be solved by brute force by a Planck-scale computer the size of the Earth, even given the entire lifetime of the universe.
Here's the calculation, if you don't believe me. There are 19x19 = 361 points on a Go board, each of which can be empty or filled by one of two colors. That's 3^361 possible board positions, which is about 1.74e172. The Earth is approximately a sphere of radius 6.4e6 m, so its volume is 4pi/3*(6.4e6 m)^3, roughly 1.1e21 m^3. A Planck length is 1.616e-35 m, so a Planck volume is about 4.2e-105 m^3, which means our computer would have about 2.6e125 components, each performing one calculation per Planck time (5.39e-44 seconds); that works out to about 4.8e168 operations per second, which is mind-bogglingly fast... and even so, it would take about an hour just to enumerate every board position. But enumerating positions isn't playing the game. To solve Go by brute force you have to search the game tree, and the number of possible games is commonly estimated at around 10^360; at 4.8e168 operations per second, that search would take about 2e191 seconds, which is roughly 7e183 years, unimaginably longer than the age of the universe. I remind you, that is for the game of Go, and it is a very generous estimate of the fastest possible computer anyone could build out of our planet. This is why Go programs do not use lookup tables in the form "if board X, do move Y"; even storing one bit per board position at a density of one bit per Planck volume would require something like 10^47 Earths, and with current technology the comparison gets even sillier: one bit per atom for every atom in the observable universe would still fall short by ninety-odd orders of magnitude, and the processing would take far more time than the universe has before heat death. Real Go programs use heuristics, and actually they're not all that good; I can beat most Go programs, and I'm not even good enough to play in most amateur leagues. If you can find better heuristics, good enough to beat high-level pros, you could find fame and possibly fortune creating the "Deep Blue" of Go.
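If you want to check my arithmetic, here's a quick script (Python; the physical constants are the standard values, and the ~10^360 game-tree figure is the commonly cited rough estimate, not something I computed myself):

```python
import math

# Board positions: every one of the 19x19 points is empty, black, or white.
log10_positions = (19 * 19) * math.log10(3)          # ~172.2, i.e. ~1.7e172 positions
log10_game_tree = 360                                 # rough standard estimate of possible games

earth_radius_m = 6.4e6
earth_volume_m3 = (4 / 3) * math.pi * earth_radius_m ** 3   # ~1.1e21 m^3

planck_length_m = 1.616e-35
planck_time_s = 5.39e-44

components = earth_volume_m3 / planck_length_m ** 3   # ~2.6e125 Planck volumes in the Earth
ops_per_second = components / planck_time_s           # ~4.8e168 operations per second
log10_ops = math.log10(ops_per_second)

seconds_per_year = 3.15e7
print(f"operations per second: 10^{log10_ops:.1f}")
print(f"seconds just to enumerate every position: 10^{log10_positions - log10_ops:.1f}")  # ~an hour
print(f"years to brute-force the game tree: 10^{log10_game_tree - log10_ops - math.log10(seconds_per_year):.1f}")
print(f"Earths of storage at one bit per Planck volume: 10^{log10_positions - math.log10(components):.1f}")
```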
And you're telling me that human behavior is going to be understood by brute-force computation? No, we're going to use heuristics. And those heuristics are going to involve high-level systems, like beliefs, desires, emotions, and intentions.
Now, you might say, this is an unsatisfying philosophical result: It sounds like I'm saying we are machines, but we have to pretend we're not because we'll never make sense of anything that way. No, that's not what I'm saying. We're not pretending anything. We are machines--we are machines that think. The empathic perspective works as a model because it accurately reflects the world; its heuristics are not chosen arbitrarily, but rather necessitated by the structure of the phenomenon. Biologists of all people should understand this: Is there such a thing as a giraffe, or are there merely atoms? No, the giraffe is a real thing; there is a meaningful (if slightly fuzzy-edged) boundary between the giraffe and the surrounding world. We carve the world, but we carve it at its joints.
This is the empathizer-systematizer part, the part I don't know how to explain: Both perspectives describe the same real phenomenon. We are machines, and we do have feelings--and the marvelous thing is that machines can have feelings. It is not either-or; it is both-and.
Hinduism has a parable about this, which isn't bad: Two blind men are trying to understand an elephant. One feels the trunk, and says it is soft, bending, like a snake; the other feels the legs, and says it is firm, rigid, like a tree. Who is right and who is wrong? Both? Neither? The whole elephant encompasses more than either part. If you truly wish to understand the elephant, you must see it as a unified whole.