On the cover of this month’s Scientific American, an article is advertised by the phrase “When It Pays to Be Irrational.” The actual title of the article is “The Traveler’s Dilemma,” written by a Cornell economist named Kaushik Basu, who devised a game theory scenario about two star-crossed airline passengers. Here’s how it goes. Two complete strangers return to the States on an airliner from a Pacific island with identical antique vases in their luggage. Both vases are damaged during the flight, and both passengers contact the same airline manager for compensation. In order to decide on a price, the manager gets creative and tells the two passengers (who cannot communicate with each other) that they should each determine the value of the vase. The price can be anywhere between $2 and $100. But here’s the catch. If they both give the same price, then both will be awarded that amount. However, if one gives a higher price than the other, then the price will be set at the lower of the two. On top of that, a reward of $2 will be given to the passenger who gave the lower price, while a $2 penalty will be applied to the passenger who gave the higher price. For example, if one passenger says $100, and the second says $50, then the second passenger will get $52 and the first passenger will get $48. What price would you give?
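The manager’s rule is simple enough to sketch as a small payoff function. This is only my own illustration of the rule as described above (the function and parameter names are mine, not Basu’s):

```python
def payoff(my_claim, their_claim, bonus=2, penalty=2):
    """One traveler's payout: the lower claim sets the price, with a
    bonus for the undercutter and a penalty for the higher claimant."""
    if my_claim == their_claim:
        return my_claim
    if my_claim < their_claim:
        return my_claim + bonus
    return their_claim - penalty

# The example from the text: claims of $100 and $50.
print(payoff(50, 100))   # 52
print(payoff(100, 50))   # 48
```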
Now, before deciding, you might also want to determine what the most rational choice would be. Yes, I’m insinuating that your actual choice might not be the most rational one. At least that is Basu’s claim, because his assertion is that the most rational dollar amount would be $2. The idea is that Lucy, the first passenger, wants as much as she can get, so she thinks to herself, “I’ll say $100.” But the second passenger, Pete, knows this is what Lucy will be thinking, and thinks to himself, “I’ll say $99.” If these choices were submitted, the price would be set at $99: Pete would get the maximum amount of $101 and Lucy would be the “loser” with $97. But, of course, Lucy knows this, knows Pete is thinking the same thing, so she decides to undercut Pete and say $98. You see where this is going. The reasoning plummets the price down to the lowest amount of $2, thereby guaranteeing that neither passenger will receive more money than the other. This is the outcome predicted by game theorists, using Nash and various other equilibrium concepts.
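Lucy and Pete’s chain of undercutting can be replayed mechanically: start from a claim of $100 and repeatedly compute the best response to the previous claim until it stops changing. A minimal sketch of that reasoning (my own illustration, not code from Basu’s article):

```python
def payoff(mine, theirs):
    """Payout under the manager's rule: the lower claim sets the price,
    plus a $2 bonus for the undercutter, minus a $2 penalty otherwise."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def best_response(their_claim):
    # The claim in $2..$100 that maximizes my payout against their claim.
    return max(range(2, 101), key=lambda c: payoff(c, their_claim))

claim = 100
while best_response(claim) != claim:
    claim = best_response(claim)   # 100 -> 99 -> 98 -> ... each step undercuts by one
print(claim)   # the chain bottoms out at 2, the Nash equilibrium
```

Each traveler’s best response to any claim above $2 is to undercut it by exactly one dollar, so the only claim that is a best response to itself is $2.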
You might not be surprised to know that, when this game is actually played in the real world, the majority of the choices are not $2. “[T]he game’s logic dictates that 2 is the best option, yet most people pick 100 or a number close to 100—both those who have not thought through the logic and those who fully understand that they are deviating markedly from the ‘rational’ choice. Furthermore, players reap a greater reward by not adhering to reason in this way. Thus, there is something rational about choosing not to be rational when playing Traveler’s Dilemma.” Basu says that this was the purpose of creating the game in the first place, to challenge widely held notions of traditional economics and “to highlight a logical paradox of rationality.”
On the face of it, one might be tempted to solve this paradox by simply changing one of the assumptions of the game, and of traditional economics. Why would we be primarily concerned with getting more than our opponent? Even the idea of selfish economic behavior doesn’t dictate the necessary impoverishment of others. But it’s not as simple as that. This does not have to be a zero-sum game for the “rational” choice to plummet to $2. What an analysis of the game requires is a mind willing to consider, not what amount it wants the most, but what amount it will most likely get with any given price. The fact is, in this game overthinking doesn’t pay off, and going with your gut does.
What I’d like to ask is, what is rationality and what is rational behavior? Rationality, as I understand it, is reason-based thinking. That is, thinking through a situation and coming up with the reasonable conclusion. We would assume that rational behavior would be that behavior which is dictated by rational thinking. But will any line of reasoning do? Let’s assume for the moment that rationality is based on humanity’s best reasoning strategies, those of logic and mathematics. Given this assumption, we could say that good reasoning is based on abstract thinking, being able to boil a situation down to its essential abstract parts and manipulating them with mathematical/logical thinking.
In the end, rational behavior, based on abstract thinking, is just one part of human behavior. The other aspects of human behavior are what we would call instinctual, and are shared with other animals. The intelligent behavior of a human being is an advanced version of the intelligent behavior we find in chimpanzees, elephants, ants, and bees. What seems to give us the leg up is the use of language, and this is essential for abstract thinking. Linguistic thought is something that we apparently do not share with other animals. But, like a good Darwinian, I assume that language is an adaptation, because surely language leads to a unique pattern of behavior that is “seen” by natural selection. A group of hominids that can speak to each other is going to be better off than a group that cannot.
But perhaps adaptation is a strong word for what happened in our evolutionary past. Stephen Jay Gould (with Elisabeth Vrba) coined the term “exaptation” to describe characters that evolved for other uses (or no use at all) and were later “co-opted” for their current function. The parts of the modern brain that seem to enable speech—Broca’s area and Wernicke’s area—have been around possibly as long as two million years. No one thinks that speech itself is that old, and some think that it developed as recently as 50,000 years ago. This is the round number for the arrival of “behaviorally modern” humans, who began ceremoniously burying their dead and painting pictures on the walls of caves (“anatomically modern” humans arrive in the fossil record ~100,000 years ago). Some believe that the areas of the brain now used for speech were originally adaptations for problem-solving behavior. In other words, this part of the brain was built by natural selection for a consistent function because it promoted fitness (just as an eye was built by a progressive conglomeration of photosensitive cells).
The key event that might have switched this part of the brain from problem solving to grammar producing was the mutation of the FOXP2 gene around 200,000 years ago (based on molecular clock evidence). People today who have a dysfunctional version of this gene exhibit a linguistic disorder that affects both speech and understanding. The protein that FOXP2 codes for differs in humans from mice and chimpanzees by only a couple of amino acids. Apparently, this mutation enabled parts of our ancestors’ brains to be co-opted for grammatical usage. Since language is often used in a problem-solving function, one might want to say that this was a clear adaptation. However, my point is to say that, given the size of our ancestors’ brains 200,000 years ago (which were at least as large as ours are today), it is likely that their pre-rational/pre-linguistic behavior was approaching what we would normally call rational behavior. Archaic Homo sapiens, as they are called, would have seemed fairly advanced in their problem-solving behavior, making tools for various uses and most likely communicating audibly and with hand gestures, as chimpanzees do today. (By the way, I’m getting some of this information from Dawkins’ The Ancestor’s Tale.) Therefore, we could consider language an exaptation, co-opted for its current use upon an established brain, and group of behaviors, that were already complex.
This brings me back to the Traveler’s Dilemma (TD). In the TD scenario there is no adaptive advantage to “being rational.” Abstracting the situation to its essentials, placing it on a grid and using mathematics and logic to come up with an answer, is the worst way to play the game. But of course there are many advantages in the real world to thinking abstractly, and therefore language, logic, and mathematics have flourished along with our species. So here is the paradox of the TD: at what point do we abandon rationality as a strategy for successful behavior? How can I rationally come to that decision? You might be tempted to stick with rationality come hell or high water, but that doesn’t seem to be a good policy. If rationality is worth keeping, it is worth keeping because it is a useful predictor of successful behavior (human and otherwise). A dogmatism of rationality is still a dogmatism.
The solution to this dilemma, as I see it, is rethinking what rational behavior entails. Perhaps it is something more than acting on the conclusions of the most abstracted logic, and also something more than going with your gut, that is, a pre-linguistic kind of intelligence. In fact, I recall that Antonio Damasio’s Descartes’ Error makes a similar point. If I remember correctly, Damasio chronicles a history of people who have sustained damage to a particular area of the brain (I don’t remember where) and have become severely emotionally disabled because of it. But the “rational” faculties, the ability to think abstractly and reason logically, were still intact. The result of this brain damage was a change in personality and an inability to operate successfully in social situations. Damasio’s conclusion was that the people who suffered this particular brain trauma could not operate socially because their rational faculties were working in overdrive. There was no emotional faculty to assess the real-world situation and force a decision. These people were able to rationalize, but they were severely challenged when it came to displaying what we would normally call rational behavior. It seems that a rationally functioning human being requires a certain amount of emotional input.
What I am suggesting is a kind of pragmatic understanding of rational behavior. What do we want rationality to do for us? If it is to be used as a tool for discovering universal rules of logic or mathematics, then abstraction is great. If it is to be used for discovering the laws and makeup of the natural world, then abstract thought is incredibly useful (although, as I’ve said before, even some physicists defy logic at times). But, if we want to use it as a tool for living a satisfying life, then we had better be cautious. Rationality can often lead to irrational behavior.
With this pragmatic understanding of rational human behavior it is difficult to assert that it is irrational to believe in a god, as I have been known to argue in the past. I've always known that there is no conclusive proof of the nonexistence of god, though I agree with Dawkins that what we know scientifically about the world suggests that there is no supreme being. Therefore, a belief in god, though it might contradict the strictest rational indications of science, is not necessarily irrational behavior. What do we want this belief in god to do for us? If it is used dogmatically to impose blatantly incorrect beliefs about the world (like, say, the earth is only six thousand years old), then it fails miserably. But if it is used as a way to live a meaningful life, then there is a case for rational/pragmatic belief in god.
I know many believers who live very satisfied lives, and there are times when I wish I could believe again. However, I am a determinist when it comes to belief. As Wendell Berry once wrote, there is no such thing as a willful suspension of disbelief. Belief precedes the will. I believe there is no god because a god is unbelievable to me now, as a result of what I have learned. But I recognize the shortcomings of making rational/scientific inquiry the sole basis for living a satisfying life. I’m not convinced that we have advanced much upon the psychology of Dante’s vision, and I don’t think there has been an improvement on Shakespeare’s understanding of human personalities. The most meaningful aspect of scientific understanding, to me, is when investigators speculate on what has happened in the past by telling a damn good story. Borges accepted Gibbon’s version of the decline of Rome because it was the best story he had read about the empire. I feel the same way about Diamond’s version of the rise of civilizations. When I read a good story about the origins of our world, my life is enlarged, and in this way good science is similar to any other useful storytelling.
In the opinion of some, the Bible is a good story, and one to base your life on. Its success in that respect is hard to argue with. Like it or not, belief in a god cannot be categorically relegated to irrational behavior.