Friday, June 15, 2007

The Traveler's Dilemma

On the cover of this month's Scientific American, an article is advertised by the phrase "When It Pays to Be Irrational." The actual title of the article is "The Traveler's Dilemma," written by a Cornell economist named Kaushik Basu, who devised a game theory scenario about two star-crossed airline passengers. Here's how it goes. Two complete strangers return to the States on an airliner from a Pacific island with identical antique vases in their luggage. Both vases are damaged during the flight, and both passengers contact the same airline manager for compensation. In order to decide on a price, the manager gets creative and tells the two passengers (who are incommunicado with each other) that they should determine the value of the vase. The price can be anywhere between $2 and $100. But here's the catch. If they both give the same price, then both will be awarded that amount. However, if one gives a higher price than the other, then the price will be set at the lower of the two. On top of that, a $2 reward will be given to the passenger who gave the lower price, while a $2 penalty will be applied to the passenger who gave the higher price. For example, if one passenger says $100 and the second says $50, then the second passenger will get $52 and the first passenger will get $48. What price would you give?
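To make the rules concrete, here is the payoff scheme as a quick Python sketch (the code and its names are mine, not Basu's):

    def payoffs(first, second):
        """Return the awards for two claims, each between $2 and $100."""
        if first == second:
            return first, second         # identical claims: both paid in full
        low = min(first, second)         # the price is set at the lower claim
        if first < second:
            return low + 2, low - 2      # $2 reward to the low claimant,
        return low - 2, low + 2          # $2 penalty to the high claimant

    print(payoffs(100, 50))   # (48, 52), matching the example above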

Now, before deciding, you might also want to determine what the most rational choice would be. Yes, I'm insinuating that your actual choice might not be the most rational one. At least that is Basu's claim, because his assertion is that the most rational dollar amount is 2. The idea is that Lucy, the first passenger, wants as much as she can get, so she thinks to herself, "I'll say $100." But the second passenger, Pete, knows this is what Lucy will be thinking, and thinks to himself, "I'll say $99." If these choices were submitted, Pete would get the maximum amount of $101 and Lucy would be the "loser" with $97. But, of course, Lucy knows this, knows Pete is thinking the same thing, so she decides to undercut Pete and say $98. You see where this is going. The reasoning drives the price down to the lowest amount of $2, thereby guaranteeing that neither passenger will receive more money than the other. This is the outcome predicted by game theorists, using the Nash equilibrium and various other equilibrium concepts.
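The undercutting logic can even be mechanized. In this sketch (my own, just to illustrate the argument), each player best-responds to the other's last claim, and the process bottoms out at $2:

    def best_response(opp):
        def mine(m):                     # my payoff if I claim m against opp
            if m == opp:
                return m
            return m + 2 if m < opp else opp - 2
        return max(range(2, 101), key=mine)

    claim = 100
    while best_response(claim) != claim:
        claim = best_response(claim)     # undercut by $1 each round
    print(claim)   # 2 -- the only claim that is a best response to itself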

You might not be surprised to learn that, when this game is actually played in the real world, the majority of the choices are not $2. "[T]he game's logic dictates that 2 is the best option, yet most people pick 100 or a number close to 100—both those who have not thought through the logic and those who fully understand that they are deviating markedly from the 'rational' choice. Furthermore, players reap a greater reward by not adhering to reason in this way. Thus, there is something rational about choosing not to be rational when playing Traveler's Dilemma." Basu says that this was the purpose of creating the game in the first place: to challenge widely held notions of traditional economics and "to highlight a logical paradox of rationality."

On the face of it, one might be tempted to solve this paradox by simply changing one of the assumptions of the game, and of traditional economics. Why would we be primarily concerned with getting more than our opponent? Even the idea of selfish economic behavior doesn't dictate the necessary impoverishment of others. But it's not as simple as that. This does not have to be a zero-sum game for the "rational" choice to plummet to $2. What an analysis of the game requires is a mind willing to consider, not what amount it wants the most, but what amount it will most likely get with any given price. The fact is, in this game overthinking doesn't pay off, and going with your gut does.

What I'd like to ask is: what is rationality, and what is rational behavior? Rationality, as I understand it, is reason-based thinking. That is, thinking through a situation and coming up with a reasonable conclusion. We would assume that rational behavior is behavior dictated by rational thinking. But will any line of reasoning do? Let's assume for the moment that rationality is based on humanity's best reasoning strategies, those of logic and mathematics. Given this assumption, we could say that good reasoning is based on abstract thinking: being able to boil a situation down to its essential abstract parts and manipulate them with mathematical/logical thinking.

In the end, rational behavior, based on abstract thinking, is just one part of human behavior. The other aspects of human behavior are what we would call instinctual, and are shared with other animals. The intelligent behavior of a human being is an advanced version of the intelligent behavior we find in chimpanzees, elephants, ants, and bees. What seems to give us the leg up is the use of language, which is essential for abstract thinking. Linguistic thought is something that we apparently do not share with other animals. But, like a good Darwinian, I assume that language is an adaptation, because surely language leads to a unique pattern of behavior that is "seen" by natural selection. A group of hominids that can speak to each other is going to be better off than a group that cannot.

But perhaps adaptation is a strong word for what happened in our evolutionary past. Stephen Jay Gould (with Elisabeth Vrba) coined the term "exaptation" to describe characters that evolved for other uses (or no use at all) and were later "co-opted" for their current function. The parts of the modern brain that seem to enable speech, Broca's area and Wernicke's area, have been around possibly as long as two million years. No one thinks that speech itself is that old, and some think that it developed as recently as 50,000 years ago. This is the round number for the arrival of "behaviorally modern" humans, who began ceremoniously burying their dead and painting pictures on the walls of caves ("anatomically modern" humans arrive in the fossil record ~100,000 years ago). Some believe that the areas of the brain now used for speech were originally adaptations for problem-solving behavior. In other words, this part of the brain was built by natural selection for a consistent function because it promoted fitness (just as an eye was built by a progressive conglomeration of photosensitive cells).

The key event that might have switched this part of the brain from problem solving to grammar producing was the mutation of the FOXP2 gene around 200,000 years ago (based on molecular clock evidence). People today who have a dysfunctional version of this gene exhibit a linguistic disorder that affects both speech and understanding. The protein that FOXP2 codes for differs in humans from that of chimpanzees by only two amino acids (and from that of mice by three). Apparently, these few mutations enabled parts of our ancestors' brains to be co-opted for grammatical usage. Since language is often used in a problem-solving function, one might want to say that this was a clear adaptation. However, my point is that, given the size of our ancestors' brains 200,000 years ago (at least as large as ours is today), it is likely that their pre-rational/pre-linguistic behavior was approaching what we would normally call rational behavior. Archaic Homo sapiens, as they are called, would have seemed fairly advanced in their problem-solving behavior, making tools for various uses and most likely communicating audibly and with hand gestures, as chimpanzees do today. (By the way, I'm getting some of this information from Dawkins' The Ancestor's Tale.) Therefore, we could consider language an exaptation, co-opted for its current use upon an established brain, and group of behaviors, that were already complex.

This brings me back to the Traveler’s Dilemma (TD). In the TD scenario there is no adaptive advantage to “being rational.” Abstracting the situation to its essentials, placing it on a grid and using mathematics and logic to come up with an answer, is the worst way to play the game. But of course there are many advantages in the real world to thinking abstractly, and therefore language, logic, and mathematics have flourished along with our species. So here is the paradox of the TD: at what point do we abandon rationality as a strategy for successful behavior? How can I rationally come to that decision? You might be tempted to stick with rationality come hell or high water, but that doesn’t seem to be a good policy. If rationality is worth keeping, it is worth keeping because it is a useful predictor of successful behavior (human and otherwise). A dogmatism of rationality is still a dogmatism.

The solution to this dilemma, as I see it, is rethinking what rational behavior entails. Perhaps it is something more than acting on the conclusions of the most abstracted logic, and also something more than going with your gut, that is, a pre-linguistic kind of intelligence. In fact, I recall that Antonio Damasio's Descartes' Error makes a similar point. If I remember correctly, Damasio chronicles a history of people who have sustained damage to a particular area of the brain (I don't remember where) and have become severely emotionally disabled because of it. But the "rational" faculties, the ability to think abstractly and reason logically, were still intact. The result of this brain damage was a change in personality and an inability to operate successfully in social situations. Damasio's conclusion was that the people who suffered this particular brain trauma could not operate socially because their rational faculties were working in overdrive. There was no emotional faculty to assess the real-world situation and force a decision. These people were able to rationalize, but they were severely challenged when it came to displaying what we would normally call rational behavior. It seems that a rationally functioning human being requires a certain amount of emotional input.

What I am suggesting is a kind of pragmatic understanding of rational behavior. What do we want rationality to do for us? If it is to be used as a tool for discovering universal rules of logic or mathematics, then abstraction is great. If it is to be used for discovering the laws and makeup of the natural world, then abstract thought is incredibly useful (although, as I’ve said before, even some physicists defy logic at times). But, if we want to use it as a tool for living a satisfying life, then we had better be cautious. Rationality can often lead to irrational behavior.

With this pragmatic understanding of rational human behavior it is difficult to assert that it is irrational to believe in a god, as I have been known to argue in the past. I've always known that there is no conclusive proof of the nonexistence of god, though I agree with Dawkins that what we know scientifically about the world suggests that there is no supreme being. Therefore, a belief in god, though it might contradict the strictest rational indications of science, is not necessarily irrational behavior. What do we want this belief in god to do for us? If it is used dogmatically to impose blatantly incorrect beliefs about the world (like, say, the earth is only six thousand years old), then it fails miserably. But if it is used as a way to live a meaningful life, then there is a case for rational/pragmatic belief in god.

I know many believers who live very satisfied lives, and there are times when I wish I could believe again. However, I am a determinist when it comes to belief. As Wendell Berry once wrote, there is no such thing as a willful suspension of disbelief. Belief precedes the will. I believe there is no god because a god is unbelievable to me now, as a result of what I have learned. But I recognize the shortcomings of making rational/scientific inquiry the sole basis for living a satisfying life. I'm not convinced that we have advanced much upon the psychology of Dante's vision, and I don't think there has been an improvement on Shakespeare's understanding of human personalities. The most meaningful aspect of scientific understanding, to me, is when investigators speculate on what has happened in the past by telling a damn good story. Borges accepted Gibbon's version of the decline of Rome because it was the best story he had read about the empire. I feel the same way about Diamond's version of the rise of civilizations. When I read a good story about the origins of our world, my life is enlarged, and in this way good science is similar to any other useful storytelling.

In the opinion of some, the Bible is a good story, and one to base your life on. Its success in that respect is hard to argue with. Like it or not, belief in a god cannot be categorically relegated to irrational behavior.

10 comments:

thecrazydreamer said...

I have an objection to the assertion that the most rational choice would be $2. Perhaps that is true if you were on some sort of game show and the objective was to end up with more money than your opponent, but in this real-life scenario, I think rationality does not dictate you select $2. If we're willing to assume an equal chance for your opponent to select any amount between $2 and $100, then a selection of $2 will give you a 1/99 chance at $2 and a 98/99 chance at $4. If you select $100, then there is a 4/99 chance that it will be less than $4, but a 95/99 chance that it will be equal to or more than $4. In fact, the most rational choice, statistically speaking, would be $97 or $96. Both give you a mean earning of $49 and 8/99ths of a dollar, about $49.08 (yes, I'm geeky enough to make a spreadsheet to determine the exact value one ought to choose).
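For anyone who wants to check the spreadsheet, here is the same calculation as a few lines of Python (a sketch, assuming as above that the opponent picks uniformly at random between $2 and $100):

    def my_payoff(mine, theirs):
        if mine == theirs:
            return mine
        return mine + 2 if mine < theirs else theirs - 2

    def expected(claim):
        # average payoff against an opponent choosing uniformly from $2..$100
        return sum(my_payoff(claim, o) for o in range(2, 101)) / 99

    best = max(range(2, 101), key=expected)
    print(best, round(expected(best), 4))   # 96 (tied with 97), 49.0808
    print(expected(96) == expected(97))     # True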

Perhaps I am missing the logic behind selecting $2, or misunderstood the question. I feel like you were attempting to address this in your fourth paragraph, but I either don't find your or Basu's explanation satisfactory or I am just not understanding it.

Even were I to concede though, a single instance of irrationality having a better outcome than rationality doesn't seem to be a compelling defense of faith. A person enjoying this game may have a faith that dictates they always choose the number 97 every time they are asked to pick a number between 2 and 100. While choosing 97 will end up netting them the most money over an extremely large sample size, and also provides the highest probability of a pleasant outcome in the one instance we have, that is certainly no justification for their faith.

I tend to agree with your conclusion that a life without emotion or irrationality is an incomplete life, but I'm not satisfied with the traveler's dilemma to support that conclusion.

Also, I hope this does not come off as a hostile response. I enjoyed the post, and I am just trying to continue the conversation. I readily concede that I probably am misunderstanding the game or the argument, because it seems unlikely that an economist who's getting his work published on the cover of Scientific American was incapable of making a spreadsheet to calculate the probabilities himself. Also, were you interested in being added as an author to Soul & Meat? If so, let me know the email address of your blogger account so I can send you an invite. You can post it here or on my blog in a comment, or send it to my username @gmail.com. The blog is at soulandmeat.blogspot.com if you would like to join. We used your suggestion of "What say ye, pagans?" as the tagline.

Sub-sub-librarian said...

Thanks for the comment, Crazy Dreamer. I too found it hard to believe that the rational answer is $2. The trick, I think, is that there is a psychology built into the analysis. Basu says that game theorists describe this kind of situation, where I think that you think that I think that you think I will say X, by "saying that rationality is common knowledge among the players." This means that a formal analysis can't just be figuring out the odds, because this is not a random event. For this kind of situation theorists rely on equilibrium concepts, such as the Nash equilibrium (of A Beautiful Mind fame). "A Nash equilibrium is an outcome from which no player can do better by deviating unilaterally. Consider the outcome (100, 100) in TD (the first number is Lucy's choice, and the second is Pete's). If Lucy alters her selection to 99, the outcome will be (99, 100), and she will earn $101. Because Lucy is better off by this change, the outcome (100, 100) is not a Nash equilibrium." Basu says that there are other kinds of equilibria that game theorists use, and they all predict $2.
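Out of curiosity, I also checked that claim by brute force. This little sketch (mine, not from the article) tests every pair of claims for the "no unilateral improvement" property and finds exactly one equilibrium:

    def pay(mine, theirs):
        if mine == theirs:
            return mine
        return mine + 2 if mine < theirs else theirs - 2

    def is_nash(a, b):
        # neither player can do better by changing only their own claim
        return (pay(a, b) == max(pay(x, b) for x in range(2, 101)) and
                pay(b, a) == max(pay(y, a) for y in range(2, 101)))

    print([(a, b) for a in range(2, 101)
                  for b in range(2, 101) if is_nash(a, b)])   # [(2, 2)]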

In regards to your third paragraph, I’m not sure what you mean by “faith.” I’m assuming you mean some kind of absurd belief system, like worshiping the porcupine god of my dreams who has 97 needles, making 97 the most Holy number. Picking 97 and having success would be seen as a justification for the porcupine god, and you find that objectionable (correct me if I’ve misunderstood). Yes, I agree that this would be unjustified, but I’m not suggesting this sort of scenario. In the Traveler’s Dilemma, having “faith” that the other person is going to pick something in the 90s, and not $2, is justified. It is based on an intuition, a hunch, about what other people are like. The reality of a personal god and the assurance that comes from such a belief is an intuition that many people feel.

Here’s another example that I think is appropriate to bring up. I would never say that a person who believes in human free will, and acts on the basis of that belief, is being irrational. But I think the most formal rational arguments on the subject indicate that there is no such thing as free will. Therefore, I think it is strictly irrational to believe in free will, knowing full well that all people (even the most thorough determinists) behave as if free will exists. This is rational behavior based on irrational conclusions. You may not find this a good analogy either. For me, this confirms that there are actually Traveler’s Dilemma situations out there, and they may be more common than we think.

Yes, I’m going to join Soul & Meat. I still have your original e-mail. I’ve just been procrastinating.

thecrazydreamer said...

I can somewhat see what you're saying... Correct me if I'm misunderstanding this: Game theory suggests that humans will benefit most from a bid of $2. In reality, humans actually benefit more from higher bids. Basu is asserting that game theory is rational, and going against game theory is irrational. Therefore, sometimes it is beneficial to be irrational.

Is that a fair summary? If so, I figuratively pull out my red sharpie and circle the statement "game theory is rational, and going against game theory is irrational".

I understand what game theory is saying... that you're trying to guess what your opponent will choose, and if you keep guessing one lower than your opponent then you end up better off. However, what is stopping the same pattern of hypotheticals from occurring in the opposite direction? Start at $2, and if your opponent chose $4, you realize you could've earned more money if you had chosen $3. And if they had chosen $5, you could've earned more having chosen $4, etc. If anything, I think it is an irrational assumption that your opponent will start at $100 and follow the recursion process that Basu predicts.

In other words, Basu is saying that this game theory is rational, but incorrect. I am saying this game theory is incorrect, therefore irrational.

Perhaps we are at odds over the definition of the word irrational, or perhaps we are at odds over the acceptance of the rationality of game theory.

As for your free will topic, I haven't been exposed to a convincing argument against free will. If I were willing to concede the point, or for others who are willing to concede it, it's an acceptable example. Perhaps we can revisit this discussion after I learn more about the free will vs. determinism arguments.

Sub-sub-librarian said...

Yes, the thesis statement for Basu’s article would be “game theory is rational, and going against game theory is irrational…Therefore, sometimes it is beneficial to be irrational.” What I’m asking is, if going against formal rational conclusions is consistently beneficial in certain circumstances, would we want to call such acts irrational?

You make an interesting point about starting at $2 and working your way up. The reason that doesn’t work is that all answers from $3 to $100 are values from which an individual “player can do better by deviating unilaterally.” Starting at $2 would only stop an individual from raising the value at all, because by giving a value of $3 you are running the risk of being underbid. That’s the game theory answer.
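Here is that deviation argument as a sketch in code (again mine, not Basu's): at any matched claim above $2, undercutting by a dollar strictly pays, while at $2 every upward move earns nothing.

    def pay(mine, theirs):
        if mine == theirs:
            return mine
        return mine + 2 if mine < theirs else theirs - 2

    # undercutting a matched claim above $2 always pays: c+1 beats c
    assert all(pay(c - 1, c) > pay(c, c) for c in range(3, 101))
    # but once both are at $2, raising your claim earns $0 instead of $2
    assert all(pay(m, 2) < pay(2, 2) for m in range(3, 101))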

I have a hard time believing game theory’s conclusions as well. My concern is with rationality as a process that we all engage in at different levels and to different extents. As I said in my original essay, rationality at its best is abstraction in terms of logic and mathematics. There are different reasons we use for doing things, which I still consider rationality, and sometimes our less formal reasons are at odds with our abstract formulations. One way to decide between the two is to use the one that works the best.

The free will situation is the most convincing to me. I’d be interested in your thoughts on this topic when you consider the arguments for and against human free will. Again, I believe there is no such thing, but I also believe that living as if there is free will is rational behavior.

Sub-sub-librarian said...

Actually, going back to my original post, I quote Basu as saying, “there is something rational about choosing not to be rational when playing the Traveler’s Dilemma.” So he doesn’t think it is irrational to go against the most formal rational arguments. I’m suggesting the same goes for free will and (some) religious beliefs.

thecrazydreamer said...

Thanks for pointing that out. That really seems to change the tenor of the entire article to me.

My protest is more with the advert-teaser of "When It Pays to Be Irrational". Perhaps Basu had no hand in that subtitle and it is a misrepresentation of what he was attempting to say.

Based on that, I'd say I mostly agree with his assertions, since that sort of takes the teeth out of it. I guess I'd have to change his thesis to something like "Game theory is rational, but it is also rational to not rely solely on game theory, as it may be flawed." I think that ends up supporting your argument better as well.

As for the free will debate, I'm willing to admit that I don't know enough about it. My current stance is that determinism is simply counter-intuitive. I can imagine the theories, but I do not know of any compelling arguments to lead me to discard my intuition in this case. From what I can see, determinism is sound as a theory in a similar way to Descartes' evil genius theory, but in both cases, while the theory doesn't collapse on itself, I don't find that sufficient evidence to adopt it. I could not argue against Descartes without relying on my intuition, and I cannot argue against determinism without my intuition as well.

The Unapologetic said...

Comment Disclaimer: I am absolutely ignorant when it comes to game theory.

I'm going to try and change the direction of these comments because your underlying question is very interesting: Are there times when it is better, existentially, to behave irrationally on a consistent basis?

Before game theory came along, TCD's logic/probabilities made the most sense. Game theory brought the added layer of a kind of mathematics/logic of psychology. In the same way, it's possible that acting instinctually (irrationally) may later be discovered to be rational as our knowledge of the brain and psychology increases. For example, the first people to start thinking it was a good idea to start washing their hands before doing open-heart surgery probably didn't realize how rationally they were behaving, and their behavior was later justified by scientific discovery. Similarly, I think it's possible that behavior that is only based on likelihoods right now could eventually be proven logical. I do not, however, believe that behavior based on religious texts like the Bible would eventually be proven rational because of how unlikely it is that they are based on truth.

People who believe the Bible tells a good story are probably right, but taking the further step of basing your life on it isn't looking at it rationally at all when so much of it is internally inconsistent and fails to describe the world as it really is. If it fails in those regards, how can it be expected to direct us in any truly helpful way?

Your decision to let Diamond's story enrich your life is based on the likelihood of its truth, its seeming accuracy. His book is full of scientific evidence and logical theories and can serve as a basis for explaining a hell of a lot of stuff. Religious texts cannot make this claim, since it is not likely they're true, based on evidence.

With our current breadth and depth of knowledge about the world and humans, the best we can do at this point is behave as rationality, science and likelihood dictate. I truly believe that starting with a rational, scientific foundation starts a person out better down the path of self-discovery and search for universal truth, and that starting with a religious foundation only puts us behind on that path.

The only value I can see for religious belief and behavior is purely social, since yes, it would make my social and family life run much more smoothly if I believed the same things they did. In the end the choice to behave irrationally has to benefit the parts of life someone feels are most important, and if the social element wins out over something like a personal quest for truth, there isn't much I can argue against that.

Sub-sub-librarian said...

The Crazy Dreamer: Yes, I think your re-re-revised thesis statement works well for both Basu and me. On the free will point, I’m not sure Descartes’ evil genius does quite what you say it does. As far as I understand it, the evil genius is an example of hyperbolic doubt that is used to see if we can figure out what we can know without using intuition. The fact that we can think of such a thing as intuition, be conscious of our intuition, means that we exist. We are not relying on intuition to come to that conclusion, although intuition might be the content of our consciousness while we consider whether we exist or not.

My point is that there are situations where it makes sense to rely on intuition and some where it doesn't. You have an intuition that you have free will, and I think it is reasonable to act on that belief. There is no satisfactory way to explain how free will works in a cause-and-effect, material world without having recourse to something like an immaterial soul that works in some mysterious, otherworldly way. An analogy here would be relativity. Our intuition is that time is not relative, and there is a good reason for that, considering our dimensions and speed. But the fact is that time is relative. All of my life I will probably get away with the assumption that time is a constant, but if I ever start doing some serious intergalactic traveling (or become an engineer for GPS technology) I'd better think ahead. The determined human mind has not been experimentally demonstrated as conclusively as relativity has, but in my opinion it is not a matter of if we can do so, only when.
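To put a number on the GPS aside, here is a back-of-envelope sketch (rough assumed figures, and only the special-relativistic part; the general-relativistic effect actually runs the other way and is larger, so the satellite clocks run fast on net):

    v = 3.9e3     # m/s, approximate orbital speed of a GPS satellite
    c = 3.0e8     # m/s, speed of light
    lag = 0.5 * (v / c) ** 2 * 86400    # first-order slowdown, seconds per day
    print(lag * 1e6)   # ~7.3 microseconds lost per day if relativity is ignored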

What You Dream: I see the point of your third paragraph. The most successful choices for the Traveler’s Dilemma are successful for a reason, and eventually we might be able to explain that based on a better understanding of reality. We both think that’s true because we believe that the world acts according to natural laws, and these laws can be understood using logic, mathematics, etc. This is my point about a pragmatic understanding of rational behavior. If the logic of game theory is pointing us in the wrong direction, then it is better to go with our intuition and hope we can come up with a better rational explanation for why it works.

When it comes to religious texts, I agree that they are not likely to tell us the truth about the origin of the universe or of life, and they should not be seen an infallible historical documents. But they do a little more than provide a social life. A belief in a loving god is deeply satisfying and gives a sense of meaning that is confirmed by everyday experience in the minds of believers. This is such a common intuition and makes sense to so many, that it is hard to deny it as a rational behavior for coping with an otherwise “meaningless” life. I personally find meaning in a life without god, but many find this to be impossible.

Silentio said...

Sub-sub:

Interesting post. I remember talking about this with you while on a hike somewhere. I can't remember where.

After reading your post and some of the replies, it seems to me like there's an ambiguity in the way the paradox is set up. What is the goal of the game: to get more money than the other person, or to get a large amount of money? It seems like the rationality of the decision depends on what the goal is, and from what you described it isn't clear what the goal of the game is. I think it is clear that if the goal is to beat your opponent, then the best thing to do is to say $2. That way, you are guaranteed that your opponent won't underbid you and thus get more money. Any bid higher than $2 makes it more likely that you won't come out on top. But if the point is to get a large sum of money (but not necessarily to get more than your opponent), then the best thing to do would be to go high. And if both participants had that same goal, they would both bid $100 and each would be guaranteed $100.

On funny paradoxes. The Monty Hall paradox is pretty cool: http://en.wikipedia.org/wiki/Monty_Hall_problem
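If you want to convince yourself of the Monty Hall result, a quick simulation (a sketch of mine) shows that switching doors wins about two thirds of the time:

    import random

    def trial(switch):
        doors = [0, 0, 1]                 # one car (1) behind three doors
        random.shuffle(doors)
        pick = random.randrange(3)
        # the host opens a goat door other than the player's pick
        opened = next(d for d in range(3) if d != pick and doors[d] == 0)
        if switch:
            pick = next(d for d in range(3) if d not in (pick, opened))
        return doors[pick]

    n = 100000
    print(sum(trial(True) for _ in range(n)) / n)    # ~0.667
    print(sum(trial(False) for _ in range(n)) / n)   # ~0.333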

On human rationality. Some of the most interesting work on human irrationality comes from the social psychologists at the University of Michigan: Nisbett and Ross. They have all these crazy experiments about how irrational humans are. My friend, Max, is writing a dissertation on this stuff. I think that one really interesting finding is that many times people give post hoc rationalizations for decisions they make, when in reality they are choosing some far more basic strategy for decision, such as "choose the one on the left." An interesting article on this is Nisbett and Wilson, 1977: http://www.psychwiki.com/wiki/Nisbett_&_Wilson_(1977)

Another interesting article on the failure of human reason to measure up to the standards of logic and decision theory is the Wason selection task: http://en.wikipedia.org/wiki/Wason_selection_task

The interesting thing about research with the Wason selection task is that if you change the subject matter, then people's performance improves drastically. So even though the formal logic of the situation is the same, performance changes with the content of what is reasoned about. Thus human reasoning doesn't seem to match up with the strictures of formal logic, according to which inferences are made in virtue of their form, not their content.
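For the curious, the formal structure of the task is tiny. With the classic card set, only the vowel and the odd number can falsify the rule "if a vowel is on one side, an even number is on the other," yet most people pick a different pair (a sketch):

    # Cards show A, K, 4, 7; which must be turned to test the rule?
    cards = ["A", "K", "4", "7"]

    def could_falsify(face):
        if face.isalpha():
            return face in "AEIOU"    # a vowel might hide an odd number
        return int(face) % 2 == 1     # an odd number might hide a vowel

    print([c for c in cards if could_falsify(c)])   # ['A', '7']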

I think that it is probably correct to say that human cognition did not evolve to solve logical problems. What this area of research seems to suggest is that humans are not computers, or even computer-like in our processing. Big surprise (much to the chagrin of Fodor). One interesting thought here is how an immersion in language (the capacity for which may very well be innate, which doesn't mean there's no environmental stimulus necessary in order to actually acquire language) can actually change the way we think. Andy Clark has some interesting (though, I think, ultimately unsuccessful) ideas about how language transforms our thought. Think, for example, about the ability (shared by nonlinguistic creatures) of making rough estimations of differences between quantities. For example, someone could tell the difference between 2 and 5 apples (but probably not between 9 and 10). Chimps can do this. But chimps cannot add to get a precise sum, and they can't count to tell the difference between, say, 82 and 79. Clark thinks that symbols are needed to pull off a task like this. Our ability to use symbols brings in its wake new skills unavailable to nonlinguistic creatures. Perhaps an immersion in language enables us to become more logical and more rational. Or perhaps not. Maybe the cognitive mechanisms that account for many of our decisions are not linguistic. But it seems that at least some are. So even if we aren't perfectly rational, we're more so than chimps. And maybe that is because we have language.

Finally, I wanted to say a few things about rationality. I think you’re right to say that rationality is about acting for reasons. Perhaps acting for reasons is integrally tied up with presenting justifications for actions, and perhaps justification for action is essentially linguistic. Typically when we think of acting for reasons we are attributing to either ourselves or others some propositional attitude. I think it is plausible that propositional thinking requires language and so probably animals don’t act for reasons. This isn’t to say they don’t think. Of course they do. They just don’t have propositional thought. And propositional thought is where reasons reside.

So acting for reasons is certainly part of rationality. But there's another question (one that is the purview of epistemology) of whether reasons are justified. Call this Rationality with a capital "R." This issue of justification is what philosophers are mostly concerned with. And the question of justification and the challenges to it (e.g., "naturalized epistemology," "naturalized ethics," and in general the attempt to naturalize normativity) are some of the deepest and most difficult there are in philosophy today (or so I think). I would submit that these are the real questions and contentions at the root of much disagreement in a wide range of philosophy, from reductionist theories of mind to free will debates and beyond. Are we nothing more than a bunch of relatively stupid homunculi that, when working in concert, produce what we call rationality (and Rationality with a big "R" too, for that matter)? And if so, does that mean we have eliminated Rationality?

Silentio said...

One more thing. I think your idea about the pragmatic nature of rationality is a good one. The question "what do we want rationality for?" is exactly the right question, in my opinion. (And you know how Dennett says that good philosophy is about asking the right questions.) I wonder though: are you really a pragmatist about scientific truth? (It's okay if you are. I think I am too at some level.) But then, what is the "use" or purpose of science if not just the desire to know the truth? And will you then give a pragmatic account of truth? (Again, it's okay if you do, I'm just curious.)

Also, the idea that rationality can perhaps sometimes lead to an unsatisfying life is right on. This is the excellent point that William James makes against Clifford's "snarling logicism" in the example of the man who doesn't know (doesn't have sufficient evidence) that a certain woman likes him, and so refuses to act until he has sufficient evidence that she likes him. James says he'll bet ten to one that the woman will never come to like him. Sometimes it takes action without sufficient justification that some fact is the case to bring that fact about.