Evolutionary Psychology human-nature.com/ep 2003. 1: 172-187

Book Review

Adaptive Thinking: Rationality in the Real World. By Gerd Gigerenzer. Oxford, Oxford University Press, 2000. ISBN 0195136225.

Earl Hunt, Professor (Emeritus) of Psychology, Department of Psychology, Box 351525, University of Washington, Seattle, WA 98195, USA. Email: ehunt@u.washington.edu.

Gerd Gigerenzer is a man with a mission, and a mission that has some point to it. He wants to show that people are rational decision makers, most of the time. To understand why Gigerenzer is on this mission we have to have a bit of background.

Modern ideas about rationality are very largely based on Von Neumann and Morgenstern's (1947) Theory of Games and Economic Behavior, a masterful analysis of decision making. Von Neumann and Morgenstern assumed that life consists of choices between lotteries, where if you take action A you will receive reward R1 with probability p1, reward R2 with probability p2, and so forth. This depiction can be applied very widely. You can choose to invest your money in one stock or another. That is clearly a choice between lotteries. While walking on the shady side of the street you can choose to cross to the sunny side or not, and there's a chance you might be hit by a car. You are trading "coolness for sure" against "sunny walk with probability p1, or hit by a car with probability 1-p1."

Von Neumann and Morgenstern postulated a small number of statements that they believed described rational behavior in such situations. To gain a flavor for these axioms, here are informal descriptions of three of them.

Transitivity. If A is preferred to B, and B is preferred to C, then A is preferred to C. Example: if you prefer chocolate to vanilla, and vanilla to strawberry, you should prefer chocolate to strawberry.

Irrelevance of unpreferred alternatives. Suppose that a person is offered a choice between rewards A and B, and prefers B. If the choice is repeated with the addition of a new reward, C, the person should choose either B, the original choice, or C, the new one. Suppose that you are offered a choice between chocolate and vanilla, and choose chocolate. Then the waiter says "Oh, we also have strawberry." It would be rational to remain with chocolate, or to choose strawberry, but it would not be rational to switch from chocolate to vanilla just because strawberry was present. To give a final (but not exhaustive) example of the Von Neumann and
Morgenstern axioms, Every lottery has its price: Suppose that you prefer A to C, and you are offered a lottery in which you win A with probability p and C with probability (1-p) (written ((A,p),(C,1-p))). Then there is some reward, B, where you prefer A to B and B to C, such that you are indifferent between playing the lottery or simply receiving B. Indeed, that is the way a lottery works; there is a price that you will pay for a ticket. The argument works the other way, too. To go back to an earlier example, suppose you prefer walking in the sun to walking in the shade, and walking in the shade to being hit by a car. If the traffic is dense enough so that the probability of being hit is high enough, then you will stay in the shade instead of crossing to the sunny side.

Von Neumann and Morgenstern proved that such a rational decision maker can be described as assigning a utility to each reward, u(A), u(B), etc. (which need not be monetary value, even if the rewards are money), that the utility function is unique up to a linear transformation, and that the utility of the lottery ((A,p),(B,1-p)) is

(1) u(((A,p),(B,1-p))) = p·u(A) + (1-p)·u(B).

In words, the utility of a lottery is equal to its expected value in utility units. When offered a choice between two lotteries a rational person (as defined by von Neumann and Morgenstern) should choose the lottery with the highest expected utility. There is a hidden assumption here: the probability measure, p, has to conform to the definition of a probability measure. Notice that I did not say that the measure the decision maker is using has to be an accurate measure in the world. That is, I could be rational in the von Neumann sense if I used the wrong probabilities, providing that my subjective probability estimates were all non-negative, summed to one, and otherwise followed the rules of the probability calculus.
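Equation (1) is easy to mechanize. The following sketch is my illustration, not from the book; the utility numbers for the sunny-walk example are invented:

```python
def expected_utility(lottery, u):
    """Equation (1) generalized: lottery is a list of (reward, probability)
    pairs, and u maps a reward to its utility."""
    probs = [p for _, p in lottery]
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to one"
    return sum(p * u(r) for r, p in lottery)

# Hypothetical utilities for the walking example (my numbers).
u = {"sunny walk": 1.0, "shady walk": 0.6, "hit by car": 0.0}.get

shade = [("shady walk", 1.0)]                         # the sure thing
cross = [("sunny walk", 0.99), ("hit by car", 0.01)]  # the lottery

# A von Neumann-Morgenstern decision maker picks the higher expected utility.
choice = max([shade, cross], key=lambda lot: expected_utility(lot, u))
```

With these particular numbers crossing wins (0.99 vs. 0.6 utility units); raise the probability of being hit and the preference reverses, exactly as the dense-traffic argument in the text predicts.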
This distinction, which Von Neumann and Morgenstern did not stress (nor does Gigerenzer), leads to two further definitions, which are mine but are needed to discuss Gigerenzer's work. I will call a person rational if he or she conforms to the Von Neumann and Morgenstern model using some probability measure. I will call a decision maker informed if the decision maker's probability assignments are identical to those that operate in the real world, i.e. the decision maker knows the real odds. Finally, a decision maker is adaptive to a particular environment if the decisions he or she makes result in the highest possible rewards. The relation between these concepts is important. A decision maker would be adaptive in any environment in which he or she was both rational and informed. However, as Gigerenzer argues in detail, there are other ways to be adaptive.

Von Neumann and Morgenstern's analysis has had a huge effect on theoretical economics. In psychology it served as a starting point for a descriptive theory of decision making, a program that was first spearheaded by Ward Edwards, and
Evolutionary Psychology ISSN 1474-7049 Volume 1. 2003.
then by Daniel Kahneman, Amos Tversky, and many related decision theorists. (Kahneman received the Nobel Prize for his work in 2002.) The general conclusion behind the psychological research is that people do not act as rational decision makers, for two reasons. First, instead of analyzing decisions as choices among lotteries, people use a variety of short cuts in reasoning (heuristics) that can lead them into inconsistent decision making, and to error in estimating probabilities. Second, people's decision making choices can be altered by describing a situation to emphasize gains or losses. To illustrate, would you be more likely to buy from Store A, which advertised "Discount for cash," or Store B, which advertised "Charge for credit"?

This description of psychological research can be generalized beyond the study of decision making. Descriptive studies of deduction and logical reasoning also start by setting a normative standard for rational behavior, in this case the laws of logic, and then studying how human behavior differs from this ideal. The typical decision theory study presents people with a decision or probability estimation problem, identifies a strategy that people use, and shows that this strategy is incompatible with rational decision making, inconsistent with the rules of the probability calculus, or both. The experimental strategy can be illustrated by the much used Linda problem. Experimental participants (usually university students) are presented with a description of Linda as a young woman who, in college, was neat, precise, good with mathematical arguments, and also a political activist on the extreme liberal side. Participants are then asked which is more likely: that (a) Linda is a feminist, (b) Linda is an accountant, or (c) Linda is a feminist and an accountant. A substantial number of participants choose answer (c).
Such a selection violates one of the axioms of the probability calculus, which states that for any two events, a and b,

(2) p(a•b) = p(a)p(b|a) = p(b)p(a|b),

implying that

(3) p(a•b) ≤ Min(p(a), p(b)).

People who choose alternative (c) are said to have committed the conjunction fallacy. Why might someone commit the conjunction fallacy? The usual argument is that instead of answering the question by recourse to the probability calculus, people apply a representativeness strategy. They ask "Is Linda's description typical of a person who might be an accountant and a feminist?" More formally, let E be the evidence (here the description of Linda) and H be the hypothesis
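Inequality (3) follows from (2) because p(b|a) and p(a|b) are each at most one. A small numeric check (my sketch) makes the point that no joint distribution can violate it:

```python
import random

random.seed(1)

# A random joint distribution over the four outcomes of two binary events.
w = [random.random() for _ in range(4)]
total = sum(w)
p_ab, p_a_notb, p_nota_b, p_neither = [x / total for x in w]

p_a = p_ab + p_a_notb  # marginal p(a)
p_b = p_ab + p_nota_b  # marginal p(b)

# Equation (3): the conjunction can never beat either conjunct, so
# "feminist and accountant" cannot be more probable than "feminist" alone.
assert p_ab <= min(p_a, p_b)
```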
(here, that she is an accountant (Ha) or a feminist (Hf)). People applying the representativeness heuristic are said to respond "accountant and feminist" because

(4) p(E|Ha•Hf) > Max(p(E|Ha), p(E|Hf)).

In doing so they are answering the wrong question, for the question required evaluating statements of the form p(H|E), not p(E|H). Analogs of the Linda problem can be developed for many other settings, e.g. as a problem in personnel selection.

Representativeness can also lead to a rather different type of error, called base rate neglect. Imagine a physician whose patient presents with a headache and extreme nausea. This is typical of West Nile fever, a rather rare disease, and slightly atypical (because of the headache) of common food poisoning. The physician would be said to have committed the representativeness error if he or she diagnosed West Nile fever. As in the case of the Linda problem, the physician has confused typicality, defined as p(symptoms|disease), with diagnosticity, p(disease|symptoms). Diagnosticity requires a consideration of both the typicality of the symptom, given the disease, and the a priori probability, before the symptom was observed, of possible diseases.

Suppose that prior to observing a piece of evidence it is possible to assign prior probabilities to possible hypotheses. In the physician's case these probabilities would be the relative frequencies of food poisoning and West Nile fever in the relevant population. Let Hi be the ith possible hypothesis, and let p(Hi) be its probability. (This is the base rate of Hi.) Now, let E be the observed evidence, and let p(E|Hi) be the probability that a particular piece of evidence would be observed, given that hypothesis Hi is, in fact, correct. What we seek is p(Hi|E), the a posteriori probability of each hypothesis after the evidence has been observed.
The necessary reasoning is encapsulated in Bayes' theorem, an 18th century theorem on evidence evaluation that is a consequence of the definition of a probability calculus. Bayes' theorem is

(5) p(Hi|E) = p(E|Hi)p(Hi) / Σk p(E|Hk)p(Hk).

Neglect of base rates is often illustrated by the cab problem: In a certain city 80% of the cabs are painted blue and 20% painted green. An accident occurs, and a witness to the accident states that the cab involved was green. Experiments show that under the conditions of visibility at the accident site the witness is able to identify the color of a cab correctly 80% of the time. What is the probability that the cab was green?
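The cab problem can be worked directly through equation (5); a sketch (my illustration, not from the review):

```python
def posterior(priors, likelihoods):
    """Bayes' theorem, equation (5): dicts keyed by hypothesis."""
    z = sum(priors[h] * likelihoods[h] for h in priors)
    return {h: priors[h] * likelihoods[h] / z for h in priors}

priors = {"green": 0.20, "blue": 0.80}       # base rates of cab colors
likelihoods = {"green": 0.80, "blue": 0.20}  # p(witness says "green" | actual color)

p = posterior(priors, likelihoods)
# p["green"] = (.2)(.8) / ((.2)(.8) + (.8)(.2)) = .16/.32 = 0.5
```

The witness's 80% accuracy and the 20% base rate exactly cancel, which is why intuition overshoots.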
Most people who attempt this problem give answers in the .8 to .6 range. However, the probability according to Bayes' theorem is only .5. As I have indicated, it is easy to construct a medical diagnostic problem analogous to the cab problem. Similar confusions occur, even when the participants are physicians.

The next fallacy to be discussed is the overconfidence fallacy. Suppose that an experimental participant is asked a series of questions, such as "Is a baseball heavier than a tennis ball?" or "Who made George Washington's false teeth?" In addition to answering the question the participant is asked to indicate the probability that his/her answer was correct. (I suspect that most American readers would assign a high probability of being correct to their answer for the first question, and a low probability for the second question.) Call this the confidence level. At the end of answering the questions we would have a set of questions with confidence level .8, .7, .6, etc. If a person has an accurate calibration of his/her knowledge we would expect 80% of the questions assigned a confidence level of .8 to be answered correctly, 70% of the questions assigned a confidence level of .7 to be correct, and so forth. In fact, this is not what happens. The frequency of correct answers for a set of questions at confidence level x (0 < x < 1) is almost always below the confidence level. This is said to indicate overconfidence. Clearly, if overconfidence is characteristic of human performance we are likely to be a bit more rash than we should be.

The fallacies just cited are believed to operate in the world outside of the laboratory. Gigerenzer cites a particularly egregious problem due to base rate neglect: counseling heterosexual, non-drug-using individuals who, on a routine screening, test positive for human immunodeficiency virus (HIV).
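To see how a .99 hit rate can coexist with a .50 posterior, consider a back-of-the-envelope calculation. The prevalence and false-positive figures below are my assumptions, chosen only to reproduce the contrast Gigerenzer reports for low-risk individuals:

```python
prevalence = 0.0001   # assumed: 1 in 10,000 low-risk individuals infected
sensitivity = 0.99    # p(positive test | HIV)
false_pos = 0.0001    # assumed false-positive rate of the screening test

# Bayes' theorem: p(HIV | positive test)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos
p_hiv_given_positive = prevalence * sensitivity / p_positive
# roughly 0.50: about half of the low-risk people who test positive are not
# infected, because true positives are about as rare as false positives
```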
Suppose that the counselor who reports this test to the affected individual confuses the probability that the test is positive, given that a person has HIV (p(positive test|HIV)), with what the patient wants to know, p(HIV|positive test). Gigerenzer cites figures, based on German data for low-risk individuals, in which the first number is .99 but the second is only .50. Field surveys show that many health professionals do make this confusion, and as a result transmit a terrifying message to their clients.

Findings like this are the reason that Kahneman, Tversky, and many others have concluded that people are often irrational and uninformed, in the sense defined above, and that as a result they are not adaptive in the normal world. Gigerenzer believes that this conclusion is wrong, wrong, wrong. In fact, he offers three arguments against modern decision theory, one for each of my wrongs. He then generalizes his argument by presenting two overarching reasons why decision theorists have gotten into what Gigerenzer perceives as a mess. I present and comment on each of the wrongs, and then deal with the overarching reasons.

Wrong 1. Virtually all the conclusions of modern decision theory are based upon laboratory experiments rather than observations of behavior in natural
environments. This can be extremely misleading. Gigerenzer offers an extensive paean to Egon Brunswik, who made the same point in the 1950s. The Brunswik-Gigerenzer argument is that the heuristics people use (e.g. representativeness) are very close to optimal in the natural environment, because they exploit regularities in that environment. The strategies only look bad in the laboratory because the controlled laboratory setting destroys these regularities. Broadening the argument somewhat, Gigerenzer makes the valid point that a cognitive strategy should be judged by its usefulness in a natural environment (adaptability) rather than by its conformity to a formal standard of rationality.

Gigerenzer illustrates his point by considering some experiments on knowledge of cities. Suppose that an American student is asked "Does New York have a larger population than Los Angeles?" The student will probably be able to answer this directly, because US education contains numerous references to New York as the largest US city. But what about a comparison between St. Louis, Missouri, and Toledo, Ohio? The student is unlikely to know the size of either city on an explicit basis. On the other hand, as Gigerenzer observes, the student will know things about these cities that are correlated with size. For one, St. Louis will simply be more familiar, for (outside of Ohio!) the student is likely to have encountered the name St. Louis more often than Toledo. St. Louis is associated with more cultural icons (the "St. Louis Blues," a jazz theme) and sporting events (the St. Louis Cardinals, a major league baseball team). I doubt that anyone ever played the "Toledo Blues," and the Toledo baseball team (the Mud Hens) is definitely in the minor leagues. Accordingly, it is reasonable to decide, on the basis of sheer familiarity and correlated cues, that St. Louis is larger than Toledo.
More generally, if you recall facts about two cities that are correlated with size, and stop when the preponderance of evidence indicates that one city is larger than another, you will be right more often than not. The strategy Gigerenzer proposes falls into a class of decision procedures referred to as decision making by lexicographic ordering. It is one of a variety of strategies designed to get a reasonable number of decisions right without committing oneself to extensive computation on any one of them. The technique is called satisficing, as opposed to optimizing. Shortly after Von Neumann and Morgenstern's publication, Herbert Simon (1957), in work for which he received the Nobel Prize (!), argued that satisficing is more descriptive of natural decision making than is optimizing. And like Gigerenzer, Simon stressed that satisficing could lead to adaptive behavior in the world outside of the laboratory.

For those who dislike the mathematical argument, I offer the following example. The time is the 12th century, and the place is England. Two groups of travelers are going through Sherwood Forest, on separate paths. Each group consists of a few rich people and a larger number of ordinary individuals. The resident bandit of Sherwood Forest, Robin Hood, has only enough Merry Men to
rob one of the groups. Which one should Robin pick? In order to find an optimal solution Robin Hood should estimate the wealth of all individuals in each group, and then accost the wealthiest group. But that's a lot of work. It is easier to estimate the wealth of the one or two richest individuals in each group, and attack the group that appears to have the fattest cats. This solution may be right often enough so that it is satisfactory. In Simon's terms, Robin Hood satisfices rather than optimizes.

To put the argument mathematically, consider two alternative rewards, X and Y, each of which consists of a bundle of N comparable subrewards, {xi} and {yi}. Let these rewards be considered in order, i.e. the decision maker first evaluates x1 and y1, then x2 and y2, and so forth. A value maximizer will compute the total rewards for each alternative, choosing X over Y if and only if Σi=1..N xi > Σi=1..N yi. A satisficer will either consider only the first M, where M < N, subrewards, deciding for X if Σi=1..M xi > Σi=1..M yi, or will keep a running sum of the differences, Dm = Σi=1..m (xi - yi), and decide for X (or Y) if the absolute value of Dm reaches a threshold of satisfaction.

It is fairly easy to show that satisficing can lead to irrational decision making as conventionally defined. Robin Hood's decision could be wrong, because the group with the wealthiest one or two individuals might not have the greatest total wealth. More generally, the value of the satisficing strategy depends upon the structure of the environment. Satisficing works to the extent that the relative values of the attributes considered predict the relative values of the attributes not considered. In Robin Hood's case, he and his band will attack the right group of travelers if the group with the greatest average wealth also contains the wealthiest individual. Gigerenzer argues that, in fact, the environment is structured in such a way that conditions like this do exist. He offers an interesting experimental example.
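The maximizer-satisficer contrast can be put in a few lines of code (my construction; the wealth figures are invented to mirror the Robin Hood story):

```python
def maximizer_prefers_x(xs, ys):
    """Compare full totals: choose X iff sum(x_i) > sum(y_i)."""
    return sum(xs) > sum(ys)

def satisficer_prefers_x(xs, ys, threshold):
    """Accumulate differences D and stop once |D| reaches the threshold."""
    d = 0.0
    for x, y in zip(xs, ys):
        d += x - y
        if abs(d) >= threshold:
            return d > 0
    return d > 0  # threshold never reached: fall back to the full comparison

# Group X holds the single richest traveler; group Y has more total wealth.
x_wealth = [100, 5, 5, 5]    # total 115
y_wealth = [60, 40, 30, 10]  # total 140

fat_cat_choice = satisficer_prefers_x(x_wealth, y_wealth, threshold=30)
full_tally_choice = maximizer_prefers_x(x_wealth, y_wealth)
```

The satisficer stops after the first comparison (100 vs. 60) and robs group X; the maximizer tallies everything and correctly picks Y, which is exactly the kind of error the next paragraph in the text describes.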
German students were asked to judge the relative size of US cities (e.g. "Is St. Louis larger than Toledo?"). The German students, in general, knew very little about each city. Nevertheless, their judgments were nearly as accurate as those of US students, who (presumably) knew more about each city. Gigerenzer claimed that this was because the environment is so tightly structured that good guesses can be made on the basis of impoverished information. In his terms, fast and frugal algorithms work because things are so highly correlated. One of the simplest is just to ask "Given any two cities, have I heard of one and not the other?" If so, the city that has been heard of is likely to be the bigger city, providing that the cities are chosen either randomly or exhaustively from the set of US (or German) cities that exceed a certain size. On the other hand, it is possible to design an experiment in which selected cities are compared that will fool this fast and frugal algorithm. Las Vegas, Nevada, is well known as a gambling resort and Boston as a historical site. Indianapolis is not as well known, but (as of 2000) it was about 40% larger than either Las Vegas
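The "have I heard of one and not the other?" rule, usually called the recognition heuristic, is trivial to state as code. In this sketch the recognition set belongs to a hypothetical student (my invention), which is also what lets a selected comparison fool it:

```python
recognized = {"New York", "Los Angeles", "St. Louis", "Las Vegas", "Boston"}

def guess_larger(city_a, city_b):
    """If exactly one city is recognized, guess that it is the larger;
    if both or neither are recognized, the heuristic is silent."""
    a_known, b_known = city_a in recognized, city_b in recognized
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return None

guess_larger("St. Louis", "Toledo")        # "St. Louis" -- correct
guess_larger("Las Vegas", "Indianapolis")  # "Las Vegas" -- wrong, by design
```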
or Boston.

Gigerenzer also objects to studies showing that people make errors in logical reasoning. Many laboratory studies have shown that people tend to believe that if a implies b, then b implies a. A famous and much studied example is the Wason card selection task. A participant is first told that a set of cards was to be printed according to the rule "If there is a vowel on one side there must be an odd number on the other." In logical terms, (vowel) implies (odd number). The participant is then shown four cards with one face displayed, as in:

A  4  B  7

and is asked which cards have to be turned over to see if the rule is being followed. The correct answer is A and 4. However, a substantial number of people also turn over the 7, erroneously thinking that if there is an odd number on one side, then the other side must have a vowel.

Gigerenzer objects to this sort of experiment on the grounds that it has no ecological validity. He points out (as have others before him, notably Cheng and Holyoak (1985)) that the card selection task is easy if it is presented as a task in detecting whether or not people are doing things they are not permitted to do. Suppose that you are a liquor-law inspector, enforcing the law "No drinking is permitted unless you are twenty-one." This is logically equivalent to "Drinking implies person is over twenty-one." Which of these people must be further investigated to see if the law is being followed?

A young woman, apparently a college student, drinking beer.
An obvious teenager with an unidentified drink.
A young man, also a college student, with a Coca-Cola.
An elderly professor with an unidentified drink.

This problem is quite easy. Nevertheless, it is formally identical to the card selection task, for "drinking alcohol" maps to "vowel" and "over 21" to "odd number." More generally, Gigerenzer argues that people are quite able to reason logically about social situations, such as those just described.
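The logic of the selection task can be checked by enumeration (my sketch): a card must be turned over exactly when its hidden side could falsify "vowel implies odd."

```python
VOWELS = set("AEIOU")

def must_turn(visible):
    """True iff the card's hidden side could violate the rule
    'if there is a vowel on one side, there is an odd number on the other'."""
    if visible.isalpha():
        return visible in VOWELS  # a vowel: the hidden number might be even
    return int(visible) % 2 == 0  # an even number: the hidden letter might be a vowel

cards = ["A", "4", "B", "7"]
to_turn = [c for c in cards if must_turn(c)]  # ["A", "4"]; the 7 proves nothing
```

Turning the 7 is the classic error: whatever letter is on its other side, the rule "vowel implies odd" cannot be violated, because the rule says nothing about what must back an odd number.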
Therefore poor performance in abstract tasks, such as Wason's card selection task, may considerably underestimate people's capability. Gigerenzer's more general argument is that experimenters have drawn erroneous conclusions about the irrationality of various fast and frugal algorithms because they conduct studies that are like the Las Vegas-Boston-Indianapolis comparison: intentionally designed to illustrate fallacious reasoning. In general, I agree. Experimental designs that balance variables do destroy the natural structure that would be found by representative sampling (Brunswik's
point), and Gigerenzer has a point when he says that decision theorists have often chosen situations designed to expose errors in human reasoning, regardless of how representative these situations are in practice.

Wrong 2. The experiments used to demonstrate faulty reasoning about probability and statistics are based upon an erroneous interpretation of probability. When the experiments are redesigned to correct this error, erroneous reasoning virtually disappears.

Consider the Linda and the cab problems. Gigerenzer criticizes them on two grounds. First, all information is presented to the participant. In the normal case people acquire frequency information directly from their experience. Furthermore, there is good reason to believe that frequencies derived from experience are automatically encoded. Ecological validity is lost when frequencies are presented as facts, rather than being experienced. Second, in both the Linda and the cab problems (and their analogs) the participant is asked to reason about the probability of a single event, Linda's being an accountant or the cab's having a certain color, without being told, explicitly, that the event was randomly sampled from some defined universe.

Both of these objections refer to experimental details. Gigerenzer and his colleagues have conducted experiments that show, quite convincingly, that in situations in which people acquire statistical information directly, fallacious reasoning disappears. His results certainly limit the generalizations that have been drawn from the research he attacks. Whether or not this demonstrates lack of ecological validity, though, is a trickier question. People are frequently asked to reason about statistical issues in situations where they have not, themselves, observed the relevant frequencies. Consultations with physicians about alternative medical treatments are a good case in point. We are also often asked to use probabilistic information to deal with patently non-random events.
This leads us to consider Gigerenzer's third objection to the research on probabilistic reasoning.

Wrong 3. Gigerenzer claims that virtually all American students of decision making, including Kahneman and Tversky (both Israelis, but who did most of their work in the US), simply do not understand probability.

Gigerenzer is a strong proponent of the frequentist view of probability, in which the probability of an event is equal to the relative frequency with which the event would appear in an infinite sequence of random selections from the relevant universe of possible events. Thus the probability of rolling 7 in dice is the relative frequency with which a 7 would be rolled in a (hypothetical) infinite set of throws of a pair of dice. This view is either implicit or explicit in many texts on applied statistics, and is natural in the context in which it is usually presented. It becomes problematical when we consider single events that are not drawn from any clearly defined universe. What is the probability that there will be a terrorist attack on a United States
city in the next five years? Or that the Seattle Mariners baseball team will win the baseball World Series in 2010? These questions sound meaningful, but neither event can reasonably be thought of as a random sample from a defined universe of possible events. Many statistical texts, and certainly many experiments on probability estimation, simply ignore this point. But it is important. What is the probability that Linda is a feminist accountant? What is the probability that I am correct in believing that Vienna is a larger city than Prague? In both cases the statements are either true or false. Probability would apply only if I knew that Linda had been chosen, randomly, from some population with subpopulations of feminists and accountants, or if the question about Vienna and Prague had been drawn, randomly, from some larger set of comparisons that I had made.

Gigerenzer concludes that when people are asked to judge probabilities in situations like these they rightly refuse to adopt a frequentist interpretation. Instead they interpret probability to mean the more vaguely defined "Is this statement to be believed?" Therefore their answers need not conform to the probability calculus, and they cannot be said to be behaving irrationally. He buttresses his case with a clever series of experiments in which the base rate, overconfidence, and conjunction fallacies disappear (i.e. people's judgments conform to the laws of probability) when a frequentist interpretation is clearly appropriate. For instance, overconfidence appears if a person is asked to give a judgment of the probability of a correct answer each time a question is asked. But suppose that instead of being asked "What is the probability that the answer to this question is correct?" the participant is asked "Of the last 10 questions that you answered, how many do you think you answered correctly?" Gigerenzer's results show that, at least in the city size judgment situation, frequentist responses are well calibrated.
This finding has important practical implications. Gigerenzer reports that German AIDS counselors often present risk information in the confusing probabilistic form, in which p(Positive test|Disease) is confused with p(Disease| Positive test). Indeed, because the relevant information was presented to the counselors in terms of probabilities, they may not understand the distinction themselves. Gigerenzer then points out that there is a simple way in which people can behave in accordance with Bayes theorem. As this is important I will first present the strategy in a medical setting, and then generalize. Let D be the statement A patient has the disease of interest and let S be the statement The patient displays the symptom of interest. (Symptom here includes a positive result of a diagnostic test.) Suppose that a physician keeps track of the number of his or her patients who display the symptom and either do or do not have the disease. Let these figures be N(D|S) and N(~D|S). Suppose a new patient comes in, who in fact displays the symptom. What is the probability that the patient has the disease? Taking a frequentist view of probability, this is
(6) p(D|S) = N(D|S) / (N(D|S) + N(~D|S)),

which incorporates base rates and is in agreement with Bayes' theorem.

Gigerenzer builds on this finding in two ways. First he points out that if people (and other animals) act this way they will be informed, in the sense defined above, and thus prepared to act adaptively. This is so, but Gigerenzer offers no substantive attempt to work his observation into actual findings and theories on the evolution of behavior. Second, Gigerenzer cites the psychological literature showing that people are capable of recording the frequency of outcomes of different sorts, and that they do so in an apparently automatic fashion. This argument links Gigerenzer's views to a well established experimental literature.

In fact, Gigerenzer does not go far enough. Both Gigerenzer's work and most other studies assume that people should treat information about relative frequencies as if they were observing a succession of random samples from a distribution that is stationary over time. A good deal of research has shown that in this situation people behave suboptimally, because they are more sensitive than they should be to short-term phenomena, such as a run of several heads in a row in a coin tossing game, or several baskets in a row (the "hot hand") in a game of basketball. Excessive sensitivity to recent runs of events has been taken as evidence of irrationality, and would be non-adaptive if the assumption of stationarity can be defended. On the other hand, if the generating parameters are changing over time it is both adaptive and rational to give more weight to the most recent observations. This would certainly be the case in many medical situations. Think of patients who present with fever and muscular aches. The probability of influenza goes up in the winter and down in the summer; the probability of the (much rarer) West Nile fever goes up in summer and down in winter.
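It is worth verifying that the count-based rule in equation (6) really does agree with Bayes' theorem, equation (5). In this sketch the patient counts are invented for illustration:

```python
n_d_s = 8       # N(D|S): symptomatic patients who had the disease
n_notd_s = 92   # N(~D|S): symptomatic patients who did not
n_total = 1000  # all patients seen (assumed)
n_d = 10        # patients with the disease overall (assumed)

# Equation (6): just divide the counts.
p_counts = n_d_s / (n_d_s + n_notd_s)

# Equation (5), with every probability read off the same counts.
p_d = n_d / n_total
p_s_given_d = n_d_s / n_d
p_s_given_notd = n_notd_s / (n_total - n_d)
p_bayes = (p_d * p_s_given_d) / (p_d * p_s_given_d + (1 - p_d) * p_s_given_notd)

# Both routes give 0.08: the frequency format builds the base rate in for free.
```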
Although Gigerenzer's main argument is with theories of decision making, at various places in his book he broadens the attack to include arguments about the very way that theories are generated in modern social sciences. He points out, as have many others, that scientific theories (especially in the social sciences) reflect the social situation in which scientists operate and, interestingly, the technologies that they use. Two of the technologies that concern him are so closely allied that I will consider them as one. They are the use of computer models and the pervasive acceptance of formal models of rationality, such as the Von Neumann model and Bayesian reasoning, as the appropriate description of ideal behavior. The reason that the two are mixed up is that, according to Gigerenzer, when we program a computer model of cognition we start with the rational solution and then distort the model from that solution in order to make it more descriptive. Gigerenzer (once again echoing many others) also objects to