
Instinct Can Beat Analytical Thinking

Intuition is a mode of consciousness wherein content is perceived by sudden, direct awareness. Intuition sees the whole of things, perceiving patterns and making connections. The intuitive mode is useful for creativity, problem solving, decision making, and all forms of discovery.

Read more from psychologist Gerd Gigerenzer, managing director of the Max Planck Institute for Human Development in Berlin:

INSTINCT CAN BEAT ANALYTICAL THINKING

Justin Fox - JUNE 20, 2014

Researchers have confronted us in recent years with example after example of how we humans get things wrong when it comes to making decisions. We misunderstand probability, we’re myopic, we pay attention to the wrong things, and we just generally mess up. This popular triumph of the “heuristics and biases” literature pioneered by psychologists Daniel Kahneman and Amos Tversky has made us aware of flaws that economics long glossed over, and led to interesting innovations in retirement planning and government policy.


It is not, however, the only lens through which to view decision-making. Psychologist Gerd Gigerenzer has spent his career focusing on the ways in which we get things right, or could at least learn to. In Gigerenzer’s view, using heuristics, rules of thumb, and other shortcuts often leads to better decisions than the models of “rational” decision-making developed by mathematicians and statisticians. At times this belief has led the managing director of the Max Planck Institute for Human Development in Berlin into pretty fierce debates with his intellectual opponents. It has also led to a growing body of fascinating research, and a growing library of books for lay readers, the latest of which, Risk Savvy: How to Make Good Decisions, is just out.


During a visit to HBR’s New York office, Gigerenzer discussed his work for an Ideacast podcast, which you can listen to here: instinct-can-beat-analytical-thinking

HBR: Most of us are used to hearing about how bad we are at making decisions under conditions of uncertainty, and how our intuitions often lead us astray. But that’s not entirely the direction your research has gone in, correct?


Gerd Gigerenzer: I always wonder why people want to hear how bad their own decisions are, or at least, how dumb everyone else is. That’s not my direction. I’m interested in helping people make better decisions, not in telling them that they have these cognitive illusions and are basically hopeless when it comes to risk.


HBR: But a lot of your research over the years has shown people making mistakes.


Just imagine: a few centuries ago, who would have thought that everyone would be able to read and write? Today, we need risk literacy. I believe that if we teach young people, children, the mathematics of uncertainty, statistical thinking, instead of only the mathematics of certainty – trigonometry, geometry, all beautiful things that most of us never need – then we can have a new society that is more able to deal with risk and uncertainty.


HBR: By teaching people how to deal with uncertainty, do you mean taking a statistics class, studying decision theory?


If you’re in a world where you can calculate the risk, then statistical thinking and logic are enough. If you go into a casino and play roulette, you can calculate how much you will lose in the long run. But most of our problems are about uncertainty. For instance, in the course of the financial crisis, it was said that banks play in the casino. If only that were true, because then they could calculate the risks. But they play in the real world of uncertainty, where we do not know all the alternatives or the consequences, and the risks are very hard to estimate because everything is dynamic: there are domino effects, surprises happen, all kinds of things happen.


HBR: Risk modeling in the banks grew out of probability theory.


Right, and that’s the reason why these models fail. We need statistical thinking for a world where we can calculate the risk, but in a world of uncertainty, we need more. We need rules of thumb called heuristics, and good intuitions. That distinction is not made in most of economics and most of the other cognitive sciences, and people believe that they can model or reduce all uncertainty to risk.


HBR: You tell a story that I guess is borrowed from Nassim Taleb, about a turkey. What’s the problem with the way that turkey approached risk management?


Assume you are a turkey and it’s the first day of your life. A man comes in and you fear, “He will kill me.” But he feeds you. The next day, he comes again and you fear, “He will kill me,” but he feeds you. The third day, the same thing. By any standard model, the probability that he will feed you and not kill you increases day by day, and on day 100, it is higher than ever before. And it’s the day before Thanksgiving, and you are dead meat. The turkey confused the world of uncertainty with one of calculated risk. And the turkey illusion is probably found not so often in turkeys, but mostly in people.
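To see the turkey’s arithmetic concretely: under Laplace’s rule of succession, one standard model for this kind of inference, the estimated probability of being fed tomorrow after n fed days is (n + 1) / (n + 2), and it only ever climbs. A minimal sketch in Python (the framing is ours, not Gigerenzer’s):

```python
# Laplace's rule of succession: after n consecutive days of being fed,
# the estimated probability of being fed again tomorrow is (n + 1) / (n + 2).
def p_fed_tomorrow(fed_days: int) -> float:
    return (fed_days + 1) / (fed_days + 2)

for day in (1, 10, 100):
    print(f"after day {day:>3}: P(fed tomorrow) = {p_fed_tomorrow(day):.3f}")
# after day   1: P(fed tomorrow) = 0.667
# after day  10: P(fed tomorrow) = 0.917
# after day 100: P(fed tomorrow) = 0.990  <- peak confidence, the day before Thanksgiving
```

The model is internally consistent; it simply presupposes a stable world, which is exactly what the turkey does not have.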


HBR: What kind of rule of thumb would help a person, or a turkey, in that sort of situation?


Let’s use people for that. For instance, the same thing happened with value-at-risk and the other standard models that rating agencies used before the crisis in 2008. The confidence increased year by year, and shortly before the crisis, it was at its highest. These types of models cannot predict any crisis, and have missed every one. They work when the world is stable. They’re like an airbag in your car that works all the time except when you have an accident.


So we need to move away from probability theory and investigate smart heuristics. I have a project with the Bank of England called “Simple Heuristics for a Safer World of Finance,” in which we study what kinds of simple heuristics could make the world safer. When Mervyn King was still the governor, I asked him which simple rules could help. Mervyn said: start with no leverage ratio above 10 to one. Most banks don’t like this idea, for obvious reasons. They can do their own value-at-risk calculations with internal models, and there is no way for the central banks to check that. But these kinds of simple rules are not as easy to game, because there are not so many parameters to estimate.


Here’s a general idea: In a big bank that needs to estimate maybe thousands of parameters to calculate its value-at-risk, the error introduced by these estimates is so big that you should make it simple. If you are in a small bank that doesn’t do big investments, you are in a much safer and more stable mode. And here, the complex calculations may actually pay. So, in general, if you are in an uncertain world, make it simple. If you are in a world that’s highly predictable, make it complex.
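For illustration, here is what such a simple rule might look like in code. This is a hypothetical sketch of the ten-to-one cap King suggested, not the Bank of England’s actual rule book; note that there is a single ratio and nothing to estimate with an internal model:

```python
# Hypothetical sketch of the simple leverage rule: flag any bank whose
# leverage (total assets / equity) exceeds 10 to 1. One number, no
# internal model, no parameters to estimate, and little room to game.
def violates_leverage_cap(total_assets: float, equity: float,
                          max_leverage: float = 10.0) -> bool:
    return total_assets / equity > max_leverage

# Illustrative balance sheets (made-up numbers, in billions).
banks = {"Bank A": (500.0, 60.0),   # leverage ~8.3x -> within the cap
         "Bank B": (900.0, 50.0)}   # leverage 18.0x -> flagged

for name, (assets, equity) in banks.items():
    status = "FLAGGED" if violates_leverage_cap(assets, equity) else "ok"
    print(f"{name}: {assets / equity:.1f}x leverage -> {status}")
```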


HBR: What about the role of intuition and gut feelings in all of this? Clearly, in business, that’s a big issue.


Gut feelings are tools for an uncertain world. They’re not caprice. They are not a sixth sense or God’s voice. They are based on lots of experience, an unconscious form of intelligence.


I’ve worked with large companies and asked decision makers how often they base an important professional decision on a gut feeling. In the companies I’ve worked with, which are large international companies, about 50% of all decisions are, in the end, gut decisions.


But the same managers would never admit this in public. There’s a fear of being held responsible if something goes wrong, so they have developed a few strategies to deal with this fear. One is to find reasons after the fact. A top manager may have a gut feeling, but then he asks an employee to spend the next two weeks finding facts, after which the decision is presented as a fact-based, big-data-based decision. That’s a waste of time, intelligence, and money. The more expensive version is to hire a consulting company, which will provide a 200-page document to justify the gut feeling. And then there is the most expensive version, namely defensive decision making. Here, a manager feels he should go with option A, but if something goes wrong, he can’t explain it, so that’s not good. So he recommends option B, a second- or third-best choice. Defensive decision making hurts the company and protects the decision maker. In the studies I’ve done with large companies, it happens in about a third to half of all important decisions. You can imagine how much these companies lose.


HBR: But there is a move in business toward using data more intelligently. There are exploding amounts of it in certain industries, and definitely in the pages of HBR, it’s all about “Gee, how do I automate more of these decisions?”


That’s a good strategy if you have a business in a very stable world. Big data has a long tradition in astronomy. For thousands of years, people have collected amazing data, and the heavenly bodies up there are fairly stable, relative to our short lifespans. But if you deal with an uncertain world, big data will provide an illusion of certainty. For instance, in Risk Savvy I analyze the predictions of the top investment banks worldwide on exchange rates. If you look at those, then you know that big data fails. In an uncertain world you need something else: good intuitions, smart heuristics. But most of economics is not yet prepared to admit that there could be another tool besides expected utility maximization.


HBR: You tell the story in your book of Harry Markowitz, who introduced expected utility maximization to the world of investing with modern portfolio theory. How does he actually choose his investments?


When Harry Markowitz made his own investments for the time after his retirement, he relied on a simple heuristic, a quite intuitive one: invest your money equally. If you have two options, 50-50; three, a third, a third, a third; and so on. It’s called “one over N,” where N is the number of options. We and others have studied how good one over N is. In most of the studies, one over N outperforms optimized Markowitz portfolios.
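As a sketch, the whole heuristic fits in a few lines; unlike a mean-variance portfolio, there is no covariance matrix to estimate, so there is no estimation error to compound (asset names and amounts below are made up):

```python
# The 1/N heuristic: split your capital equally across the N options.
def one_over_n(options: list[str], capital: float) -> dict[str, float]:
    share = capital / len(options)
    return {option: share for option in options}

print(one_over_n(["stocks", "bonds", "real estate"], 90_000))
# {'stocks': 30000.0, 'bonds': 30000.0, 'real estate': 30000.0}
```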


Can we identify the world in which a simple heuristic, one over N, is better than the entire optimization calculation? That’s what Reinhard Selten and I call the study of the ecological rationality of a heuristic. If the world is highly predictable, you have lots of data and only a few parameters to estimate, then do your complex models. But if the world is highly unpredictable and unstable, as in the stock market, you have many parameters to estimate and relatively little data. Then make it simple.


HBR: To use the taxonomy popularized in the last couple of years by Daniel Kahneman, it’s not about putting system two in charge, and keeping system one under control. It’s about figuring out which situations are best for each.


I have my own opinion about system one and system two, and if you want me to share …


HBR: Well, I know you and Kahneman have been debating these things for decades, so sure.


What are system one and system two? It’s a list of dichotomies: heuristic versus calculated rationality, unconscious versus conscious, error-prone versus always right, and so on. Usually, science starts with these vague dichotomies and works out a precise model. This is the only case I know where one progresses in the other direction. We have had, and still have, precise models of heuristics, like one over N. And at the same time, we have precise models for so-called rational decision making, which are quite different: Bayesian, Neyman-Pearson, and so on. What the system one, system two story does is lump all of these things into two black boxes, and it’s happy just saying it’s system one, it’s system two. It can predict nothing. It can explain almost everything after the fact. I do not consider this progress.


The alignment of heuristic with unconscious does not hold: every heuristic can be used consciously or unconsciously. The alignment of heuristic with error-prone does not hold either. So what we need is to go back to precise models and ask ourselves: when is one over N a good idea, and when not? System one, system two doesn’t even ask this. It assumes that heuristics are always bad, or always second best.


HBR: It seems like in leadership in business and elsewhere, you really are stuck or blessed with heuristics, because the whole idea is you’re pushing into unknown territory. For a leader, what are some of the key heuristics you’ve found over the years that seem to work?


There are heuristics that we can simulate and treat mathematically. But others are more like verbal recipes. The more uncertain the world is, the more you need to rely on verbal recipes.


There’s an entire class of heuristics that I call one-good-reason decision making. Assume you have a large company with a customer base of 100,000, and you want to avoid targeting those customers who will never buy from you. So, how do you predict which customers will buy, and which will not? According to standard marketing theory, it’s a complex problem, so you need a complex solution, for instance the Pareto/NBD (negative binomial distribution) model, which has four parameters to estimate and gives you, for each customer, the probability that he or she will make future purchases. The other vision is that it’s a complex problem in a world of uncertainty, and therefore you need to find a simple solution, because all these estimates will introduce too much error.


The hiatus heuristic is an example: if a customer has not bought for at least nine months, classify them as inactive; otherwise, as active. Now, you might say that relying on one good reason can never be better than relying on this reason plus many others and doing impressive mathematical computation. But that’s a big error in a world of uncertainty. Studies have shown that, for instance, at an airline, the simple heuristic predicted future customer behavior better than the complex Pareto/NBD model. The same holds for an apparel business.
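A minimal sketch of the hiatus heuristic as stated above (the nine-month cutoff is from the interview; the function and field names are our own invention):

```python
from datetime import date, timedelta

# Hiatus heuristic: a customer who has not bought for at least nine
# months is classified as inactive; otherwise active.
HIATUS = timedelta(days=9 * 30)  # roughly nine months

def is_active(last_purchase: date, today: date) -> bool:
    return today - last_purchase < HIATUS

today = date(2014, 6, 20)
print(is_active(date(2014, 3, 1), today))   # True: bought ~4 months ago
print(is_active(date(2013, 6, 1), today))   # False: over a year of silence
```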


HBR: Less was more.


Yeah, less is more. A number of studies have shown, for instance, that in language learning what works is limited memory and simple sentences. Parents do this intuitively; it’s called baby talk. And it works, and children learn. Then the sentences get a little more complex and the memory gets extended, and this is the way good language understanding develops.


The problem with the heuristics-and-biases people, including much of behavioral economics, is that they keep the standard models normative and think that whenever someone does something different, it must be a sign of cognitive limitations. That’s a big error, because in a world of uncertainty, these models are not normative. I mean, everyone should be able to understand that.


I hope that at some point the economists turn around and realize that their models are good for one class of situations, but not for another. And that psychologists and economists alike realize that there’s a mathematical study of heuristics: heuristics are not just words like availability or representativeness that explain everything post hoc, but precise models that can predict things. One over N, the recognition heuristic, and one-good-reason decision making can predict very well, and can be shown to be wrong in certain situations.


HBR: The recognition heuristic is a classic one. There’s been lots of stock market research done by people like Terry Odean at Berkeley about how people made bad decisions in investing because of the recognition heuristic. But you’ve done studies that found Americans were better than Germans at picking which of various pairs of German cities had the higher population, and Germans were better than Americans on the American cities, because the foreigners simply picked the cities they recognized.


The fun thing is that the very idea that a simple heuristic could do well arouses so much resistance. There were two German psychologists who said, okay, we believe that people rely on this heuristic, but it can’t be good. They argued that because city populations are a stable world, it might work there, but they selected a domain where they thought Germans would be really biased: predicting the results of all Wimbledon tennis matches in gentlemen’s singles.


They argued, first, that it’s a highly dynamic environment: the winner last time is no longer the winner this time. Second, they used Germans’ predictions, and German players were not doing very well at the time; this was the era after Becker. They had three gold standards: the two ATP rankings (one based on a moving year, the other on the calendar year) and the Wimbledon seeding. Then they needed people who were semi-ignorant, and the ideal group is one that has heard of half of the players and not heard of the other half. They found German amateur players in Berlin who had not even heard of half of these guys, and from their answers they made a recognition ranking.


What was the result? The first ATP ranking got 66% of the matches correct. The other ATP ranking got 68%, and the Wimbledon seedings 69%. The recognition heuristic correctly predicted 72% of the match results. That was not what they wanted to show. The study was repeated two years later, with basically the same result.
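For concreteness, a sketch of the recognition heuristic as a two-alternative forced choice (the player names and recognition set are invented): if exactly one player is recognized, predict that player; if both or neither are recognized, the heuristic is silent and you must guess or fall back on something else.

```python
import random

# Recognition heuristic for a two-alternative choice: if exactly one
# option is recognized, predict it; otherwise the heuristic is silent.
def predict_winner(a: str, b: str, recognized: set[str]) -> str:
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return random.choice([a, b])  # heuristic does not apply: guess

# A semi-ignorant amateur who has heard of only one of the two players.
recognized = {"Agassi"}
print(predict_winner("Agassi", "Unseeded Qualifier", recognized))  # Agassi
```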




To find the right tool for developing your intuition, please check out: intuition-ip.com/products


