- Speaker #0
Welcome to the Be Good podcast, where we explore the application of behavioral economics for good in order to nudge better business and better lives.
- Speaker #1
Hi, and welcome to this episode of Be Good, brought to you by BVA Nudge Consulting, a global consultancy specializing in the application of behavioral science for successful behavioral change. Every month, we get to speak with a leader in the field of behavioral science, psychology, and neuroscience in order to get to know more about them, their work, and its application to emerging issues. My name is Eric Singler, Managing Director of the BVA family and CEO of BVA Nudge Consulting, and with me is my colleague Richard Bordenave, Chief Behavioral Sciences Officer at PRS IN VIVO. Hi Richard.
- Speaker #2
Hi Eric, I'm excited to be joining you for this episode, and I'm delighted to be introducing our guest, Professor Gerd Gigerenzer. Professor Gigerenzer is a giant in the world of academic research on human behavior and decision making. He is director of the Harding Center for Risk Literacy at the University of Potsdam, director emeritus at the Max Planck Institute for Human Development, and a partner at Simply Rational, the Decision Institute. He is also vice president of the European Research Council and a former professor of psychology at the University of Chicago. A few of Professor Gigerenzer's awards include the American Association for the Advancement of Science Prize for the best article in the behavioral sciences, the Association of American Publishers Prize for the best book in the social and behavioral sciences, the German Psychology Award, and the Communicator Award of the German Research Foundation. Professor Gigerenzer is the author of multiple books on heuristics and decision-making, which have been translated into more than 20 languages, including the book we are discussing today, Smart Management: How Simple Heuristics Help Leaders Make Good Decisions in an Uncertain World. I'm very happy to welcome you, Professor, to our Be Good podcast.
- Speaker #0
Oh, thank you for having me here again. I'm looking forward to having another insightful conversation with you.
- Speaker #1
Professor Gigerenzer, thank you so much again for being with us today for this episode of Be Good. Before talking about your work on smart management, we would like to know a little more, if possible, about you and your amazing career. Can you tell us first about how you came to be interested in human behavior in general, and maybe specifically about your interest in decision-making processes?
- Speaker #0
I had an earlier career as a musician, and that is how I financed my studies. When I did my PhD, I had to decide whether to stay on the stage and play, that was entertainment music, mostly jazz, Dixieland, soul, or risk an academic career. That was a decision. How do you make such a decision? At that point I had been playing music on stage for 14 years. It was the safe option: I knew how it worked, and I earned much more money than as an assistant professor. On the other side, I thought, okay, is this what you want to do for the rest of your life? In the end, I took the risk. For me it was a risky decision, because I couldn't know whether I would ever make it to a professorship at a good university. And that kind of important life decision inspired me to look closer at how we actually make decisions.
- Speaker #1
Could you share with us any mentors who had a particularly strong influence on you? Are there any researchers or other people who have played an influential role in your professional career?
- Speaker #0
The people who intellectually influenced me, that was certainly Herbert Simon, with respect to his acknowledgment of uncertainty, as opposed to a world of calculable risk where you can optimize. That's his famous concept of satisficing. But also a German-American psychologist, Egon Brunswik, who is less well known. He emphasized that to understand behavior, just as Simon did, we need to look at the world, the structure of the world, not just inside the mind at traits or risk preferences and risk aversion and such things. During my life, I benefited very much from the research group at the Max Planck Institute. That was around 30 or 35 researchers, from graduate students to what in the US system would be associate professors, and for many years we had an open culture where ideas could be discussed, where nobody had anxieties about asking, what's the evidence for that point? At the same time, the group was a big family, and we still meet every year; I'm just coming back from Barcelona, the last meeting. Much of what I learned and discovered came out of many people discussing all the time. The social aspect of ideas is very important.
- Speaker #2
Yes. So, Professor Gigerenzer, as I mentioned earlier, your recent book, Smart Management: How Simple Heuristics Help Leaders Make Good Decisions in an Uncertain World, which has just been published, is co-authored with Jochen Reb and Shenghua Luan, and it will be at the center of the conversation today. But before we move into its content, can you just tell us what the inspiration was behind it? Why write for managers? What was the idea behind the book?
- Speaker #0
Management is an ideal topic for how to make decisions under uncertainty. Uncertainty means that you do not know the complete set of events that might happen in the future, nor their consequences. That's management; that's most of life, as opposed to what Jimmie Savage called a small world, where you know all the future events and their consequences and their probabilities. So management is not a lottery. This work came out of the earlier work of the ABC Research Group; ABC stands for Adaptive Behavior and Cognition. Adaptive means that you need to tune your heuristics to the problem at hand: there is no single hammer that works for everything. My two co-authors were part of that research group, and they are now in Beijing and Singapore. So we wrote this book together. We wanted to do it more in person, but there was COVID-19; still, we managed to meet often enough. I think this book brings very concrete examples, very concrete heuristics, and discusses when it is a good thing to apply them and when it is not. It provides an alternative to the usual curricula in business schools, where you learn: how should you make good decisions? Expected utility maximization. How should you not make them? Cognitive illusions, errors. But then you don't know what to do, because you can't maximize in the real world, in a VUCA world, and if you only point out what's going wrong, you don't know how to do it right. That's the blind spot this book fills.
- Speaker #2
Thank you. So for managers, what would be the one key learning or benefit in trusting intuition?
- Speaker #0
First, the book teaches a few principles. Take uncertainty seriously: often, uncertainty is reduced to risk, at least in most economic models, but also by many behavioral economists who assume that this is the right way to proceed. That is right in a world of known risks, but it cannot be immediately applied to the real, VUCA world. Then, take heuristics seriously, which means being a little bit more humble and accepting that in much of the real world you can't optimize; maximizing expected utility is just one version of optimization. So what do you really do? The book shows there are many ways to satisfice, for instance many ways to hire a person, to decide between applicants, and to think about that. Learn the adaptive toolbox of heuristics, and learn to select from it wisely. Here you need experience, while in the traditional approach of expected utility maximization you don't need experience, you don't need to learn; you just do a calculation.
- Speaker #1
Professor, I would like to start with a key concept from your book. And first of all, with the concept of simple heuristics. Could you explain what you mean by a simple heuristic or what you call smart heuristics?
- Speaker #0
Let's take Harry Markowitz, who won the Nobel Prize in Economics for an optimization method, known as the optimal portfolio, for the question: you have N assets, how do you invest in them? It's known as the mean-variance portfolio. When Harry Markowitz made his own investments for the time of his retirement, we might assume he used his Nobel Prize-winning optimization method. He did not. He used a simple heuristic. In that case, the heuristic is called 1/N, where N is the number of assets, and it means: divide equally. So if you have two options, then 50-50; three, a third, a third, a third, and so on. That's a heuristic. The optimization portfolio is not a heuristic; it requires extensive data, extensive calculation, and estimation. Studies, by the way, show that in many investment situations 1/N makes more money, measured by the Sharpe ratio and other traditional criteria, than the Markowitz portfolio, and many modern versions cannot systematically do better. So this is an example of a heuristic. And many people use the same heuristic for other things. For instance, parents with two or more children try to divide their love and time equally; that's 1/N. And the heuristic carries with it, as one can see here, a sense of fairness: an equal division. The point is that one needs to study how well heuristics do, for instance in accuracy. What is often called naive diversification can do very well under uncertainty.
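To make the contrast concrete, here is a minimal Python sketch of the two approaches mentioned above. The 1/N rule needs no data at all, while the mean-variance weights require estimating a mean vector and covariance matrix, which is where estimation error enters. The simulated return history, the unconstrained Markowitz formula used here, and the asset names are illustrative assumptions, not taken from the book.

```python
import numpy as np

def one_over_n(assets):
    """1/N heuristic: divide the budget equally over all N assets."""
    n = len(assets)
    return {a: 1.0 / n for a in assets}

def mean_variance_weights(returns):
    """Unconstrained mean-variance weights (inverse covariance times mean vector,
    then normalized). Requires estimating both the means and the covariances,
    so the weights inherit all the noise in those estimates."""
    mu = returns.mean(axis=0)                # estimated mean returns
    sigma = np.cov(returns, rowvar=False)    # estimated covariance matrix
    raw = np.linalg.solve(sigma, mu)         # solve sigma * w = mu
    return raw / raw.sum()                   # normalize to sum to 1

# Toy example: three assets, 60 months of simulated return history
rng = np.random.default_rng(0)
history = rng.normal(0.01, 0.05, size=(60, 3))
print(one_over_n(["A", "B", "C"]))           # {'A': 0.333..., 'B': 0.333..., 'C': 0.333...}
print(mean_variance_weights(history))        # weights depend on noisy estimates
```

The point of the comparison is the one Gigerenzer makes: 1/N estimates nothing, which is exactly why it can be robust when the return history is short and noisy.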
- Speaker #1
Could you give us some concrete examples?
- Speaker #0
Here, I'll give you an example of three heuristics. Let's start with Elon Musk. When Elon Musk was young and Tesla was young, he reported that he ran the hiring procedure and relied on a single criterion: does the person have an exceptional ability? If no, not hired. If yes, hired. This is an extreme heuristic, because it looks at just one variable, one feature. Now, you might think that it is a bit irrational; at least it would look that way from the heuristics-and-biases program, as if he had cognitive limitations or something like that. But no, one needs to look at this and study it. In this case, there is one variable that is strongly correlated with a number of other variables. For instance, if someone has an exceptional ability, even if this person is an excellent musician, which is not necessarily important for Tesla, that means the person is likely able to concentrate, to stay on a problem, to sweat, to persevere, to work with others, like in an orchestra. So finding an excellent variable is often the real question, and it brings many others along with it. This is a heuristic from a broader class, the one-clever-cue heuristics: find one important cue. Now, let's move on to a second example: Jeff Bezos, Amazon. He reported that when Amazon was small, he did the hiring, but he did it slightly differently from Musk. He used a different heuristic that we call a fast-and-frugal tree. Let me explain. It looks at most at three variables, not just one. Interestingly, the first one was the same as Musk's: does the person have an exceptional ability? If no, no hire. If yes, that's not enough; for Bezos, a second question is asked, namely, can I admire this person? That is an unusual question, but Bezos said, if I admire a person, I will learn from that person, and that's important for him. So, if he thinks he cannot admire the person, no hire. If yes, a third and last question is asked: will the person improve the average quality of the unit he or she will be in? And that's a reasonable question, because by hiring in this way, you always improve the group. Only then is the person hired. This can be seen as a decision tree that is not complete: there is a possible exit at every point. Exceptional ability, yes or no? If yes, then admire, yes or no? If yes, then will the person raise the average? Note that these fast-and-frugal trees are not like a full tree, and there is an order to the questions. If you fail on the first one, the other two will not compensate. That's very different from most rational decision models, where you can always compensate.
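The two hiring heuristics described here can be written down almost literally. Below is a brief Python sketch of Musk's one-cue rule and Bezos's fast-and-frugal tree; the dictionary keys are invented stand-ins for the three questions, not field names from the book. Note how each question has an exit, and a "no" early on cannot be compensated for later.

```python
def musk_hire(candidate):
    """One-clever-cue heuristic: decide on a single variable."""
    return candidate["exceptional_ability"]

def bezos_hire(candidate):
    """Fast-and-frugal tree with at most three questions.
    Each question can exit with 'no hire'; only passing all three hires."""
    if not candidate["exceptional_ability"]:
        return False                              # exit 1: no exceptional ability
    if not candidate["i_admire_this_person"]:
        return False                              # exit 2: cannot admire the person
    return candidate["raises_average_of_unit"]    # exit 3: final question decides

candidate = {
    "exceptional_ability": True,
    "i_admire_this_person": True,
    "raises_average_of_unit": False,
}
print(musk_hire(candidate))   # True
print(bezos_hire(candidate))  # False: fails the last question, earlier answers cannot compensate
```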
- Speaker #1
A second concept that is fundamental in your thinking is ecological rationality. Could you define what ecological rationality is and why this concept is fundamental for making good decisions?
- Speaker #0
As you may see, my approach to decision-making is to look precisely at how a decision is being made. The standard concept of rationality is about consistency: whether preferences are transitive and obey the consistency axioms. Ecological rationality is a term that basically maps onto Simon's concept of bounded rationality, which is not what Kahneman called bounded rationality; that's also the reason why we use the term ecological rationality. It is a functional term: how good is this heuristic? To what degree is it adapted to the problem? Let me illustrate with the case of Elon Musk, where the hiring heuristic is just one variable: does this person have an exceptional ability? Now you can do some mathematics and ask whether adding more variables to this cue improves the decision. If you imagine, say, that the first variable has a certain validity in terms of a linear model, the second one has only half of that, and so on for the additional contributions, like beta weights, then you can prove that in this situation looking at more variables will not improve the decision. It can even make the decision worse, because you are accumulating more and more estimation error. So ecological rationality studies the conditions under which certain heuristics work and do not work. And this is a new discipline, because it has been assumed, and is still assumed by mainstream behavioral economists, that a heuristic is always second class and optimization is always better. But in a VUCA world you can't optimize, period; it is an illusion to think this way. You have to ask another question: there are many heuristics, which one of them will work in this situation? If you have exponentially decreasing weights, then you can just go with the best reason and ignore everything else. If that's not the case, if the weights of the variables are rather flat, then procedures like 1/N or tallying, which don't estimate weights but simply use unit weights, avoid error in estimation and can be better than regression models. These are effects that we call less-is-more. And less-is-more happens under uncertainty; it doesn't happen in the small world of risk models.
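One way to see the condition described here: if the cue weights are noncompensatory (each weight larger than the sum of all the weights after it), a lexicographic rule that decides on the first discriminating cue ranks options exactly as the full weighted sum would. A small illustrative check in Python, with invented weights and binary cue profiles; this is a sketch of the standard result, not a procedure from the book.

```python
from itertools import combinations

# Noncompensatory weights: each weight exceeds the sum of all later weights
weights = [1.0, 0.5, 0.25, 0.125]

def weighted_score(cues):
    """Full linear model: weighted sum over all cues."""
    return sum(w * c for w, c in zip(weights, cues))

def take_the_best(a, b):
    """Go through cues in order of validity; decide on the first cue that discriminates."""
    for ca, cb in zip(a, b):
        if ca != cb:
            return "a" if ca > cb else "b"
    return "tie"

# Compare the two rules on all pairs of binary cue profiles of length 4
profiles = [tuple((i >> k) & 1 for k in range(3, -1, -1)) for i in range(16)]
for a, b in combinations(profiles, 2):
    lexi = take_the_best(a, b)
    linear = "a" if weighted_score(a) > weighted_score(b) else "b"
    assert lexi == linear   # never fires: with these weights the two rules always agree
print("Take-the-best and the weighted linear model agree on all pairs.")
```

If the weights were flat instead, the first cue could be outvoted by the rest, and a tallying or unit-weight rule would be the better simple strategy, which is the ecological-rationality point in the passage above.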
- Speaker #1
The third concept that seems fundamental in your thinking is the adaptive toolbox. Again, could you explain this idea of the adaptive toolbox? How do you think humans make decisions using this toolbox?
- Speaker #0
There is an adaptive toolbox of heuristics, and they can be used consciously or unconsciously. Take the example of hiring. We have Musk's one-reason hiring rule and Jeff Bezos's fast-and-frugal tree with up to three reasons. Then you can add a few more. For instance, a number of heuristics are social, and a social heuristic for hiring would be word of mouth. Word of mouth is often used: you ask your own employees, do you know someone who would be a good person for that job? And there are studies showing that in a healthy company, word of mouth is hard to beat. You can see why: the person who recommends another person for the job feels responsible him- or herself. I will not recommend someone who works less than I do. So that's an example from the adaptive toolbox. Then you can classify the heuristics into broader classes, as we've just seen. One class is social heuristics, like imitation, which is very important for innovation in business. Much of innovation is copying and editing. It's not that Facebook invented everything itself; it copied, or Google copied. And these types of heuristics can be systematically studied and their ecological rationality investigated. That's much harder than saying you just maximize some utility.
- Speaker #2
This is one of the central elements of the book and, more generally, of your thinking, and you particularly highlight the underestimation of the importance of intuition in decision making. And just for the audience to know, you've also recently devoted another book to this topic, The Intelligence of Intuition. So one thing I would like to know is: what is your definition of intuition? What do you have in mind when you talk about intuition? Because I think you have a very specific definition of it.
- Speaker #0
Intuition is a form of unconscious intelligence. It is a feeling with three components. First, it's based on long experience; otherwise, there's no intuition. So there may be years of experience with the subject. Second, you sense very quickly what you should do or not do. And third, you cannot explain why you sense that you should do that or not. So intuition is not a sixth sense, not God's voice. It's also not something that women have and men do not; everybody has intuition who has worked for sufficient time on a certain topic where there is also feedback. You are right that in decision-making theory intuition is looked at with suspicion, but not in many other fields. Just think about mathematics or physics, where intuition is respected. Einstein said that the intuitive spirit is a gift and the rational spirit is its servant, and we have created a society that honors the servant and has forgotten the gift. Just an example: if you talk with the best chess players in the world, Judit Polgár or Magnus Carlsen, they emphasize that their excellent play is a mixture of intuition and deliberate learning. And this is the first important point: intuition is not opposed to deliberate thinking, as is still assumed in much of behavioral economics, where you have a System 1 and a System 2. No, they go hand in hand. Another example: a doctor who sees you often, and today the doctor thinks something is wrong with you but cannot explain what. That's an intuition: long experience, it comes quickly into consciousness, and the doctor cannot explain it. Then the doctor will move on and do diagnostics. There's no contradiction there. It's not an either-or where you should switch off all your intuition, as we are told. And that is an important insight: there is almost no area where you will do well without intuition, and, one can add, there is almost no area where you will do well without thinking. It's not a contradiction.
- Speaker #2
Interesting, because in management we sometimes have an ambiguous point of view, where managers are also invited to overcome their cognitive biases. So what's your thinking against these approaches? Can intuition actually be leveraged while avoiding pitfalls?
- Speaker #0
Yes, of course. I'm not going to criticize the heuristics-and-biases approach much here; I've done that many times. Just to be clear: many of the so-called biases, as we know today, aren't biases at all. If you ignore information, that may be something good or not; the question is ecological rationality. So when Harry Markowitz ignores his own optimizing portfolio, he may be right. When Elon Musk relies on just one variable, he may be right. It depends on the situation. So the heuristics-and-biases tradition needs an ecological perspective. It's not true that being consistent is always right, or that using a heuristic is always a mistake. Now, to intuition. I have worked with many large companies, and in studies with these companies you will find that the CEOs and top leaders say that 50% of their important professional decisions are, at the end, gut decisions, intuitions. You get this if you do an anonymous survey with them, or, in some cases, we use a person high up in the company who has the trust of everyone and can talk openly. So the same executives make about every other decision, at the end, intuitively, and the emphasis is on "at the end". It's not arbitrary. They sit on data, and there's too much data, and they don't know how reliable it is. In many situations the data gives you an answer, and in many others it does not. An intuitive decision is when, after everything the data can tell you, you feel that you should or should not do something. So, as I said, in each of the large companies I've worked with, and these are worldwide companies, about 50% of all decisions are, at the end, intuitive ones. And now the interesting point: the same executives would never admit that in public. There is fear. For an intuitive decision, you have to take responsibility, and we live in a society where fewer and fewer executives are willing to take responsibility. So what do they do? They have made an intuitive decision, but they can't announce it as one. One version is: they take a middle manager and ask him, you have a week or two, find me the reasons. That's a loss of time, intelligence, and resources. The more expensive version is that the company or the top management hires a consulting firm, which then wastes months finding reasons for the already-made decision. How often does that happen? I have worked with one of the largest worldwide consulting firms and asked a principal over lunch: would you be willing to tell me how often client contracts involve justifying decisions that have already been made? He said, Professor Gigerenzer, if you don't tell anyone my name, I will tell you: it's over 50%. I tell you this story to illustrate the anxiety about admitting gut decisions. They are made, but there is anxiety, together with the anxiety of taking responsibility, and then the waste of time and resources, only to pretend that the decision was made with no intuition at all. And this is the world we live in.
- Speaker #1
Sounds familiar somehow. We do research and we hear that very often. Now, there are still some major advantages to using intuition, sometimes over algorithms or complex systems. Could you highlight for managers why intuition can actually be a very good way to make decisions? What are the main advantages you would point to?
- Speaker #0
Yes. First, you don't waste money and you don't waste time. But you need courage to stand up, as a manager or as a politician. The honest version would be, in the cases I've described, not to hire a consulting firm and deceive yourself and everyone else, but to stand up and say: look, we, the board, have now spent a week on this decision. Should we buy this company or not? Should we move to Vietnam? Something big. We have looked at the data and at previous experience, and the data isn't clear. There's no point in going on with this. But someone has to take the responsibility, and I, as the CEO, have to do it. So, based on my own experience and on my intuition, I think we should not pursue this any further. That would be the honest version, and it would save the company lots of time. Many of the problems of being too slow inside the company, and of losing customers, come from the anxiety about making decisions and from postponing them. You can measure this; the technical term is defensive decision-making: you protect yourself and may even go with a decision that's not the best for the company. Defensive decision-making and the lack of a good error culture are what hurt many companies. You do not find it as much in family companies, because in a family it's your own money; for the CEO of a public company, it's not his or her own money, it's the company's money. And family businesses plan for the long term, for the next generation, not for the next quarterly report. So not taking intuition seriously costs companies enormous amounts of time and money.
- Speaker #1
Two very good reasons. Maybe I'm going to hand over to Eric, just conscious of time.
- Speaker #2
Yes, Professor. How would it be possible to change this situation, meaning to put intuition, as you define it, at the center of our decision-making processes?
- Speaker #0
There are a number of very concrete procedures I have used in my work with Simply Rational. I'll give you just two examples. Almost all big companies have problems with defensive decision-making and with wasting too much time. There are exceptions. For instance, the international airlines: companies like Lufthansa, with whom I have worked, have an excellent error culture in the cockpit, and that's why it's so safe to fly. A cockpit culture, though, doesn't mean that everything is okay in the rest of these companies. On the other hand, many hospitals don't have a good error culture, and that's one reason why so many people die in hospitals; it's not the only reason. This just illustrates that we live in a society where there is a pocket of excellent decision and error culture here, and none over there. So what do you do if you have a company where it's not there? That's the question. One heuristic is: find a good role model, and the best role model would be the CEO or people at the very top. An example: I worked with a large health provider that had the same problem, that decisions were too slow, too defensive, and nobody wanted to take responsibility. There was a new CEO. It was a she, and in my experience women often have less fear of making decisions. She assembled the entire top management and said: look, last year we made this decision. We were all for it. And it was wrong, as we know now. Now, let's discuss what I, who voted for the decision, did wrong. That sets a totally different signal for a different culture, for discussing errors openly. And then, of course, the managers see: oh, we can talk about that, we don't have to hide it. That's one example. A second example is to set signs that signal that the culture has changed; just talking about it doesn't help much. Here's one. Do you know the game Monopoly? You can land in jail there, and there is a card called the get-out-of-jail card. So we instructed the CEO to give a get-out-of-jail card to every manager with the instruction: if you take risks for the company and it goes wrong, hand in your card, no questions asked. And also, put the card on your desk so that you see it every day. That changes the entire situation. For instance, if someone has had the card lying on the desk for years, you start asking a different question. So these are signs and role models that can help change the culture into one that helps the organization, and where the decision makers don't have to hide.
- Speaker #2
There is, I think, a fundamental topic in your book, which is about helping leaders to create a smart decision culture, with what you call a positive VUCA culture, a heuristic culture, and an error culture. Could you summarize your recommendations for creating this smart decision culture in an organization?
- Speaker #0
I'll give you an example from my own experience as a director at a Max Planck Institute, with a research group of maybe 35 researchers, a dozen technical people, and a number of research assistants and secretaries and so on. How do you deal with such a group? The biggest problem, since it is an interdisciplinary group, is how to get them together. So one heuristic for setting up a research group is: have them all begin at the same time, within a week. That's important, because in my experience those who are hired a few months earlier tend to think of those who come later as younger siblings. Having everyone start together, and putting the administration into chaos, helps everyone to join and face this new situation. A second heuristic: every day at four o'clock, coffee and tea, with no obligations. And it's not a waste of time. In this half hour, or even more, trust develops. If people talk about something that has nothing to do with research, trust. If they talk about research, this is one of the most important places where new ideas are generated. I always went there myself; I never asked people to go. You just signal it; there's no point in forcing it. I know some have tried to imitate the model and forced everyone to attend. No, that's not the point. Another thing that is very important for any manager, as well as for a research director, is to have at least one contrarian in your group: a person who speaks up against the director, against the group consensus, but on a factual basis, with respect. Many politicians fail to see that. We know about famous politicians like Putin who, as far as I know, want to have people who clap. You don't want that. You want people who are willing to stand up for the facts and inform you and criticize you. I've written an entire paper about this, and in the book The Intelligence of Intuition you will find more of these rules. And importantly, you always need to adapt the rules. For instance, the first rule, have everyone start at the same time, is not a good rule six years later: you would do very badly if you replaced your whole group at the same time, because the culture that has evolved would die out. It's always going back and forth between facts, experience, and intuition. That's what leadership does. There is no single recipe.
- Speaker #1
Thank you, Professor. Maybe a bit of an opening now, because there's a hot topic currently with artificial intelligence, and I've read that you make a link between artificial intelligence and psychological intelligence, and that one can actually help the other. So can you develop that a little bit? Starting maybe from the theory of Herbert Simon or the development of artificial intelligence, how can AI be inspired by human psychology?
- Speaker #0
Yes. I started out from the distinction that Jimmie Savage made between small worlds and large worlds. In a small world, you can optimize; complex methods will work. And that's why so many decision theories study choices between gambles: that's a small world. But whether that translates into the real world is not clear. In a large world, you can't optimize, and that is the world managers have to deal with. In this situation, you need heuristics. The human mind evolved not to deal with well-defined gambles, but with uncertainty, and that's why we have this adaptive toolbox. Having said that, the question is: where are complex algorithms likely to be successful, and where not? That's what I call the difference between a stable world and an unstable world, and it corresponds to the small world of Savage and others versus the large world. The big successes of AI are in well-defined, stable worlds. A well-defined world is one like chess or Go, and a stable world is one where tomorrow is likely like yesterday. Unstable worlds include predicting the flu, predicting COVID-19, predicting human behavior, predicting recidivism. In these areas, the success of complex algorithms is simply not there; it is in well-defined problems, or in large language models, where there is language, a matrix of correlations, and that is fairly stable. Just to give an example: researchers at Princeton University asked the research community, the machine learning community, to predict the future of so-called fragile families, and they delivered what can be called big data, millions of data points for these families. Fragile families are usually families with only one parent. And this was real prediction of the future, not just out-of-sample testing, where, for instance, the child's grade point average at age 15 was predicted, or whether the mother would still have a job or a home at a certain time. They had, if I remember correctly, about 1,600 submissions, mostly with highly complicated algorithms. And the result was that most of these machine learning algorithms could not beat a very simple algorithm that just looked at three or four data points, or maybe just two, such as how the child performed six years ago. These are examples where machine learning, or complex algorithms in general, don't do very well, and we need to see this. In our studies, to come closer to management, there is a common problem: a company has a huge database and wants to know which of its customers are likely to make purchases again. How do you predict that? This is a situation of high uncertainty; there are so many factors. And there are two answers again: use complex models, or investigate. Now we're getting to psychological AI; that's the term I use. We don't start with logistic regressions or random forests or other machine learning methods and try which one does better. We looked at how experienced managers do it. And the answer is that most managers use a simple heuristic called the hiatus heuristic: if someone hasn't bought anything in the last nine months, they're out; otherwise, they're in. That's again one reason, like Elon Musk: one reason. And one-reason heuristics can be powerful under high uncertainty, not in a world where you know everything.
We have shown with 24 companies that this hiatus heuristic predicted future purchases better than the typical machine learning algorithms, in that case random forests and regularized logistic regression. The insight is the following: complex algorithms can work very well, and better than humans, in certain areas, the stable worlds, but not necessarily everywhere. And we can inform AI by studying how experts make decisions and then implementing that heuristic as an algorithm. The example I just gave is a very simple algorithm. You still have to estimate the threshold, whether it is nine months or six months, depending on the problem, but it gives you a different way of thinking. For instance, we have also shown that with the same kind of heuristic, just take the most recent data point, we could predict the spread of the flu better than Google Flu Trends. Remember, Google Flu Trends ran for eight years; it was shut down in 2015. The idea was that with big data you can predict flu-related doctor visits, which is a useful problem; it's what's called nowcasting: what is happening right now? And we have shown that a simple heuristic that we know humans use in situations of high uncertainty, namely take the most recent data point seriously and ignore the rest, could predict the spread of the flu better, and better every year, than Google Flu Trends and all its updates. The Google engineers updated it when something unexpected happened, like the swine flu, and it still couldn't keep up. With big data, you are like a big tanker: you can't steer around quickly. The recency heuristic, by contrast, can adapt to anything new. So psychological AI is the idea that, if the problem is one of uncertainty, you find out how experienced people solve it and with which heuristics, because a solution has to be sufficiently simple to be robust, and then you model this as an algorithm and compare your complex algorithms with the simple ones. Often you will find that you actually do better with the simple one. Psychological AI doesn't help you for playing chess or Go; that was Herbert Simon's one error, thinking it would help in those areas. But it helps you under uncertainty. And this has political consequences because, as you may know, the European Union has a Digital Data Act, and one of the issues is transparency, meaning understandability for the public, for instance in credit scoring. You want to know why you have a bad score; you might want to improve it, but it's a black box. I am working with the largest credit scorer in Germany, and we found that they are using highly complex algorithms, in which you can't even say that having more than two credit cards hurts you; it's only true in maybe 80 or 90% of the cases, because everything is linked with everything. So people cannot know what to do. And we found that if you delete all these interactions and reduce the algorithm to half a dozen or a few more variables, you do as well or better, and it's transparent. People can understand it, they can change their behavior, and you don't have to nudge them into something they don't understand. So I think that simplicity also has societal value and allows people to understand much better when they are being scored.
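The hiatus heuristic described above is essentially a one-line classifier. A minimal Python sketch follows; the field names, the date handling, and the month-to-day conversion are illustrative assumptions, while the nine-month threshold is the figure mentioned in the conversation and is the one part that would be tuned per problem.

```python
from datetime import date, timedelta

HIATUS = timedelta(days=9 * 30)   # roughly nine months; the threshold is the only tunable part

def is_active_customer(last_purchase: date, today: date) -> bool:
    """Hiatus heuristic: a customer with no purchase within the hiatus window
    is classified as inactive; everyone else as active. One cue, no model fitting,
    in contrast to a random forest or regularized logistic regression trained
    on the full purchase history."""
    return (today - last_purchase) <= HIATUS

print(is_active_customer(date(2024, 1, 10), date(2024, 6, 1)))   # True: bought five months ago
print(is_active_customer(date(2023, 1, 10), date(2024, 6, 1)))   # False: hiatus exceeded
```

The recency heuristic for nowcasting mentioned above has the same shape: take the most recent observation as the forecast and ignore the rest.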
- Speaker #1
And that was a very insightful conclusion. Thank you very much for your participation, Professor Gigerenzer. Is there maybe anything you'd like to leave our listeners with, perhaps where they can find out more about your work?
- Speaker #0
Well, you can start with the book about smart management, or you might read the book we mentioned, The Intelligence of Intuition, or some of my more scientific books, like Rationality for Mortals. I think the basic insights are these: take uncertainty seriously, and notice that the mainstream of behavioral economics still doesn't do that, otherwise it wouldn't declare everything that is simple an error. Second, aligned with that, take heuristics seriously. They are not second best in a VUCA world; they are the only thing we can do, and the question is a different one: which heuristic makes sense, which is smart and which is not smart in this application? And finally, have the courage to make decisions, to stand up for them, and to take responsibility. Be Good, a podcast by the BVA Nudge Unit.