BIASES, FALLACIES AND WORKSFORMEISM

TROUBLED BRAINS 

We have seen that when we want to better understand how the world works, what is going on and how our decisions affect it, a personal opinion based on experience or tradition is not the same as an idea supported by quality evidence. We also explored the role that our irrational beliefs can play in the unintentional creation of post-truth. Should we, then, trust our reason, our thinking? We wish we could, but the problem is that our brains don't think quite right. As before, we should not take this personally. That's just the way we are, and none of this is a personal attack. If we identify some of the most common problems in the way we think, perhaps we will be better prepared to keep them from contributing to the creation of post-truth. Knowing the stones that may trip us up helps us detect and avoid them. 

During World War II, aerial combat became essential, but keeping planes in the air amid enemy fire was no easy task. On each mission, you were almost as likely to be shot down by anti-aircraft fire as not. It became necessary to reinforce the exterior of aircraft so they could better withstand enemy attack. One obvious option was to cover the entire aircraft with metal plates, as if cladding it in armor. But that was impractical: such a reinforced aircraft would have been too heavy to take off. A compromise had to be found: a careful decision as to where the aircraft should be reinforced and where reinforcement was less necessary. To that end, during the war, researchers at the U.S. Center for Naval Analyses conducted a thorough study in which they looked at where the aircraft returning from missions were most damaged. Based on that information, they decided that the parts showing the highest density of bullet holes needed to be reinforced: parts of the wings, the belly of the plane and the place where the tail gunner was located. 

Abraham Wald was a Jewish mathematician born in the Austro-Hungarian Empire (in what is now Romania) who had fled Nazi persecution and gone into exile in the United States in 1938. His entire family, except for one brother, was murdered in Auschwitz. Wald was one of the mathematicians working on the Applied Mathematics Panel, which had been set up especially to solve mathematical and statistical problems connected with the war (yes, there was such a thing). He immediately noticed the error in the reasoning above. It is such a common mistake that surely all of us make it without realizing, and will continue to make it, in this situation or in others. His analysis prevented the implementation of the idea of reinforcing the areas where returning aircraft were most damaged, and showed the correct way to think about the problem. 

Based on the information about which areas of the aircraft had the highest concentration of bullet holes, Wald concluded that those were precisely the places that should not be reinforced. Why? Because the damage being analyzed was on aircraft that had returned from their missions: those were the least critically vulnerable places, the places where an aircraft could take a bullet and still not crash. In contrast, the places where no damage was observed were those where a hit brought the plane down; planes hit there never made it back to be counted. Wald convinced the military of this, and the planes were reinforced in the right places. In this way, many lives were saved and the Allies' chances of winning the war were improved. 

This common error is known as survivorship bias, and consists of evaluating what happens to those who "survived" a certain process without taking into account that, in reality, that process eliminates many along the way. Those who did not "survive" (literally or figuratively) are not considered in the analysis because they are invisible. They simply did not make it to the finish line, which is where the data is collected. Thus, failures are underestimated even when the right data is available and the right decision is intended. 
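To see survivorship bias in action, here is a minimal simulation in Python, with entirely made-up numbers (three aircraft sections and invented probabilities of being shot down; this illustrates the logic, not Wald's actual data). Hits land evenly across the plane, but if we only count the holes on planes that made it home, the most lethal section looks like the least damaged one:

import random

random.seed(0)
SECTIONS = ["engine", "fuselage", "wings"]
P_DOWNED = {"engine": 0.8, "fuselage": 0.1, "wings": 0.1}  # hypothetical lethality per hit

total_hits = {s: 0 for s in SECTIONS}
holes_on_returners = {s: 0 for s in SECTIONS}

for _ in range(10_000):  # 10,000 sorties, 3 hits each
    hits = [random.choice(SECTIONS) for _ in range(3)]
    for s in hits:
        total_hits[s] += 1
    # The plane returns only if it survives every hit it took.
    if all(random.random() > P_DOWNED[s] for s in hits):
        for s in hits:
            holes_on_returners[s] += 1

for s in SECTIONS:
    print(f"{s:8}  hits overall: {total_hits[s]:5}   holes on returning planes: {holes_on_returners[s]:5}")
# The engine takes as many hits as any other section, but shows the
# fewest holes among returners -- engine hits rarely make it back.

If we reinforced the sections with the most visible holes, we would be armoring exactly the places where a plane can afford to be hit.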

This bias is present every time someone highlights a success story and believes that, if they follow the same steps, the path will be the same. For example, a story like "Mary didn't finish high school and started a clothing business that earns a lot of money. I think I'm going to drop out and start a clothing business, too". Would this be a good decision? Do we know how many Marys dropped out of high school, started a clothing business and did very badly? Of course we don't. We don't know, because we don't see the ones who failed. 

If we pay attention for a day and look for examples of this, we may find many. The broke actors who lived in their cars until they were "discovered" by some agent, the entrepreneurs who started from nothing and managed to build business empires, the people who cured themselves of terrible diseases by simply eating healthy... These are all common examples of survivorship bias. Keith Richards has lived a life of excess and is still around, while more than one vegetarian who sleeps eight hours a day and does yoga will predecease him. 

Semmelweis saved not only those mothers and children at the Vienna Hospital, but also the many other people who, because of the new practice, did not get sick and never knew it was thanks to him. Bearing this in mind, if someone today tells us "I never wash my hands and I still don't get sick, so hand washing is not that important", we know that there may be biases in their thinking, such as survivorship bias. 

In the English series Luther, the eponymous detective tells the murderous Alice Morgan: "I will tell you this, Alice. You can revel in your brilliance for as long as you like, but people slip up. Happens time and time again." To which she replies: "Well, that's just faulty logic postulated on imperfect data collection. What if you only catch people who make mistakes? That would skew the figures, wouldn't it?"

To try to avoid this common mistake, we need to be vigilant and ask ourselves: do we have all the information we need, or only part of it? Can we access the information we are missing? It's easy to dismiss health advice by telling survivor anecdotes, isn't it? We will always find someone who sunbathes without sunscreen and doesn't get skin cancer. The problem is that we all like success stories, and we try to extract from them advice about how we should act. Quite possibly there is no reason why those particular cases were successful. Perhaps it was pure luck. 

This flawed thinking is just one example of the many mistakes we all make. Faced with the same situations, we all tend to think in the same ways. Every human being is different, special, unique, but our brains are built in a similar way: they are the result of the same evolutionary path and they work by the same basic rules. These flaws inherent to the functioning of our minds are known as cognitive biases and are, to a large extent, responsible for many of our wrong decisions and judgments. 

Already at the end of the 18th century, Antoine Lavoisier, the "father of modern chemistry," warned of the errors to which our thinking can lead us, long before there was any talk of cognitive bias. In the preface to his Elementary Treatise on Chemistry (1789), he said that, to avoid these errors, we must "keep only the facts, which are the data of nature and cannot deceive us; seek the truth only in the natural chain of experiments and observations, in the manner of mathematicians who arrive at the solution of a problem by means of the simple arrangement of data, and by reducing reasoning to operations so simple, to assumptions so brief, that they never lose sight of the evidence that serves as a guide." 

We make mistakes in our thinking so often that we can predict they will occur: in some people or situations more than in others, but they will always be present. Some of these mistakes are systematic and very difficult to detect and avoid. The trouble is that, by the very nature of cognitive biases, we can never be sure whether or not we are falling into one. What we can do is try to pay more attention. 

Let's look at other examples of very frequent problems in our way of thinking that are especially likely to create unintentional post-truth.

WORKS FOR ME 

Hydrolyzed collagen is in fashion. It is sold in pharmacies as a dietary supplement, in small bottles or as a soluble powder. Collagen is a protein that we all make, and in large quantities, especially in the skin, joints and bones. We mentioned it when we told the story of Lind and his fight against scurvy. As we age, the amount we make gradually decreases, so the idea is that consuming it as a supplement would help our skin stay smoother and more elastic. Does it, though? 

Behind the seemingly innocent claim that "if we eat collagen, we will have collagen" there is a hypothesis: that the body uses the proteins it consumes just as they come. Science always starts this way, with interesting, imaginative, crazy hypotheses. But, unlike quackery, it does not stop there. Every idea of this kind must be put to the test. 

In the case of collagen, we should be wary of this hypothesis, since our bodies never directly use the proteins they consume. They separate them into their component parts, called amino acids, and recombine these amino acids to form new proteins. It is similar to how the letters of the Latin alphabet are used to put together all the words of our language, or those of Italian or English. If our body consumed poems instead of proteins, it would disassemble the words into letters and use them to write new words of its own. 

The amino acids we need to build our own proteins are absorbed by the body from food, which contains proteins made by other living beings (cows, pigs, fish, fruits, vegetables, etc.). Our digestive system is in charge of breaking down those proteins until the individual amino acids are released (this, precisely, is what it means to hydrolyze a protein). Each of our cells then receives those amino acids and uses them as raw material to make the proteins it needs. 
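As a playful sketch of the poem analogy above (in Python, and purely illustrative: words stand for proteins, letters for amino acids), digestion breaks a "protein" into a pool of parts, and the cell builds from that pool whatever it needs, with no memory of where the parts came from:

from collections import Counter

def digest(protein: str) -> Counter:
    """Hydrolyze a 'protein' (word) into its pool of 'amino acids' (letters)."""
    return Counter(protein)

def can_build(target: str, pool: Counter) -> bool:
    """Can a cell assemble `target` from the available pool of letters?"""
    return all(pool[letter] >= n for letter, n in Counter(target).items())

pool = digest("collagen")           # eating collagen yields a pool of parts...
print(can_build("collagen", pool))  # True: the parts COULD remake collagen,
print(can_build("gallon", pool))    # True: ...but they can just as well
                                    # become something else entirely.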

Going back to hydrolyzed collagen: does it really make our skin smoother? The media often discusses what celebrities do to stay beautiful and healthy. This gives both the media and the celebrities more visibility, and being a celebrity is mostly about visibility. Everybody wins, right? Well, not the rest of us: there is never any discussion about whether what celebrities do is really effective. The hydrolyzed collagen sold in pharmacies is basically collagen protein produced by some animal and then separated into amino acids by a chemical process. All this does is save our digestive system a step. Moreover, there is already a food that is essentially pure collagen, as a complete protein and not previously hydrolyzed: gelatin. 

Regardless of how we consume collagen, whether hydrolyzed or not, whether from gelatin or from a steak, the mistake is thinking that these amino acids will be used in our body to make collagen again. Not necessarily! They will be used for whatever a cell "decides" according to its metabolic needs of the moment and its function. 

Those who consume hydrolyzed collagen do so with the intention of strengthening their bones or rejuvenating their skin, as if they were incorporating collagen as a protein into their own tissues, or as if collagen reassembled itself in their cells from the same amino acids that were once part of collagen. This is not how cells work. Behind the idea that consuming collagen gives us more collagen lies a form of magical thinking. 

Even if we understand that this makes no sense, consuming hydrolyzed collagen might still have some beneficial effect. Studies have been carried out on the subject. Not many, and not conclusive ones, but we can say that, so far, there is no scientific evidence that it works.

Two typical problems arise. The first is that this last sentence is not powerful enough. Perhaps I should say "hydrolyzed collagen is useless!". But that would disrespect the evidence, as it would signal a certainty that does not exist: a kind of post-truth. However, when you add up what we know about how cells and the digestive system work, and the absence of effect on the skin in some studies, there is quite a lot of certainty that it does not work. This brings us back, once again, to a scale. We don't know everything, but that doesn't mean we don't know anything. On this subject, the needle that moves according to the evidence is much closer to "we know everything" than to "we know nothing".

Let's rethink what it would take to convince us that hydrolyzed collagen supplements really are effective in improving the appearance of the skin. If there had been a double-blind RCT or, even better, several of them, or a meta-analysis (as we saw in Chapter III) that clearly showed that the skin of people in the group treated with hydrolyzed collagen improved compared with a control group receiving the corresponding placebo, we could be much, much more confident in this claim. But, for now, we don't have much more than anecdotal evidence, or "it works for me". If we have to believe that something works without reliable evidence, then it has not been clearly demonstrated. 
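To make the idea concrete, here is a minimal sketch in Python of the comparison at the heart of such a trial, with invented data. Both groups are deliberately drawn from the same distribution, i.e., a world in which the supplement does nothing; a permutation test then shows how easily chance alone produces a gap between group means:

import random
import statistics

random.seed(1)

# Invented "skin improvement" scores; both groups share one distribution,
# so by construction the supplement has no real effect here.
collagen = [random.gauss(5.0, 2.0) for _ in range(50)]
placebo = [random.gauss(5.0, 2.0) for _ in range(50)]
observed_gap = statistics.mean(collagen) - statistics.mean(placebo)

# Permutation test: shuffle the group labels many times and count how
# often chance produces a gap at least as large as the observed one.
pooled = collagen + placebo
extreme = 0
TRIALS = 10_000
for _ in range(TRIALS):
    random.shuffle(pooled)
    gap = statistics.mean(pooled[:50]) - statistics.mean(pooled[50:])
    if abs(gap) >= abs(observed_gap):
        extreme += 1

print(f"observed gap between means: {observed_gap:.2f}")
print(f"fraction of shuffles at least this extreme: {extreme / TRIALS:.2f}")
# A large fraction means the gap is perfectly compatible with chance:
# no evidence of an effect.

A real trial runs this logic in reverse: only if the observed gap were very unlikely under chance alone would we start to gain confidence that the treatment works.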

The second problem goes beyond whether or not hydrolyzed collagen works: there is an interesting phenomenon related to what we were saying earlier about how our brains think. Those who use it are often convinced that it works. They recommend it to others, who then join them. 

When our experience or our common sense tells us something and the evidence tells us the opposite, we should not be so surprised. Most likely, what we thought was merely the effect of biases or irrational beliefs, as we discussed in the previous chapter. If we want the truth and we do not agree with some statement based on scientific evidence, we can challenge it, but with more scientific evidence, using the same methodology, and not with a mere opinion. It is not enough to simply say "I don't agree". Not because that would be unfair, or unethical, or because of any moral question or duty. It is not enough because reality will hit us on the head and, in general, we don't want that to happen. 

The "it works for me" approach is widespread, especially regarding health or nutrition. The same bias that confirmed to Linus Pauling that vitamin C "worked" "confirms" that hydrolyzed collagen "works". Richard Feynman used to say that we are most easily deceived by our own selves. Since this is Feynman's opinion on introspection, and he was not a scholar of that subject, at this point we might doubt that opinion, although it is a great time to mention that abundant research on the subject supports that position. 

In order to curb post-truth, we need to know the mechanisms by which claims are validated (which is what we discussed in the first section of the book), but that is not enough. We also need to identify the problems that can make it difficult to convince us. So far, we have talked about beliefs as the framework of values that we hold, or the emotions that a subject arouses in us. We are now adding errors of thought such as itworksformeism. 

We all think, but we don't tend to ask ourselves much about how we think. Itworksformeism is another sign that we are thinking badly, because it leads us into various cognitive biases and fallacious arguments. (A cognitive bias causes flaws in reasoning; a fallacy is an invalid argument that appears to be valid. The two concepts often go hand in hand, since many fallacies work because they exploit biases we have.)

Several phenomena underlie this scenario. There is a bias as frequent as it is hard to detect: the tendency we all have to see things in a way that supports what we previously believed. This is known as confirmation bias, and it involves selecting, from a set of facts, those that support our prior position and excluding the rest. This is not something we do on purpose, but rather a recurrent, unconscious mistake: we reject facts that make us uncomfortable because they do not agree with what we think. 

Confirmation bias is rampant: I see it everywhere. Although perhaps I think I see it everywhere and that is not true. It could very well be that what is "making me believe" that it is everywhere is precisely my confirmation bias, which makes me maximize the times I do identify it, or think I do, and minimize the times I do not.

Cherry-picking facts that support our prior position is known as the fallacy of incomplete evidence. We do not do this on purpose. If a person who was previously convinced that vitamin C cures colds recovers quickly from one, she will attribute it to the vitamin C supplement, and ignore the many times she took it and the cold did not go away. Since this is not a conscious process, she will most likely not remember those other instances at all. If she does remember them, she may say something like "that cold was much stronger than normal" or "I had just changed the brand of vitamin C; it must have been of worse quality", that is, ad hoc explanations, tailored to each situation. We cherry-pick every time we choose, from a series of data, what to take into account and what to dismiss just because it does not match our expectations; every time we highlight a particular anecdote to make a decision. This behavior inadvertently fosters the emergence of post-truth: we believe that the truth lies in the partial or incorrect evidence we select, and we fail to evaluate the totality of the evidence and note where the consensus lies. 
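A toy model in Python makes the mechanism visible (all probabilities invented for illustration): suppose colds get better quickly 30% of the time no matter what we take, but we remember the apparent successes far more often than the failures:

import random

random.seed(2)
P_QUICK_RECOVERY = 0.3  # hypothetical base rate, with or without vitamin C
P_REMEMBER_HIT = 0.9    # "vitamin C worked!" -- very memorable
P_REMEMBER_MISS = 0.2   # explained away ad hoc, then forgotten

hits = misses = remembered_hits = remembered_misses = 0
for _ in range(200):  # 200 colds, all "treated" with vitamin C
    if random.random() < P_QUICK_RECOVERY:
        hits += 1
        remembered_hits += random.random() < P_REMEMBER_HIT
    else:
        misses += 1
        remembered_misses += random.random() < P_REMEMBER_MISS

print(f"actual quick recoveries:     {hits} of {hits + misses}")
print(f"remembered quick recoveries: {remembered_hits} of {remembered_hits + remembered_misses}")
# The remembered record makes vitamin C look far more effective than
# the complete record does.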

We often give weight to what we say by citing data. Done correctly, incorporating the whole body of evidence, this is essential: it strengthens our argument. But cherry-picking is dangerous, because it makes what we say seem valid when it is not necessarily so. When someone tells us something and shows us data, let us ask ourselves: what data are they not showing us? 

The fault in all this is not a lack of education. It happens to all of us, in our daily lives and in major decisions. When a teacher changes the way he explains a subject and concludes that the change has made his students learn better, maybe it is not so: confirmation bias might be at work. When a CEO makes a decision for his company based on an anecdotal case where he did something similar and it worked, he too could be subject to confirmation bias. 

For these reasons, personal experience and itworksformeism do not give us good information. We should not rely on ourselves too much. 

CORRELATION AND CAUSALITY

Imagine that, a long time ago, there was a drought. Without water there was no game to hunt or fruit to gather, and survival depended on rainfall. Perhaps someone, just to do something, started dancing, and suddenly it rained. Oral tradition then kept the custom of dancing to attract rain. What happened? From two events that seemed related because one happened after the other, it was assumed that the first caused the second, and some danced their rain dance every time they needed water. When the rain came, all was well. When it did not, perhaps there was something wrong in the way they had danced. Of course, those people knew nothing, back then, about confirmation bias or the ad hoc generation of explanations. Will the rain dance go out of style, now that we do know? 

When something changes, and then something else also changes, either in the same direction (both increase, or both decrease) or in the opposite direction (one increases and the other decreases), we speak of correlation. In the example of the dance and the rain, we see a correlation: one of the phenomena occurred first, and the other, shortly thereafter. This is undeniable. The problem arises when our mind makes us draw a hasty and, in this case, erroneous conclusion. People were convinced that the correlation implied that the dance had caused the rain. That is, they considered that there was a causal relationship between the two events. Their minds created an illusion of causality, just like when, in an episode of the series The Wire, they say, "He gave you the rain dance. A guy says if you pay him, he can make it rain. You pay him. And when it rains, he gets the credit. If it doesn't rain, he finds reasons for you to pay him more." 

It is quite possible that a correlation is indeed due to one thing causing another (if I put my hand in the fire, I get burned, and yes, the former caused the latter). When we told the story of Semmelweis, we saw that he observed a correlation between not washing hands and the deaths of women. But how do we know whether or not a particular correlation implies causation? Experiments allow us to determine this. Since we change only one variable and, in principle, leave everything else unchanged, if we observe a change, we can attribute it to the variable we changed. In these cases, the correlation occurs because the mechanism behind it is indeed one of cause and consequence. But sometimes it is not, and this is where the problems arise. Often the two events occur one after the other coincidentally (the dance and the rain), or perhaps they are both consequences of a third event that we did not take into account.

The autocorrect function of my cell phone will consistently change causal to casual. Be aware, in case the same thing happens to you with your cell phone, or in your life. 

One study showed that taller people tend to earn more money than shorter people (see Persico, N., Postlewaite, A. and Silverman, D. (2004). The effect of adolescent experience on labor market outcomes: the case of height, Journal of Political Economy, 112(5): 1019-1053). In particular, the association was made between height at age 16, i.e., before entering the labor market, and what that person earned as an adult. Does this mean that taller people are hired preferentially, or for better jobs? In other words, does being taller make you earn more money as an adult? Some may intuitively think that a taller person has better self-esteem or is more attractive to employers, and that this could explain the phenomenon. But, later, other researchers looked at this differently: as early as age 3, taller children perform better on cognitive tests (see Case, A. and Paxson, C. (2006). Stature and status: height, ability, and labor market outcomes, NBER Working Paper no. 12466). In very simplified everyday language, taller children tend to be smarter. We are not saying that being tall causes intelligence, nor that being intelligent causes being tall, but only that there is a correlation between the two variables. The more intelligent, or better qualified, can access higher paying jobs. This does not prove a causal relationship either, but at least it gives us an alternative interpretation: it is no longer height itself that leads to earning more as an adult; rather, being tall is related to being better prepared, and this, in turn, to earning more as an adult. 
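The confounder story can be sketched in a few lines of Python (all numbers invented): a hidden third variable boosts both height and earnings, and a clear correlation appears even though height plays no causal role in the model:

import random
import statistics

random.seed(3)
heights, earnings = [], []
for _ in range(1_000):
    background = random.gauss(0, 1)  # hidden third variable (e.g., childhood environment)
    heights.append(170 + 5 * background + random.gauss(0, 4))            # cm
    earnings.append(30_000 + 8_000 * background + random.gauss(0, 6_000))

# Pearson correlation (statistics.correlation requires Python 3.10+).
r = statistics.correlation(heights, earnings)
print(f"correlation between height and earnings: r = {r:.2f}")
# A solidly positive r, yet in this model height never enters the
# earnings formula at all: the third variable does all the work.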

That is: if there is causation, there is generally correlation (this applies almost always, not in all cases). But correlation does not mean causation (among other things, because correlation has no direction, and, although umbrellas open more on rainy days, no one in their right mind would say that they cause rain). 

This is the correlation fallacy: considering that correlation implies causation. Another name given to this particular fallacy of attributing cause and consequence to two things that occur one after the other is the Latin expression post hoc ergo propter hoc ("after this, therefore, because of this"). There is another variant of the same principle in which the two events occur at the same time, and not one after the other. In this case, the error of attributing causality to the relation between the two is known as cum hoc ergo propter hoc (cum is "with", just as post is "after", so it is "with this, therefore, because of this"). We can observe, too, that the ability to give names to fallacies correlates with literacy in Latin. 

But these are fairly obvious examples. Let's consider this one: we take a tablet of a medicine and, shortly afterwards, we feel better. Therefore, the tablet made us feel better. Let us now look at this same reasoning within the framework of the correlation fallacy. That pill we took may have cured us, but it may not have. We could very well be attributing a causal relationship to two events that merely occur one shortly after the other. Although perhaps, in this case, the correlation is due to a causal relationship: perhaps the tablet was effective. How do we know? First, let's remember that itworksformeism can mislead us. When, in the previous chapter, we talked about alternative medical therapies, we mentioned that they generally act as a placebo. But why do people who use them feel certain that they work? In part, because the vast majority of the diseases we usually suffer from go away by themselves, and when we analyze the kinds of diseases or problems that are usually addressed with alternative medicine, we see that they are generally those with a high likelihood of "going away on their own". When we accompany this spontaneous healing process with alternative medicine and are eventually cured, we attribute the cure to the therapy. Alternative medicines exploit, sometimes deliberately and sometimes inadvertently, our tendency to surrender to the illusion of causality. We return to the vaccine doubters, and the recurrent myth that makes them doubt (even though it has been totally refuted): that vaccines can cause autism. 

Autism is not communicable; it is a congenital condition. But the first symptoms are generally observed around 2 years of age, when children have usually already received several vaccinations. First vaccine, then autism, and one falls into the fallacy of attributing causality to a temporal correlation. 

There is another great example --or a tragic one, depending on your point of view-- of how bad decisions occur not only in everyday matters, but can also happen on a large scale, at the level of public policy. It is quite clear that children who live in homes with lots of books tend to do better academically. In an effort to make children do better in school, a few years ago a plan was proposed in Illinois, USA, whereby babies would be given one book per month from birth until they entered kindergarten. Sounds reasonable, right? Based on the information we have now, not so much. A house with lots of books usually indicates a home with parents who read and were able to educate themselves. Children of well-educated parents generally do better in school. But we don't know whether they do better in school because the parents are well educated, or because they come from homes where they were better nurtured, or for other reasons. What is extremely unlikely is that it is due to the mere presence of books in the home. Undoubtedly, having books around should help children read. It may be a necessary condition, but surely not a sufficient one. In the end, this plan was not carried out, but it would have represented a major expense for that state, one based on attributing causation to mere correlation. 

Again, we cannot blindly rely on the way we think. We put these mistakes under the spotlight not to wallow in our inability, but to understand where we are starting from, and to be able, from there --and, especially, with humility-- to head in a better direction.

INTUITION AND FLAWS

When we gaze at the horizon, if we rely on what our intuition and our common sense tell us, it may seem that we live on a flat Earth and that the sky is a huge hollow hemisphere above our heads, a celestial vault. Intuition is very powerful, and mankind believed that the Earth was flat for a very long time, far longer than the time that has passed since we learned otherwise. The evidence that we have collected and sorted out over time now shows us that we were wrong. However, if we look at the horizon and the sky, the Earth still looks flat to us. Our intuition continues to deceive us, even though we already know what reality is like. How can we be so bad at thinking and, at the same time, have made it this far? Not only were we able to occupy every environment on the planet, but we modified it to serve our purposes: we domesticated plants and animals, we built cities and roads, we changed river courses. 

Evolution is the mechanism by which, out of a set of different individuals of the same species, some survive and others do not, and those that survive leave offspring similar to themselves; thus, after a long, long time, characteristics are selected that are beneficial or that, at least, are not detrimental. But our survival as a species does not depend on knowing whether we are on a planet in the middle of the universe or on a flat surface that ends at the horizon. From an evolutionary point of view, it does depend, instead, on whether, faced with a small noise or change, we realize that there is a lion crouching and quickly decide to flee. If we are wrong 99% of the time and there was no lion, nothing serious happens. But the 1% of the time we are right, we manage to survive. We are the children of humans who made those intuitive, experience-based decisions, sometimes wrong and full of cognitive biases. Our brain sometimes prioritizes a quick decision over a good one. 

So where does that leave us? If, after all, intuition can be right and, evolutionarily speaking, is perhaps not so bad, what is the problem? The issue is not whether intuition is right or wrong in general terms, but how we know whether, in a particular case, it is. To understand reality, we have a subjective, internal way of thinking that sometimes works very well and other times fails miserably. We don't point out that we think badly just so we can say "how awful" and keep on playing, as in a Peanuts comic strip. We neither ignore this fact and go on about our business, nor resign ourselves and believe that it is tragic and insurmountable. 

We, those same beings who think badly, managed to invent (discover?) a strategy for understanding reality that does exactly what we need: it eliminates --or at least reduces-- the presence of cognitive biases, and it generates answers that we can keep testing to see whether they withstand the attacks we keep making on them with this same methodology. This strategy helps us understand the world more objectively because it takes us out of the equation as much as possible. Or, in a different sense, it includes us in that equation, by finding a space of consensus in which we can all affirm that the world outside our heads is more or less like this, and that it is similar enough for each of us to meet and converse. Thus, science becomes a set of mental tools that help set our cognitive biases aside, so that we can ask the world how it is and receive more consistent answers. 

This works better than the alternatives. Not better in a dogmatic way, but simply better in a practical way: it gets things right more often. 

That's the beauty of it. Brains selected to dodge lions may have gotten us this far, but common sense did not put our species on the Moon, first and foremost because that same common sense told us that the Earth was flat. It is up to us to decide how to act from this point on. We cannot simply count on learning to avoid biases and other flaws in our thinking. Hopefully, we can learn to recognize them, to be alert and to remember, even so, that it is often much easier to find them in other people's reasoning than in our own. 

I can't help thinking that I have been choosing only a few of the many, many biases that exist. Is this selection guided by my confirmation bias? Do I choose the ones that "show" the point I want to make? Or do I choose from the menu of biases I happen to know (availability bias)? There is no escape, but we can become more aware and train ourselves to identify these problems in ourselves. Otherwise, we fall into the posture that everyone else's mind presents cognitive mirages in the form of biases, but mine works for me.

Let us distrust common sense, both our own and that of others. Let us reason, but let us put ourselves to the test. By accepting that we can be wrong in our thinking, we will be better prepared to think better. 

AVOIDING BIASES 

In the previous chapter we discussed how our emotions or our values can influence the creation of unintentional post-truth. In this one, we add the problem that sometimes we think wrong. We are not aware of it, but our reasoning can be full of cognitive biases. 

We can't count on finding the flaws in our thinking directly, because it's probably not a good idea to put that same thinking in charge of the search, but we can take detours. 

Here are some suggestions on how to proceed: the fifth Pocket Survival Guide to Post-Truth: 

POCKET SURVIVAL GUIDE #5

HOW TO THINK BETTER? 

Are we seeing only the information from the final outcome and not what was there at the beginning? (Survivorship bias.) 
Are we seeing only the information that agrees with what we think? (Confirmation bias.) 
Can we be attributing causation to correlation? 
Can we be confused by what our common sense and intuitions tell us? 
Can we ask for or get the missing evidence? 

In this Pocket Survival Guide, introspection reappears, allowing us to look at ourselves and try to identify our previous position on an issue, whether seeking the truth on that issue matters to us, and whether we might be falling into cognitive biases. But something else is added: we analyze whether we have the evidence, and whether we have all of it or only part of it. And here is another tool to fight post-truth: if evidence is missing, we must demand it from others, and perhaps seek it more actively ourselves. Introspection is a necessary, but not sufficient, condition for fighting post-truth. 

We are looking for the best possible way to solve problems. We need to obtain evidence, understand it and know how much certainty it gives us, but none of this is enough if we are not willing to change our position --or at least try to-- if the evidence contradicts what we previously thought about a factual issue. 

Already more introspective and comfortable in our discomfort, we now turn to something even more stinging than examining our own beliefs or ways of thinking: how do the groups we belong to influence the beliefs we embrace and the way we think?