Effective public policies



In section 1, we discussed how we can gain knowledge. In sections 2 and 3, we analyzed the phenomenon of post-truth, dissecting it into its various components, both unintentional and intentional. As we have seen, the structure of post-truth often repeats itself: there is a pattern common to many particular situations. 

Once we are clearer about this pattern, we can identify it in new situations and, on that basis, try to do something about it. It is thus time to look into possible solutions. 

This last section consists of three chapters and some additional considerations at the end. Of the many approaches that could be proposed to fight, survive and defeat post-truth, we select three that could make a difference both profound and immediate. In the first chapter we briefly discuss how to achieve effective public policy. We address both decision-makers and citizens, and the core of the proposal is to fight post-truth from the public policy side by directing attention to whether or not a policy solves the problems it is meant to solve. The second chapter focuses on communication, an essential element of coexisting in this world we share. Like the previous chapter, it addresses professional communicators as well as each of us, focusing on how to achieve effective communication. Finally, we present a chapter that brings together and systematizes much of what has been discussed in the book, in order to focus on what each of us can do to combat post-truth. 


It might seem that the danger of post-truth is that it creates a fog of permanent confusion in which we can no longer distinguish true from false. If this were the only danger, it would be a philosophically interesting phenomenon, but not a crucial one. However, the danger is greater, and it is also urgent. This fog puts the health and well-being of our society at real, massive and tangible risk. Therefore, in addition to fighting to make it evident as a problem, we will need to outline and test possible solutions. 

As we have already seen, neither intuition, nor tradition, nor goodwill is a particularly useful tool. Sometimes it is worse: we do not even know whether they are useful, because we do not measure their consequences. We need to be effective, and that means knowing whether the measures we take really work. How can this be achieved at the individual level? And what can decision-makers do at a more general level? In a country, there are several types of decision-makers with large-scale impact, who set directions that affect many people: states themselves, as well as non-governmental organizations, companies and various associations. In this chapter, we will focus particularly on public policies, the measures taken by states to try to solve citizens' problems. 

For a public policy to be effective, we can base it on evidence, as we discussed in the case of medicine. But, as we also said then, science cannot be our only compass. We cannot exclude cultural, ethical, historical, political and economic considerations. We need to incorporate science without being blindly governed by it; that is, we want public policies that are influenced by evidence, informed by evidence. This distinction matters for several reasons. Not only are there issues in which we must include values and other beliefs; there are whole areas of knowledge in which we should not act on scientific evidence alone, even if it is of high quality, without taking into account experience, context and other inputs. 

If we better understand the value of evidence in public policy, and seek it out, we will surely be more effective. And here there is often confusion. The evidence that informs policy can be of two types, and each contributes in its own way. On the one hand, there is science as a product: what we know about a subject. For example, we know that cigarette smoking is harmful to health. A public policy that seeks to care for the health of citizens will take this information into account to try to reduce cigarette consumption. On the other hand, there is science as a process (see Chapter I), which allows us to find out whether a policy works to achieve what it proposes. For example, does raising cigarette taxes discourage cigarette consumption? In this case, the question of whether a decision is effective can be approached scientifically, just as when we looked at whether a drug worked. This aspect is still particularly underexplored in public policy. It is rare to state explicitly and concretely what a given measure seeks to achieve; it is rare to measure what happened; and it is even rarer for the results to be used to evaluate whether the decision worked and, on that basis, to change course. But it is a road that is beginning to be traveled, and that is no small thing. 

In politics, there are permanent tensions between what is urgent and what is possible, between available resources and decisions about what is a priority and what is not, and even between the individual interests of decision-makers –their tribal identity, their loyalty to their groups– and the common good. In addition, there are intra-governmental conflicts, pressures from external agents, the influence of partisan opposition and the public perception of the citizens, who will vote on whether or not they want the course set to be maintained. This, too, must be incorporated into a policy informed by evidence. 

Another aspect to keep in mind is that it is not appropriate to implement research results automatically; the experts involved in the path to implementation should be included in the conversation. A policy that is beautiful on paper can collide with the reality of places that have their own cultures, their own ways of doing things, their own experience of what works and what does not. Since our focus is on making the policy effective, if we do not include as interlocutors those who are "in the trenches", then even if the evidence is solid, the policy may not work in the real world. 

Furthermore, as we have discussed, sometimes the evidence will be clear and moderately complete, but many other times it will not be. There is incomplete information and incomplete knowledge. In some cases, it will be necessary first to invest in generating the evidence before deciding, and this requires time and resources of all kinds, as well as a firm political decision to support the process. Here another challenge appears: if, before making a decision, we wait until we have absolutely all the information, we run the risk of something that can be even worse: inaction. Not acting, or stalling, is also a decision. But we often fail to realize this because of several cognitive biases (see Chapter VI). First, we notice what does happen and fail to see what does not happen. Also, when we act and something then happens, we generally believe –sometimes correctly, sometimes not– that there is causality, but we do not believe it when we fail to act and something then happens. And this can be a problem for evidence-based decision-making. 

So, in the face of incomplete or unclear evidence, what we can do is apply the precautionary principle, which we could put into words more or less like this: are we rushing into actions without sufficient evidence? Are we postponing actions despite sufficient evidence? Are these actions urgent? What are the consequences of erring through action, and what are those of erring through inaction? This last question is key. An official is rarely embarrassed by a policy not taken, but the cost of a visible policy failure can be very high. With that reward system, which we citizens ourselves feed with our votes, we keep rewarding inaction, even when the evidence says it is the wrong decision. 

How many current situations are the result of perpetuating policies that we know do not work, but do not dare to change? How close to the precipice will we get without turning, just because we are not sure which way to turn? Changing the way we construct these decisions is largely about recognizing when we have to do something. In those cases, we can try to base our decision on the best available evidence, even if it is not much, as is done in medicine. It will be the best possible decision. Our precipice, which lies on the other side of continuing to do nothing, may be even riskier, because it would mean ignoring the evidence about the current path and following our intuition, doing "what has always been done", or following without question what a leader or an opinion-maker untrained in the field tells us. Which is tantamount to flipping a coin and trusting that things will work out. Maybe they will, maybe they won't. And if they do, we will have learned nothing. 

When it comes to defining public policies, which may influence the lives of a great many people and require investments of scarce resources, this approach becomes even more important. In this case, it is not enough to make a decision. The decision should be accompanied by explicit goals for what is to be achieved, along with the metrics by which success will be judged. And, of course, the results would then have to be effectively measured, in order to see whether those goals were achieved, and communicated to society. As with any experiment, the objectives and the definition of success of a policy must be set before it is put into practice. Otherwise, no matter where the arrow lands, we can paint the bull's eye around it and claim success. Unfortunately, it is rare to see our politicians and governments make these points explicit beforehand. 
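The logic of declaring goals and metrics before acting can be sketched in a few lines of code. This is only an illustration: the function name and the numbers below are invented, not taken from any real policy.

```python
# Minimal sketch of pre-registering a policy goal before measuring it.
# All names and numbers are hypothetical, for illustration only.

def evaluate_policy(preregistered_target: float, measured_outcome: float) -> str:
    """Compare a measured outcome against a target fixed BEFORE implementation.

    Declaring the target first is what prevents "painting the
    bull's eye around the arrow" after the results are in.
    """
    return "target met" if measured_outcome >= preregistered_target else "target missed"

# The target is declared before any data comes in...
target_drop_in_smoking = 5.0  # percentage points, set in advance (hypothetical)

# ...and only afterwards is the outcome measured and compared.
print(evaluate_policy(target_drop_in_smoking, measured_outcome=3.2))  # → target missed
```

The point of the sketch is the order of operations, not the arithmetic: the comparison is honest only because the target existed before the measurement.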

The problems are clear. The question, as always, is what the alternatives are.


In the case of a government that must decide public policy, much of the pressure comes from citizens. Some prefer policies that "seem" reasonable, that accommodate their intuitions and their confirmation bias, even if they go against the best available evidence. In these cases there is great tension, because governments need citizen support to implement their policies, and if they seek to base them on evidence but the citizenry does not go along, the approach may not work. There could even be a kind of dramatic irony of evidence-driven policy, in which the action itself is positive for the population but has negative electoral repercussions for the official. Again, this puts a great responsibility, but also a great possibility, in the hands of us citizens: voting for those who implement evidence-based policies is our tangible way of encouraging public officials to embrace that path. 

In addition, as we already discussed (in Chapter VII), if a state wants to communicate an evidence-based policy to its citizens in order to gain their support, it is probably effective for it to choose experts who do not have a "partisan brand", so that they can present the proposed policy in a more neutral way, free from negative emotions. These "brands" function as signals to one's own tribe and alienate the others. Avoiding them becomes part of evidence-based policy, because if communication fails and support is not forthcoming, implementation is likely to fail. 

Sometimes, despite having evidence available, politicians set it aside and bet on what their intuition tells them or on what their voters want done, which is equivalent to trying to hit a target with an arrow while closing one's eyes instead of opening them. Or, from a darker but perhaps more realistic perspective, to swapping the target of the common good for that of reelection. It is true that reelection is often important in itself, to guarantee some continuity in public policies defined for the common good, but we would expect good rulers not to pursue reelection against the common good. 

Therefore, a state that intends to inform its policies with the available evidence will need to gain support from society. To do so, it should communicate clearly and effectively why it chooses to prioritize some problems over others, and what evidence gives some certainty that the proposed intervention will work. In addition, the state should be able to measure the success of its policy clearly and communicate adequately to society what has happened. Given this, we, as citizens, can help hold the state accountable: we can point out the mistakes and also celebrate the successes. 

In other cases, citizens ask governments for evidence that what they decide will work. And here there is another risk: the tension between evidence-based policy and policy-based evidence (or faith-, belief-, conviction- or convenience-based, as the case may be). In policy-based evidence, we discard from the total body of evidence whatever contradicts our preformed idea, and choose –or invent, in a clear example of intentional post-truth– whatever supports exactly what we wanted to conclude from the outset. Somehow, "sprinkling science" –or something that looks like it– on a statement makes it more reliable in the eyes of those who cannot or will not take it with an attitude of healthy skepticism. As public perception begins to look favorably on a policy's being evidence-based, a temptation arises: instead of genuinely obtaining and weighing evidence in order to define, on that basis, the actions to be taken, some first define the course they wish to follow and then select only the evidence that supports it, or else display something that looks like evidence but is not. Thus, "evidence-based policy" sometimes becomes more an empty brand than a real practice. It is therefore important that we reserve the term for when there really is quality evidence, not just some isolated piece of it. Otherwise, it is better not to use the label, or we run the risk of the expression becoming meaningless. 

It is relatively easy to pretend that there is evidence. If we citizens are not very attentive, are not experts ourselves or do not consult those who are, these two phenomena –evidence-based policy and policy-based evidence– may be indistinguishable to us. A guarantee of post-truth, again. And, thinking about intentional post-truth, whoever wants to set a certain course may not even need to manipulate the facts. It is enough to manipulate which facts to show and which to omit, and to communicate them convincingly. There is no need to lie. Just sweep under the rug the part of the information that contradicts your position, and you're all set. This is selection bias, and that is why it is essential always to ask what information is missing. What we see in public policy and in so many important issues of civic life also happens on a small scale: we all do it to a greater or lesser extent, and with a greater or lesser degree of intentionality. 

We need to demand that the science be sound and penalize the use of science –or what looks like science– as a mere rhetorical tool. To assess whether there is real science at the foundation of decisions, or just pseudoscience, we can do several of the things discussed in previous chapters. Isolated evidence is not enough: we must take into account the total body of evidence and read where the consensus lies. We must look closely at the process that led to a particular claim, not just the final claim. Scientific-sounding language may lend credibility to a decision, but we can assess whether it is just empty language or whether there is actually solid evidence behind it. 

To fight post-truth in public policy, we need to include an evidence-based approach to the whole process. Since in section 3 we discussed tobacco, sugar and climate change, let's briefly look at some of what has been or is being done in this regard. We will also see how basing public policy on evidence can protect us, at least in part, from post-truth.


With both tobacco (see Chapter X) and sugar (see Chapter XI), we have a very serious problem: although many of us know that tobacco is harmful and how we should eat to be healthy, few of us achieve the self-control necessary to behave in a healthy way. In these cases, should states intervene, through taxes or regulations, to try to modify citizens' habits? Small-government proponents will say something in favor of individual freedom, such as "once the information on what to do is disseminated, it is up to each individual to decide whether to follow it or not", or they will think that measures against industries should not be taken, or they will oppose paying yet more taxes, whatever they may be. Big-government proponents will consider that the state should take care of its citizens by controlling what food is available to society and what food is not (and here we have the curious paradox that, when we move from food to other things, such as illegal drugs, some people reverse their preferences regarding state intervention, which tempts us to wonder what would happen if sugar were presented as a drug). 

Should we restrict ourselves to educating and informing the population? Or should we make laws? Taking into account what we know about tobacco and sugar, should we prohibit them, allow them without controls, or regulate them? Can we discourage consumption without prohibiting it altogether? To what extent is it "right" to interfere from the State and to what extent is it not? If information is enough, why do we penalize, for example, motorcyclists riding without helmets? Both tobacco and excess sugar in the diet are proven to be harmful and addictive: do we treat them in the same way, as dangerous substances? Or do we incorporate into the decisions the fact that the harm of sugar is only for those who consume it, while tobacco also harms non-smokers who inhale cigarette smoke as passive smokers? 

Whether one leans toward big or small government depends on one's worldview, one's ideology, on considerations that are not entirely rational. About that, there is not much to say: each of us will surely feel closer to or further from each of these positions. The question is whether we would be willing to compromise on them if presented with evidence that an effective policy might need to challenge or relax them. 

Trying to achieve political and social consensus among people who think differently in relation to these kinds of things is one of the challenges of a pluralist democracy. There is no post-truth here, nor could there be, because we cannot speak of truth. 

Moving from morals to evidence, are any of these interventions effective? What do we know so far about whether or not people's behavior can be modulated through small interventions that involve neither allowing without control, nor prohibiting and making illegal? 

Once we accept that we are not able to control everything, especially when we talk about addictive and pleasurable substances, we can think about controlling the environment around us so that this control influences our behavior. Our willpower is not enough, information is not enough. We have to modify the environment to protect our health. 

Tobacco is generally less contentious than sugar, in the sense that its status as a toxic substance is better understood and its harmful effect on non-smokers is well known. Moreover, since it is not something necessary for survival, as food is, its regulation is generally more accepted by society (which is not to say that this is a good argument, but simply a perception that must be taken into account). 

In addition to considering the welfare of citizens, there are economic reasons for defining public policies. In health issues, prevention is often less costly than the treatment of the sick, which could even suggest an approach where both big and small government proponents may find actions that satisfy them, achieving measurable effects on the welfare of people through state intervention while minimizing such investment.


In the case of cigarette smoking, the harm is not only to smokers, but also to those around them. For this reason, the focus of anti-smoking policies is usually based on protecting the health of the population as a whole, and not just on trying to prevent people from taking up smoking or getting smokers to quit. 

Information campaigns do not usually change the behavior of smokers very much. Most are well aware that smoking is very harmful to their health, but are unable or unwilling to quit. When self-control fails so miserably, the aim is to modify the environment so that it discourages smoking. Thus, the aim is to influence the smokers’ behavior without trying to change their beliefs. One strategy is to increase taxes on cigarettes, which makes the product more expensive and less affordable for consumers. Also, regulations can be established to prohibit smoking in certain places. 

The World Health Organization (WHO) considers tobacco an extremely harmful substance, and suggests measures to control it. Timidly, evidence is also beginning to emerge as to which tobacco control policies are effective and which are not. According to the WHO, significantly increasing tobacco excise taxes and prices is the single most effective and cost-effective measure for reducing tobacco use and encouraging smokers to quit. However, it is one of the least used tobacco control measures. 

One of the first things we will have to do to combat post-truth is to clearly differentiate which claims are factual and which claims are not. If raising taxes is the most cost-effective measure, we are talking about something that can be considered true not only at a scientific level, but also as public policy. It is evidence achieved by measuring costs and effectiveness of the measure precisely from that perspective. Therefore, to set this aside or deny it is to surrender to post-truth. And here the argument that is used is important. If someone says that "taxes should not be raised because it will not work", it is post-truth. On the other hand, if someone argues that "taxes should not be raised because the State should not take care of people who voluntarily decide to harm their health by smoking", it is an ideological argument with which we may or may not agree, but it is not post-truth. 

The fact that, even knowing what we know, raising tobacco taxes is still not a widespread measure is a sign that, beyond knowing what the best decision is, it is "easier said than done". We need to better understand what factors are preventing this measure from being implemented more widely. Perhaps industry influence, or something similar, is at work: what we could consider intentional post-truth. For example, one of the measures that proved most effective in decreasing sales, as a way to protect public health, was to force tobacco companies to sell packages in unattractive colors, similar to one another, that did not strongly signal the brand. But the tobacco industry did not stand idly by: Philip Morris sued Australia (a whole country!) over this measure, and lost the lawsuit. The industry tries to protect itself from the measures taken against it. 

In the case of taxes, there could also be opposition from smokers or more libertarian citizens. Smokers are voters, and if the public perception of these taxes is negative, this may slow down the implementation of these decisions. 


When we analyzed the damage to health caused by sugar (see Chapter XI) and the apparent influence of the industry in hiding or obscuring this information, we were left with the question of what can be done about it. On an individual level, what health advice regarding our diet could we take into account, based on the evidence available today? Possibly something very simple: a balanced diet, avoiding excesses of added sugars, saturated fats and salt. 

It is known that we consume more food and drink when they are presented to us in large portions, on large plates and in large glasses. We drink less wine if the glasses are small, and we eat less if the food is served on a small plate, which then seems full. Yes, one may rationally know that one is being deceived in some way, but it has been shown time and again that, even when we know it, the deception still has an effect on us. This is another example, a very small one, where we have to be careful not to let post-truth creep in: if our intuition tells us that this cannot be so, and very strong evidence says it can, which do we trust? This is something we can consider known. To brush it aside, to deny it, or to maintain that "for me it's not so" or "I don't agree" is to collaborate in promoting post-truth. 

Today, many specialists speak of an obesogenic environment around us. Can we think of public health policies that help prevent obesity and metabolic diseases? Some countries are trying to solve this problem by taxing harmful products, reducing their availability by means of barriers to access, such as reduced sales hours, minimum ages for purchasing products, reduction or prohibition of advertising, etc. 

Some countries have implemented a sugar tax, which makes sugary products slightly more expensive (see Lustig, R. H., Schmidt, L. A. and Brindis, C. D. (2012). "The toxic truth about sugar", Nature, 482(7383): 27-29). Thus, foods and beverages rich in sugar are not banned, but their consumption is discouraged. As of 2018, some thirty countries had introduced a sugar tax. 

Mexico is one of the most obese countries in the world: in 1980, 7% of Mexicans were obese, and by 2016 that figure had roughly tripled (20.3%). Given this alarming situation, in 2014 Mexico established a 10% sugar tax, and in the few years since, effects are already visible: sales of sugary drinks (soft drinks, juices) fell by 5.5% in the first year and by 9.7% during 2015, while sales of drinks not subject to the tax grew on average by 2.1% over those two years (see Colchero, M. A. et al. (2017). "In Mexico, evidence of sustained consumer response two years after implementing a sugar-sweetened beverage tax", Health Affairs, 36(3)). Of course, not enough time has passed to know whether these changes are modifying the obesity trend, but for now there has been an effect on sales. 
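As a back-of-the-envelope check, the figures cited above are internally consistent; the short calculation below only re-derives them and adds nothing new.

```python
# Quick arithmetic check of the Mexican figures cited in the text.

obesity_1980 = 7.0    # percent of Mexicans who were obese in 1980
obesity_2016 = 20.3   # percent in 2016

ratio = obesity_2016 / obesity_1980
print(f"obesity grew {ratio:.1f}x")  # → 2.9x, i.e. roughly tripled

# Reported drops in sales of taxed sugary drinks, relative to the
# trend expected without the tax:
drop_2014 = 5.5  # percent, first year of the tax
drop_2015 = 9.7  # percent, second year
print(f"the decline deepened by {drop_2015 - drop_2014:.1f} points")  # → 4.2 points
```

Note that the sales drops are measured against a counterfactual (the sales expected without the tax), which is exactly the kind of explicit metric discussed earlier in this chapter.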

This, too, is important: to understand what evidence we already have and how reliable it is; to be demanding about what evidence is still missing and how to obtain it; and to decide with what level of certainty we are willing to act (or not act) and to define public policies. This is not easy. There are intermingled interests and biases, both our own and those of others. 

Industry pressure on the sugar tax is often fierce. For example, in countries seeking to introduce such a tax, such as the United Kingdom, Coca-Cola has argued that it would reduce investment. We do not yet know whether the tax on sugar-sweetened beverages can effectively lead to a reduction in obesity. The decline in sales does occur, but it may not be enough to have an impact on a multifactorial and complex disease such as this. However, it may be something to consider. 

This is already a less clear-cut situation than what we saw with tobacco. But in the face of the best available evidence, even if it is still scarce and somewhat confusing, we need to ask ourselves what is the danger of erring through action and erring through inaction. In the case of metabolic diseases, which manifest themselves after many years, the question is whether we can wait for better evidence in the face of a rampant obesity epidemic. 

With such incomplete evidence, should we act or not? If the sugar tax is not enough, would it be better not to enact it? The World Health Organization's 2015 report on "Fiscal Policies for Diet and Noncommunicable Disease Prevention" recommends a sugar tax that raises the price of sugary drinks by at least 20%, and a subsidy that makes fruits and vegetables cheaper. These suggestions can be communicated to countries so that each can decide whether or not to adopt them, and to what extent. 

We thus move from tobacco, where more time has passed and everything is clearer, to sugar, where things are "dirtier" and we are under pressure to act because the risk of doing nothing is very high. This is real life. 

Some states are more comfortable than others with the idea of approaching public policies as design problems, in which the key is to establish cycles of "do, measure, learn and repeat", accepting that each attempt will provide information that makes the next more successful. The tension between waiting for more evidence before making a public policy decision and the urgency of solving problems is addressed differently in each country. Some wait to act; others test and learn. It took about forty years to start testing legislation that would mitigate the negative effects of tobacco. In that case, it would probably have been better to start earlier. Perhaps with sugar we are in a similar situation. Maybe this time we can do things differently. 


And if the sugar problem is complicated, the problem of climate change is much more so. Here, the evidence is much less clear to the non-expert, the effects are not noticeable day to day –which awakens our cognitive biases– and the issue is so complex and large-scale that it requires solutions involving entire countries. Moreover, it is an issue that, at least in the United States, is highly politically charged, as we have seen, and very much subject to post-truth. How can effective public policies be defined in such a context? 

On the issue of climate change, what is known is very well known –there is anthropogenic climate change and it is very dangerous– and the decisions that need to be taken must be implemented urgently and be effective, or we will not be able to solve the problem. The risk in this case is not so much erring by doing, but erring by not doing enough. 

In climate change, post-truth takes different forms, and positions that oppose the consensus often do not say so explicitly. Sometimes, lies are spread in the form of fake news, but other times, they are criticisms that to a greater or lesser extent could have a certain rationale.

In this sense, what comes up most often is a reference to the fact that more data is needed to be sure, or that, in reality, certain evidence could be interpreted differently, or that future forecasts are miscalculated. Of course, all of this could be true. Research must continue and, in fact, it is ongoing. It is also possible that a particular piece of evidence allows multiple interpretations, but even if it does, the total body of evidence is so robust that it would not be threatened by this situation. And as for the forecasts regarding how much global temperature is expected to continue to rise, and what effects this might have, the question is not so much whether they will be wrong or not, but whether we are willing to take the risk. 

To solve this problem, we need to overcome post-truth together, as one big human tribe living on this planet common to all. 

We all know we should exercise, eat healthy, sleep eight hours a day, not smoke and use sunscreen. Few of us do all that. We know that even if we take all the precautions, we can still get sick. We also know that we can live long and healthy lives even without taking care of ourselves in all these respects. Yes, all that is possible, but it is not the right way to think about it: our risk of getting sick is much higher if we do not take care of ourselves. A similar thing is happening with climate change. Even if fears were exaggerated, and it were not necessary to reduce carbon dioxide emissions so rapidly, the risk of not taking the right measures at the right time, and of the consequences being catastrophic and irreversible, puts us all at risk, no longer at the individual level, but at the species level. To be skeptical or cautious in this situation is very close to intellectual dishonesty. 

Being alert to the claims of science is a desirable attitude of healthy skepticism. But if this suddenly shifts to denialism, we are talking about something else. 

Anthropogenic climate change is a fact. That is all the truth we can determine with the help of science. On the other side, there are the lies and fallacious tools of post-truth, used to insinuate doubt about this issue just as they were once used to insist that tobacco was not carcinogenic, and as they are still used, unfortunately, in the mistaken idea that vaccines cause autism. These are just examples. Who knows to what ignoble ends these same strategies will be put in the future. In this scenario, the parlor game of moderation is not an attitude of healthy caution, but merely a psychologically cheap way of appearing prudent, and it means siding with the lie and its consequences.


The above examples are current, urgent situations for which courses of action must be chosen on the basis of information that is neither complete nor entirely clear. But how could we find out whether a given course of action is effective and, above all, whether it is more effective than the alternatives? This is a factual question and, as we have seen, it can be addressed with the methodology of science itself.

Abhijit Banerjee and Esther Duflo are two economists who, for many years, have been conducting social experiments using the logic of randomized controlled trials (RCTs; see Chapter III) to obtain reliable evidence to guide public policy. They founded an organization called J-PAL, which researchers from all over the world can join, and whose goal is to conduct research and training in order to gather and disseminate rigorous evidence about which public policies and social programs really work. In one of their many studies, they set out to find out how to improve vaccination coverage in rural India.

India has a public health system with a reasonably well-distributed network of health clinics. However, it was found that, in the Udaipur area, on any given day 45% of the health personnel who administer vaccines, among other responsibilities, were absent, leaving the clinics closed. It was possible, then, that many parents did not interrupt their daily chores to take their children to the clinics because they could not be sure the clinics would be open.

Based on these hypotheses, the following experiment was conducted. A sample of 134 villages was randomly assigned to three groups: 30 villages were visited by mobile vaccination units; another 30 also got the units and, in addition, parents who brought their children to be vaccinated received a small incentive (a one-kilo bag of lentils per vaccination, and a set of dinnerware for completing the schedule); the remaining 74 villages served as a control group. This was, in essence, a randomized controlled trial (see Chapter III). In the two groups of villages that got vaccination units, the units were advertised: a social worker from the village informed mothers of the unit's existence and of the benefits of vaccination.
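The group assignment described above can be sketched in a few lines. The group sizes follow the study's design; the village identifiers and variable names are invented for illustration:

```python
import random

villages = list(range(134))      # one identifier per village in the sample
random.shuffle(villages)         # random assignment removes selection bias

unit_only       = villages[:30]    # mobile vaccination unit
unit_plus_bonus = villages[30:60]  # unit + small incentive (lentils, dishes)
control         = villages[60:]    # no intervention

print(len(unit_only), len(unit_plus_bonus), len(control))  # 30 30 74
```

The shuffle is the crucial step: because chance alone decides which village gets which treatment, any later difference between groups can be attributed to the intervention rather than to pre-existing differences.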

The study measured how many children received a dose of vaccine and how many completed the full schedule. The results were staggering. Many children received one dose without completing the schedule but, even so, the vaccination units alone tripled the number of children who completed it, from 6% (control group) to 18% (groups with units). That was the effect of having the units instead of the clinics, which parents could never be sure would be open. As for the incentive, the number of children who completed the schedule doubled again in the villages that got the unit AND the incentive. Thus, reliable and rigorous evidence showed that providing vaccination units and simple incentives raised the proportion of children who completed the full schedule, which required at least five visits to the unit, from 6% to 39%.

You might think that this intervention would be too expensive for the Indian state to implement on a larger scale, but when you do the math, you see that offering the incentive cuts the cost per vaccinated child in half, in yet another example of why we should not rely too much on intuition. This was because each unit managed to vaccinate more children per day owing to the incentive (and we can only say "owing to the incentive" because this was a rigorous experiment). In this way, causality could be established far more reliably than by simply making observations and looking for correlations.
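The cost arithmetic can be made concrete with a toy calculation. The figures below are entirely hypothetical (the text gives no actual costs); the point is only the structure: a per-child incentive adds expense, but if it draws enough additional children to each session, the cost per fully vaccinated child falls:

```python
def cost_per_fully_vaccinated(fixed_cost_per_session, children_completed,
                              incentive_per_child=0.0):
    """Total cost of a vaccination session divided by children who
    complete the full schedule."""
    total = fixed_cost_per_session + incentive_per_child * children_completed
    return total / children_completed

# Hypothetical numbers: a session costs 500 to run regardless of turnout.
unit_only = cost_per_fully_vaccinated(500, 2)                          # 250.0
# With a small incentive (20 per child), attendance triples per session.
with_incentive = cost_per_fully_vaccinated(500, 6, incentive_per_child=20)

print(round(unit_only, 2))       # 250.0
print(round(with_incentive, 2))  # 103.33 -> cheaper per child, despite the lentils
```

The fixed cost of keeping a unit running dominates; spreading it over more children per day is what makes the incentive pay for itself.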

Of course, this methodology also raises criticisms, some more valid than others. The point made by those engaged in social experiments such as these is that a small but genuine response is much more useful and gives much more information than a tentative and unreliable response. On the other hand, some possible criticisms have to do with ethical issues. In this research, there were villages that did not get vaccination units. Were they, then, deprived of an effective measure? Not really, because it was not known to be effective until the experiment. Some parents received incentives for vaccinating their children. Does this benefit mean that they were "bought" or forced to do something they did not want to do? Could it be that the lentils had pressured the parents to do something they were not convinced of? Most likely not, because in those populations, a kilo of lentils is not so valuable. A family that actively opposed vaccination would most likely not have been convinced by lentils alone. The effect of this incentive was probably more related to the fact that it was sufficient for many families, who may have been indifferent to vaccination rather than opposed to it, to get organized enough to leave their daily chores and bring their children to the vaccination unit. 

What good does the outcome of this particular study do India, and potentially other countries? Having seen how effective it is to have vaccination sites that parents know will be open, India could pursue policies aimed at reducing the absenteeism of health personnel at its clinics. In addition, offering small rewards not only increases the percentage of children vaccinated, but also makes the intervention cheaper.

Regarding the cost of designing and implementing public policy RCTs, let us consider the following: they are expensive, but by generating more reliable answers about what works and what does not, in the long run they may save the State money. If a politician had intuitively said "do the vaccination units, but not the lentils, because that makes the intervention much more expensive", he would have managed to vaccinate many children, but the opportunity to vaccinate more of them, and at a lower cost per vaccinated child, would have been lost.

Social experiments must be conducted with great care, not only methodologically, but also ethically. In addition, the scope of their conclusions must be clear. Once this is achieved, the information they provide can make public policies more effective. Isn't that the real objective of public policies: to improve the quality of life of citizens? 

There are more and more examples of public policies applied in various countries around the world on the basis of evidence that they actually work. Evidence-based policy is not a theoretical or academic idea. It is being implemented, and some of the results are surprising.


We often believe that the solutions that work for small, limited or seemingly simple problems are not the same as those that work for more complex ones, such as a multinational company defining manufacturing and distribution protocols for its products, a marketing firm seeking to run effective advertising campaigns, or a government deciding on public policies. What all these problems have in common is that they, too, need evidence, data, to find out whether what is being done works. And this is where science comes in, as a methodology, as a process.

RCTs are not always possible, or even desirable. But when they are viable, they are a wonderful strategy for answering questions. They allow us to set aside many biases, including those we are unaware of. Our intuitions about what should happen cease to matter, and we get objective results that can tell us whether something works. That "something" can be almost anything whose effectiveness we want to assess, from a communication strategy to a public policy. When Wikipedia wants to ask for donations on its site, it randomly shows different messages, measures how many clicks and how much money each one brings in, and then adjusts its communication by selecting the most effective messages. This too is an experiment and, in particular, it has the same structure as an RCT, since there is randomization and there are control groups.
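A message test like Wikipedia's can be sketched as a miniature RCT. The appeals, conversion rates and visitor count below are invented purely for illustration; what matters is the structure: random assignment of each visitor to a message, then comparison of outcome rates:

```python
import random

# Two hypothetical donation appeals (the real messages are not in the text).
messages = ["Appeal A", "Appeal B"]
shown = {m: 0 for m in messages}
donated = {m: 0 for m in messages}

# Simulated "true" conversion rates, unknown to the experimenter.
true_rate = {"Appeal A": 0.02, "Appeal B": 0.05}

random.seed(0)
for _ in range(10_000):                    # 10,000 simulated visitors
    msg = random.choice(messages)          # random assignment, as in an RCT
    shown[msg] += 1
    if random.random() < true_rate[msg]:   # did this visitor donate?
        donated[msg] += 1

# Keep whichever message achieved the higher observed donation rate.
best = max(messages, key=lambda m: donated[m] / shown[m])
print(best)
```

With enough visitors, the observed rates converge on the true ones, and the experimenter can pick the better message without ever having to trust intuition about which wording "should" work.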

We have strategies for addressing complex problems, even in areas more typical of the humanities, with the same basic methodology we have described. And this methodology is an integral part of our fight against post-truth because it allows us to distinguish true from false, real from imaginary. How do we fight poverty? How do we increase security in cities? How do we improve children's education? We need to take these big questions, which are the important ones, and break them into small portions that can be answered. The big questions are intractable; small ones, well constructed, will do. Instead of trying to solve the big problem of how to improve the health of an entire population, we can first break it down into manageable portions. For example: how can we improve vaccination coverage in areas where few people are vaccinated? This is now a question we can answer by looking for evidence. There are many reasons that could explain why the percentage of vaccinated children remains very low in some areas. Given that vaccines are usually free, why don't parents vaccinate their children? Is there cultural resistance? Do parents put off vaccination because they don't see a clear value in it? Or is it that vaccination centers are not always open and available? Each of these possible reasons calls for a different solution, and if we do not know which is actually at play, we will not be able to find the right one.

If we focus on defining small, narrow questions that we can answer, and whose answers can lead us to answer the big questions, we will be taking clear and firm steps towards the solution. Those steps would be slow, but they would represent a genuine breakthrough. Small but genuine progress trumps big intentions. 

We are all used to hearing our officials announce measures that will "clearly" reduce unemployment, improve education or health, or increase security. But do we know whether those proposals will work? Once they have been implemented, will we know whether they worked? It is something of an open secret that the answer to both questions is usually a resounding "no". Even with the best of intentions, and with common sense telling governments and citizens that this time the measure will surely be effective (unlike those of previous governments), the fact is that not only is it rarely measured carefully whether a measure actually worked, but resources are generally allocated to implement it without knowing whether it will work. As with climate change, we should ask: what is the risk of erring through action versus erring through inaction? Given the limited resources of any state, betting on an ineffective policy not only means losing the opportunity to solve that problem, but also reduces the resources available to address other problems. We cannot ignore this reality.

It is not easy. The real world is complex. But we want to make it better and, for that, we have the help of evidence, which can guide public policies. It can guide them, but then it is up to us to decide how much it will influence the final decision. Let us avoid the false dichotomy between "technocrats" and "politicians". Let us form multidisciplinary teams of experts who can dialogue with each other: researchers, managers, politicians, etc. 

The fact is that without the expertise of politicians to navigate the pressures of lobbying, media and public opinion, it is difficult for evidence-based policy to gain acceptance and be implemented. But without an evidence-based perspective, without clear metrics of success and without our representatives being held accountable for their decisions, the policies chosen will be those that most easily navigate those pressures, rather than those that manage to navigate them by prioritizing the common good.


When we explicitly focus on whether a public policy works, we reassess the evidence and give it the place it deserves. This does not exclude ideology, which is relocated to other aspects: there are ideological views both in the delimitation and prioritization of problems and in whether or not to incorporate "layers of complexity" that include traditions, values and other factors.

In the fight against post-truth, keeping this in view can help us avoid being swayed (to an excessive degree) by outside interests, or by the attempts of some actors to generate confusion and to disguise or hide the truth. And our horizon is clearly marked: it is the one that seeks the common good.

For this, a new, very brief Pocket Survival Guide: 



How committed are we to the idea of "do, measure, learn and repeat" as a way of approaching problems? 
Can we clearly identify the problem to be solved? What evidence do we have or need to think about when it comes to an effective public policy? What non-evidence-based aspects should be included? 
Are we rushing actions without sufficient evidence? Are we postponing actions with sufficient evidence? Are these actions urgent? What are the consequences of erring through action and erring through inaction? 
How will we measure whether the public policy implemented was effective? Are we willing to change course if it was not?

This Survival Guide, approached from the point of view of decision-makers, also includes citizens, who can demand that governments adopt such an approach, penalize them when they fail to do so and celebrate them when they do. In this way, all stakeholders are included and committed to this approach. There is hope.

But this will not be enough. We also need to be able, as a large human family, to communicate with each other in order to coexist, perhaps in agreement and perhaps in disagreement, in a single reality shared by all, one that excludes post-truth. In the next chapter, we will approach communication with a similar view: based on evidence, but including other layers of complexity.