The post-truth factory




When we addressed unintentional post-truth in section 2, we reviewed some of the problems that generate distortions of the information we get and saw that one of them is that we are part of the problem. In this chapter, we will take up those ideas again, but with a slight change in focus: we will concentrate on the intentional adulteration of information. 

When unintentional post-truth is added to the adulteration of information, we run the risk of also becoming unwitting spreaders. Each one of us played an essential role in making the events described in this section possible. Whoever knows how to manipulate information in such a way as to impose a message controls the post-truth factory. It is an übertruth, an industrialization of the guilty raw material hidden in each of us, and whoever masters it has the potential to implement the post-truth of their choice.

Whoever runs the post-truth factory can sell out to the highest bidder. The One Ring, the Ring to rule them all. The industry of information adulteration. 

Of course, we should not fall for conspiratorial ideas either. As far as we know, no one has yet managed to handle this challenge well. There have been and there are attempts, companies offering "distortion" services, but it is not clear how effective or influential they were or are. The strategy works relatively well in marketing, but it is not evident, at least not yet, that it is effective in politics or other areas. Either way, this strategy, which is already an industry, will most likely be perfected, and we had better be prepared for when that happens. 

Apart from distorting facts, the post-truth industry pollutes and destroys public discourse. If a company, a party, or a lobbyist hires someone to manufacture favorable post-truth, so can their rivals –which, incidentally, shows why post-truth is not enough. In this case, post-truth is pitted not against the rational attempt to understand the world, but against another form of post-truth. No matter who wins, we all lose.

Intentional post-truth does not necessarily imply an intention to convince. The goal is often merely to confuse and establish a doubt that seems reasonable. When there is truth, it is obscured or questioned. When there is not, it is suggested or implied. Generally, it is not asserted forcefully. 


It is worth examining post-truth creation strategies because they tend to reappear in other "post-truth" scenarios, and remember that all these strategies succeed to a greater or lesser extent because we fail to prevent them from succeeding. 

On the one hand, these groups funded their own research, which often generated information biased in their favor. This does not imply that we should disbelieve all research depending on who funds it, but perhaps we should be more alert to whether there are conflicts of interest and whether they are made explicit. Organizations –scientific, industrial, state– have their interests, and we must take them into account to understand the bias of what they conclude. It is not necessary to think of researchers being paid to commit fraud, although this can also happen, of course. Sometimes it is enough to provide incentives and to select which information to show and which to hide. This alone can skew the results towards the side that benefits the funder. One analysis examined studies addressing the question of whether sugar-sweetened beverages cause obesity, taking into account whether or not they were funded by the sugar industry.1 See Schillinger, D. et al. (2016). Do sugar-sweetened beverages cause obesity and diabetes? Industry and the manufacture of scientific controversy, Annals of Internal Medicine, 165(12): 895-897. Out of 60 studies, 26 found no causal relationship and 34 did. If we only look at this, it might seem that there is a scientific controversy, that there is no consensus. But when we look at the funding, it turns out that all 26 studies that found no relationship were funded by the sugar industry, while only one of the 34 that did had been funded by the industry. These results show, at the very least, that research is much less likely to conclude that sugar-sweetened beverages cause obesity when it is funded by the sugar industry. Perhaps this is a selection bias, or perhaps it is intentional manipulation. In fact, the authors of the analysis conclude: "This industry appears to be manipulating contemporary scientific processes to create controversy and benefit its own interests at the expense of public health."

Another familiar strategy –mentioned in the previous chapters– is to argue that science has not yet made a final decision on the subject, to ask for more research to be done even when there is already an extremely solid scientific consensus. In these two ways, doubt is generated where there is none. When this is reinforced with aggressive marketing, lobbying to affect regulations and biased information spread by the media, we have a whole structure of intentional distortion promoting the idea that we still cannot be sure even though, considering the evidence, we can say that we are.2 More on doubt as a product in Chapter X on tobacco.


Another way to contribute to the generation of post-truth is to introduce certainties not supported by evidence or directly contradicting the available evidence. We see this when some groups interested in selling a product or promoting an idea resort to various strategies to confuse, to disseminate as true positions regarding which we cannot be certain, at least not yet. 

Pharmaceutical companies manufacture drugs and understandably seek to make money from selling them: this is typical of any manufacturing company. But we must be alert to certain practices. This phenomenon, which we will discuss in the case of pharmaceutical companies –collectively, Big Pharma– illustrates mechanisms and strategies that can also be observed in many other areas where post-truth is produced for the benefit of a group. If the product that a pharmaceutical company manufactures is one of several that have similar effects, but the company distorts information so that its product is presented as the perfect solution, it is generating certainty where there is none. Perhaps the drug is not even that effective, but it is presented as a panacea. This is another way to confuse, to generate an intentional post-truth: there should be doubt, but certainties are imposed, and the truth is lost, obscured.

Imagine for a moment a physician who treats patients every day. She intends to practice evidence-based medicine, but, in fact, she has neither the time nor the financial incentive to devote dozens of hours per week to reading all the new papers in her specialized field; it would in any case be impossible for her to keep fully up to date. Having finished her studies, in order to keep updating her knowledge she attends conferences, which are often organized and financed by the pharmaceutical companies, who set the topics and choose the speakers.

From the pharmaceutical companies' point of view, the incentive lies in trying to get their products chosen by doctors when prescribing drugs to their patients. To this end, they advertise in specialized journals and at conferences, send sales representatives to doctors' offices to "explain" the advantages of their products, and offer the doctors gifts or benefits. Of course, each physician can ignore all this and decide based on his or her knowledge and experience (which is another type of bias), but the influence of these practices on their final decision can't be denied.

It is very easy –dangerously so– to completely distrust pharmaceutical companies and physicians, but we have mentioned many times the risks of unreasonable distrust. We need effective drugs, and we need physicians who function as competent experts. The alternatives are no better. Therefore, we need to fight the simple and convenient temptation to distrust everything, and address two points. First, how do we find reliability in the midst of all these distortions? Second, are we in any way responsible for or complicit in some of these distortions?

The first point has to do with tasks that we have discussed: finding consensus, identifying conflicts of interest, etc. So let’s address the second point, which hits home and makes us look critically at our behavior. Companies target us, the consumers, and not only physicians, in their advertising. They do this so that we influence doctors when they prescribe a drug for us. In fact, many countries prohibit this kind of advertising for this very reason. As in any advertising, even if they do not lie, they exaggerate the benefits and obscure the inevitable adverse effects of every drug. 

Whether this type of patient-directed advertising actually leads to higher prescriptions for a drug has been the subject of studies. In his book Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients, physician Ben Goldacre describes one: “Trained actors, posing as depressed patients, were sent to visit doctors in three American cities (three hundred visits in all). They all gave the same background story, about the problems they were having with low mood, and then were randomly assigned to act in one of three ways at the end of the consultation: to ask for a specific named drug; to ask for ‘medicine that might help’; or to make no specific request. Those who did what adverts drive patients to do — ask for a specific drug, or ‘medicine’ — were twice as likely to receive a prescription for an antidepressant pill”. This places the responsibility, and also the possibility of change, back on us. 

Drugs, like many other products, are protected by patents, which seek to ensure that the generators of new knowledge receive an economic reward for it. These patents expire at some point, and thereafter any laboratory can manufacture the drug without having to pay its creator. Once the patent expires, drugs tend to become cheaper –among other things, because the new manufacturers do not have to pay research and development costs. For example, when its patent on omeprazole, a drug that reduces stomach acid, expired, AstraZeneca introduced a new drug called esomeprazole. Esomeprazole was clinically almost identical, but because it was protected by a patent, it was also more expensive. The company helped the transition from one drug to the other by decreasing advertising for the former and increasing it for the latter. For patients and health systems, the situation worsened: they had a drug that was just as effective as the previous one, but more expensive. For the company, clearly, everything had improved.

Another strategy of the pharmaceutical industry is to get some diseases or conditions perceived as especially significant, and then to introduce the drug that addresses them. This may have been the case with social anxiety disorder, which in 1980 was diagnosed in 1-2% of the population and, soon after, in 13%. In 1999, the company GSK obtained the license to sell paroxetine, a drug that treats this disorder, and advertised it aggressively. If we are told that we have a disease that explains what is wrong with us and we are offered a drug that cures or controls it, wouldn't we pressure our doctors to prescribe it? All these are strategies that target us, that hack the very mechanisms that make us collaborate in the unintentional generation of post-truth in order to achieve, in these cases, the acceptance and propagation of intentional post-truth.



There are other situations in which the aim is to distort the information that the public gets, and a great example, particularly in politics, is propaganda. In this case, the goal is to generate a certain perception in public opinion and trigger a certain response from society. 

There are several ways to achieve this. Often, those who produce propaganda intentionally seek to polarize discussions by circulating pieces of communication with extremist positions, based more on emotion than on facts, which is precisely why they elicit a response. Sometimes polarization does reflect genuinely great differences. Yet it also feeds on tribalism and, as a consequence, creates more tribalism. This makes highly motivated people fight for what they believe is the right thing to do, which has its positive side. But this phenomenon also plays a part in the fragmentation of reality into small pseudorealities, private worlds, fictions that each group keeps alive. This contributes to the loss of links between people and hinders the dialogue necessary to achieve acceptable and stable solutions consistent with the reality of the shared world. Moreover, hyperpolarized content is usually highly distorted, and when it is replicated, it becomes even more so. This is also true for fake news.

Another strategy is to wear us out, to create an environment in which it becomes impossible, or too costly, to fight all the battles at once. As Garry Kasparov argues: "The point of modern propaganda isn't only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth." After we have spent hours of attention disposing of ten lies, we may be too tired to keep fighting when we hear the eleventh.


Propaganda devices are often aimed at taking advantage not only of our shortcomings in identifying manipulated information, but also of the tools and algorithms of search engines and social networks. 

A recent instance was the 2016 U.S. presidential election, where there was an abundance of fake news and even Russian interference in the content seen by the various tribes. This Russian interference was carried out through bots, software that spreads messages on social networks appearing to be from real people. 

But exploiting search engines and social networks as advertising tools is no easy task. In recent times, some companies have begun to do this on a large scale. Of course, they do not do it openly, but their clients often include political parties in various countries, which try to influence the electorate before an election.

A big scandal in this regard occurred when it became known that the company Cambridge Analytica offered this service and had used information about users of the Facebook platform. The scandal was twofold. Firstly, it became clear that these professional services of intentional distortion of information not only exist, but have been active for some time (Cambridge Analytica, Palantir and other companies). Secondly, it became clear that the information about us held by social networks, based on what we do or don't do on the platform, is an exchangeable commodity, has value and can therefore be bought and sold. 

From the information available from Facebook, Cambridge Analytica could know with some confidence which users were likely voters for each of the presidential candidates in the 2016 US elections, or for each of the positions (remain or leave) in the United Kingdom's Brexit referendum on European Union membership. On that basis, it chose which messages to deliver to them in order to get them to vote for the result it had been hired to deliver.


It has long been known that controlling what information gets to whom, and how, is often a fabulous tool for modulating public opinion. But the effectiveness of information manipulation depends on several factors mentioned in section 2: our beliefs, our tribalism, our biases, and the difficulty of distinguishing competent experts from false experts.

Intuitively, it is often considered that profit-seeking groups generate this distorted information and then spread it among us, who innocently fall into their traps. But researcher Dan Kahan posits another model. He argues that it is not that we are gullible, but that we are motivated a priori in a certain direction, and whoever identifies that motivation can use us to their advantage. 

Our tribalism is one of the main factors that make the manipulation of information successful. None of this is new, but the Internet, and in particular social networks, have made it much easier for us to meet members of our tribe whom we do not know, who do not live in our city, who may be in other countries and speak other languages. These virtual communities are fertile ground for those who seek to manipulate us by directing at us a message prepared for us to accept, one that leverages our individual biases and those of our tribe. If we hold a conspiracy theory or a denialist belief, we will immediately find some site on the Internet with the same narrative. It's there, for sure, just waiting for us to find it. All it takes is for someone to want to mislead us in the direction we want to be misled for us to accept their explanation.


For an open society to function, we need something we take for granted and whose importance we only notice –as with so many things we take for granted– when we lose it: a shared reality, a set of facts that we do not dispute. On that firm foundation, we can deliberate, identify our agreements and disagreements, and find the political and social consensus that allows us to live together. The first victim of post-truth is that shared reality. Post-truth fragments it: we lose both the past that unites us and the future we would have together, and each group, each tribe, moves within its own bubble. This fragmentation breaks links, and with them the access to information that could challenge our positions. If we do not see that we are all in this fight together, that we need to behave as a single tribe, we will continue to drift further and further apart and, consequently, lose the ability to have meaningful conversations about how to manage our common world.

Some may think that they would be perfectly capable of living without that other tribe, whatever it is: the tribe they detest, the one they consider a threat to everyone else. Of course, there may be extremist positions that we cannot accept. But if that is not the case –if we are thinking of tribes with which we disagree about values, traditions, or other issues that do not belong to the realm of the factual, where there is no truth or post-truth– then we will need to learn to coexist.

Letting post-truth grow not only threatens the inner bonds of our human family. It is also fertile ground for totalitarianism. As Hannah Arendt wrote in The Origins of Totalitarianism: "The ideal subject of totalitarian rule is not the convinced Nazi or the dedicated Communist, but people for whom the distinction between fact and fiction (i.e. the reality of experience) and the distinction between true and false (i.e. the standards of thought) no longer exists."

It is not only political processes that are threatened; so is public health, broadly understood. What we saw in the previous chapters shows that, at least on some occasions, some industries or groups are capable of intentionally distorting our reality. Identifying and combating post-truth becomes, then, literally a fight for our survival.

There is also another danger, perhaps less tragic but more real, constant and gradual: the erosion of credibility and trust, drop by drop, slowly hollowing out the stone.

When attempts to manipulate public opinion by some power groups come to light, one of the collateral damages is that all politicians lose credibility in our eyes, and so do all industries, all media, all institutions and all scientists. 

Let us not forget that for every case of manipulation we hear about, there are a hundred more we do not. This is very serious because, alongside those who try to manipulate and confuse us for their own benefit, there are many others who do not. Not only that: as experts in their fields, those others are useful allies in combating post-truth. While mistrust is sometimes justified, generalized mistrust is harmful.

Something else may also happen: post-truth generates post-truth; it is a vicious circle. Faced with the feeling that everyone is lying, we think we can trust nothing and no one. So we follow our intuition, immersed in all our biases and distortions. This, of course, makes the problem worse. When nothing seems to make a difference, our excessive distrust –or even our weariness or disinterest in finding out the truth– can contribute to creating more post-truth.



Intentional manipulation of information seems really dangerous. But to what extent does it actually succeed in changing perceptions and achieving its intended results? The tobacco industry and some others were very successful. But how replicable is this? Is there really a set of intentional post-truth strategies that guarantees success? It is still unclear whether bots, fake news and the other tools of information distortion on social networks are really as influential as their own creators claim. Cambridge Analytica promised its clients very concrete results, but, by the looks of it, they were not as effective as claimed. Perhaps we overestimate the power of these tools and surrender to the conspiratorial idea that there is someone behind the scenes controlling everything, when the only thing we can –and should– be aware of is that someone is certainly trying, and that we are both potential victims and potential unwitting victimizers. Likewise, even if these actors do not yet manage to turn public opinion in the direction they want very effectively, we cannot rule out that they will get better in time.

Political scientist Brendan Nyhan believes that the real influence of all these disinformation machines is being exaggerated, and he calls for evidence to settle the question: "None of these findings indicate that fake news and bots aren't worrisome signs for American democracy. They can mislead and polarize citizens, undermine trust in the media, and distort the content of public debate. But those who want to combat online misinformation should take steps based on evidence and data, not hype or speculation."3 Available online at: Almost paradoxically, the effectiveness of the strategy of generating network-based post-truth to change public opinion is still part of the very post-truth that these companies sell.


Even though there is no agreement on how relevant these manipulations are on a day-to-day basis, possible solutions are being considered, and we could separate them into two diametrically opposed approaches. One, more paternalistic, proposes to regulate the media and penalize them if they spread fake news. Facebook and other companies are testing tools to detect fake news and block its distribution within the platform. There are also fact-checking organizations working to distinguish fake news from real news. Some media outlets are trying to fact-check their news better –something we all took for granted they were doing– and there seems to be more interest from readers in the trustworthiness of the media. But the reality is that the tools for distorting and concealing information are sometimes so sophisticated that it becomes very difficult to know what is real and what is not. Many power games are played simultaneously, and journalism is simply not able to check everything in real time: either it lacks the resources –in time, money or contacts– to do this task well, or its motivation is not the truth but the click, because whoever is not first to report the news gets no value from it. Today, journalism has a strong incentive to give the scoop, a high incentive to save resources, and a low incentive not to be wrong. The only way out of this cycle is for us, as users, to let them know that they must also have a high incentive to be credible –that we actively oppose click-driven media and professionals and reward credibility and consistency.

Another, more libertarian, view eyes the above measures with concern. Quis custodiet ipsos custodes? Who will guard the guards themselves? What if the means of checking post-truth are taken over by the post-truth generators and become mere instruments of censorship? Regulation would leave the control of information outside the realm of the citizenry, in the hands of power groups that also have their own agendas. This position holds that the very principle of establishing regulations rests on the somewhat abstract idea that whoever implements them stands outside the game of interests and possesses an objective "post-truth meter" allowing them to distinguish unerringly what is true from what is false. Alternatively, tools and guidelines are proposed for citizens themselves to assess whether news is reliable. The media often align themselves with this vision, as they do not wish to be regulated and placed under the control of others.

Social networks are also trying to reduce the number of bots and fake accounts by certifying that the person writing is a real person with a first and last name. But this also presents problems, because along the way the possibility of participating anonymously is lost, something that some consider an inalienable right and which, in some countries where the state controls content or in societies where dissidence is punishable by prison or death, means that citizens lose the possibility of expressing themselves. 

There is another point to consider in relation to the idea of regulating social networks such as Facebook: in some countries, this is the way in which most citizens are informed and communicate. Regulating this platform would be tantamount to regulating the Internet, and it is not entirely clear whether this is desirable, let alone even possible. If history is any guide, it probably is not. 

The truth is that, nowadays, spreading false news, whether intentionally or accidentally, is not usually penalized, which means there is not much incentive to change this practice. When the media get something wrong, the retraction of the false news appears, with luck, a few days later, and with much smaller exposure than the original story. The process is still not very transparent, and nobody ever stopped reading a newspaper because it published a false story, especially if they agreed with it.


In the fight against post-truth, one of our main enemies is our own tiredness, the exhaustion that comes from being alert all the time, that discouraging "cognitive fatigue". But if we intend to be informed and independent people, we need to analyze whether there may be intentional adulteration of information. Failing to do this leaves us in a more complicated place, one where each group manages its "group reality" and where the "common reality" is that of the most powerful group, which imposes its own reality on the others by shouting the loudest.

In fact, sometimes we spread fake news unwittingly. But some do it knowingly, and in that case part of the intention is to "invade someone else's territory", to show power, to try to control the narrative, even when it is false. We cannot look the other way when this happens.

And here there is a difficult problem to solve. Some will want to and will be able to fight in this battle to find out the truth. Others will not. It is possible that some, immersed in the worries of their daily lives, have no time left to deal with these matters, and we must understand them. Others, too, may exclude themselves from this struggle, feeling that they are unable to understand the subtleties and to distinguish the real from the fictitious. But fighting for truth is not an elitist action, quite the contrary. Let us think of ourselves as a single tribe, as one big human family. Just as some members of the tribe specialize in one task, others specialize in another. If we work for the common good, we will take better care of each other. Everyone has something to contribute. 

Let us add, then, new tools to our toolbox, tools that will help us, together with the ones we have considered, to identify manipulated information.

With this chapter we conclude section 3, devoted to the analysis of fraudulent post-truth. Having dissected the problem, we turn to more specific questions: What can each one of us do to fight for the truth? What can communicators or decision-makers do? Is there hope?



Where is the power? What are its motivations? 
Are we challenging the narrative that we receive with an attitude of healthy skepticism? Is our trust excessive? Is our mistrust excessive?