The paths of information

MEDIA, NETWORKS AND ECHO CHAMBERS

FROM DATA TO INFORMATION 

In any society, different ideas arise as to which issues are problematic and which are not, which are a priority and what should be done about them. Public debate, which may be more or less vigorous but always exists, centers on that discussion and is one of the linchpins of any democratic society. But if we isolate ourselves in different tribes and each one operates on a different set of "facts", we have something new: the very possibility of debate disappears, because there is nothing to talk about if we do not even agree on what is going on, on what the facts are. Facts and opinions become confused. Only if we generally agree on what the facts are can we discuss them, whatever our ideological differences. But if we confuse facts with opinions, the conversation becomes one about narratives, because narratives are all there is.

And here let me return to something that perhaps has more to do with my personal worldview: if we do not manage to live together as a human tribe in a reality shared by all, what we do is live fragmented within illusions that only look like reality. This not only threatens democratic life, but also any possible bonds among humans.

In post-truth, the role of information is crucial. Information is a commodity that we generate, receive, assimilate and propagate constantly in our daily lives, and each of these four stages can separately contribute to creating unintentional post-truth.

To begin with, how is information generated? We have already discussed how knowledge is generated, but it must then be communicated so that, at some point, it can be made available to the rest of society. Here we will analyze how scientific disciplines disseminate knowledge. Not so much because science itself is our focus in this context (after all, a lot of post-truth arises around non-scientific topics), but because it lets us watch a process of knowledge validation and information generation in action.

IN THE BEGINNING, THERE WAS THE PAPER 

Scientific knowledge is an intricate and beautiful edifice that is always progressing, but science is made by people like you and me, who are concerned about the advancement of science, but also about how to get the next promotion, pay the bills and pick up the kids from school without perishing in the attempt. 

But if science is made by ordinary people, and people can make mistakes, do self-serving things, or fail to see the obvious, then why do we give scientific knowledge such an important role? Oddly enough, to answer that question we have to answer another: how is scientific research done? Understanding this helps us understand why, how and to what extent we trust what science says. 

Scientific research has quite particular rules, some explicit and others tacit, known to those who come (as we do) from that world. For example, today's scientific community, made up of professionals who have careers, submit reports and deal with labor issues, would be unrecognizable to a Victorian or Renaissance scientist. But it is also true that researchers are still trained today under the guidance of already-trained researchers, in a master-and-disciple style that would be perfectly recognizable to them, or even to a medieval scholar.

An important part of scientific research is the literature review: reading everything available on the topic and understanding what is known and what is not. But the most creative part is designing, carrying out and interpreting careful observations or experiments that address particular questions whose answers are not yet known. The training of a professional scientist is not just about studying subjects and passing exams. What is central is learning how to think about questions and devise ways to answer them, and learning what is known and what is not known about the subject at hand (and, less sexy but no less necessary, learning the technical skills needed in the lab or in the field).

Scientific activity is at the same time deeply collaborative and deeply competitive. Not only is there collaboration within each research group (the more expert ones help the less expert ones, both in technical and intellectual matters), but it is also common for different groups to collaborate with each other by exchanging experiences, suggesting ideas or contributing knowledge or technical resources, for example. Thus, collaboration becomes, spatially, global. But, in a sense, there is also temporal collaboration with the scientists of the past and with those of the future. An idea that was speculative in 1940 becomes orthodox thirty years later, when someone discovers new evidence. Novelties illuminate, or are illuminated by, the ideas and experiments of the past, even when they serve to show that they were wrong. 

Isaac Newton, echoing medieval authors, said that we are dwarfs who see far because we stand on the shoulders of giants. Legend has it that he said this to mock Robert Hooke, who was short and hunchbacked. And the wonderful thing about history and legend is that together they show us both things: on one side, the edifice of science, and right next to it, the builders, who are brilliant, envious, generous and stubborn; in other words, human.

Scientific activity is also very demanding. Experimental results must not only be correct but also replicable, i.e., if others repeat the same steps, they should obtain the same results. And since science is, in general, public, those results are usually made public at conferences or in specialized journals, in articles known as papers. This is how information is generated: we look at the past, acquire skills, ask questions, obtain answers and expose them for all to see. But that requires a very careful, even merciless, review of one's own results: years of research can be demolished by an error, and that is why no one wants to go through the "trouble" of being found to have made a mistake after publishing. Of course, this is not always achieved. Sometimes it is very complex to repeat an experiment under exactly the original conditions, and, in some areas, small variations in a reagent or procedure can make all the difference, as in the story of Jekyll and Hyde. Other times, the problem is so complex that too many of its aspects go uncontrolled, or the samples are so scarce or inaccessible that their number is too small and, therefore, the conclusion is statistically dubious.

FALLIBILITY 

Replicability issues occur for various reasons related, in large part, to the fact that the system of rewards and punishments in which scientists are immersed promotes certain behaviors, sometimes intentional and sometimes not, that lead to research that is not as solid as it could be. And, since scientists are people, they are susceptible to the same traps that we all fall into and that we discussed in previous chapters, such as tribalism or confirmation bias. In addition, research must be funded by someone, usually public and sometimes private institutions. This means that not everything has the same probability of being investigated, which introduces another bias in what knowledge is generated and what knowledge is not (more on this in Chapter VI).

We mentioned publishing in journals and conferences. How is it decided what gets published and what does not? Even with all the precautions, the evaluation of the quality of work through peer review does not guarantee valid results. Thus, mediocre and even fraudulent papers have been published. In such cases, the authors or other scientists may ask for the paper to be retracted, which means that a notice is published stating that it should no longer be considered. The journal itself may also decide to do so if it learns (usually from the criticism of other scientists) that the paper had serious flaws. In practice, both in print and online publications, the paper may not "disappear", but a clear signal is added that it is no longer valid.

Everything can fail, always. But what is the alternative? Not doing peer review would imply that people who are not experts in the field decide, with arbitrary criteria, whether to accept or reject papers. This system may fail, but peer review is still considered, at least for now, better than the alternatives. Once published in a specialized journal, this new scientific knowledge is available to the rest of the community, to the peers. Of course, it is also available to all of us, although technical language and our lack of expertise in the specific topics make that knowledge generally inaccessible to the general public. But, once it is there, it can be taken on by communicators capable of telling what it consists of and what its relevance is in a way that does not require too much specific knowledge. 

The publication process is an excellent means of making what we know public and subjecting it to the criticism of others. However, it is not so straightforward and simple. This new information generated is not a true reflection of all the knowledge available. In addition to all the biases and problems of the authors of the papers and the research itself, the process introduces even more biases. What gets published is a subset of the total papers. Distortions appear, a mirage of reality, and this is fertile ground for post-truth. 

For example, a paper that is excellent but gives negative results, i.e., something like "we thought that this drug was going to improve patients with this disease, but in the end it did not, and we still do not know how to improve them" has little chance of being published. And that is a problem, because other scientists would need this information for their research, both so as not to redo things that others have already done and because failures are very useful for learning. But there is simply not that much interest from journals in publishing negative results. 

This is even more serious when what is at issue is not the publication of scientific papers but the analysis of the efficacy of new drugs or treatments. When testing new drugs, pharmaceutical companies do not usually report the trials that fail to give positive results. To combat this, several initiatives have arisen, such as the AllTrials campaign (mentioned in Chapter III), which aims to have absolutely all clinical trials in the world registered in a publicly accessible database, indicating the methodology used and the results obtained.

IT'S NOT ALL BAD 

All this may lead us to think that the "pillars of science" are not as strong as we are led to believe. It seems that conditions are such that what science generates is "bad science" or, at the very least, "mediocre science", and its claims the mere product of interests and mistakes. But although science, as a human activity, has flaws, the very fact that it is a public activity subject to criticism helps to correct these biases. Results continue to be tested, fraudulent publications are usually identified sooner or later, and the career of a scientist who is caught lying is automatically destroyed. In a sense, errors in science show that something is right: we know something is wrong because we have the mechanisms to find out, something that is not true of every human activity.

At the moment, there is much discussion about promoting greater transparency in research and in the communication of results. The problems of incentives for scientists and for specialized publications are also being discussed. Of course, there is still a long way to go: the system will never be completely free of problems of this kind, and new ones will keep arising.

In these circumstances, should we doubt science? Doubting is always good. Distrusting everything is not. The difference between the two is that, in the first case, we are willing to analyze what we are told actively and critically, while, in the second, we simply surrender to complexity. To fight post-truth and win, we will sometimes have to accept that the world is complicated, our tools for understanding it imperfect, and that, even so, knowing is possible. If scientific results are not as robust as we believe, the same methodology of science can improve them. We must know that lies have legs and walk, but also that those legs are short. 

This mechanism of information generation that we have seen is not used in other fields, which have different ways of validating and disseminating knowledge. Sometimes, instead of peer review, what is considered a "standard of quality" is to achieve publication in sites that are particularly demanding about the material they accept, or to obtain special recognition from other experts in the field or from recognized organizations. However the new information has been generated, it must now leave the experts' space and reach the rest of us.

SUPPLY AND DEMAND 

Generally, information comes to us through many sources: the media, books, websites, communicators, social networks or, simply, our circle of close and trusted people. 

Communicating well is very difficult. On the one hand, one must be versed in the field, understand aspects of scientific evidence such as its relevance and reliability, recognize if and where there is consensus, and detect the biases (of the author or the platform), errors or even fraud that may be present. In addition, and fundamentally, one must be able to recognize who the competent experts are. But, beyond all this, a communicator needs a totally different set of skills: telling a story that adequately reflects the original findings, so that people who lack very specific knowledge can come away with a representative idea of what was done and why it is relevant. All of this must take into account the audience and how their experiences and interests may influence their understanding of the subject matter. Professional communicators must also be aware of the traps of the Dunning-Kruger effect and keep in mind that the expert does not know that others do not know, and that non-experts think they know. That is the expertise of a communicator: to know and keep all this in mind in order to bridge the gap between two worlds without being either hermetic or trivial.

PROFESSIONALS AND CLICK COUNTERS 

But, of course, the communication process itself also introduces biases and distortions. The communication industry has its own success metrics which, depending on the field, are copies sold, ratings, clicks on the web or interactions on social networks. To score well on those metrics, one more skill is required: identifying the topics that "sell" better than others and knowing how to explain them in a compelling way. Some styles are more appealing than others: there is often an attempt to exalt knowledge, to make it more exciting or relevant than it really is. If a newspaper headlines a story "Studies show that eating chocolate every night before bed makes us smarter", it will get many more clicks than if the headline reads "The IQ of 78 subjects who ate 3 oz of chocolate daily for a week was measured to have increased by 2%, and the probability that this is a real effect and not a mere statistical artifact is 95%, but no one has replicated the experiment yet".

Besides, a new source of distortion contributes to the unintentional appearance of post-truth by generating a mirage of reality. As David McRaney says, "If you see lots of shark attacks in the news, you think, 'Gosh, sharks are out of control.' What you should think is 'Gosh, the news loves to cover shark attacks.'"

In the case of science, which is not always useful, fun or familiar, many communicators emphasize precisely these aspects to show its relevance and attract the attention of the audience. This is not "wrong", of course: it is a useful tool for bringing science closer to more people. But it is a factor that can distort what is reported, because it introduces a selection bias: some topics and approaches end up being showcased more than others, and they are not necessarily the most relevant ones or the best science.

Sometimes an issue is even presented as controversial when it is not, or the controversy is greatly exaggerated. Part of the press succumbs to these somewhat perverse incentives and seeks to attract its audience through expressions that appeal to emotion and surprise ("controversial!", "miraculous!", "against all odds!"). It would be interesting for the media to review this practice of "click-seeking", which promotes exaggerated stories that swing between overstated certainty and overstated doubt. Of course, the change in the communication model of the last few years has hit the media hard, and they have not yet found a balance that would let them attract a good audience for news that includes context, evidence and reliability. There is also the question of what we, as an audience, are doing to reward media that consistently report in a relevant and truthful manner, and how much we feed the click-seeking cycle. After all, it is not the media that consume junk news.

Another factor that makes a news item "publishable", and that contributes to distorting which information reaches us, is that sudden and unusual events attract attention: an earthquake, an accident. If 30 people die when a building collapses, the media will cover the story, explain the facts and look for culprits. If 100,000 people do not die because they were vaccinated, live in better sanitary conditions and received adequate preventive medical care, that is not published anywhere. Averting a disaster gets fewer clicks than failing to avert it.

All this gives information the same survival bias that we already know, in this case applied to information itself: what we see is a slice of what is actually there, and that slice is inclined toward the facts that are most striking and attractive to the public, often distorted to appear even more striking and attractive than they really are. Thus relevance, and often truth, can fall by the wayside. As when we discussed survival bias before, we need to ask ourselves what we are not seeing, what was left out of the picture. Not only the topics that don't "captivate" as much; perhaps, in order to protect a certain narrative, data that doesn't fit it is also being left out.

It is common, and generally considered good journalistic practice, for a media outlet to represent the different positions on an issue as a way of showing impartiality. But not all controversies are productive. For there to be a controversy between two opinions, both must have some degree of foundation. No one would think of publishing a controversy about kung-fu techniques between Jet Li and Albert Einstein. But that is precisely what some media repeatedly do when one position is based on evidence, while the other is nothing more than an opinion supported by one's own desires, imagination or beliefs. We do not have a real controversy, so the issue should not be presented by the media as a conflict between two equivalent points of view. We cannot equate knowledge based on evidence with mere opinion based on a personal appreciation that, intentionally or through ignorance or incapacity, may not be correctly assessing this evidence. In these cases, a false balance is generated that contributes to post-truth by giving the same weight to something that has evidence behind it and to something that does not. 

Someone might argue that the media can present issues as they wish, and that it is up to the audience to assess the reliability of the relevant information. While this is true for individuals (after all, we are all entitled to our opinions), the media occupy a privileged position, and that privilege should imply certain obligations regarding the veracity of what is published. When a media outlet reflects the supposed two sides of an argument where only one is based on evidence, it is helping to distort the issue before society. And this is not just an opinion: science allows us to test whether the media influence public perception when they do this, and research shows that this is not a vague concern but a very real danger (see Koehler, D. J. (2016). Can journalistic false balance distort public perception of consensus in expert opinion?, Journal of Experimental Psychology: Applied, 22(1): 24-38). As far as we know, the way the media portray an issue can greatly influence public perception. Even if they go out of their way to clarify that one side of an argument is supported by scientific evidence and the other is not, what the public effectively remembers afterwards is that the non-evidence-supported side, the one that is just an opinion, is a valid alternative worthy of consideration on a par with the other. Evidence indicates that myths about a subject remain in people's minds even after they are given the correct information. Therefore, something that may seem harmless, such as a television panel giving hope and claiming to know miracle cures for cancer, can be dangerous by contributing to members of the audience deciding to abandon proven treatments.

Let's imagine two papers are published: one tells the discovery of a detail that explains why ultraviolet radiation causes skin cancer, and the other argues that solar radiation is harmless. Given that the consensus is very strong in considering that UV rays damage the skin and cause cancer, from a scientific point of view the first paper is much more reliable than the second, which perhaps yields this result because it has some problem of methodological design or interpretation. However, the second would possibly be more "publishable" than the first, at least in some media, because it attracts attention precisely because it goes against the consensus (and in favor of what people who love to sunbathe would like to hear). This is where journalistic ethics come in. We need to ask ourselves whether or not the second paper deserves to be mentioned. Although it may be more attractive, it could contribute to distorting the consensus in the eyes of society. 

As if all this were not enough, professional journalism, particularly on health, sometimes ends up creating more distrust because it seems to contradict itself: one day it says, for example, that drinking coffee prevents cancer, and the next day, that it causes it (there are many contradictory results on the relationship between coffee drinking and cancer, but, apparently, the balance leans very slightly towards prevention; see Yu, X. et al. (2011). Coffee consumption and risk of cancers: a meta-analysis of cohort studies, BMC Cancer, 11:96). Are they lying? Not necessarily. Maybe each story was based on particular research with replicability issues, or it failed to mention that the research was done, for example, on laboratory mice and not on human beings. Still, it is not journalism that loses credibility here, but science. This adds to post-truth, which feeds people's distrust, which, in turn, causes more post-truth, in a vicious circle that harms public discourse and perhaps also the media. Because, although journalistic misbehavior is generally not penalized, in the long run the accumulation of contradictions erodes the credibility of the professional media: they are the experts in communication, and we no longer trust them. If the media act as if all positions on factual issues were uncritically equivalent, why would we bother to read the mainstream press rather than inform ourselves through gossip or Twitter?

Unintentional post-truth feeds on all these distortions that modulate which information reaches us and how. Irrational doubt is sown, or too much certainty is attributed without justification. All this is distracting. One of the easiest ways to distract is to provide "fun" or "flashy" content. We see this in all fields, and particularly in politics. Many politicians generate "scandalous news" as a way of focusing attention on those issues to the detriment of others, and the media disseminate them because they generate headlines that sell. These politicians operate in a similar way to trolls in social networks: they focus on irrelevant and trivial topics, sometimes through offensive or "politically incorrect" comments, in order to arouse an exaggerated response from the media and the population. 

Communicating information accurately, balancing all these conflicting tensions, is a very complex task. One has to convey evidence in such a way that non-experts can understand it. What they need is not the details of the research but what it means and how reliable it is. Perhaps one of the areas most in need of improvement is the communication of uncertainty. Research does not initially produce statements of total certainty, because total certainty does not exist. But by the time a statement reaches the public, it has often become a concrete, incontrovertible fact. The original uncertainty dies along the way.

Personally, being moderately aware that knowledge comes with a "halo" of uncertainty when it is generated, I find that exaggeratedly forceful communication, which presents a fact in a binary, all-or-nothing way, actually makes me distrustful; it makes me reject it. I don't know whether that reaction is right either.

To minimize post-truth, communication should say more about how knowledge was achieved and how much uncertainty surrounds it, including questions about the nature of science and not only its content. Also, in addition to the end result, it should explain how the knowledge was arrived at. This applies to all factual issues, not just scientific ones. An unemployment figure, a voting intention survey, an analysis of the economic impact of a certain policy: in all these cases the information will not tell us much unless it comes with context explaining how it was obtained and how uncertain it is. That way we could better judge what value to place on a given statement. Some have managed to convey information without losing the qualities that make a news story "sell". A good example is Hans Rosling, who communicated statistical information in an engaging way, making it clear that it was factual and, moreover, incorporating uncertainty.

We need good journalism, good professional communication, and we need, as active members of the public who are determined to break the cycle of junk information, to reward good behavior. As in information generation, possible solutions are also being tried in information transmission. For example, there is a call not to publish news unless it is relevant, to look more closely at the evidence behind it, to make papers and academic documents more accessible to journalists. Training is also being offered to journalists to help them distinguish "good" science from "bad" science. And many media are trying, for now with more good will than success, to provide data-driven journalism. 

The information that is generated in the form of academic papers is not a perfect reflection of all new knowledge. Of this information, only a part reaches us all, and the part that does reach us may, in turn, be distorted to a greater or lesser extent. We do not consider here another factor that may play a part: these distortions may be generated deliberately, driven by the interest of those involved. We shall discuss that later. So far, we are analyzing unintentional post-truth, the one that stems from carelessness, from turning a blind eye, from being entertained by the news, for example. 

Now, of all the information we get, which do we take into account and which do we disregard? And another delicate issue: to what extent can our choices regarding which media to consume or how we get informed be distorting the information we get even more? That is where we enter the equation, and that is a sign of additional complications.

IF IT'S ON TV, IT MUST BE TRUE. 

UNDER SIEGE 

If we were in the business of selling advertising, whether for toothpaste or for politicians, we would tell our clients that we are capable of selling anything. Implicitly, we would be proposing a theory of communication: we send a message, and the public receives it. But experience and research show that this is not the case. We are not antennas that passively receive all the information that is broadcast. Rather, we are radios that tune in to certain frequencies and ignore others. We select the media that we want as sources. And, within those media, we also select messages, considering some more credible and reliable than others.

We have to choose, because there is so much information available that it is impossible to keep up to date. Also, it makes sense to think that we will choose those sources of information that, for some reason, we find more reliable. But here something we have discussed before reappears: we all think we are being rational and we are not aware when we are not. Are we sure that we select our sources of information because of their quality? Are we not selecting those that align with our beliefs and values or with our confirmation bias? Are we not preferring the sources that our tribes consider reliable? Are we not confusing experts with false experts? 

In fact, a recent analysis shows that, in the United States, people judge the trustworthiness of different media very differently depending on whether they are Democrats or Republicans (more on this at https://www.knightfoundation.org/reports/perceived-accuracy-and-bias-in-the-news-media), which suggests that we first choose a media outlet according to whether it agrees with our position, and then justify the choice "rationally".

After choosing certain sources, don't we then keep the information we find most interesting and ignore other information that may not interest us but would be valuable to know? Most probably yes. It is very tiring for our minds to be permanently receiving information, so we apply filters: we actively search for, and preferentially retain, what agrees with what we already thought, and we easily set aside information that contradicts our positions.

THE LIBRARY OF BABEL 

Imagine we enter a library with millions of books arranged on shelves and in aisles of shelves. Even if we manage to make out the spines of a few thousand books, and read a few hundred, we know that there are millions, and that what we manage to read is only a small part of what could be read. We know that, if we are asked to describe the library, our description, however accurate, will be limited and partial. But we can also be like the blind men in the Buddhist (or Hindu) legend who are in the presence of an elephant for the first time. The one who touches the trunk thinks it is a snake. The one who touches the ear thinks it is a fan. The one who touches the leg thinks it is a tree trunk. The one who touches the tusk thinks it is a spear. They are all partially right, but they are all wrong, and since they cannot see the extent of what they are missing, they cannot understand that they are wrong.

That is the Internet. Clearly, having immediate access to virtually all the knowledge that humanity has accumulated so far is a good thing. But there are some more complicated aspects to which we need to be alert. When searching for information in search engines like Google, Bing or Yahoo, biases appear. Every Internet search engine uses an algorithm to provide an answer: a program processes instructions and returns, in a particular order, a list of sites. What appears and what does not? The algorithm defines a list based on how relevant each site seems to be to what we are looking for, and this is not necessarily representative of all the information out there. 

So, just as we search for information in a biased way, driven by our motivations, algorithms add biases of their own. What appears first in the list is what gets the most clicks, and what gets the most clicks usually appears first, a positive feedback loop that makes the most popular sites more and more visible and the least popular ever more invisible. What we see, then, is not necessarily the most correct, the most informative or the most reliable source, but merely the most popular, filtered by the logic of an algorithm we do not know.
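
To get a feel for how strong that feedback loop can be, here is a minimal simulation. The assumptions are entirely ours (ten sites of identical quality, a ranking based purely on accumulated clicks, and users who scan results from the top down); it is a sketch of the dynamic, not of any real search engine:

```python
import random

# A toy "search engine" where ranking depends only on accumulated clicks.
random.seed(42)

NUM_SITES = 10
clicks = [1] * NUM_SITES  # every site starts out equal

for _ in range(10_000):  # simulate 10,000 searches
    # Rank sites by accumulated clicks, most-clicked first.
    ranking = sorted(range(NUM_SITES), key=lambda s: clicks[s], reverse=True)
    # The user scans the list from the top and clicks the first result
    # that catches their eye; position, not quality, drives attention.
    for site in ranking:
        if random.random() < 0.5:
            clicks[site] += 1
            break

print(sorted(clicks, reverse=True))
# Typically one or two sites end up with most of the clicks, even though
# all ten are identical in quality: popularity feeds on itself.
```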

As if this were not enough, algorithms learn from what we do or don't do: if we click on a site, it will then show it to us preferentially. This makes the results of something as innocent as an Internet search also distorted, as can be seen by doing the same search on different people's computers. There will be omissions and emphasis, and we need to keep this in mind so that we are not fooled into thinking that using a search engine means having everything available, without bias. 

These algorithms do their work invisibly, and we have the illusion that they are not there, that what the web shows us is a faithful reflection of what exists. It is an illusion, a new mirage. We are in the library but we choose to behave like the blind men in the legend, not because we have limitations (we always do), but because we act as if we did not. 

Algorithms are optimized for the click, not out of malice, but because we are too. This works because we are human: we prefer the simple and fast to the complex and slow; we prefer to have fun and be entertained rather than keep striving, since for striving we already have the rest of our daily lives. That is why gossip shows and magazines, entertainment, and panelists yelling at each other are so successful. And if there is anything uncomfortable, difficult and challenging, something that constantly questions us and demands effort, it is going against those preferences to actively seek the truth. The web, therefore, did not invent anything new, nor is it the cause of these evils. It is just something we built to give us what we want, and now we have to take responsibility for what we let happen. It is up to us to want something of higher quality.

POSITIONS BASED ON FACTS OR FACTS BASED ON POSITIONS? 

The ease with which Internet searches provide us with answers can be a double-edged sword. Used appropriately, they allow us to access knowledge, to learn, to get informed. But if what really motivates us is not the search for truth but the confirmation of what we already thought (which may be wrong, and may be influenced by our biases, irrational beliefs or emotions), we will always find something that tells us we are right. If we believe NASA is conspiring to hide that the Earth is flat, a couple of clicks will take us to sites that support this. The same goes for myths such as the supposed link between vaccines and autism, and so many other misconceptions.

This ease nurtures post-truth, because it makes it very difficult for us to realize that we are wrong. Asking a question in order to find out the answer is one thing. But let's be careful that we are not asking while already holding the answer, so that the Internet can point us to the sites that support it. Because if we do that, we think we are looking at the world when in fact we are looking in a mirror.

Somewhere in the middle are those who have no opinion or information on a subject and honestly seek answers. If we are not experts on the subject, there is another problem: it is very easy to get confused and fail to distinguish what is true from what is false. It is very easy to create thousands of pages with an opinion, but that does not make the opinion correct. It is as if we had a very poor library, with only one book, and to hide that fact we made thousands and thousands of copies. It looks like consensus because there is so much of it, but it is mere repetition. How can we defend ourselves? One possibility is to ask ourselves again the questions in the Survival Guides of previous chapters.

SOCIAL NETWORKS 

What happens with search engines happens even more with social networks such as Facebook, Twitter or Instagram. Social networks fight for our attention. Their business model requires us to be present and active on the network. Since they know well that we tend to stay longer if we like what we see, and that what we like is usually what tells us "we are right" (again, confirmation bias), that is what they give us. They also know perfectly well that if a piece of content arouses strong emotions in us, we will be more likely to interact with it by liking, commenting and sharing. A cocktail of post-truth in an increasingly distracted society. An ever brighter light pointing at the book we liked most, which makes us choose it again and again, until we confuse what we see with what is there.

By design, social networks feed on our biases and our tribalism and give us more of what we like most: if we find something interesting, they will show it to us more often, along with whatever our friends are interested in. The motivated reasoning we mentioned in previous chapters is at work again (see Chapter V). But, again, it is clear that companies will do what they need to do to be successful, and users will do what they need to do to feel satisfied, and the feedback between the two is extremely dangerous. If we do not ask ourselves why these platforms are successful, and do not force ourselves to analyze our own behavior, we will not be able to get out of our confinement.
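
As a toy illustration of that design, consider a feed ranker that scores posts by predicted engagement. Everything here is hypothetical (the fields, the weights, the example posts); it sketches the incentive, not any real platform's system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    emotional_intensity: float  # 0..1, how strongly the post provokes us
    agrees_with_user: float     # 0..1, how well it fits our prior views

def engagement_score(post: Post) -> float:
    # A ranker that rewards engagement ends up rewarding confirmation
    # and emotion, because that is what keeps us scrolling.
    # The weights are invented for illustration.
    return 0.6 * post.agrees_with_user + 0.4 * post.emotional_intensity

feed = [
    Post("Nuanced report with mixed evidence", 0.2, 0.5),
    Post("Outrageous claim you already believe", 0.9, 0.9),
    Post("Careful rebuttal of your position", 0.7, 0.1),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
# The confirming, emotional post rises to the top; the careful rebuttal
# of our views sinks to the bottom.
```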

In general, we prefer not to come across positions that challenge our own, as discussed above. Besides, we tend to use social networks for entertainment or to stay in contact with people we find interesting, and that interest usually rests on shared positions. If someone on a social network says something we strongly disagree with, we can select an option to stop seeing them.

And this also feeds back: the more we fine-tune what we see on the networks according to our interests, the more distorted our vision of reality becomes, and this is where post-truth comes in again. The mirage, the shared reality fragmented into portions (like the elephant) to the consumer's liking, into pieces that do not touch. Little by little we abandon the common world to inhabit a private one. We arrange our virtual habitat to our liking, with content that shows us how good and intelligent we are, and what we do not see, because we have expelled it from the habitat, is as if it did not exist. Social networks did not invent this phenomenon, but they made it much easier.

BUBBLES AND ECHO CHAMBERS 

Add this to our tendency to group ourselves into tribes and we have an explosive combination: we gradually isolate ourselves in "ideological bubbles" where we are exposed to the ideas of people who think as we do and cut off from the ideas of everyone else. We censor content that bothers us. Since our ideas never come into contact with those of others, we have no need to justify them, and since they are the only ones we see, we think they are the only ones possible. Thus, these ideological bubbles end up undermining the very possibility of talking about a shared reality.

It has been suggested that there is an attention economy: our attention is a valuable commodity, and companies compete for it. Social networks are an example, because they need us to spend time on them watching advertising, which is what they live off. As the saying goes, if something is free, we are the product. This sets off an open competition in which the different networks try to get us to choose them, and the result is that we are permanently bombarded with information that we have neither the time, the energy nor the interest to question too closely.

In this context it is predictable, and even healthy, that we filter what reaches us, that we take refuge in what threatens us less, what we feel we can control more. And that is the social networks themselves. 

GRADUAL POLARIZATION 

Social networks have become a very efficient way to inform ourselves, but also to send signals to our tribes, and they contribute to the polarization of positions. As we said before, if we talk only to people who think like us, our position becomes more extreme, we believe in a false consensus and we feel more confident (see Chapter VII).

This is due to two factors. First, we increasingly isolate ourselves in different echo chambers, where what is heard is what we or those like us emit or, as we will see in the next section, what others want us to hear because they understand what bait will get us hooked. And we create these echo chambers by selecting those who agree with us. Thus, our biases expand, our tribe's biases expand, and we all help with our behavior to generate a sense of consensus when perhaps there is none. We are deceived by mirages that do not reflect reality. 
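
A tiny simulation can make this mirage of consensus concrete. The numbers are entirely illustrative (a population split 50/50 on an issue, and a strong preference for following like-minded accounts):

```python
import random

random.seed(1)

# A population evenly split between two opinions on some issue.
population = [+1] * 500 + [-1] * 500

def sample_feed(my_opinion: int, homophily: float = 0.9, size: int = 100):
    """Build a feed by preferring accounts that agree with us."""
    feed = []
    while len(feed) < size:
        person = random.choice(population)
        # Always accept the like-minded; accept dissenters only rarely.
        if person == my_opinion or random.random() > homophily:
            feed.append(person)
    return feed

feed = sample_feed(+1)
agree = sum(1 for person in feed if person == +1)
print(f"Actual split: 50/50  |  My feed: {agree}% agree with me")
# With homophily at 0.9, roughly nine out of ten voices in the feed
# agree with us: a sense of consensus where none exists.
```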

This also makes it hard for us to understand how anyone could think differently about an issue, when everyone we know thinks as we do. And if we are wrong, it is an excellent way to ensure that we never come across a well-argued, evidence-based position that might allow us to revise our own and perhaps correct it.

In addition, we interact more with extreme content, which attracts more attention and drives us to act, either by supporting or rejecting it. And even when we are right, it is much more comfortable to be right without going to the trouble of constructing arguments that can survive outside the boundaries of tribal territory. If the content is emotional, it will hook us more easily. And if that is what keeps us on the networks, the companies that run them will favor that behavior.

Polarization makes discussions more binary than they really are. It becomes more about declaring oneself "for" or "against" than about addressing complexity or the existence of other possible positions. And that feeds post-truth by eliminating the common ground we need as a foundation for debate. We are not saying that we have to agree, or that one must be gray in order to avoid being black or white. No. There is no problem in being black or white, as long as we are there on matters of substance and not out of tribalism, as long as we are there without denying the existence of others, as long as we have arrived there with the full spectrum of grays in hand.

Clearly, polarization works against subtleties such as uncertainties, doubts or alternative points of view. Those who say something about an issue on a social network generally hold extreme views. Those in the middle tend to stay silent (see Preoţiuc-Pietro, D. et al. (2017). Beyond binary labels: political ideology prediction of Twitter users, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, vol. 1) or withdraw from the networks, and survival bias means that few talk about them because they are less visible.

Although it is still not very clear how much social networks really influence the polarization we witness, some proposals to counteract this effect have emerged recently. On a personal level, we can fight the echo chambers: we can try to break our bubbles by incorporating people who do not think like us into our networks and trying to have productive discussions with them. Even if we use social networks only as a pastime, we have already seen that they can also be dangerous. We can empower ourselves; we can try not to be passive participants.

Beyond this, once we receive the information, which we have already seen can be, to a greater or lesser extent, incomplete, biased and distorted, the ball is in our court. How do we behave? What do we do? 

PART OF THE PROBLEM OR PART OF THE SOLUTION

CONTAGIOUS 

In a simpler, but by no means better, time, the process of information generation was rigidly directional: information was produced somewhere (the media) and received and consumed by us (the audience). This is less and less the case. Once we receive information, it does not stay with us. Before, we could discuss it with neighbors down the block; now, with neighbors on the network. The difference is the speed and the distance it can travel: just as the Internet gives us access to global information, it also gives us a very easy way to spread it globally. If we receive post-truth and rebroadcast it, we collaborate, involuntarily, in bringing it to new places. To fight post-truth, we must not only identify it but also actively block its spread, while continuing to disseminate what is true. If post-truth is a disease, and adulterated information is the infectious agent, we are both the ones who get sick and the ones who spread it. Let us avoid infecting the rest.

FAKE NEWS! 

All the biases and distortions we mentioned accumulate before information reaches us. Let us imagine a fictitious situation, which is perhaps not so fictitious if we look at ourselves and ask whether it could happen to us. First, a person claims to have been cured of cancer "miraculously" by drinking grass juice every morning, although in reality he never had cancer. Maybe he says this because he wants to deceive, maybe just as a game, to get attention or out of boredom. It doesn't matter. Then a professional media outlet picks up the "news item". Someone posts the story in a Facebook group for healthy eating. A friend of ours shares that post on their Facebook wall. We read it, believe it, share it. Unintentional post-truth, but post-truth nonetheless.

What are the problems here? The first person lied, or perhaps spoke without thinking too much about it. The media outlet took the information not only without checking it (Was the person telling the truth? Could they provide studies showing that they had had cancer and no longer did? Is there evidence that they were cured by that and not by something else?), but also without considering that what we know about medicine, and about cancer in particular, makes such a cure highly unlikely. In the Facebook group, someone shares the story (which is now fake news) thinking it might be of interest, but without checking it either. Our friend does the same. We do the same. Every link in the chain could have checked or, at least, contributed a dose of healthy skepticism and stopped the spread. And it did not happen. Nor does the professional media outlet have any incentive to provide us with truthful content, because we do not penalize it when it fails to. Facebook has no incentive to stop this, because its business model depends on content keeping us "there", and that is achieved by appealing to emotion (high hopes, great outrage) rather than by giving us real content. As for us, we are not used to questioning the ideas our friends share on social networks, because we feel we would be questioning our friends themselves.

We go from thinking that if something is published in the media it must be true, to thinking that if it coincides with what we think it must be true. We should not give the media our blind trust, but neither should we distrust them totally, because total distrust leads us to place our trust in alternatives that are no guarantee of truth either, such as our own confirmation bias.

When we talk about post-truth, sooner or later the danger of "fake news" comes to the forefront: news that circulates, in general but not only, on the web, spreads rapidly and is simply falsehoods or distortions of reality. At first glance, these news items look real, often because they prey on people's biases. 

Some prefer to reserve the term fake news for deliberately false news, but here we will use it in a broader sense, which inevitably encompasses several different phenomena: regardless of how the falsehood was generated, if what results is false, we will call it fake news. 

Various attempts are emerging to check whether a news story is fake or not, from fact-checking organizations to guides of good journalistic practice. But something is being overlooked: fake news is so successful because people (we) help spread it. People believe this news and share it; or they don't believe it, or stop believing it, but still collaborate in spreading it "just in case".

In the context of the echo chambers favored by the networks, fake news finds a clear path to spread at great speed, and we are largely responsible for this. Of course, some people benefit from fake news and intentionally use it as a tool to get what they want, assembling a propaganda scheme. That is intentional post-truth, but it is not what we are discussing here (we do so in Chapter XIII). Here we are thinking of miraculous treatments, photos of events that never happened, unchecked rumors that we believe and spread without even questioning their veracity. As Napoleon Bonaparte said: "The problem with things taken from the Internet is that it is difficult to verify their authenticity." (There is no way Napoleon could have said that. According to Wikipedia, in Napoleon's time the Internet existed only in the United States.) All those times when we do not seek to deceive, yet we deceive; when we do not seek to confuse or sow doubt where there is no room for it, yet that is what happens. This is unintentional post-truth, and it is harmful in two ways: not only is the truth lost along the way, but the false news we spread saturates us, so that when true news appears that does deserve to be shared, it is lost, forgotten, not recognized as true and valuable. The boy who cried wolf, over and over again. The false seems true, and the true seems false, again and again.

For example, myths such as that of the Earth being flat or that vaccines cause autism are supported by all the components mentioned in previous chapters: beliefs, biases, tribalism, distrust of experts. If those myths reach us through social networks and either we don't care too much about them, or we find them amusing, or they make us doubt, maybe we spread them. We even spread them in order to say, indignantly, that they are lies. 

Spreading "just in case" does not help to get the truth out, but only to confuse people even more. When this is done by someone who has a lot of reach, the effects can be dramatic. We need to pay attention because, in addition, technology is advancing rapidly and today fake videos can be made thanks to artificial intelligence programs that mix voice with image. Thus, videos can appear of politicians giving speeches they never gave, testimonies of events that never happened. This technology is known as deepfake, and people started talking about it at the end of 2017.11More on this at http://futureoffakenews.com All this is, for now, identifiable as fake, but at some point, sooner rather than later, it will no longer be. The false will appear true, and the true will no longer be so clearly distinguishable. It may be used to invent events that never happened or to deny events that did happen. And as the mirage becomes more like the real, everything will be covered by the same cloak of doubt and certainty. We will neither fully believe it nor fully disprove it. What will we do then?

SHORT, BUT SERVICEABLE LEGS 

Mark Twain is said to have quipped that "a lie can travel halfway around the world while the truth is putting on its shoes", and perhaps that phrase was never more accurate than it is now. In March 2018, a paper analyzing how fast news moves on Twitter was published in the journal Science (Vosoughi, S., Roy, D. and Aral, S. (2018). The spread of true and false news online, Science, 359(6380): 1146-1151). The authors concluded that fake news moves about six times faster than real news: "Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information." The phenomenon was more intense for political topics than for terrorism, science, financial information, natural disasters or urban legends. Something false was 70% more likely to be retweeted than something true. The authors also found that a true story rarely reaches more than a thousand people, because it is not retweeted, while the top 1% of fake news reached between 1,000 and 100,000 people, depending on how viral it went.
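
A back-of-the-envelope model shows how a modest per-share advantage compounds. Apart from the 70% figure taken from the paper, every number here is illustrative (the fanout, the baseline sharing probability, the number of generations):

```python
# Expected reach of a story after n "generations" of sharing, assuming
# each person who sees it shares it with probability p and each share
# exposes `fanout` new people.
def expected_reach(p: float, fanout: int = 2, generations: int = 15) -> float:
    exposed, sharers = 1.0, 1.0
    for _ in range(generations):
        sharers *= p * fanout   # expected number of new sharers
        exposed += sharers      # running total of people reached
    return exposed

true_story = expected_reach(0.50)         # baseline sharing probability
false_story = expected_reach(0.50 * 1.7)  # 70% more likely to be shared
print(f"true: {true_story:,.0f}   false: {false_story:,.0f}")
# Prints roughly "true: 16   false: 6,949": a modest per-person
# difference in the urge to share compounds into a huge gap in reach.
```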

And this phenomenon occurs because of our behavior. Just as gossip circulates quickly in a small town, fake news that is attractive and novel spreads in a similar way, but virtually. There are no mysterious forces to invoke, no plots or bots, although we cannot rule out that these factors also contribute (we discuss them in Chapter V). The Internet has turned us all into small-town dwellers.

In this age of algorithms and technology, the center of it all is still us: even if someone designs a fake news story with an ulterior motive, they need us to spread it. We surely do it with good intentions, because the story caught our attention and we thought our friends would want to know about it. That innocent behavior is precisely the problem.

But how do we know what is true? Answering that question is the first step before trying to modify our behavior.

TELLING TRUE FROM FALSE 

This situation makes it increasingly difficult to distinguish good information from adulterated information. To begin with, we cannot rely on something being labeled "fake news", because some use that expression for information they simply do not like, that does not fit their prior positions. Some fake news generators have even been seen calling true news fake. Real fake fake news. A race in which the boy who cried wolf is not only not penalized for lying but actually wins, if he is quick enough to call the truth-teller a liar. For this reason, we should not look at the labels being used; we have to see whether there is a matching concept behind them. Beware of mirages.

We have mentioned some of the things we can do. In addition, we need to determine whether the information is factual, and therefore something we can address (or not) based on evidence, or whether it refers to issues beyond the factual, such as moral questions, traditions, intuitions or emotions. And here is the first obstacle on our path, because something emotional, anecdotal and striking is much more attractive than something more moderate that spells out the evidence on which it is based. Emotional discourse, even when false, is more attractive than the erudite citation that gives page and year of publication. But once we dig deeper and find what is factual, what we can check as true or not, we have to track the evidence, assess it as a whole and see where the consensus lies, as we said. This is something that fact-checking organizations and professional journalists should do, and sometimes do.

In addition, when information reaches us, we need to be aware of the biases and distortions that may have crept in along the way, and ask ourselves where the information comes from, what information we are missing, and whether the total body of knowledge on the subject is being adequately represented. On topics where we are not experts, we may need to turn to those who are, taking care not to confuse competent experts with false experts or to fall for arguments from authority. A false expert may plant doubt where there is none, and may do so in very subtle ways: for example, by emphasizing a minority position that goes against the consensus, or by feeding relativism or conspiratorial ideas.

On the other hand, we need to look in the mirror, look within and try to identify our biases and beliefs. Are we cherry picking the information, or are we honestly considering it in its entirety? Are we following our tribes? Also, we will need to make an effort to go deep and carefully read all the information, not just the headline or the tweet. That requires "taming" our behavior. 

As if this were not enough, the steps above feed on each other: sometimes we dismiss a competent expert as a false expert just because they do not agree with our positions.

The discussion about the influence of fake news on our lives is relatively recent, and there is still no agreement as to whether it is a matter of great concern or its relevance is being exaggerated. It is also unclear whether, beyond the content of each fake news item, the sheer volume in constant circulation may be eroding the credibility of all interlocutors, including science popularizers and journalists. We do not know exactly how much confusion fake news creates but, when in doubt, given all we have to lose, the best course of action is probably to stay alert.

PREVENTING A “POST-TRUTHOGENIC” ENVIRONMENT

To begin with, let's look at how we behave in the face of the information we receive. Ideally, we should block the spread of false news while collaborating with the spread of true news. Sometimes we will be able to figure out whether or not to trust a given piece of information using the suggestions above. But other times we won't: we won't know enough, or we won't have the resources (in time, knowledge or attention) to find out. That situation calls for more humility, less emotion and a pause to weigh both the risk of sharing something false and the risk of not sharing something true.

Is what we get on WhatsApp or read on the networks true? Let's assess the risk of action, and also of inaction. If we get a request for money so that a child with a terrible disease can travel to the other side of the world for a stem cell treatment, which is almost always a scam preying on desperate and vulnerable parents, what should we do? The easy thing is to retweet, share and feel good, engaged, empathetic. Questioning whether the miracle cure is real, and stopping the ball if the request does not seem legitimate, is much more difficult.

It all moves very fast and, although our individual influence on the propagation of unintentional post-truth is infinitesimal, added to that of millions of others it becomes an unstoppable snowball. We exert influence, often with good intentions, which would be funny if it weren't terrible.

If we share information on social networks, let's make it count. Let's not do it thoughtlessly. Both what we share and what we don't share affect other people; let's not forget that. Clearly, if we are motivated not by the truth but by being popular on social networks, we will lose. If, on the other hand, we care about credibility, we will be better off; just as we will be better off if, as active and introspective users, we help reward credibility over popularity.

But there is another problem: even if we try to "behave", others may choose not to play that game. What then? Keeping quiet and not spreading fake news already helps a lot, but spreading the truth helps even more. And if someone else is spreading fake news, exposing the mechanisms they are using may be more helpful than debating the issue itself. Exposing the process instead of arguing about the content steers the discussion toward "best practices". Process kills topic.

Much of the responsibility for creating post-truth lies with us. The good news is that, then, so does much of the possible solution, although not all of it. It is very difficult: we are talking about changing our own behavior. It is like telling an obese person to stop eating, or a heavy smoker to stop smoking: we have to see whether we can deal with something addictive, something that gives us pleasure, something that takes enormous effort to change. When people cannot "control" themselves, we help them from the outside by modifying the environment to make those behaviors more difficult. Just as we speak of living in an "obesogenic environment", we also live in a "post-truthogenic" environment, so we can think of solutions that modify that environment.

Several solutions along these lines are being attempted. Companies like Google and Facebook are trying different measures to identify fake news and prevent it from spreading. It is all still very recent, and just as many celebrate the fact that these companies are taking responsibility for the damage they cause (there is even a word for this: accountability), others are concerned about leaving the selection of admissible content to new algorithms defined by the very companies being held accountable. Still others go further and speak of censorship, and of a great danger in the loss of diversity of voices, which could end up being a cure worse than the disease and an even greater threat to democracy.

But even if these companies' approaches succeed in identifying fake news, it should be noted that this inevitably takes time and carries the potential for error, both in incorrectly flagging something true as false and the opposite.

In the meantime, concerned users withdraw from social networks or interact much less. But this does not solve the problem either. 

Without very clear proposals yet for a system-wide solution, we offer as an alternative a decentralized solution, aimed at each of us who inhabit these spaces and spread news. A new Pocket Survival Guide:

POCKET SURVIVAL GUIDE #8

HOW CAN WE BETTER RELATE TO INFORMATION?

Does the information we care about relate to something wholly or partially factual? 
Is the evidence supporting the claim mentioned, or at least a way to access it? Is it complete and reliable? 
What are the biases and distortions that may have influenced the adulteration of the information, from the time it was generated until it reached us? 
What are our own biases and beliefs? Could we be confusing competent experts with false experts? Could we be, at least in part, susceptible to the influence of our tribalism? 
Can we be inside an ideological echo chamber and not know it? Shouldn't we incorporate people from other tribes into our networks? 
Do we honestly try to identify whether or not the information is fake news? 
How do we act in the face of this information? Do we collaborate or not with its dissemination? Do we analyze the risk of being wrong by acting and wrong by not acting? 
How do we deal with the agent who provided us with false information? Do we put truth above popularity? Do we demand accountability? Do we penalize bad behavior? 

This chapter was the last in our analysis of unintentional post-truth. The "usual suspects" reappeared, and we added further layers of complexity. We now turn to post-truth intentionally weaponized by groups seeking to benefit from it. We will illustrate this with cases that are, to a greater or lesser extent, well documented, not so much for the particular issues involved, but because through them we can see the same process repeating itself, in essence, over and over again. We do this as a way of training our attention on the strings of malicious, fabricated post-truth, so that we can see them when a new puppet is placed in front of us.