The Social Epistemology of Coronavirus

Don Fallis
Department of Philosophy
Northeastern University

last updated 6/23/20

I teach courses that focus on how people can acquire knowledge in a digital world.  And now that we are not supposed to come within six feet of each other, and even more of our interactions are mediated by digital technology, we are definitely living in a digital world.  When classes at Northeastern went remote during the last few weeks of the Spring 2020 semester due to the pandemic, I began writing a series of posts for my classes drawing connections between the pandemic and the topics that we had been studying.

Pandemic Safety and the Weight of Evidence (or is simply being human a risk factor for novel coronavirus?) 6/23/20

As a result of COVID-19, Northeastern University, like many employers, has been faced with the question of whether to have employees return to the workplace during a pandemic or to have them continue to perform their duties remotely where possible.  The NU administration has decided that faculty and staff will return to the workplace at the beginning of the fall semester.  When the possibility of faculty teaching remotely in the fall comes up in discussion, they emphasize that “that will not be the norm.” See https://huntnewsnu.com/62592/front-1/qa-provost-madigan-and-chancellor-henderson-discuss-reopening/

Of course, they also say that “if a faculty member has risk factors like being elderly or having diabetes, then they probably shouldn’t come to campus, and we will accommodate that for sure.”  The specific risk factors that they have in mind are those identified by the CDC.  See https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/groups-at-higher-risk.html

This appeal to risk factors suggests that the NU administration thinks something like the following:

(*) Faculty members who do not have any of the risk factors identified by the CDC are not at significantly increased risk of death and serious illness from COVID-19.

Thus, such faculty members are not in need of accommodation, at least on health grounds.

In this post, I would like to suggest that (*) would be a dangerous conclusion to draw on the basis of the existing scientific evidence.

The members of the NU administration are not experts on viruses or epidemics.  Thus, like most of the rest of us, they have to rely on scientific experts when making decisions about the pandemic.  Moreover, this is a perfectly reasonable thing to do, especially when there is wide consensus among scientists on a particular issue.  See "Testimony about the Coronavirus" and "Appeals to Authority and the Coronavirus".

However, as philosophers Eric Schliesser and Eric Winsberg point out, the science of COVID-19 is rather unsettled compared to more established areas, such as climate science.  See https://www.newstatesman.com/politics/economy/2020/03/climate-coronavirus-science-experts-data-sceptics

Now, it is important not to overstate the case here.  Virology and epidemiology are certainly well-established areas of science.  What is relatively unsettled are the details about COVID-19 in particular, such as what things are risk factors for death and serious illness.  As the CDC website says, “COVID-19 is a new disease and there is limited information regarding risk factors for severe illness.”

Indeed, in some respects, scientists have already been massively wrong about how to deal with the virus.  Just consider one notable example:  The scientific consensus now is that wearing face coverings significantly reduces the spread of COVID-19.  See https://www.npr.org/sections/health-shots/2020/06/21/880832213/yes-wearing-masks-helps-heres-why  However, that was not the initial recommendation of the scientific community.  As a result, back in February, you could get some serious disapprobation for wearing a facemask!  See https://www.businessinsider.com/coronavirus-celebrities-wearing-masks-are-promoting-a-myth-2020-2  And even as recently as April, the CDC website still said that “If you are NOT sick: You do not need to wear a facemask unless you are caring for someone who is sick (and they are not able to wear a facemask).”  See https://web.archive.org/web/20200403000543/https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/prevention.html

With all this in mind, the reason that I think that (*) would be a dangerous conclusion to draw on the basis of the existing scientific evidence is that it ignores what philosophers call the weight of evidence.  See https://plato.stanford.edu/entries/imprecise-probabilities/#WeiEviBalEvi

Imagine two possible scenarios:

Scenario A: You have an opportunity to place a bet on the outcome of a coin flip.  You have been able to toss the coin a thousand times and it has come up heads roughly half of those times.  How likely is it that the outcome of the next coin flip will be heads?

Scenario B: You have an opportunity to place a bet on the outcome of a coin flip.  You know nothing about the coin (other than that it has a head and a tail).  You don’t know anything about whether it is fair or biased.  And if it is biased, you don’t know anything about which direction it is biased in.  How likely is it that the outcome of the coin flip will be heads?

In both scenarios, the balance of evidence is arguably the same.  In other words, in both scenarios, you should assign a probability of 1/2 to the outcome of the next flip being heads.  But the weight of evidence is clearly very different in the two scenarios.

By the way, note that these are extreme cases.  There are all sorts of in-between cases that we might consider.  My thought is just that we are closer to scenario B than to scenario A when it comes to risk factors for COVID-19.

Interestingly, the weight of evidence does not actually matter for how you should bet if you absolutely have to make a bet on the outcome of a coin flip right now.  But the expected value of gathering (or waiting for) more evidence before betting is much greater when the current weight of evidence is low.  Basically, if you are in scenario A and you flip the coin several more times, the results are not going to have much impact on the probability of getting heads.  But if you are in scenario B and you flip the coin several times, the results could tell you quite a lot about the probability of getting heads.  So, you should certainly get more evidence if you can before making your bet, especially if a lot is at stake.
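To make the contrast concrete, here is a minimal sketch in Python (my own illustration; the priors and the particular numbers are assumptions, not part of the scenarios above) that models beliefs about the coin’s bias with Beta distributions.  In both scenarios the best single estimate of the chance of heads is 1/2, but ten new flips barely move the estimate in scenario A while they move it dramatically in scenario B.

```python
# A sketch of "balance" vs. "weight" of evidence using Beta-Bernoulli updating.
# The priors and flip counts are illustrative assumptions.

def beta_mean(a, b):
    """Mean of a Beta(a, b) belief about the coin's bias toward heads."""
    return a / (a + b)

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) belief (how spread out it is)."""
    return (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

# Scenario A: uniform prior updated on ~1000 flips, roughly 500 heads.
a_A, b_A = 1 + 500, 1 + 500
# Scenario B: uniform prior, no flips observed at all.
a_B, b_B = 1, 1

print("Balance of evidence (estimated chance of heads):",
      beta_mean(a_A, b_A), beta_mean(a_B, b_B))                    # 0.5 and 0.5
print("Weight of evidence (spread of the belief):",
      round(beta_sd(a_A, b_A), 3), round(beta_sd(a_B, b_B), 3))    # ~0.016 vs ~0.289

# Now suppose we flip the coin 10 more times and get 10 heads.
print("Scenario A after 10 new heads:", round(beta_mean(a_A + 10, b_A), 3))  # ~0.505
print("Scenario B after 10 new heads:", round(beta_mean(a_B + 10, b_B), 3))  # ~0.917
```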

Do you think that the same lesson applies to risk factors for COVID-19?

For further reading, see https://www.sciencemag.org/news/2020/03/how-sick-will-coronavirus-make-you-answer-may-be-your-genes

The Future of Lying in the Era of the Coronavirus 4/7/20

Jeff Hancock, an expert on lying at Stanford University, has a very nice TED talk on the “Future of Lying.”  See https://www.ted.com/talks/jeff_hancock_the_future_of_lying  I was inspired to write up some thoughts about the future of lying in the era of the coronavirus.  Please let me know what you think.

As the name suggests, my course on Knowledge in a Digital World focuses on how people can acquire knowledge in a digital world.  And now that we can’t come within six feet of each other, we are really living in a digital world.  How might this drastic and immediate increase in our reliance on digital technology impact lying and deception?  In general, it seems like successful deception is now going to be even easier to pull off.

As we discuss in Knowledge in a Digital World, there are many things about the world that we can’t find out directly for ourselves.  For example, in order to know how many cases of coronavirus there are in China, we have to rely on information sources that we find on the internet.  Moreover, if we want to verify that testimony, our only option is to check with other sources on the internet.  As Michael Lynch (2016, 17-19) points out, there is no “independent check.”  This clearly makes us vulnerable to online deceivers.

Well, now that we are all “sheltering-in-place,” there are even more things about the world that we can’t find out directly for ourselves.  For example, I can no longer verify for myself what is going on in downtown Boston, much less what is going on in downtown Wuhan.  So, it seems like we are even more vulnerable to online deceivers.

Of course, even though I can’t go see for myself what’s going on in downtown Boston, I can get pretty close by watching a video recording.  See https://www.boston.com/news/local-news/2020/03/30/wicked-vacant-boston-aerial-drone-footage  In other words, I don’t just have to rely on someone’s testimony.

But can we trust what we see in a video anymore?  After all, it might be a deepfake, a fake video recording created using machine learning.  So, we have to worry about whether the particular person appearing in a YouTube video really said and did what they appear to say and do.

But even worse, can we trust what we see in a video chat anymore?  Researchers are getting closer to being able to create deepfakes in real time.  See https://cacm.acm.org/magazines/2019/1/233531-face2face/fulltext  Thus, the person that you seem to be talking to on Zoom might actually be someone else entirely.  This is a serious epistemic problem since, these days, most of our interactions with other people are virtual.

But there is at least one type of lying that may have become more difficult in the era of the coronavirus.  Hancock coined the term butler lies for those polite lies that we use to control our private space in a highly connected world.  For example, we say, “I’m on my way” when we haven’t left yet, and we say, “Gotta run” when we just want to get off the line.  But now that we are all “sheltering-in-place,” everybody knows exactly where everybody else is.  We’re all at home.  “The only excuse is ‘I don’t want to,’ and no one wants to hear that right now.”  See https://www.technologyreview.com/s/615437/virtual-happy-hour-introverts-lockdown-coronavirus/  So, in order to continue to be effective, butler lies are going to have to get a bit cleverer.

Privacy versus Safety 4/1/20

The conflict between privacy and security is now hitting very close to home.  My apartment complex just notified us that a resident has tested positive for the coronavirus.  I’d like to know whether it’s somebody a few doors down who might be touching many of the same common surfaces that I am or somebody in a separate building whom I don’t really need to worry so much about.  But they’re not giving us that kind of detail.

During the week on privacy in my course on Knowledge in a Digital World, we focus primarily on the question of how much the government and corporations should know about members of the public.  But here, the question is how much members of the public should know about each other.  Some countries, such as Singapore and Taiwan, are providing the public with much more information about the spread of the virus than others, such as the United States.  See https://www.nytimes.com/2020/03/28/us/coronavirus-data-privacy.html  When an extremely contagious disease is out there, how much should we get to know about the infection status of our fellow citizens?  What do you think?

Such knowledge would definitely be useful.  It would give us a better idea of where it is safe to go and how careful we need to be.  As noted in the NYT article above, Singapore and Taiwan are apparently doing a much better job containing the spread of the virus than the United States is.

Of course, there are downsides as well to informing the public.  Some people are not completely rational, especially during a pandemic.  For example, a man who contracted the coronavirus on the Diamond Princess cruise ship received threats for coming back home to Utah (even though he is remaining in quarantine).  See https://kutv.com/news/local/st-george-couple-diagnosed-with-coronavirus-threatened-for-returning-home

By the way, as I suggested in my previous post on privacy, it is not just a question of how to balance the value of privacy against the value of security.  Different rights are coming into conflict here.  For example, while the people of Singapore and Taiwan may have less privacy than we do, they certainly have more freedom of movement than we currently do.

Now, it might be suggested that we can have both privacy and security.  After all, in order to better protect ourselves, we don’t need to know the names of the people who are infected.  We just need to know where they are and where they have been to some level of precision.  However, it turns out that anonymizing personal information is easier said than done.  For example, 87% of Americans can be uniquely identified with just three pieces of data (zip code, gender, and date of birth).  See https://dataprivacylab.org/projects/identifiability/  Now, we probably don’t need to know the date of birth, or even the gender, of the people who are infected.  But my point here is just that it doesn’t take much to put a name to a few pieces of data.
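Here is a toy sketch in Python of why stripping names is not enough (the records are invented purely for illustration; they are not real data).  Each attribute on its own is shared by several people, but the combination of a few quasi-identifiers often picks out a single record.

```python
# A toy illustration (hypothetical records, not real data) of re-identification:
# each attribute alone is shared by many people, but the combination of a few
# quasi-identifiers often makes a record unique.
from collections import Counter

records = [
    # (zip code, gender, birth date) -- invented for illustration only
    ("02115", "F", "1961-04-02"),
    ("02115", "F", "1987-09-13"),
    ("02115", "M", "1987-09-13"),
    ("02120", "F", "1961-04-02"),
    ("02120", "M", "1993-01-27"),
]

def count_unique(project):
    """How many records are uniquely identified by the projected attributes?"""
    counts = Counter(project(r) for r in records)
    return sum(1 for r in records if counts[project(r)] == 1)

print("unique by zip code alone:   ", count_unique(lambda r: r[0]))  # 0
print("unique by birth date alone: ", count_unique(lambda r: r[2]))  # 1
print("unique by all three combined:", count_unique(lambda r: r))    # 5 (every record)
```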

Even if we don’t want to release the names of infected people to the general public, maybe at least scientists who are studying the spread of the disease in order to contain it should be able to access detailed information about infections.  Here’s an analogous case.  After 9/11, librarians were asked to destroy certain information (e.g., about reservoirs and dams) that terrorists might use to wreak havoc.  See https://www.latimes.com/archives/la-xpm-2001-nov-18-mn-5594-story.html  However, people who could demonstrate a legitimate need for such information could still get access to it.  What do you think?

More Disinformation about the Coronavirus 3/31/20

In my previous post, I mentioned a couple of reasons why people might spread false information about the coronavirus.  But another type of disinformation is now out there in the air along with the virus.  In addition to attempting to cause people to believe falsehoods (e.g., so that they will buy your product), deceivers sometimes simply want to create uncertainty about truths.

In an apparent attempt to deflect attention from their own failures in dealing with the coronavirus, foreign governments are pushing numerous conspiracy theories about the origins of the virus.  See https://www.nytimes.com/2020/03/28/us/politics/china-russia-coronavirus-disinformation.html  In the article on obfuscation that we read in my course on the Philosophy of Lying and Deception, Finn Brunton and Helen Nissenbaum (2017) describe how this strategy works (see the section on “Manufacturing conflicting evidence”).  In addition to muddying the waters about the facts of the matter, this strategy also has the salutary effect (from the perspective of the deceiver) of undermining trust in legitimate sources of information (see Fallis and Mathiesen forthcoming).

With respect to the coronavirus, our own President may be engaging in a similar sort of misdirection.  At a recent press conference, he suggested that medical workers at a New York City hospital might be stealing personal protective equipment (PPE) and that reporters should investigate.  See https://www.newsweek.com/trump-cuomo-masks-hospital-doctors-stealing-new-york-1494949  While I suppose that it is possible that such a theft is going on, Trump offered no evidence to support this accusation other than a huge increase in the number of masks ordered (an occurrence that one might expect during a pandemic, wouldn’t you think?).

Interestingly, a phenomenon that we talk about in my course on Knowledge in a Digital World, citogenesis, or circular reporting, is part of the new disinformation campaigns of these foreign governments.  According to an official in the State Department (quoted in the NYT article above), the Russians “push out a false message, which the Chinese and Iranians pick up and promote, and then Russian actors will repost the Chinese or Iranian versions of the message to make it seem like new information that had originated independently elsewhere.”

For more on the question of why people are motivated to post false information online about the coronavirus (as well as the question of why people are motivated to believe it), see https://news.stanford.edu/2020/03/16/fake-news-coronavirus-appealing-avoid/  Jeff Hancock is an expert on lying at Stanford University.

The Coronavirus Infodemic 3/25/20

President Trump is still not being careful with the facts about the coronavirus.  See https://www.cnn.com/2020/03/23/politics/trump-anthony-fauci-briefings/  But unfortunately, he is by no means the only source of misinformation on the topic.  Many sources on the internet are making false claims about the origins of the virus and about possible cures for the disease.  See https://www.nytimes.com/2020/03/08/technology/coronavirus-misinformation-social-media.html  A notable example is that you can tell whether you have the disease simply by trying to hold your breath for 10 seconds.  And the danger is not just coming from websites and social media.  Purveyors of misinformation are now using text messages as well.  See https://www.nytimes.com/2020/03/16/us/coronavirus-text-messages-national-quarantine.html

In many cases, purveyors of misinformation intend to mislead people.  For example, websites that are trying to sell products for treating the disease need people to believe their false claims.  See https://www.nytimes.com/2020/03/24/business/coronavirus-ecommerce-sites.html  However, it is important to keep in mind that some purveyors are simply trying to grab your attention.  Much like the Macedonian teenagers who posted fake news during the 2016 Presidential election, these people make money as long as you click on their stories (see Fallis and Mathiesen forthcoming).  But since there is a reinforcement mechanism for just making up stuff, it is still no accident that many people end up being misled.

Moreover, it can be extremely dangerous if people believe these false claims.  At least one person has died from ingesting a drug after President Trump suggested that it was a possible treatment for coronavirus.  See https://www.cnn.com/2020/03/23/health/arizona-coronavirus-chloroquine-death/  Even the “hold your breath for 10 seconds” example can cause harm if people decide that they are not infected based on the test and then put other people at risk.

Given the dangers (epistemic and otherwise) that this “coronavirus infodemic” poses, an important question for social epistemologists is what to do about it.

One possible strategy is to flag false or misleading content about the coronavirus so that people will know not to trust it.  However, there may be limits to the effectiveness of this strategy, especially in this case.  Much of the misinformation about the coronavirus is such that many people would like to believe it.  For example, it is comforting to think that there is a simple way to cure, or at least treat, this disease.  A warning from a fact checker is just another piece of information, and often not a reassuring one.  So, many people may be inclined to think that the fact checker is wrong rather than that the flagged content is wrong.

A more effective strategy may be to simply take down false or misleading content so that people who might be misled never see it in the first place.  Several social media companies have started to do just this.  See https://www.theguardian.com/world/2020/mar/19/twitter-to-remove-harmful-fake-news-about-coronavirus

Of course, as John Stuart Mill (1978 [1859]) famously argued, there are potential epistemic costs to restrictions on speech.  Since censors are fallible, they may inadvertently take down true content as well as false content.  In that case, we lose the opportunity to acquire true beliefs from such true content.  And even if the censored content is false, we lose “what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.”

Indeed, it is worth noting that the pandemic might have been avoided if it weren’t for censorship (see Tufekci 2020).  Those medical workers in Wuhan who saw the danger of the coronavirus first hand were deterred by the government from raising the alarm.

But I don’t think free speech considerations should stop social media companies from taking down fake news and fraudulent advertising related to the coronavirus.  It is not clear that speech has the epistemic benefits that Mill envisioned when it is intentionally misleading or simply made up (see Mathiesen 2019, 174).  Such content is almost certainly false and it is not being offered to further public discussion.  At best, it is just an epistemic distraction.  At worst, it is a public health hazard.

What do you think?

Skyrms and Semmelweis 3/24/20

I am a fan of Brian Skyrms’s theory of how signals carry information.  However, there is at least one claim of his that I object to.  According to Skyrms (2010, 80), “if receipt of a signal moves probabilities of states, it contains information about the state. If it moves the probability of a state in the wrong direction—either by diminishing the probability of the state in which it is sent, or raising the probability of a state other than the one in which it is sent—then it is misleading information, or misinformation.”  To see what is wrong with his claim about what it means to be misled, we can look at an episode in the history of science that is especially relevant to current events.  See https://www.cnn.com/2020/03/20/health/ignaz-semmelweis-handwashing-discovery-trnd/

Like most communicable diseases, COVID-19 is spread by a microorganism.  And it is difficult to protect yourself from something that is too small to see.  But because of what scientists have discovered over the years about such microorganisms, we are much better off than people in the past.  Most notably, we now know that we are less likely to contract the disease or spread it to others if we wash our hands regularly.

In the mid-1800s, the Hungarian physician Ignaz Semmelweis was working in a maternity ward in Vienna.  Tragically, about 10% of the women admitted to the ward died of “childbed fever.”  But after a series of experiments, Semmelweis discovered that this number could be drastically reduced if doctors simply washed their hands before treating patients in the ward.

At the time, the predominant theory of disease was the “miasma theory.”  That is, diseases are caused by “bad air.”  However, the results of Semmelweis’s experiment suggested that the cause of childbed fever was something on the doctors’ hands.  In particular, Semmelweis thought that it was caused by “cadaveric matter,” particles from corpses that got on the hands of doctors when they performed autopsies in another part of the hospital.  But a number of other hypotheses (including the true one, the “germ theory” of disease) were also consistent with his experimental results.  See https://ed.ted.com/lessons/how-a-few-scientists-transformed-the-way-we-think-about-disease-tien-nguyen

Now, let us look at what Skyrms’s theory says about this case.  Semmelweis’s results definitely lowered the probability of the miasma theory.  As a result, they raised the probability of the remaining hypotheses, including the cadaveric matter theory and the germ theory.  So, Semmelweis’s results carried information.  But according to Skyrms, a signal carries misleading information whenever it raises the probability of a false hypothesis.  And since they raised the probability of the (false) cadaveric matter theory, Skyrms has to say that Semmelweis’s results were misleading.
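Here is a small sketch in Python of the update just described, using made-up priors and likelihoods (my own illustrative numbers, not Skyrms’s or Semmelweis’s).  The point is only to show how a single result can lower a false hypothesis, raise the true one, and still count as “misleading” on Skyrms’s criterion because it also raises another false hypothesis.

```python
# Bayesian update over three hypotheses about childbed fever.
# Priors and likelihoods are illustrative assumptions, not historical values.
priors = {"miasma": 0.70, "cadaveric matter": 0.15, "germ theory": 0.15}

# Assumed likelihoods of Semmelweis's result (handwashing drastically cuts
# childbed fever) under each hypothesis.
likelihoods = {"miasma": 0.05, "cadaveric matter": 0.90, "germ theory": 0.90}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h in priors:
    direction = "raised" if posteriors[h] > priors[h] else "lowered"
    print(f"{h:16s}  prior {priors[h]:.2f} -> posterior {posteriors[h]:.2f}  ({direction})")

# Output: miasma is lowered (0.70 -> ~0.11), while both the cadaveric matter
# theory and the germ theory are raised (0.15 -> ~0.44 each).  Because the
# false cadaveric-matter hypothesis is raised, Skyrms's criterion classifies
# the result as misleading.
```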

I would say that sounds wrong, wouldn’t you?  After all, Semmelweis’s results eliminated a false theory that a lot of people believed.  Their only shortcoming was that they did not eliminate all of the false theories.  And very few scientific experiments do that.  But based on what Skyrms says about misleading information, a signal is misleading unless it does.

Gaslighting and the Coronavirus 3/20/20

In my course on the Philosophy of Lying and Deception, we talk about various dangerous forms of communication (lying, paltering, bullshitting, etc.).  President Trump has been accused of engaging in many of these activities.  But in the last couple of days, Trump has been accused of engaging in a new activity, gaslighting.  See https://www.huffpost.com/entry/donald-trump-coronavirus-snuck-up-on-us_n_5e731e27c5b6eab779424d52 

The term comes from the film (and play) Gaslight in which a woman is manipulated into thinking that she is crazy.  For several weeks, it certainly seemed like Trump was dismissing the threat of the coronavirus.  For example, he said that the number of cases would soon go from 15 to “close to zero.”  But now Trump is telling us that he always took the coronavirus very seriously.  Maybe we’re just confused in thinking otherwise?

In my course on Knowledge in a Digital World, we talk about echo chambers.  According to Thi Nguyen (forthcoming), the distinctive feature of echo chambers is the undermining of trust in the testimony of outsiders.  For example, conservative talk radio undermines trust in climate scientists.  Gaslighting can be seen as a special case of such undermining of trust.  It involves undermining a person’s trust in themself as a reliable source of information about the world.

In addition, even though it involves intentionally causing false beliefs, gaslighting may be another example of the non-deceptive manipulation that Cohen (2018) talks about.  What do you think?

For more on the epistemology of gaslighting, see Spear (forthcoming).

Privacy and the Coronavirus 3/19/20

In my course on Knowledge in a Digital World, we talk about government surveillance and about voluntarily giving up personal information.  It looks like these things may be ramped up as a result of the pandemic.  See https://www.technologyreview.com/s/615370/coronavirus-pandemic-social-distancing-18-months/

The goal of social distancing is to minimize the first spike of coronavirus patients.  Toward this end, the current restrictions on movement are probably going to last a couple of months.  However, since a vaccine won’t be ready (and herd immunity won’t be effective) for about 18 months, infections are likely to rise again when the restrictions are eased.

In order to ease restrictions on movement while still minimizing the following spikes of coronavirus patients, governments and other organizations may need to gather more information about people to assess their risk of transmitting the disease.  For example, before you are allowed on an airplane or into a public building, you may have to prove that you have already recovered from the disease, have your temperature taken, or even allow your recent movements to be tracked (to see if you have been to high-risk areas).  Of course, to put this in perspective, we have always (and especially after 9/11) traded some amount of privacy for the privilege of being able to move around freely.

What do you think?

Crowdsourcing and the Coronavirus 3/18/20

On Monday, the White House Coronavirus Task Force came out with a number of guidelines for the public.  See https://www.whitehouse.gov/briefings-statements/coronavirus-guidelines-america/  Among other things, they are advising us not to gather in groups larger than 10.  Since many projects are still going to require many people to interact, there will now have to be more use of information technology to collaborate at a distance.

Note that the task force itself has over twenty members.  See https://en.wikipedia.org/wiki/White_House_Coronavirus_Task_Force  So, if they were following their own guidelines, they didn’t actually all meet together in person to come up with those guidelines.

Simply using information technology to collaborate at a distance may not, strictly speaking, be an application of “wisdom of the crowds.”  However, in an attempt to develop treatments for coronavirus, some scientists are using crowdsourcing.  They have created a video game that allows members of the public to try to design proteins that will bind with and neutralize the virus, which will allow a faster search through the vast number of possibilities.  See https://www.the-scientist.com/news-opinion/scientists-use-online-game-to-research-covid-19-treatment-67230

Does the pandemic provide other examples of mass collaboration or crowdsourcing?

Appeals to Authority and the Coronavirus 3/18/20

An appeal to authority is often treated as a fallacious form of reasoning.  And there certainly are instances where it is not a good idea to simply believe what a purported authority says.  For example, you probably don’t want to listen to a basketball star about whether to take the coronavirus seriously.  See https://www.bostonglobe.com/2020/03/12/sports/jazzs-rudy-gobert-touched-every-mic-joke-days-later-he-was-diagnosed-with-coronavirus/

However, Merrilee Salmon (1995) suggests that, under the right circumstances, an appeal to authority can be good evidence.  Basically, you can appeal to an authority when (a) most of the things that they say on a particular topic are true and (b) there is no special reason to think that they would be wrong in this particular case.  So, when it comes to the coronavirus, we can probably trust virologists and epidemiologists, especially when there is a large consensus among them.

With regard to the coronavirus, policy makers, as well as people in general, do seem to be turning to scientific experts.  And there does seem to be a consensus among these experts about many issues (e.g., that “social distancing” policies are needed to prevent the exponential growth that would overwhelm the healthcare system).  Of course, it can be difficult to know what to believe when there is disagreement among the scientific experts.  For example, at least initially, they were saying different things about whether people who were asymptomatic could spread the virus.  See https://www.cnn.com/2020/03/14/health/coronavirus-asymptomatic-spread/

Some subjects, such as climate change and vaccinations, have become politicized.  In those cases, certain beliefs are so deeply part of people’s social identity that it is extremely difficult to dislodge them even when there is a scientific consensus going the other way.  If a scientific expert contradicts such a belief, instead of giving up the belief, people may start to question whether this person really is a scientific expert.  See Kahan et al. (2011).  Fortunately, coronavirus does not yet seem to be one of these politicized subjects.

What do you think?


Crying Wolf in Reverse 3/17/20

Here’s another thought on the credibility issue.  The prototypical example of a liar losing credibility is the boy who cried wolf.  He repeatedly claims falsely that there is a crisis.  And then, when there actually is a crisis, no one will believe him.

But President Trump has done the opposite.  In the past, he repeatedly claimed falsely that various things were “perfect” and nothing to worry about.  And when it comes to the coronavirus, the President and his surrogates initially dismissed the threat.  See https://www.cnn.com/2020/03/17/media/fox-news-coronavirus-reliable-sources/

So, when he finally declares that there is a crisis and extreme measures are needed, maybe people will be inclined to believe him.  What do you think?

When he dismissed the threat, that might have had a significant epistemic cost.  The article cited above suggests that many people believed him, which may have slowed down the necessary response to the crisis.  But maybe there is no epistemic cost when he finally takes the threat seriously.

Credibility in a Crisis 3/15/20

In a crisis, it is extremely useful for political leaders to be trusted by the public as sources of information.  When collective action (such as “social distancing”) is needed to mitigate the crisis, it is important for members of the public to have true beliefs about what they should do.  And since their testimony gets disseminated far and wide, political leaders are in an especially good position to convey such knowledge. 

However, President Trump is famous for making false or misleading claims.  See https://www.washingtonpost.com/politics/2020/01/20/president-trump-made-16241-false-or-misleading-claims-his-first-three-years  Many people have been worried that this will damage his credibility, and that it will be a problem when the country faces a crisis (as we do now).  See https://www.nytimes.com/2020/02/26/us/politics/trump-coronavirus-credibility.html

In my course on the Philosophy of Lying and Deception, we discuss various costs (epistemic and otherwise) of lying and deception.  But political leaders losing credibility seems like a more specific problem.  As with any liar, the political leader suffers a loss of reputation (see Bok 1978).  But in a crisis, society arguably suffers an even greater loss than the political leader.  And the problem is not just that we are less likely to trust testimony in general.  The problem is that we are less likely to trust the testimony of one of the few sources of information that we all have access to.

So, do you think that the President has lost credibility?  If so, do you think that this loss of credibility is going to interfere with the public’s ability to acquire knowledge about the coronavirus and what to do about it?  Or are there alternative sources of information that can allow the public to acquire the requisite knowledge?

Testimony about the Coronavirus 3/15/20

Not many of us are experts on viruses or epidemiology.  So, if we want to acquire knowledge about the coronavirus and what to do about it, we are probably going to have to rely on testimony.  But exactly how should we decide what to believe?

In my course on Knowledge in a Digital World, we read selections from David Hume (1977 [1748], 72-90) and Merrilee Salmon (1995, 99-107) in which they make some recommendations on this topic.  For example, according to Hume, “we entertain a suspicion concerning any matter of fact, when the witnesses contradict each other; when they are but few, or of a doubtful character; when they have an interest in what they affirm; when they deliver their testimony with hesitation, or on the contrary, with too violent asseverations.”  Salmon basically suggests that we should believe what someone says if most of what they say on that particular topic is true.

Are any of these recommendations relevant to deciding what to believe about the coronavirus and what to do about it?  How are you deciding what to believe?