
Archive for the ‘Uncategorized’ Category


Credit: https://www.flickr.com/photos/porsche-linn/

“You can’t correct fake news!” “People don’t care!” Research findings about misinformation this week have made for somewhat depressing reading, especially if we’re too glib in our summaries.

But I would urge anyone who cares about news and truth not to fall into a pit of despair, just yet. Misinformation science is still very young. Many of the results we saw this week were first attempts to answer very large and vexing questions.

That means these results are far from conclusive. And even if we could be certain that these findings are correct, there are still many unanswered questions. Within these gaps may yet lie the path towards addressing our modern information conundrum.

What do these latest findings tell us?

A study by Gordon Pennycook and David Rand found that tags on Facebook showing stories as “disputed by 3rd party fact-checkers” — one of the platform’s most prominent anti-misinformation initiatives — only had a modest effect on readers’ evaluation of the stories’ accuracy. At the same time, the study revealed a serious unintended consequence: When some fake stories were tagged as disputed, participants were more likely to believe the un-tagged stories were true (compared with participants in a control condition, who saw no tagging whatsoever). This side effect was particularly strong among Trump supporters and 18- to 25-year-olds.

It seems that since flags are applied to some, but not all, stories, many people then infer that any story without a flag is true.

So far, so worrying. Still, as Politico points out, there are a number of reasons not to throw our hands up in defeat. The “disputed” tag is just one part of Facebook’s efforts, which also include adding fact-checks to “related articles” modules, shutting down fake accounts (not just Russian ones) and reducing financial incentives for fake news and other misinformation.

Then there’s news literacy. A Facebook spokesman said the platform’s efforts also include “helping people make more informed choices about the news they read, trust and share” — but Facebook has done little such work. (Back in April, they seeded people’s news feeds with PSAs whose tips could be a lot more helpful than they are.)

In fact, so far, very few actors have made any serious attempts to improve adult news literacy. This is frustrating, because the work is crucial. After all, if people so easily assume that stories without a disputed tag must be true, that demonstrates a major gap in understanding – not just in the very specific matter of what fact-checkers are capable of, but in how every member of our citizenry approaches every article they read. The default reading mode needs to be something like “open, but critical” — not “swallowing whole unless I’m told otherwise,” and also not “believing nothing.” (Disclosure: I’m the co-creator of Post Facto, a game designed to teach adults news literacy skills.)

You can grumble that it’s a lost cause, or you can see this as one of the biggest social challenges of our generation, and get to brainstorming solutions. I choose the latter.

Do people want news literacy training?

Oh, but what’s this? A Pew study, also out this week, uses survey data from 3,015 adults to outline a five-fold typology of the modern American, based primarily on a) their level of trust in information sources and b) their interest in learning, especially about digital skills. Pew estimates that about half of U.S. adults fall into the Doubtful and Wary typologies — and these people are relatively uninterested in building their news literacy skills.

What’s more, another 16% of Americans fall into the Confident group — and as their name implies, these highly educated folks don’t feel they need news literacy training. They could well be wrong, given the power of the Dunning-Kruger effect. And while I’m not aware of studies that specifically investigate the correlation between educational level and misperceptions, there’s evidence that high levels of numeracy and science literacy are no protection against misinformation about science.

To quote Eleanor Shellstrop, “Oh, fork.”


Credit: goodplacegifs.tumblr.com / NBC Universal

But wait! What does the data really say? How did the researchers ask about people’s appetite for news literacy improvement? Well, respondents were asked:

  • How much they think their decision-making would be helped by training on how to use online resources to find trustworthy information
  • How much they think their decision-making would be helped by training to help them be more confident in using computers, smartphones and the internet
  • Whether they “occasionally” or “frequently” need help finding the information they need online.

Yes, I certainly am disappointed that so many Americans answered “not much.” But I’d argue that this handful of questions cannot reveal the complexity of Americans’ relationship with their information needs and challenges. Not only are the questions few in number, but they’re also pretty abstract – it’s possible they just didn’t mean much to the respondents, or really make them think about their particular information challenges.

It’s possible, too, that the word “training” is pretty off-putting. How much would the answer change if we asked whether they’d like “help”? Or “resources”? There are probably people who can’t commit to a formal training program, but would welcome the occasional assistance. Understanding that will help news literacy efforts to meet these people where they are.

Here are some more questions we could ask:

  • Do you always know whether the information you read is true?
  • How often would you say you confidently make that determination?
  • What information would help you to make that determination?
  • How much do you think you’d be helped by more information on news organizations’ practices? On what makes reporting good or bad? By easier access to databases of facts? By help evaluating whether a story is true?… and so on.

A news literacy research agenda

A research agenda for investigating these questions would probably combine quantitative and qualitative approaches. Focus groups could help us understand what questions resonate with people, and what assumptions we’ve made. Surveys and experiments will help to quantify the attitudes that people hold.

By noting the limitations of Pew’s question set, I mean no criticism of their methodology. The researchers were trying to understand the interaction of a number of psychological and sociological factors, without writing a survey so long that participants would tend to drop out.

What I am suggesting, though, is that this is just the beginning of what must be an intense investigation of how people view their own information literacy skills and gaps. In the past decade or so, we’ve started to build a picture of how people process misinformation and corrections. This work is vital, but if we are to craft solutions, we need to also understand people’s attitudes towards their own information literacy, or lack thereof.

That’s why, while I sometimes find myself catching my breath at the sheer scale of the problem before us, I also see a reason to roll up my sleeves and pour that concern into the serious work of research. There is no time to lose.

Cross-posted at Medium

Edited intro Sept. 15, to make clear that those quotes aren’t good summaries of the findings.



If I thought there was too much daily information for me to absorb and process before Nov. 8 – well, hoo boy.

I’m trying out some techniques to a) read a sliver of what’s important to me, b) keep track of the research and story ideas that reading generates, and c) try to occasionally go back to my notes, re-read, synthesize and think about misinformation problems on a deeper level.

Technique 1 is to keep a daily research journal. I’ve made a few entries. It’s a start.

Technique 2 is to write VERY QUICK blog posts on what I’m pondering and/or hope to research, with links to the stories that sparked the ponders. That way I can hopefully connect with people interested in the same ideas.

OK, today’s VERY QUICK thoughts:

  1. What can we learn from Wikipedia? What exactly are the “rigorous logic and rules” cited here? Could fact-checkers apply these? Can we teach these in news literacy courses for kids and the general public?
  2. What kind of research do we have about what techniques work in teaching news literacy to adults? I don’t mean broad-brush ideas like “don’t lecture, don’t insult” but really nitty-gritty approaches. I get the feeling from my interactions on Facebook that a lot of people assume the really simple debunk checks are beyond them, when they’re really not. How do we break through that assumption?
  3. Do we have anything like a well-rounded understanding of how people in the U.S. acquire news about the world in today’s info environment (e.g. social media vs. mainstream news sources vs. Wikipedia vs. personal conversation vs. …)? I’m guessing no, though I intend to read up. Where do communication scholars think the biggest gaps are? It seems like we are fighting so blindly, trying to combat misinformation and misperceptions when we don’t fully understand the components and interplay of people’s media diets.



Stanislaw Burzynski, the Houston doctor who has treated thousands of cancer patients with unproven medications, is set to take the stand to defend his medical license today.

As I wrote for Newsweek recently, Burzynski is a celebrated figure in the alternative medicine world, and credited by some with saving their lives. The Texas Medical Board, however, says Burzynski overbilled, misled patients and made numerous medical errors – and the board’s expert witness said Burzynski’s use of untested drug combinations amounted to “experimenting on humans.”

The hearings started last November, when the Texas State Office of Administrative Hearings (SOAH) heard the board’s case. Now Burzynski’s lawyers will bring their own witnesses, beginning with the doctor himself. A document filed with SOAH estimates that Burzynski’s testimony will take two days.

According to the document, Dr. Burzynski will testify about the standard of care at his clinic, the use of antineoplastons, compliance with the FDA and charges of false advertising, among other issues. He will reply to “allegations of non-therapeutic treatment,” “inadequate delegation,” and “the aiding and abetting of the unlicensed practice of medicine.”

A number of patients and Burzynski Clinic employees will testify on Dr. Burzynski’s behalf, as will several outside physicians who treated clinic patients, and Mark Levin, a board-certified oncologist.

The hearing resumes just weeks after the Burzynski Research Institute announced that it has started patient enrollment in an FDA-approved phase 2 study of antineoplaston use in diffuse intrinsic brainstem glioma, a cancer that mainly attacks children.

The hearing is scheduled to run until May 12.

Hear more about the Burzynski story in my interview with the Point of Inquiry podcast.


Note to readers: I just returned from a captivating trip to the Galapagos. Since I couldn’t blog from the islands themselves, I’ll be recapping the trip in a series of day-by-day posts based on the journal I kept and photos I took.

The purpose of this trip wasn’t originally tied to my interest in science communication; I simply had the pleasure of keeping my Dad company because my mother has a thing about reptiles. But after I finish combing through my 900+ photos, and get over this lingering case of boat-induced vertigo, I do hope to draw out some science comms connections.


 

Day 2: After a surprise night in Miami, thanks to American’s malfunctioning cargo doors, finally got to Quito. Carlos, the driver, took a liberal attitude towards lane markings. But for that, the drive was enjoyable. He coasted gently, though rapidly, round the bends and down the deep hillsides, like a skier carving a path in fresh snow. Traffic was remarkably light, as if the roads were undiscovered. Indeed, most of the asphalt was new, as was the airport itself.

Hazy blue mountains and deep green ravines swung into view from time to time. Then Quito revealed itself, tall hotels perched like a dinosaur’s spikes along the spine of a sharp emerald ridge.

The narrow buildings of the Carolina district, mostly 10 stories or so tall, stand shoulder to shoulder. They’re nearly all of a 1960s to 1980s vintage, unattractive taken in isolation, but a pleasant hodgepodge all together, with their jostling attitude and their pops of color. Lying here on the hotel bed, I yearn to go out and taste more of the city, but I ought to be here when my dad gets back. I also fell behind with my altitude sickness regimen, due to the delay in Miami, so not surprisingly I feel myself in a languorous stupor.

A maid just knocked on the door and offered a basket of cute little pastries. I have no idea what this means – as I’ve had to point out with shame to two people already, I don’t even speak “un poco” of Spanish.  I said, “No, gracias” to the nice pastry lady and closed the door.

Finally decided I had to escape for a bit, and spent a pleasant hour in Parque La Carolina.



Ran across a jubilant crowd of black-and-yellow clad sports fans, drumming and singing in the permanent grandstands that occupy a long stretch of Av de los Shyris:

And here are the grandstands from the back:



Note: I have joined the “virtual class” component of Dan Kahan’s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – the second such response is below. The first is here.

I will also be participating in the discussion on Kahan’s own blog. (The discussion on session 2 is here.) Comments are welcome at either site.


By several accounts it seems we have Jon Miller and his colleagues to thank for bringing a certain amount of scientific rigor to the study of science literacy. According to Miller, his work was the first attempt to use item-response theory (IRT) technology to design reliable cross-national estimates of public scientific understanding. Pardo and Calvo write that Miller’s work, and that led by John Durant in Britain, built an empirical foundation for the field through use of “clearly specified dimensions and comparable questionnaires.” Indeed, Miller’s methods were adopted by the National Science Foundation for its science literacy surveys; and likewise his colleague Durant laid the groundwork for the European equivalent, the Eurobarometer.

Miller’s civic scientific literacy (CSL) measure of 1998, however, shows room for improvement in several areas – as does the Eurobarometer.

Pardo and Calvo’s criticisms

Pardo and Calvo in particular focus on the Eurobarometer’s sub-optimal reliability, demonstrated by a Cronbach’s alpha coefficient of 0.66, below the standard 0.70 cut-off point. True-false questions bear much of the responsibility for the test’s low reliability.
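
For anyone who hasn’t run into Cronbach’s alpha before, here is a minimal sketch of how the coefficient is computed. The function below is just the standard formula; the respondent data are made up purely for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up data: 6 respondents answering 4 true/false knowledge items (1 = correct).
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # roughly 0.66 for this toy data
```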

Pardo and Calvo argue that the overall difficulty of the test – and therefore its discriminatory power – is low. “If the test were applied in an educational context, almost all of the items in Q55 and Q56 could be answered correctly by most individuals in the population,” they write. It appears the test’s discriminatory power was strong enough to allow comparisons between countries, but within the more scientifically literate nations, the test is not finely calibrated enough to give insights into sub-populations.

These authors also posit that the test items are not a very representative sample of what constitutes a firm grounding in elementary science, because:

  • they pose questions about basics of scientific theories, alongside others on more specialized “or even esoteric” matters;
  • a majority of questions call on explicitly taught knowledge (memory recall), while two others require recalling and combining several pieces of knowledge;
  • the survey has a poor balance between different scientific subjects.

My criticisms and questions

Several other aspects of Miller’s (and in some cases, Durant’s) method jumped out at me as ripe for refinement or at least questionable, and I would be curious to hear others’ thoughts on this:

  • The 67 percent threshold (location or difficulty level): beyond the fact that this equals two-thirds, is it just arbitrary? Why is two-thirds a good level at which to peg the threshold? I’m tempted to say that with Miller’s questions being as elementary as they are, it’s very hard to think of someone who only gets two-thirds right as scientifically literate at all!
  • Miller doesn’t detail the model underpinning his confirmatory factor analysis. Now, CFA is a new concept for me, so correct me if I’ve got this wrong. His CFA reveals factor loadings that vary from 0.46 to 0.83 for the United States study, and from 0.34 to 0.70 for the European study. He says this process reveals nine items in each study that constitute “a unidimensional measure of construct vocabulary.” So he seems to be saying that these nine items do load on the factor of construct knowledge, i.e., they are good indicators of such knowledge. But what does this actually mean to the reader, without understanding how the loadings were calculated or what assumptions Miller may have used? (I sketch the basic one-factor model just after this list.)
  • The methods questions seem flawed to me. For example, the Eurobarometer asks respondents to rate “how scientific” astrology is, on a scale of 1 to 5. The answer, apparently, is 1. Is this an objective fact? While it’s certainly no 5, you could make a convincing argument that astrology deserves perhaps a 2 – it does have a certain system to it that its practitioners follow, albeit one completely ungrounded in physical reality. There is no commonly understood rubric of what it means for a discipline to be a 2, 3 or 4 on a 5-point scale of scientific-ness – so how can you objectively say the answer is definitely 1?
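
As promised above, here is a rough sketch of the one-factor measurement model that I assume sits behind those loadings; the notation is mine, not Miller’s.

```latex
% One-factor CFA measurement model (my notation, not Miller's):
% each knowledge item x_i is treated as a noisy indicator of a single
% latent "construct vocabulary" factor xi, with loading lambda_i.
x_i = \lambda_i \, \xi + \delta_i , \qquad i = 1, \dots, 9
% With standardized items, a loading near 1 means the item is driven
% almost entirely by the latent factor; a loading near 0 means the
% item tells us little about that factor.
```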

The cultural criticism

However, I wish to take issue with one family of criticisms against the Miller/Durant style of measurement: that it fails to take account of cultural differences. Pardo and Calvo write of the Eurobarometer, “No allowances is made for the idea that some population segments might be influenced in their appropriation of a scientific proposition by values or beliefs in their society’s culture.” This is true, but I don’t think it’s a damning criticism. It simply points out that the NSF and Eurobarometer surveys measure the “what” rather than the “why.” It is important – but not sufficient – to note what percentage of a population agree “all radioactivity is man-made.” Researchers are right to then explore the cultural cognition that goes into creating agreement or disagreement with that statement.

To put it another way, just because we measure the public’s science comprehension, does not mean we necessarily adhere to the science comprehension thesis. We can use science comprehension measurements as a stepping stone on our way to more complex analyses.

A final note: understanding versus belief

I think Dan Kahan raises a more important criticism in his research on evolution. This work demonstrates that people are capable of two simultaneous mental states that, on the surface, appear conflicting: they can *understand* the theory of evolution, and yet *not believe* in the theory. Further surveys on science literacy should tease apart the prevalence of, and interaction between, these very different dimensions.


Ebola shifted from a running story to red-alert breaking news yesterday, as the first diagnosis was made in the US – and in Dallas, my home. As a local, a journalist and a researcher on misinformation, I’ve had many reasons to follow the developing news closely.

So far I’ve detected few howlers. The local broadcasters, newspapers and their online operations have pretty much stuck to the information officials have given them. While in many cases we might decry journalists “toeing the official line,” in the case of a rapidly evolving health story that official information becomes much more important. Few journalists are in a position to question the CDC’s scientific advisories, and speculating about the questions that agencies have refused to answer could lead to a host of undesirable outcomes – from misinforming the public and causing panic, to ethical violations against the patient and his family, to just plain appearing foolish.

As usual, if you want to see abhorrent misinformation, you can turn to Alex Jones’ Infowars – though I don’t really recommend it. Among the examples of responsible service journalism exhibited there today is the assertion, “The Ebola infection is contagious during the incubation period. This, however, is disputed by the World Health Organization.” Interesting epistemological approach there: basically, “we have the facts, and we won’t spell out the source, although we’ll link to eMed.tv. WHO seems to disagree with the authority that is eMed, so I guess we’ll mention that.”

We can also count on Glenn Beck to go where others won’t, although his sins are more in the realm of silly conjecture than actual misinformation about the facts. Beck used the opportunity to compare the unnamed Ebola patient with Typhoid Mary, and this leads to the inevitable conclusion that the government’s preparing to “fence people up.” Also, the question of why Texas has been hit by both West Nile and Ebola comes up, and the answer is: “a gigantic open door” welcoming diseases from Africa, Mexico, and Central and South America.

What are you seeing in the Ebola coverage? Any gross misinformation or rumor-mongering? I’d especially like to see interesting examples of local media coverage in Dallas, and to hear from my fellow journalists in the Big D.


I’m working on developing a feature pitch regarding journalists’ application of psychological research on misinformation. This piece would take one of two very different shapes. Either I find examples of such practices, and write:

Story 1. How Reporters and Editors are Applying Psychology to the Newsroom

…or I find credible evidence that journalists are not currently applying such research, and write:

Story 2. Does Psychological Research Have a Place in the Newsroom?

Some thoughts on how this could develop:

Starting with trying to write story #1: I’ll be asking some leading lights in the misinformation field about what examples they’ve heard. Also, major think tanks (Poynter, Knight Digital Media Center, etc.) and fact-checkers. Next, on to editors to ask if they’ve applied any of these principles. Any leads gratefully received! This is really at an early exploratory stage. At this point the approach is very anecdotal, as I very much doubt there’s been a survey on this… though I’ll ask…

Story #2: I think this is a likely outcome. And while #1 is probably an easier write/read (everyone likes to read about interesting new practices that their colleagues/competitors have adopted), perhaps #2 is truly where the conversation should start. In the literature, some researchers have offered advice on how journalists should apply their findings. The claim that these findings should be applied ASAP may seem self-evident.

But, could it not be argued that it is too soon to make changes based on the research? After all, we don’t approve drugs based on only a couple of trials. Perhaps we ought to tread carefully and wait for a more solid evidence base. (In fact, do we even have evidence that negations are ineffective corrections? Nyhan and Reifler couldn’t find it.)

Just think of a news organization trying to institute changes now, working at the usual lumbering pace of a media corporation, and debuting their ground-breaking new model just as researchers find that they got some of their research absolutely wrong. On the other hand, some of the changes hardly seem to require years of reinvention… and on the other, other (!) hand, this is an industry in need of reinvention. In that process, surely it should incorporate researchers’ best available description of how people absorb and retain information.

