Credit: UK Government

Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week nine is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week’s (well, last week’s) reading focused on synthetic biology. Dan invited us to imagine that the White House Office of Science and Technology Policy had asked us to study the public’s likely reaction to this emerging technology. What kind of studies would we do?

The readings were:

Presidential Commission for the Study of Bioethical Issues. New Directions: The Ethics of Synthetic Biology and Emerging Technologies (December 2010).

Pauwels, E. Review of quantitative and qualitative studies on U.S. public perceptions of synthetic biology. Syst Synth Biol 3, 37-46 (2009).

Dragojlovic, N. & Einsiedel, E. Playing God or just unnatural? Religious beliefs and approval of synthetic biology. Public Understanding of Science 22, 869-885 (published online 2012, version of record 2013 – for convenience’s sake, I will refer to this as “Dragojlovic 2012”)

Dragojlovic, N. & Einsiedel, E. Framing Synthetic Biology: Evolutionary Distance, Conceptions of Nature, and the Unnaturalness Objection. Science Communication (2013)

I want to start off by taking stock: listing what we appear to know already, based on this week's readings, and then figuring out what outstanding questions remain.

What we know(ish)

Here’s a summary of findings from the readings (roughly organized from strongest evidence base to weakest):

  • Most people know little or nothing about synthetic biology (Pauwels)
  • The familiarity argument – that as people become more familiar with a technology, their support for the technology will increase – is not well supported (Pauwels, others)
  • For many people, synthetic biology provokes concerns about “playing God” and who has the right to “create life” (Pauwels, Dragojlovic 2012)
  • Framing for synthetic biology is similar to that for cloning, genetic engineering and stem cell research (Pauwels)
  • Domain of application has an effect on framing (Pauwels)
  • Acceptance of risk-benefit tradeoff depends on oversight structure that can manage unknowns, human and environmental concerns, and long-term effects (Pauwels)
  • Belief in God increases disapproval of synbio through two mechanisms – the idea (among weak believers) that genetic manipulation interferes with nature, and the idea (among strong believers) of encroachment on divine prerogative (Dragojlovic 2012)
  • Framing synbio as “unnatural” leads to negative perceptions only when characteristics of the particular technological application – eg, evolutionary distance between DNA donor and DNA host – increase perceived relevance of such arguments (Dragojlovic 2013)
  • Individuals who view nature as sacred or spiritual are most responsive to unnaturalness framing (Dragojlovic 2013)


Now, to answer the questions – via a little additional reading.


Part 1: Single study

The question:

"Imagine you were asked by the White House Office of Science and Technology Policy to do a single study to help forecast the public's likely reaction to synthetic biology. What sort of study would you do?"

At this juncture, it is probably more useful to model the general reactions people have and the associations they make when they learn about synthetic biology, rather than simply polling their support for the technology. (As we previously discussed, there’s little external validity to questions asking for opinions on something that most respondents don’t understand.)

I think the starting point would have to be more qualitative studies (or – cheating a bit – a mixed-methods study that starts with a qualitative phase). There seems to be little sense in creating a quantitative study in which the choices of responses are simply sentiments that we guessed people would entertain – far better to convene focus groups and see what sentiments people actually entertain. This would lay the groundwork for more informed quantitative studies.

Among the reading for this week, the only qualitative research was the pair of Woodrow Wilson International Center for Scholars studies discussed in Pauwels. These produced some insights – but as Pauwels points out, “The most important conclusion of this article is the need for additional investigation of different factors that will shape public perceptions about synthetic biology, its potential benefits, and its potential risks.”

Some of this work has now been carried out.

Looking beyond the week’s reading, I see that the Wilson Center has continued to carry out both qualitative and quantitative studies, some of which Pauwels summarized in a 2013 paper, “Public Understanding of Synthetic Biology.”

Her major findings were:

  • Before hearing a definition of synthetic biology and learning about its applications, participants tended to describe synbio through comparisons to other biotechnology, such as cloning, genetic engineering and stem cell research. This could be crucial to understanding the ways that public debate about synbio might evolve, Pauwels contends.
  • Participants – even some of those generally positive about synthetic biology – expressed concerns about unintended consequences. (Interesting to note that some of these concerns came up when discussing genetically modified mosquitoes, a topic from a previous week in this class.)
  • Participants’ value judgment about synthetic biology varied depending on the technology’s proposed application. If the proposed application was in an environment that appeared more contained, participants were less concerned about risks.
  • Participants expressed ambivalence about engineering life. These attitudes take the form not only of the much-discussed unease at “creating life” and “playing God,” but also much more generalized anxiety – “this term makes me feel scared.”

This is a very good start, but I feel there’s a bit more unpacking a qualitative study could do.

For example, under “ambivalence toward engineering life,” Pauwels includes the following reactions from participants:

“It could also be dangerous if we do not research it enough to find out any long-term effects.”

“This could lead to huge scientific advances, but it can also lead to countries or people using it for their own ‘evil agendas.’ It reminds me of Jurassic Park.”

“It seems exciting but makes me somewhat uncomfortable. Where are the limits?”

“It sounds like we are playing God. Who are we as humans to think [that] we can design or redesign life? It might be nice to be able to do so, but is it right? It seems [that] there are many ethical and moral issues. Perhaps we are getting too arrogant.”

“I feel concerned because, not being perfect, we believe we know what is best in creating life. As in science-fiction movies, when we do—in time—it goes in a direction we didn’t think about… I believe [that] when life is created, it is meant to be created that way for a purpose we may not even know right now.”

There are many underlying fears and concerns there, expressed in various combinations. These include concerns about unknowables (to coin a phrase, both known unknowns and unknown unknowns), long-term effects, human and scientific hubris, immoral applications by bad actors, security, unnaturalness, and violations of nature or of God’s dominion. There’s also an implied recognition (“where are the limits?” “many ethical and moral issues”) of the need to prevent technological applications that exceed society’s moral norms, and of the potential of technological advances to change the very locus of our morality.

I’m particularly concerned with the need to explore the public’s feelings on moral limits. So far, studies of the public’s moral objections to synthetic biology have focused on intrinsic moral objections (it is wrong to usurp God’s position as creator) rather than extrinsic moral objections (certain applications would be morally problematic). This seems strange given that as a society we have already collectively recognized some biotech applications as unethical – most notably, human cloning. It therefore seems imperative to explore public opinion on the subject, and to try to separate measures of intrinsic and extrinsic moral objection.

With this preliminary information at hand, the most useful question to ask next is which of these attitudes, or general sets of attitudes, is most responsible for a negative predisposition to synthetic biology.

Part 2: More studies

The question:

"Imagine you conducted the first study and the OSTP said, 'wow, that's a great start, one that convinces us that we need to do a lot more. We'd like you to outline a comprehensive program of empirical study—consisting of as many individual studies, building progressively on each other, as you like—with a budget of $25 million and a lifespan of 36 months.' What might such a program look like?"

I would propose a series of quantitative studies that would seek to model a situation in which citizens learn about synthetic biology, and then seek to establish the frequency of the ideas and opinions expressed in the qualitative study.

Participants would be given a basic description of synthetic biology, and would then be asked to agree or disagree with the following (or perhaps, indicate their level of agreement on a multi-point scale):

  • Synthetic biology is unnatural.
  • Those who practice synthetic biology are playing God.
  • Synthetic biology scares me.
  • Synthetic biology just feels wrong.
  • If we start using synthetic biology, we may not be able to control the consequences. (With variations for environment, human health, security.)
  • I’m concerned that we don’t know what the long-term effects of synthetic biology will be. (With variations for environment, human health, security.)
  • Synthetic biology holds great promise.
  • Synthetic biology is exciting.
  • Synthetic biology could improve people’s lives.
  • Etc.

Potentially a great deal could be learned just in the correlation between these responses. For example, are there many respondents who say synthetic biology “just feels wrong,” but don’t agree with any of the usual-suspect statements about why it feels wrong? This would indicate either that synthetic biology taps into a deep-seated fear that people find difficult to voice or attribute a cause to – or perhaps that there is an expressible reason for their misgivings that we haven’t yet succeeded in drawing out of qualitative study participants.
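To make the screening idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the item names, the simulated responses, and the agree/disagree cutoffs are my own assumptions, not data from any of the studies discussed:

```python
# Sketch: correlating simulated Likert items and flagging respondents who
# agree synbio "just feels wrong" without endorsing any named reason.
# All column names, data and thresholds are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = ["unnatural", "playing_god", "scares_me", "feels_wrong",
         "uncontrollable", "unknown_long_term"]
# Simulate 500 respondents answering each item on a 1-5 agreement scale
df = pd.DataFrame(rng.integers(1, 6, size=(500, len(items))), columns=items)

# Pairwise correlations between attitude items
corr = df.corr()
print(corr.round(2))

# Respondents who agree it "feels wrong" (4-5) but reject every
# usual-suspect explanation (1-2 on all of them)
reasons = ["unnatural", "playing_god", "uncontrollable", "unknown_long_term"]
unexplained = df[(df["feels_wrong"] >= 4) & (df[reasons] <= 2).all(axis=1)]
print(f"{len(unexplained)} of {len(df)} respondents feel wrong with no stated reason")
```

With real survey data, the interesting cases would be exactly those rows: a non-trivial “unexplained” group would suggest the qualitative phase missed a reason people could articulate, or that the unease genuinely resists articulation.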

Another hypothesis to explore: perhaps there is a strong correlation between unnatural/playing God responses and fear of unintended consequences. This may indicate that expressions such as “playing God” are sometimes used less to express a religious or spiritual conviction, and more to express a sense of humanity’s hubris.

It would be useful to pair these questions with a five-point measure of respondents’ support for synthetic biology, to try and determine the relationship between support strength and various attitudes.

I think it could also be useful to ask a series of questions that attempt to get at the way people make risk-benefit analyses about synthetic biology. This may also have an interesting bearing on their level of support. (As Dragojlovic (2012) points out, a key further question to arise from that study was, how do we consider risk-benefit trade-offs in a way that accommodates value-based risks?) Participants could be asked to agree or disagree (on a five-point scale) with the following:

  • The risks of synthetic biology outweigh the benefits.
  • The benefits of synthetic biology outweigh the risks.
  • There is no acceptable level of risk for a technology or product. (Perhaps ask variations on this tailored to human health, environment, etc.)
  • The best way to judge whether we should use a technology is to weigh the benefits against the risks.
  • It doesn’t matter what the benefits or risks of a technology are; if it’s unethical, we shouldn’t use it.
  • The “rightness” or “wrongness” of synthetic biology depends on how it’s used.

Etc, etc – that’s an imperfect start, for sure, but I think with the right questions we could get into an interesting area of psychology.

Outstanding questions

There is, of course, much more that can be investigated. Here are the major areas that Pauwels and Dragojlovic highlighted as ripe for future research – along with a few extra thoughts of my own.

  • We need further investigation of factors that will shape public perceptions about synthetic biology, and its benefits and risks (Pauwels). I think this is key – several of the studies we read followed up on “playing God”/”creation of life” concerns, but these concerns are probably only responsible for a small proportion of objections to synthetic biology. In Dragojlovic 2013, the baseline model, which included only the experimental manipulations (unnaturalness framing, evolutionary distance and so on), explained about 5% of variance in attitudes. This, Dragojlovic says, shows that most attitude variance is due to other factors.
  • Pauwels asks about the nature of claims raised by “playing God”/”creation of life” concerns: “does it refer to polarization involving broad cultural/philosophical dimensions or to polarization strictly linked to religious values?” Dragojlovic 2012 illuminates some aspects of this but leaves further questions on the table. Intriguingly, the Presidential Commission says it “learned that secular critics of the field are more likely to use the phrase ‘playing God’ than are religious groups.” This may hold true only for organizational leaders and not for the populace at large, but it still neatly points out the importance of separating the religious and philosophical/cultural dimensions of the “playing God” objection.
  • Note that Dragojlovic 2012 was carried out in Europe – so a similar study of religious objections carried out in the US could yield quite different results.
  • What constitute effective counter-arguments to the unnaturalness objection? (Dragojlovic 2013)
  • Identify conditions under which advocates and opponents of emerging technology can use rhetorical frames to shape how citizens perceive the technology (Dragojlovic 2013)
  • Who is more or less likely to be swayed out of the unnaturalness objection – the religious or the irreligious?
  • What is the relationship between the “unnaturalness” and “playing God” objections? There seems to be a lot of overlap, but an effective communications strategy would surely depend on understanding how each interacts with personal identity, which objections are essentially immutable and which are more finely shaded, and so on.
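The 5% variance-explained figure mentioned above is worth pausing on, since “variance explained” (R²) is what it quantifies. Here is a toy regression illustrating the idea – the data are simulated and the coefficients are my own invention, not values from Dragojlovic 2013; the point is only that a model’s predictors can be real yet account for a small share of attitude variance:

```python
# Toy illustration of "variance explained" (R^2): a regression whose
# predictors are genuine but account for only a small share of variance.
# All data and effect sizes are simulated, not taken from the study.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
framing = rng.integers(0, 2, n)    # unnaturalness framing shown? (0/1)
distance = rng.integers(0, 2, n)   # large evolutionary distance? (0/1)
# Attitude is driven mostly by unmodeled factors (the noise term)
attitude = 0.3 * framing + 0.2 * distance + rng.normal(0, 1, n)

# Ordinary least squares via the normal equations (with an intercept)
X = np.column_stack([np.ones(n), framing, distance])
beta, *_ = np.linalg.lstsq(X, attitude, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((attitude - pred) ** 2) / np.sum((attitude - attitude.mean()) ** 2)
print(f"R^2 = {r2:.3f}")  # small: most variance lies elsewhere
```

A small R² like this is exactly the situation Dragojlovic describes: the experimental manipulations matter, but most of what shapes attitudes is outside the model.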

One of the first companies to try and automate fact-checking now says “there is no market for fact-checking” — at least, not as you and I know it.

Paris-based Trooclick launched its plug-in last June, promising to check the facts in IPO stories against Securities and Exchange Commission filings, and against other articles. The original business plan was to make traders the prime audience, and eventually transform the plug-in into an add-on for Bloomberg or Dow Jones terminals. Trooclick was one of just a handful of efforts to automate the fact-checking process, some of which I highlighted for the Columbia Journalism Review.

The plug-in worked well, CEO Stanislas Motte says — so well that, in a way, it killed itself off. “The algorithm worked and didn’t find any errors,” Motte says. The reason, he says with hindsight, is that companies know their words are being scrutinized by regulators, and don’t dare to make misstatements.

As a result, Motte now concludes, “There is no market for fact-checking, especially on financial and business news.” That savvy business audience already knows to trust a limited number of sources — and they can usually spot the important errors themselves, Motte says.

From errors to omissions

After the success-cum-failure of the plug-in, Trooclick began thinking about where the problem really lies. Its conclusion: “The real problem is not on errors but on omission… Big speakers prefer to use omission rather than errors,” Motte says. The way you combat that problem is by presenting different points of view and facilitating debate, Motte says.


So in December, Trooclick announced a new product, with quite a different tack. The Opinion-Driven Search Engine uses the same natural language processing as its predecessor — technology due to receive a US patent on March 27— to scour news articles, blog posts and tweets. But instead of comparing facts against a reference, the new site categorizes quotes and paraphrases attributed to executives, analysts and journalists. (Trooclick describes all these statements as “quotes,” but in reality they do include paraphrasing too.) These “quotes” are designated either positive, negative or neutral, and the site displays lists of the positive and negative statements, side by side. Soon Trooclick hopes to move beyond “positive” and “negative” to perhaps three or five points of view on a given topic.
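As a rough illustration of this kind of categorization, here is a minimal lexicon-based sentiment tagger. To be clear, this is only a sketch of the general technique: Trooclick’s actual NLP is proprietary and far more sophisticated, and the word lists below are invented for illustration.

```python
# Minimal lexicon-based sentiment tagging: count positive vs negative
# cue words in a quote and label it accordingly. Word lists are invented.
POSITIVE = {"growth", "profit", "success", "optimistic", "strong", "gain"}
NEGATIVE = {"strike", "loss", "decline", "risk", "pessimistic", "weak"}

def classify_quote(text: str) -> str:
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_quote("The airline expects strong profit growth this year"))   # positive
print(classify_quote("Pilots will strike on Wednesday over contract terms"))  # negative
print(classify_quote("Flights are scheduled to begin next spring"))           # neutral
```

Even this toy version hints at why the task is hard: a purely informational statement with a scary word in it (“strike”) gets tagged negative, which is much the same failure mode as the misclassified quotes discussed below.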

A sample Trooclick story page.


A viewpoint summary will be another key ingredient in Trooclick’s new recipe. With a huge chunk of readers never making it past the headline, Trooclick sees it as important to quickly summarize the major viewpoints on an issue in the first couple lines of each entry.

“Everything will be focused to give you the synthesis very quickly,” Motte says. “Today… on our website you can find 20, 30, 40 quotes [on an issue]. This is boring and maybe no one reads it. But this is only the beginning.”

The company, which has about $2 million in funding from its founders and France’s Banque Publique d’Investissement (Bpifrance), is considering two business models for the product. One is a white-label offering to social media or search giants, such as LinkedIn or Yahoo. The second is a b-to-b-to-b approach, in which a customer could use Trooclick technology to provide its own client companies with easily digestible media monitoring.

Moving fast

The company is aiming for some major advances in a very short time frame. In about three weeks the website will add the ability to filter stories by the person being quoted — a key move, Motte says, because he wants to start emphasizing speakers over news outlets. By June, Motte says he’s “80% confident” that Trooclick will have developed a capability to reliably detect and categorize three to five families of opinion for each topic, along with the functionality to summarize those opinions in a couple sentences.

And then it’s on to politics: by the end of this year, Motte wants Trooclick gearing up to tackle the 2016 US presidential election. By early 2016, Trooclick aims to analyze 50,000 news articles a day, on business, politics and other topics.

That seems a big leap for a product that still stumbles at times with classifying “positive” and “negative.” For example, here are some of the quotes Trooclick catalogued for the story, “Ryanair plans to offer low-cost flights between Europe and the U.S.”:


The circled comment is not exactly positive…. just sort of informational.

Here’s one from the story, “Lufthansa pilots to go on strikes on Wednesday”:


That might be positive if you side with the pilots and want to see their strike having an effect. For a lot of parties, I’d call this a negative.

I have no way of knowing how widespread the errors are, and I do see a lot of quotes that Trooclick has catalogued correctly. But seeing these errors does make me wonder if the company’s timetable is a little optimistic.

Motte acknowledges, “One of the biggest challenges for us is error rate,” though he won’t say what the site’s rate is. “If you are at 80% it’s great. The objective is to be more than 80%.”

Is Trooclick right for politics?

The move into politics is also surprising, given Motte’s views about fact-checking. “Speakers, companies, even politicians prefer omission [to making misstatements],” he says.

A map of fact-checking operations around the world, by Duke Reporters’ Lab.


I just can’t buy that, given the 64 active fact-checking operations around the world, 22 of them in North America, and the frequency with which they find politicians making statements worthy of “Pants on Fire” or “Four Pinocchio” ratings. (Full disclosure: I’m a consultant for the American Press Institute’s Fact-Checking Project, so I arguably have a stake in seeing political fact-checking succeed.)

Where does this leave automated fact-checking?

Trooclick sales and marketing assistant Darcee Meilbeck says she does still think of the company’s work as fact-checking:

“In the last seven to eight months, yes, we have gone through a pivot… We realized that fact-checking is more than just true and false. That’s the story I’ve told people — we realize fact-checking isn’t just black and white. There is bias elimination that comes into it as well. That’s, I think, where we fit in at the moment.”

I’d say Trooclick’s new direction is an intriguing play at helping people to graze on news more intelligently — I’d hesitate to use the phrase “fact-checking” when no actual facts are being checked.

I must admit I was disappointed to see the company’s shift away from an automated tool that compared news reports to official reference sources. That disappointment could well be driven by my own over-optimism rather than any realistic sense of what such a tool could achieve today, technologically or financially, and it’s not meant as criticism of the interesting work that Trooclick has turned to.

But while I may have been slightly dewy-eyed about what automated fact-checking can achieve now, I still think that for the long term, this target is both achievable and necessary. The battle against misinformation is going to require a combination of automation, leveraging of big data and some kind of social media or browser add-on, for the simple reason that most of us don’t go looking for verification, and even those of us who are verification junkies can’t possibly verify everything we read. So the media ecosystem needs fact-checks that seek their readers out, rather than the other way round – and, even better, fact-checks that seek out the lies human journalists don’t have the bandwidth to catch.

In my CJR piece I very briefly highlighted a few tools and research projects that might fill that role. I didn’t know which would pan out and I’m not sure anyone does yet. But in the demise of Trooclick’s fact-checking plug-in, there’s an opportunity to formulate a couple hypotheses:

  • Business journalism isn’t crying out for fact-checking, in the way that political or science journalism is. Reasons include the less contentious nature of the content and lower personal and ideological investment by readers.
  • Automated fact-checking — especially the natural language processing component — is really, really hard. Maybe too hard, given the current state of the art, for a small start-up to handle. It’s possible such technology just isn’t ready for commercial roll-out yet — and the volume of research required to fine-tune it would be easier to carry out in academia or at huge companies like Google.

I welcome my readers’ thoughts on these theories, as well as their own prognoses for the future of automated fact-checking.

Post-script:

Stepping away from automated fact-checking for a moment, it’s also worth considering the role of crowdsourced verification — if only because of two high-profile launches in recent weeks. They are Fiskkit, a platform for commenting on the news, which won the Social Impact award at the 2015 Launch Festival startup conference; and Grasswire, a platform that invites the public to fact-check breaking news stories.

I wouldn’t close off crowdsourcing as an avenue to explore. If nothing else, I think Fiskkit’s combination of in-line annotation, logical fallacy tags, “respect” button (an outcome of the University of Texas’s Engaging News Project) and comments makes a good bid to be the forum for civil discourse that Facebook never was, and probably never could be. What I’m not sure it adds up to is good fact-checking. Wikipedia has shown us how far crowd-sourcing facts can take you – which is pretty far indeed, up to a point. I’ll be very surprised to see any crowd-sourced effort beat that track record.

Cross-posted to Medium.

Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week eight is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week, two interlocking sets of questions have arisen for me:

  1. Is it problematic that many tests of opinion reflect the ad-hoc opinion people form about something they don’t know about or don’t understand?
  2. How are we to weigh the relative importance of the opinions of know-nothings, know-slightly-somethings, know-quite-a-lots, etc?

Nanotech vs. GM foods vs. fracking

These questions came about because the various emerging technologies under discussion – nanotechnology, genetically modified food and fracking – seem to have different profiles in terms of how much people know about them, versus people’s opinion of the risks involved or the advisability of the technology. (We’ll put aside discussion of GM mosquitoes for now, as they’re a bit more of an isolated case.)

From a completely unsystematic review of the literature I happened to have at my fingertips, I drew up this rough approximation:

  • People know the least about nanotechnology, and their feelings about it are pretty neutral.
  • People know a bit more about GM foods, though roughly half the population still knows close to nothing. On GM foods, average opinion ranges from neutral to very negative, depending on the question being asked.
  • People’s knowledge of fracking is roughly equal to their knowledge of GM foods. Opinion tends to the negative but I don’t have a strong sense – one study I looked at had only 22% in favor but another 58% undecided. Pew found 41% in favor of expanding fracking.

(By the way, where you really see the big risk-perception differences is when you compare the polarization on these issues – that is, how risk perception correlates with ideological outlook. That’s one more variable than my brain can really handle in this early stage of theory formation, so for now let’s just put it in a nearby cubby, as a reminder to come back and visit later.)

Is know-nothing opinion data meaningless?

Now here’s the point where you might expect me to say, “Hang on – let’s get into the numbers, and let’s disaggregate them. If we want a true sense of public opinion, let’s only look at the favorability among those familiar with the technology – because if they don’t know what they’re judging, how can they judge?”

That certainly seems the tack taken by many social scientists. George Bishop’s book The Illusion of Public Opinion discusses the many ways that the public’s lack of knowledge confounds opinion polls, especially when paired with bad survey design. Good researchers word their questions carefully to try to elicit a true opinion – though there are arguably limits to what they can do.

Dan Kahan has called out a Pew poll on GM food as one example of bad survey design producing meaningless “opinion” data – and I think he’s mostly right. But I would argue it is actually quite important that we measure the opinion of the “know-nothings” (or at least, the “know-next-to-nothings”) and “know-littles.”

This is because people do hold opinions about stuff they don’t understand. They do it all the time!

From a purely logical point of view, of course, this makes no sense. A proposition needs a clear reference to have meaning, you might say. But people aren’t very rational. They don’t make a lot of sense. They have limited time for learning about the world around them, and somehow are expected to produce opinions on that world. (A nasty pairing that Walter Lippmann observed back in 1922, but which is all the more true today due to increasing technological complexity and the demands of social media.)

A philosophy of Subway

Take this example. The website I Fucking Love Science posted a manufactured meme on Facebook – a hoax about “dihydrogen monoxide” – to which a few people reacted like this:

Just on the basis of this one hoax meme, some people started to proclaim their intention to boycott Subway. Whether they’d really follow through, I don’t know. But what’s interesting is the object of their concern.

A philosopher might say that for these commenters, the reference of “DHMO” has been displaced. The true reference of “dihydrogen monoxide” is the substance water, which ordinarily could be understood through use of various names, or “senses” – such as “water,” “H2O,” and so on. But for these commenters, the reference of “DHMO” is something like “this chemical that has all these bad properties.” The commenters then form their opinion using their own reference for DHMO.

But if a pollster came and asked them for an opinion such as “should we ban dihydrogen monoxide from our food,” he probably wouldn’t probe that deeply – and would just be measuring their opinion about the true reference, water.

That’s wrong and it’s also right. It is wrong in the sense that if you want to know what people truly think about water, you’ll have failed. But if you want to know what policy action they want taken about water, it’s relevant. People will spread their misconceptions to others, have them in mind when thinking about and voting for politicians, and draw on them when grocery shopping. Probably when it comes to DHMO, they won’t get very far before someone corrects them. But other misconception-based opinions, whose errors are more subtle, have real power to shape policy.

Kahan encountered a variant of this when his colleague briefly defined fracking for a woman who hadn’t previously heard of it:

“It’s a technique by which high pressure water mixed with various chemicals is used to fracture underground rock formations so that natural gas can be extracted.”

“Oh my god!” the receptionist exclaimed. “That sounds terrifying! The chemicals—they’ll likely poison us. And surely there will be earthquakes!”

The receptionist doesn’t know all the ins and outs of fracking. She probably has some misconceptions – for example, thinking that the chemicals make up a large proportion of the injected fluids. But now “fracking” has a reference for her, one that may have inaccuracies, and she’ll use that to shape her opinion. (In fact, clearly she already has.)

GM food sells like crazy – so what?

People don’t always run with their misconceptions, of course. Sometimes, a misconception can actually keep one from acting on an opinion. As Kahan says of GM foods, “People consume them like mad.” That’s because people’s bundle of misperceptions includes the idea that GMOs aren’t already widespread in our food supply – which they are. In a survey by Hallman et al of 1,148 Americans, only 43% knew that food with GM ingredients is currently for sale in supermarkets, and only 26% thought they had ever eaten GM food.

I would warn against drawing too much inference from people’s food consumption. The fact that “people consume them like mad” doesn’t tell us that people are OK with GMOs, because if you don’t know that the thing you fear is in your food, you don’t know not to eat that food. People could still be anxious about GMOs, and in fact, they appear to be: in Hallman’s study, only 45% agreed that it was safe to eat GM foods, 59% said it was very or extremely important for food with GM ingredients to be labeled, and 73% said such labels should be required.

Know-nothings and know-somethings

Of course, there are shades of ignorance, and maybe we can begin to distinguish the ignorance levels for which we are interested in attitudes, from those where attitudinal data is just plain useless.

One key instance: if you literally have not heard of something before, then any data purporting to measure your attitude is invalid. The poll is only capturing your attitude towards something of which you are being informed in a highly artificial environment. This might give some indication of “how you would feel about thing X, had someone just happened to tell you about it in the real world” – but probably not a very good indication, and in any case we’re not interested in “what would people say if told X.” In this paper I’m genuinely only interested in “what people think about X” – approached in a way that acknowledges that people’s knowledge is almost always incomplete, or wholly or partially wrong.

This group, the “know-absolute-zeros,” seems to form a larger or smaller share of the public depending on the technology involved, and I wonder whether that can account for some of the variation in average opinion and, potentially, polarization levels. I promise no answers, but let’s at least look in that cubby before we call it a day.

Polarization: what’s normal?

Kahan asks the question of why nanotechnology didn’t end up polarizing public opinion. My proposal: nanotechnology simply didn’t get enough media coverage to make people fear it.

There are several mechanisms by which media coverage – even if it does not exaggerate the risks of a technology – could heighten concern among those inclined to be fearful. When coverage is scant, people don’t receive the signals they need to categorize or prioritize an issue as one for possible concern.

On the other hand, GM foods and fracking are more frequent subjects of media coverage. And the difference between these two issues is, I think, the anomaly to explore, rather than nanotechnology.

These two technologies seem to have similar rates of familiarity (ie, about half of Americans have no idea about them) and yet different levels of concern. For GM foods, I’d say the level of concern appears high, as cited above. With fracking, levels of concern appear lower. In a survey of 1,061 Americans, Boudet et al found a mean position of 2.6 – between “somewhat oppose” and “somewhat support.” More than half were undecided about whether to support fracking or oppose it.

It gets weirder, though. GM foods, while they elicit a lot of concern among the population as a whole, aren’t very polarizing at all. Science comprehension reduces concern among both right-wingers and left-wingers – very unlike the pattern for say, climate change. But for fracking, polarization increases with science comprehension – a pattern one would normally only expect for a much more mature technology.

Reflecting on that receptionist, Kahan says, “It turns out that even though people don’t know anything about fracking, there is reason to think that they — or really about 50% of them– will react the way she did as soon as they do.” Indeed. The key now is to figure out: 1. Does that make fracking normal or abnormal? 2. What does that tell us about how people form opinions? and 3. What does that tell us about how we should be communicating with the public about emerging technologies?

Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week seven is below. Previous responses are here.

I will also be participating in the discussion on Kahan’s own blog.


Here was our assignment for week 7:

Imagine you were

  1. President Obama about to make a speech to the Nation in support of your proposal for a carbon tax;
  2. a zoning board member in Ft. Lauderdale, Florida, preparing to give a presentation at an open meeting (at which members of the public would be briefed and then allowed to give comments) defending a proposed set of guidelines on climate-impact “vulnerability reduction measures for all new construction, redevelopment and infrastructure such as additional hardening, higher floor elevations or incorporation of natural infrastructure for increased resilience”;
  3. a climate scientist invited to give a lecture on climate change to the local chapter of the Kiwanis in Springfield, Tennessee; or
  4. a “communications consultant” hired by a billionaire, to create a television advertisement, to be run during the Superbowl, that will promote constructive public engagement with the science on and issues posed by climate change.

Would the CRED manual be useful to you? Would the studies conducted by Feygina, et al., Meyers et al., or Kahan et al. be? How would you advise any one of these actors to proceed?

The readings 

First, some thoughts on these four readings.

The CRED Manual: well-intentioned, but flawed

Source material: Center for Research on Environmental Decisions, Columbia University. “The Psychology of Climate Change Communication: A guide for scientists, journalists, educators, political aides, and the interested public.”

When I first read the CRED manual, it chimed well with my sensibilities. My initial reaction was that this was a valuable, well-prepared document. But on closer inspection, I have misgivings. I think a lot of that “chiming” comes from the manual’s references to well-known psychological phenomena that science communicators and the media have tossed around as potential culprits for climate change denialism. But for a lot of these psychological processes, there isn’t much empirical basis showing their relevance to climate change communication.

Of course, the CRED staff undoubtedly know the literature better than I do, so they may well be aware of empirical support that I’m not. But the manual’s authors often don’t back their contentions with research citations. That’s a shame, because much of the advice given is too surface-level for communications practitioners to apply directly to their work, and the missing citations would have helped practitioners look more deeply into and understand particular tactics.

Let’s not talk about it

In particular I would put to one side many of the CRED recommendations to do with:

Framing: Some of these seem like assumptions. “College students are concerned with green jobs” – how do we know? In addition, Myers’ work (see below) suggests that recommending a “national security” frame is ill-advised – as is this:

“Communicators may find it useful to prepare numerous frames ahead of time, including climate change as a religious, youth, or economic issue.”

The method should not be to try whatever framing seems plausible and see what sticks – unless we’re doing that as part of a controlled field experiment.

Correcting misconceptions. The CRED manual says communicators should discover what misconceptions their audience has about climate change, and “replace” them “with new facts.” Is this doable? How would one replace erroneous information with new facts? The reasoning here sounds a little too close to the discredited information deficit model.

The authors go on to cite an example from some of their own research, concluding that communicators should try to correct misapprehensions because they lead the public to support inappropriate solutions, such as banning aerosols. Does this matter? I’d argue quite possibly not, because the most pressing science communication concern is arguably just getting people to believe in climate change, thus giving a mandate to policy makers (who will choose from more viable solutions – there’s no suggestion that anyone is lobbying for them to ban aerosols).

What’s missing?

It’s highly surprising that the CRED manual doesn’t talk about ideological polarization and the types of messaging that might appeal to audiences at different points on the ideological spectrum. This seems to me to be the area of climate communication research with the strongest empirical backing.

What’s left?

Not having read the underlying research, I am not sure how much credence I should give to the rest of the CRED recommendations – and there are a lot of them. Notably:

  • Talk about avoiding losses rather than seeking gains
  • Choose a promotion or prevention focus for your messaging (although the above advice suggests we should focus on prevention!)
  • Work to prevent the single-action bias
  • Be careful what words you use to communicate uncertainty
  • Invoke the precautionary principle
  • Focus on immediate threats
  • Frame climate change as a local issue (CRED doesn’t give a citation, but Myers cites Hart and Nisbet 2011, O’Neill and Nicholson-Cole 2009)
  • Tap into emotion: CRED essentially advises climate communicators to appeal to both reason and emotion – but also to be aware of the pitfalls of appealing to emotion too much. It’s not clear how communicators are supposed to dig their way out of this conundrum.

Accordingly, I’m going to cheat a bit on the assignment and just make the following blanket statement: I won’t recommend that any of the speakers in this thought experiment read the CRED manual. There are, for me, too many uncertainties about its advice. But a more widely read communications researcher could probably go through the manual and revise it in a way that would be useful for our speakers.

Feygina’s system justification thesis

Source material: Feygina, Jost and Goldsmith. “System Justification, the Denial of Global Warming, and the Possibility of ‘System-Sanctioned Change.’”

The authors found that much of the effects of political conservatism and gender on environmental denialism can be explained by the subjects’ tendency to defend the societal and economic status quo. They also concluded that it is possible to eliminate the negative effect of this “system justification” by providing statements that frame environmental protection as patriotic and consistent with protecting the status quo.

I had some qualms about this paper’s findings – in particular Study 3, which examined the effect of presenting a system-preservation message (“being pro-environmental allows us to protect and preserve the American way of life,” etc.). The study used a sample size of just 41 and seems subject to the demand effect.

Myers’ public health framing

Source material: Myers, Nisbet, Maibach and Leiserowitz. “A public health frame arouses hopeful emotions about climate change.”

The authors studied the effects of three climate change-related messages that framed the problem variously in terms of the environment, health and national security. Disaggregating the subjects into segments according to climate change knowledge, attitudes and behavior (with the six segments dubbed Alarmed, Concerned, Cautious, Disengaged, Doubtful and Dismissive), Myers found that a public health frame created the most hopeful response in a majority of these populations. She also found that the national security frame was most likely to generate anger, especially among the Dismissive and Doubtful.

Kahan: geoengineering and polarization

Source material: Kahan, Jenkins-Smith, Tarantola, Silva and Braman. “Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication.”

The researchers found they could offset cultural polarization over the validity of climate change by replacing a message advocating a lower atmospheric CO2 threshold with one in which scientists called for greater investment in geoengineering – applied technologies directed at combating climate change. Contrary to a competing hypothesis, Kahan et al found that subjects receiving information about geoengineering were slightly more concerned about climate change than were those in a control condition.

My main concern here is, why would geoengineering calm the polarizing effect of climate communication if renewable energy and other green technologies have not previously achieved this? The method – as Kahan puts it, “valorizing the use of human ingenuity” – is the same.

I also have serious reservations about the advisability of putting too much emphasis on geoengineering in the public discourse. The more airtime we give to this idea, the more legitimacy we lend it. And while geoengineering is certainly something that scientists should explore, right now it seems like it should be very far down our list of policy and funding priorities. There are many technologies for energy generation, improved electricity distribution and energy storage that are much closer to fruition than any proposed geoengineering technology, without the very serious risk of unknown side effects that geoengineering poses.

What to say?

Now, on to the assignment proper – my suggestions for our speakers:

President Obama

Some of the study results suggest Obama should modify his message to appeal to voters not already on his side. 

Myers’ work suggests President Obama could try to emphasize the public health benefits of his proposal, and the administration already seems to have got the memo on that. Obama should not, however, use a national security angle, which is likely to anger those most skeptical about anthropogenic climate change. Feygina’s work suggests that additionally, Obama could talk about his proposal as a means of protecting the “American way of life,” i.e. the status quo. Obama could try reframing the proposal as a form of system maintenance rather than radical change – perhaps he could talk about his proposal as a natural extension of the previous cap and trade system introduced by a Republican president. Not surprisingly, Obama has tried this too, although perhaps he hasn’t stressed the point enough.

Kahan’s findings could be applicable on a broad scale – not to suggest that Obama should speak about geoengineering specifically, since that’s not his policy aim; but that part of his reframing effort could include talk of human ingenuity. Once again, I think this has been tried, in the context of renewable technologies.

By his very role, and by public perceptions, Obama is rather hamstrung. He can’t really de-politicize his message. Feygina’s study notes (the abstract is actually a bit misleading on this point) that system justification did not fully account for political orientation’s effect on environmental attitudes, and suggests that “top down” factors such as official party platforms are also at work. There’s also the possibility that when Obama engages in re-framing (such as talking about making the US more secure, by reducing dependence on foreign oil), this is seen by conservative voters as a transparent ploy. Myers notes that important factors in real world communication, not reflected in her experiment, include the congruence between messenger and frame.

Zoning board member

The key for this official is that he doesn’t really have to mention “climate change” at all. I’m not suggesting that he suppress such talk, but it’s really not necessary to get the adaptation measures passed. The term “climate change” is inherently polarizing, and people can recognize the need to protect infrastructure from storms with or without a belief in man-made global warming.

Myers’ study suggests it may be useful for the board member to use a public health frame for the discussion, which would be natural when one is talking about the need to safeguard against flooding, etc. Feygina’s recommendations would also be easy to accommodate, as climate change adaptation on a broad scale involves protecting the “status quo” (ie, protecting the city against the forces of nature), although property owners and politicians may in reality have to start doing things very differently. It probably wouldn’t hurt to emphasize the human ingenuity and industry aspects of the officials’ approach, but this may not strictly be necessary as without talk of “climate change,” there may not be polarizing language in need of neutralization.

Scientist

Kiwanis International is a service club that emphasizes efforts to improve children’s lives. Feygina’s recommendations may or may not be necessary here, depending on the system-protection beliefs of the participants – but putting them into practice probably wouldn’t hurt. Myers’ work would point towards using a health frame here, perhaps focusing on preserving environmental quality to reduce childhood asthma, etc., and I see little drawback to doing so. Kahan’s work suggests that making reference to human ingenuity could help to neutralize some of the polarization that talk of climate would have on the more hierarchical/individualist members of the organization, though I have concerns about over-emphasis on geoengineering, as discussed above.

Superbowl ad consultant

Feygina’s work would be useful because the ad must appeal to a broad spectrum of Americans, including those averse to changing the status quo. Again I see health framing as useful and don’t see any obvious drawbacks to such an approach; likewise an emphasis on human ingenuity. My concerns about geoengineering, outlined above, are even stronger for the ad than for a one-off talk at a Kiwanis club, since the message would reach many millions of people and be repeated often, thereby greatly exaggerating the importance of geoengineering within the range of climate change approaches.

Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week six is below. Previous responses are here.

I will also be participating in the discussion on Kahan’s own blog.


Graphic from home page of the Consensus Project, led by researcher John Cook. Skeptical Science Graphics (Skeptical Science) / CC BY 3.0

Since the publication of John Cook’s 2013 study confirming climate scientists’ 97 percent consensus on humans’ responsibility for climate change, many science communicators have vigorously argued the importance of “teaching the consensus.” On a common-sense level, teaching the consensus seems like an obviously good idea. If you tell someone that 97 percent of experts on a subject agree, how could he carry on maintaining the minority position?

But science communication isn’t that simple. It’s much more frustrating, and fascinating.

Evidence for “teaching the consensus”

From Lewandowsky et al, The pivotal role of perceived scientific consensus in acceptance of science, Nature Climate Change, Oct. 2012.

From Lewandowsky et al, The pivotal role of perceived scientific consensus in acceptance of science, Nature Climate Change, Oct. 2012.

Let’s have a brief look at some of the evidence for teaching the consensus – which is backed not just by common sense but by several studies. Stephan Lewandowsky, in particular, has been a strong proponent of this approach. In “The pivotal role of perceived scientific consensus in acceptance of science,” he and his colleagues found that subjects told about the 97 percent scientific consensus expressed a higher certainty that CO2 emissions cause climate change – 4.35 on a 5-point Likert scale, versus 3.96 for members of a control group not exposed to the consensus message.

In addition, the consensus message appeared to have effectively erased ideology’s influence on global warming opinions. Those exposed to the message had a high level of agreement that CO2 causes climate change, regardless of their free-market ideology; whereas in the control condition, free-market endorsement was associated with a marked decrease in acceptance of human-caused climate change (see chart above).

Meanwhile, back in the real world…

But Dan Kahan points out that these findings don’t seem borne out by real-world evidence. From 2003 to 2013, the proportion of the US public who said human activities were the main cause of global warming declined from 61 to 57 percent.

During this period researchers published at least six studies quantifying the consensus, and there were also several notable efforts to publicize the consensus, including:

  • prominent inclusion in Al Gore’s documentary film and book “An Inconvenient Truth”;
  • prominent inclusion in the $300 million social marketing campaign by Gore’s Alliance for Climate Protection;
  • over 6,000 references to “scientific consensus” and “global warming” or “climate change” in major news sources from 2005 to May 1, 2013.

What accounts for this discrepancy? According to Kahan, “The most straightforward explanation would be that the NCC [Lewandowsky] experiment was not externally valid—i.e., it did not realistically model the real-world dynamics of opinion-formation relevant to the climate change dispute.”

What should consensus publicity look like?

I think there’s another possible explanation: that Lewandowsky did realistically model the changes in opinion that might happen with a concerted and well designed consensus-publicity effort – but that from 2003 to 2013, we did not actually see such an effort.

Kahan implies that messaging during this period was widespread and well-funded. But was it as widespread as we would need such a campaign to be? And were the campaigns carried out in the best manner possible? For example, did the communicators use the best dissemination methods, the best language and the best graphical representations? Should they have targeted different populations with different, tailored messages?

I would like to see a more comprehensive analysis asking the questions:

  • What did communication of the climate change consensus from 2003 to 2013 consist of? and
  • Did it meet certain criteria that we should require of such a campaign?

Whether the actual consensus messaging carried out from 2003 to 2013 had the same characteristics that made Lewandowsky’s messaging effective, I could not say. But it certainly seems worth investigating what those characteristics might be. Of course, a prime concern is to discover if those characteristics include or depend on the artificial psychology lab environment – which would indicate that it is impossible to influence climate change opinions through consensus messaging in the real world.

An aside on sample size

I also note that Kahan doesn’t question the validity of Lewandowsky’s sampling. I can’t help wondering if Lewandowsky’s findings might not be, in some part, an artifact of small sample size.

The researchers compared a control group of 47 to a consensus condition group of 43. This means they were not literally testing the effect of consensus messaging on individual participants, but concluding that the difference in opinions between the two groups was due to the consensus messaging that one group received.

While this approach is advisable (a literal “before and after” set-up presents the problem of demand effect, as our class saw in its examination of Ranney et al.), it also depends on a large enough sample size to minimize the possibility that uncontrolled and unseen variables are affecting results. I’m not convinced that Lewandowsky’s sample size was large enough for that.
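This worry can be made concrete with a back-of-the-envelope power calculation. The group standard deviations aren’t reported here, so the SD of 1.0 on the 5-point Likert scale below is an assumption (a plausible but hypothetical magnitude); under that assumption, two groups of 43 and 47 have well under the conventional 80% power to detect the observed difference:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Reported group means: 4.35 (consensus condition, n=43) vs 3.96 (control, n=47).
# The SDs are not given in the text, so assume SD = 1.0 on the 5-point scale.
m_consensus, m_control, sd = 4.35, 3.96, 1.0
n1, n2 = 43, 47

d = (m_consensus - m_control) / sd       # Cohen's d, about 0.39
ncp = d / sqrt(1.0 / n1 + 1.0 / n2)      # noncentrality of the test statistic
z_crit = 1.96                            # two-sided z test, alpha = 0.05

# Normal-approximation power: chance of detecting a true effect of this size
power = phi(ncp - z_crit) + phi(-ncp - z_crit)
print(f"d = {d:.2f}, approximate power = {power:.2f}")
```

If the true SD were around 1, a study this size would detect a real effect of this magnitude less than half the time, which is exactly the regime in which chance imbalances between groups can masquerade as treatment effects.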


There is so much to love in Craig Silverman and the Tow Center’s new report, “Lies, Damn Lies and Viral Content” — from the very first sentence.

“News organizations are meant to play a critical role in the dissemination of quality, accurate information in society.”

Indeed! I feel a bit like that dorky kid who plays Dungeons & Dragons on his own for years, until he finds out that there’s actually a small D&D group that meets in someone’s basement. Millions of words have been written about the news media’s struggle for viable business models in an online world, but very few people are saying that,

  1. There’s a lot of misinformation out there,
  2. That matters because the news media’s job is to inform, so
  3. Even if social media content didn’t itself make its way into newspapers (which it does), papers have a responsibility to correct the public record and improve the state of public knowledge.

More data, more emotion


Silverman and his team have done great research here, driven by the data captured through their rumor-tracking tool, Emergent. There’s some eye-opening stuff on news outlets’ love of misleading headlines, and a handy list of recommendations for newsrooms.

Other insights about debunking needs that really grabbed me:

  • “Debunking efforts in the press are not guided by data and learning drawn from work in the fields of psychology, sociology, and political science… An evidence-based, interdisciplinary approach is needed.” Hear, hear.
  • There’s problems inherent in debunking the person, rather than the idea. I wonder, where does this leave the major political fact-checking sites (i.e. PolitiFact, FactCheck.org, Washington Post Fact Checker)?
  • Viral hoaxes appeal to emotion, but so can debunking stories. For example, most stories about a rumored pumpkin-spice condom made it clear that Durex was planning no such product — but the stories still managed to be eye-catching and funny.
  • Hoaxes with geographic focus can inspire action — this indicates that we would be wise to foster ever more local fact checking.
  • Silverman’s report is the first major work on journalistic fact-checking that I’ve seen bring in major voices from the skeptic movement (such as Doubtful News and Tim Farley of What’s the Harm). As a participant in both communities I often link these efforts in my mind, but have seen few others do so — and I think there is so much useful engagement that could happen between fact-checking journalists and skeptics.

Update, archive and correction fails

But what I want to focus on is the problem of updates and corrections, the persistence of web content and the double-edged sword that is the online news archive.

Silverman — who is, after all, an authority on corrections — notes several major problems in news outlets’ updates to rumor-based stories:

A particularly egregious example of the news media’s failure to update rumor articles. From Lies, Damn Lies and Viral Content, by Craig Silverman of Columbia Journalism School’s Tow Center for Digital Journalism.
  1. The updates don’t happen very often. Silverman and his team analyzed six of the claims that they tracked on Emergent, comparing the number of news organizations that originally covered the claim to those that followed up with a resolved truth-status — the percentage that followed up varied tremendously, but on average only hit slightly more than 50%.
  2. Such stories are often updated badly. Most notably, many news outlets will update the body text and then simply slap the word “Updated” on the headline — which results in headlines that make a rumor sound true, even when the body text makes clear that the rumor’s been debunked.
  3. Readers probably won’t see the update. “Obviously, there is no guarantee readers will come back to an article to see if it has been updated,” Silverman says — indeed, there’s very little guarantee, and very little chance.
  4. Mistaken articles persist — forever. “…Online articles exist permanently and can be accessed at any time. Search, social, and hyperlinks may be driving people to stories that are out of date or demonstrably incorrect,” Silverman writes. Even though news organizations followed up on a rumor with a debunking story roughly half the time, they did so by writing a new story. “This means the original, often incorrect story still exists on their websites and may show up in search results,” Silverman points out. Rarely were follow-up links added to the initial article.

Why are we in this mess?

and

How can we innovate out of it?

I think these concerns all tap into some major ways in which newspapers have failed to adapt to and take advantage of their new digital homes — and ways they can push forward:

Fish wrappers no more


The best use for error-filled stories. “Fish n chips” by Canadian Girl Scout. Licensed under Public Domain via Wikimedia Commons.

 

First, news outlets forget that old stories don’t disappear from the public consciousness like, well, yesterday’s newspaper. If only last week’s and last month’s articles could still be wrapped around take-out fish and chips, the oily residue rubbing out hastily written paragraphs and unwarranted presumptions. Those days are gone. Not only do the archives linger on newspapers’ websites, but they’re linked to by other pages that will probably never die, and they’ll also get brought up by the right combination of words in a Google search.

Of course, I’m being a bit facetious claiming that archives are simply a liability. They also represent an enormous opportunity (more on that below). But right now, archives are a double loss for newspapers: their liability is unmitigated and their potential is untapped.

The news encyclopedia

Which brings us to: most newspapers lack a smart strategy for leveraging and commoditizing their archives.


Putting aside for a moment our acknowledgment that archive stories could have been wrong to begin with, or could now simply be outdated, they still contain vast amounts of useful information that is not being used and will hardly be seen. If properly checked, coded and collated, a newspaper’s archives on a particular event would form a powerful living encyclopedia entry that would rival Wikipedia for accuracy and completeness.

If thinking about coding and collating the New York Times’ 13 million articles back to 1851 is too overwhelming, think about this: every story we write today is part of tomorrow’s archive. The best way to build that future archive into something useful and informative is to follow Silverman’s advice, that “asking reporters and producers to own a particular claim and maintain a page is a reasonable request.” Making substantial updates means an opportunity to reshare — and as Silverman points out, that drives traffic.

Silverman doesn’t spell out the shape of this claim page, but I’m thinking of two options. One, each article on the given topic is linked into (and receives links out from) a “hub page” that shows the development of the reporters’ investigation. There’s a prominent hyperlinked box on the top of every news story saying something like:

Or in a more streamlined fashion, maybe there aren’t new articles on the topic — maybe there’s one evolving article. To my mind, this goes way beyond just rumor-checking stories. It means moving from the outmoded and now relatively useless idea of a static article, in which we pretend that every story gets set in hot metal and therefore will never, ever be altered, to a system of constantly updated pages — whose changes, by ethical necessity, must also be completely transparent and easily comprehended. (And to prevent link rot and reference rot, you would need an easy way for people to link to a particular version of the page, pinned to a particular date.)
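The evolving-article idea above can be sketched as a tiny data model — every edit is retained, and a permalink pinned to a date resolves to whatever version was live at that moment. This is purely illustrative; the class name and methods are invented, not drawn from any real CMS:

```python
from datetime import datetime, timezone
from bisect import bisect_right

class VersionedArticle:
    """Hypothetical sketch: an article as an append-only list of versions,
    so date-pinned links never rot even as the page evolves."""

    def __init__(self):
        self._versions = []  # list of (published_at, text), in chronological order

    def update(self, when, text):
        """Publish a new version; the old ones remain addressable."""
        self._versions.append((when, text))

    def as_of(self, when):
        """Return the version a reader pinned to `when` would see."""
        i = bisect_right([t for t, _ in self._versions], when)
        return None if i == 0 else self._versions[i - 1][1]

art = VersionedArticle()
art.update(datetime(2015, 2, 1, tzinfo=timezone.utc),
           "Rumor reported: pumpkin-spice condoms on the way.")
art.update(datetime(2015, 2, 3, tzinfo=timezone.utc),
           "Debunked: Durex plans no such product.")
# A link pinned to Feb 2 shows the rumor-era version, transparently superseded.
print(art.as_of(datetime(2015, 2, 2, tzinfo=timezone.utc)))
```

The design choice worth noting is append-only storage: transparency falls out for free, because a diff between any two dated versions is always reconstructible.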

 

We’re no longer hemmed in by this. Photo credit: The Original Movable Type via photopin.

 

There are many ways to do this (perhaps most naively, dare I suggest we bring back the idea of a time axis for HTTP?). Some experimentation on this began ages ago and is already feeling old hat — I’m thinking in particular of the Guardian’s Live pages, which are great for conveying the latest on quickly developing news, but aren’t the best format for giving people a quick overview of the most pertinent facts. Anyway, news organizations have got complacent in this arena, and there is much more that can be done.

Corrections suck… but don’t have to

ALERT! ALERT! Corrections should be inescapable.

Corrections were never a particularly effective vehicle, and their minuscule power has arguably diminished still further as the demand for them has increased. In the past, corrections hid in a little box around page A2 or so of your paper. Maybe you stumbled over that box, maybe you didn’t. Today the mechanism is weaker still. Without a physical paper in hand, there’s little chance of anyone just accidentally casting their eye over the equivalent of page A2.

Yet our potential to get corrections to readers is so much greater now than 20 years ago. We have the technological tools to alert readers who’ve read the articles in question. Why don’t we harness this? There are so many ways it could work. A couple that spring to mind:

  • When you register for a site like the New York Times, you agree that the site will keep a log of what pages you visit so that, when something is corrected, it can ping you to let you know (you could select an email or a social media alert — or the news outlet could experiment to see what works best). Or maybe just by using the site (without registration) you have to accept these pings, as we accept cookies today.
  • Major news organizations could collaborate to launch a one-stop corrections notification shop. (Or a third party could develop one with news-org buy-in.) This app or plug-in would be voluntarily downloaded by the user and would track her news consumption, comparing it to a growing online corrections bank populated by the participating news orgs.
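At bottom, that second idea is just a join between a reader’s viewing log and a shared corrections feed. Here’s a minimal sketch in Python — every class and field name is invented for illustration; no such shared bank or API actually exists yet:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Correction:
    """One published correction, tied to the URL of the story it amends."""
    url: str
    issued: datetime
    summary: str


class CorrectionsBank:
    """Hypothetical shared bank that participating news orgs publish into."""

    def __init__(self):
        self.corrections = []

    def publish(self, url, summary):
        # A news org records a correction against a story URL.
        self.corrections.append(
            Correction(url, datetime.now(timezone.utc), summary)
        )

    def alerts_for(self, reading_log):
        # The reader's app compares her locally tracked page visits
        # against the bank and surfaces any matching corrections.
        seen = set(reading_log)
        return [c for c in self.corrections if c.url in seen]


# Usage: one correction is published; the reader saw that story,
# so she gets exactly one alert.
bank = CorrectionsBank()
bank.publish("https://example.com/mayor-story", "Corrected the mayor's title.")
reading_log = [
    "https://example.com/mayor-story",
    "https://example.com/unrelated-story",
]
alerts = bank.alerts_for(reading_log)
```

The real engineering questions, of course, are the ones this sketch dodges: keeping the reading log on the reader’s device rather than the publisher’s servers, and agreeing on a common corrections format across newsrooms.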

Let’s talk

Those are just a few of the idea wheels I’ve got spinning in the wake of this very important (and may I add, eminently readable) report. Systematic research and creative innovation are both, sadly, such rare beasts in journalism today — but they don’t have to be. Share your thoughts, and let’s move this conversation forward!

Cross-posted to Medium.


Day 9: Another morning, another stunning backdrop of cliffs and tuffs. We took a dinghy ride along the rock face of Santiago Island’s Buccaneer Cove, whose striations were formed by successive volcanic eruptions, combined with the effects of waves and salt. We saw cormorants, swallow-tailed gulls (including a very hungry, continually chirping youngster, almost as big as its parents) and many brown noddies. Also a couple sea lions napping on rocks.

We made a brief return to the boat, followed by a wet landing on the soft-sanded Playa Espumilla. There, ghost crabs made fleeting appearances, skimming a foot or two across the sand before disappearing into their holes. We walked inland a bit to a lagoon, in a near-pastoral setting: rain had transformed the palo santo trees there into a minty green. The lagoon was a home for flamingos until the most recent El Niño event, which caused sedimentation.

We returned to the beach and I passed a very pleasant hour with my dad, walking where the surprisingly warm water could wash over our calves. Turtle trails disappeared into the mangroves here and there, evidence of the females once again tiring of male attention.


In the afternoon we made a wet landing on Rabida Island, which was unlike anything we’d seen before. The pebbles and sand on the beach were the color of dark brick. (I imagine that this is what Kauai’s iron-rich soil might have looked like thousands of years ago.) Here, our final walk was amid a riot of colors: red rocks, azure sea, turquoise lagoon, whitish-silver palo santo trees, greenish-silver grasses, lime green mangroves, and the yellow pop of little cactus flowers. The upper third of the hills above was also tinged a minty green – not so much from the scant foliage of the palo santo, but more from lichen.

We saw Galapagos doves, medium cactus finches, mockingbirds and black mangrove trees.


Nice, quick snorkeling dip: saw more King angelfish, juvenile Cortez rainbow wrasse, and Mexican hogfish, and two or three adult blue-chin parrotfish. The adults were usually paired with their initial-phase counterparts, who are perhaps even more brilliant, in their costumes of golden yellow with vertical periwinkle stripes. Both phases have a friendly, somewhat amused expression.

I also got to add several charming species to my list, including a young yellowtail damselfish. The sight of his blueish-black, oval body wriggling into a round crevice, followed by his rippling yellow tail, was arresting enough – but when he popped his head out, with its puckering yellow lips, I couldn’t help but smile.

The puffer-like Panamic fanged blennies were also sweet, with their watchful stances within crevices. The patient, coral-clinging giant hawkfish was beautiful, with its snakeskin-like pattern. There were pink sea anemones and pencil sea urchins – and I finally got to see a white-tipped shark, who glided past me rather unassumingly, as if unaware of all the fuss his kind usually causes.

I was growing concerned that my “waterproof” camera wasn’t as robust as advertised, so left it on shore for this outing – you’ll have to cope with just my words this time.

This is the eighth in my Galapagos travel diary series. See the rest here.