
Archive for the ‘Science communication’ Category

Picture credit: NOAA.

Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week 11 is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week’s focus:

What is/should be the goal of climate science education at the high school or college level? Should it include “belief in” human caused climate change in addition to comprehension of the best available scientific evidence?

I started off thinking I had not changed my mind since writing my evolution education post two weeks ago. I planned to contend that, as with evolution, there is a reason that we are not satisfied for students to simply acquire knowledge about climate change. If they were to cogently describe what the theory of anthropogenic global warming (AGW) entails, but flat-out deny the truth of the theory, that would leave us unsatisfied – not just because global warming is a pressing issue which requires political will and thus voter backing to tackle (though that’s certainly true) but because we’d be left with the feeling that on some level the student still doesn’t “get it.”

Unpacking my argument from last week – which proposed that we should aim for students to believe the following…

(proposition p) Evolution, which says x, is the best supported scientific way of understanding the origins of various species, the way species adapt to their environment, etc etc.

… I can identify three reasons for this to be our aim:

  • First, because evolution *is* the best scientific explanation for these phenomena, and thus by knowing this, students know a true fact about the world;
  • Second, because armed with that knowledge, they are better equipped to apply the theory of evolution to scientific and other real-world problems; and
  • Third, (as I outlined in my comment to Cortlandt on the next post) because we wish students to understand the scientific justification for the theory of evolution, and if they understand that, then belief in proposition (p) necessarily follows. (It occurs to me now, however, that this is not the most terrific argument, because necessity does not flow in the other direction. Believing that p does not necessarily mean the student understands the scientific justification for evolutionary theory; he could take (p) on faith.)
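To make that last point precise, here is the structure of the argument in logical shorthand (my notation, not anything from the readings): let $U$ stand for "the student understands the scientific justification for evolutionary theory" and $B_p$ for "the student believes proposition (p)." The claim is

$$U \Rightarrow B_p \quad\text{but}\quad B_p \not\Rightarrow U,$$

since a student could arrive at $B_p$ by taking (p) on faith rather than by grasping the evidence.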

The consensus problem

The climate equivalent of proposition (p) might be something like:

(q) The theory of anthropogenic climate change is the best scientific explanation we have for observed increases in the mean global temperature, and the theory predicts that if man continues to produce greenhouse gases at a similar rate, the temperature will continue to rise.

Proposition (p) could have included a stipulation about predictive power – indeed, to be a valid scientific theory, the theory of evolution must have predictive power. But while I didn’t think that needed to be spelled out for (p), I have done so for (q), because climate change is a subject whose vital importance – and whose controversy – truly rests on its predictions.

But there’s a problem here, and maybe a mismatch. In proposing that we aim for student belief in proposition (p), I figured we were disentangling identity from knowledge. Any student, taught well enough, could come to see that proposition (p) is true – and still choose not to believe in evolution, because their identity causes them to choose religious explanations over scientific ones.

For climate change, however, we may not get that far. There seems to be mixed evidence for the effectiveness of communicating scientific consensus on AGW.

As previously discussed, Lewandowsky et al found that subjects told about the 97 percent scientific consensus expressed a higher certainty that CO2 emissions cause climate change. Dan Kahan counters that this finding seems to bear little external validity, since these are not the results we’ve seen in the real world. From 2003 to 2013, the proportion of the US public who said human activities were the main cause of global warming declined from 61 to 57 percent.

In Cultural Cognition of Scientific Consensus, Kahan finds that ideology, ie “who people are,” drives perceptions of the climate change consensus. While 68% of egalitarian communitarians in the study said that most expert scientists agree that global warming is man-made, only 12% of hierarchical individualists said so.


From Kahan, Jenkins-Smith and Braman, Cultural Cognition of Scientific Consensus. Journal of Risk Research, Vol. 14, pp. 147-74, 2011.

 

On the other hand, as Kahan said in a lecture at the University of Colorado last week (which I live-streamed here – unfortunately I don’t think they’ve posted the recording), most people who dismiss AGW nonetheless recognize that there is a scientific consensus on the issue. At least on the surface this seems at odds with Kahan’s previous findings, so I’d like to look further into these results. (I think the difference may come down to what Kahan describes, in Climate-Science Communication and the Measurement Problem, as the difference between questions that genuinely ask about what people know and those that trigger people to answer in a way that aligns with their identity. Why one of Kahan’s consensus questions fell in the former camp and one in the latter, I do not yet know.)

How is it possible that someone can recognize the scientific consensus on AGW, but still dismiss the truth of AGW? The most natural answer is that such people acknowledge the consensus exists but simply do not defer to it, perhaps arguing that the scientists are biased and untrustworthy. This, by the way, strongly suggests that we should always have expected consensus messaging to fail!

 

So, if the aim is not consensus…?

Returning to education, I think this warning about consensus messaging points to the importance of creating a personal understanding of the science – i.e., exposing students to the reasoning and evidence behind climate change theory, and walking them through some of the discovery processes that scientists themselves have used. There may be serious limits to what this can achieve, because smart students may perceive that the arguments being used in the classroom have been developed by the scientists that they distrust. But undecided students may be persuaded by the fundamental soundness of the scientific arguments.

There is another danger: conservative students (especially the smart ones) may also reject the scientific arguments advanced in class because they will perceive that at a certain point they must take things on authority; that the processes involved are too complex and the amount of data too large for a non-specialist to come to a solid independent judgment on. Furthermore, these students can entertain the idea that there is a viable alternative scientific theory, because many prominent voices back up that view.

 

Back to evolution

Again looking back at last week, I realize now that the same problem exists for evolution. The genius of “intelligent design” and “creation science” is that they allow an exit from the scientific-religious conflict in what many of us would call the wrong direction. Students can use this “out” to accept the science they like, reject the science they don’t, and view it all as “scientific theory.” Rather than accept (p) and then be forced to either choose religion over science, or somehow partition these parts of themselves (which Hermann, as well as Everhart, indicates is how many people cope), students may use religion *as* science and reject (p) altogether.

So now I’m beginning to doubt whether my aim in that essay really was achievable. It’s probably still a good idea to aim for beliefs of type (p), because this is a means of encouraging scientific literacy and nature of science understanding. But religious students with a good grasp of the nature of science will probably still find that “out” and will not agree with the evolution proposition. And other, less scientifically oriented students will simply say, “OK, this is the best science, but I trust religion over science.”


Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week ten is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week’s “questions to consider” (a reading list is here):

1. What is the relationship—empirically—between “accepting”/“believing in” evolution and “understanding”/“comprehending” it?  Are they correlated with one another? Does the former have a causal impact on the latter?  Vice versa?
2. What is the relationship—psychologically—between “accepting”/“believing in” evolution and “understanding”/“comprehending” it? Is it possible to “comprehend” or “know” without “believing in” evolution?  Can someone “disbelieve” or “not accept” evolution and still use knowledge or comprehension of it to do something that knowledge of it requires?  Are there things that people are enabled to do by “belief” or “disbelief in” evolution?  If the answers to the last two questions are both “yes,” can one person use knowledge or comprehension to do some things and disbelief to do others?  What does it mean to “believe in” or “disbelieve in” evolution? Is it correct to equate the mental operation or state of “believing” or “disbelieving” in evolution with the same mental state or operation that is involved in, say, “believing” or “disbelieving” that one is currently sitting in a chair?
3. What—normatively—is (should be) the aim of teaching evolution: “belief,” “knowledge,” or “both”?
4. If one treats attainment of “knowledge” or “comprehension” as the normative goal, how should science educators regard students’ “beliefs”?
5. If one treats attainment of “knowledge” or “comprehension” as the normative goal of science education, how should one regard political or cultural conflict over belief in evolution?

My response:

The empirical relationship between knowledge, understanding and belief

The evidence points strongly towards a distinction between knowledge and belief, for the simple reason that so many students have been able to demonstrate the former without the latter (or vice-versa):

  • In Hermann’s (2012) study, there was no statistical difference in understanding of evolution concepts between two extreme sub-groups of students, one believing in evolution and the big bang theory, and one not.
  • Sinatra 2003 (cited in Hermann) similarly suggests no relationship between belief and understanding of evolution.
  • Blackwell found what could be termed a disjoint between understanding/application and belief: although an overwhelming majority of students selected appropriate answers when asked to categorize examples of specific evolutionary processes, the percent considering evolution the primary basis for progression of life on earth was 34-35 percent (with slightly different percentages for the two classes surveyed). The percent considering evolution compatible with their belief system was 27-29%, but only 6-9% said they could never believe in evolution.
  • Other studies found that it is common for students to believe in evolution without understanding it (Lord and Marino 1993, Bishop and Anderson 1990, Demastes-Southerland et al 1995, Jakobi 2010, all cited in Hermann).

A section from Blackwell’s study, in which students had to categorize examples of various evolutionary processes.

On the other hand, according to Hermann, several studies (Lawson and Worsnop 1992, Sinclair and Pendarvis 1997, Rutledge and Mitchell 2002) found that adherence to a religious belief system influenced the extent to which evolution was understood.

Defining our terms: the psychological relationship between knowledge, understanding and belief

The vagueness of the terms “belief,” “understanding” and “knowledge” obviously should give us pause when we are trying to make sense of these empirical findings.

I think we should try to define the terms in the way that is most fruitful to the problem at hand (while also seeking as much as possible not to create conflict with existing common usage, and to allow the above empirical findings to be applied). That problem is often put thusly, “Many students understand evolution, or demonstrate knowledge of evolution, without believing in it. Does this matter, and if so, what should we do about it?”

With this in mind we can come up with some rough working definitions:

  • Knowledge: Retained and retrievable true facts about physical properties and processes.
  • Understanding: A deeper form of knowledge accomplished by forming multiple connections between true facts on the subject at hand (evolution) and between the subject and others. (Similar to Gauld 2001’s definition, cited by Smith 2004.) An example of one such connection may be between a scientific theory and its supporting evidence (as suggested by Shipman 2002, cited by Hermann).
  • Belief: A committed, often emotional attachment to a proposition, which itself may be true or untrue, falsifiable or not. I would argue that it is sensible to talk both of faith-based belief in religious precepts, and of belief in scientific theories, which can be driven by thorough understanding of their scientific basis, or by blind faith in scientists. (I subscribe to the summary by Gess-Newsome 1999 [cited by Smith], of knowledge as “evidential, dynamic, emotionally-neutral”, and belief as “both evidential and non-evidential, static, emotionally-bound.”)

These definitions mesh well, I believe, with most of the empirical findings we read about this week, including Hermann, Smith and Everhart (2013). Hermann, for example, builds on Cobern’s partitioning concept to conclude that religious students view science ideas as “facts,” categorized differently than beliefs, which have a stronger emotional attachment. This helps students compartmentalize because they have created an “emotional distance” between scientific conceptions and religious beliefs.

In creating these definitions I have had to dismiss definitions that I think are unhelpful for the problem at hand. For example, according to Hermann, Cobern (1996) stated that knowing “is the metaphysical process by which one accepts a comprehended concept as true or valid.” But this definition is actually much more like belief, as most of this week’s reading understands it.

I’ve also had to discard the philosophical convention that belief is a necessary condition of knowledge (Smith). When describing the way that people learn, and knowledge acquisition’s interaction with existing belief systems, this stipulation just doesn’t make sense (given the evidence we have of knowledge without belief). By casting off the impractical philosophical definition, I resolve a problem that Smith recognized – that if knowledge is dependent on belief, science education must foster belief.

There will always, I think, be messy edges and overlap between these realms. For example, it is hard to think of much useful knowledge that we can retain as utterly isolated “facts.” Facts that are part of a coherent schema are easier to retain or retrieve. We do, however, remember groups of facts that are connected in greater or lesser degree, both to each other and to other facts and schema in our brains. The difference between knowledge and understanding is thus one of degree.

Is lack of belief a problem? Or is it lack of understanding?

It should be noted that the issue with religious students’ non-belief in evolution is not merely one of semantics or a confusion of terms. The problem is that we are not satisfied with students merely believing evolution in the way that they believe in discredited Lamarckian or Ptolemaic ideas. We don’t want them simply to believe “that evolution says x”: that implies that evolution has no special empirical status and may as well be false, as those outdated scientific theories are. A student who can say only “that evolution says x” is merely parroting scientific language. She is in truth only a historian of science rather than truly a scientist herself – and I think that’s what so bothers us about the learning outcomes exhibited by students like Aidan and Krista, in Hermann’s study. We come away with the sense that their knowledge falls short of true scientific understanding.

I agree with Smith, however, that we should not go so far as to seek or require belief – or perhaps, I might say, “complete belief.” It is not and should not be the goal of a science class to completely overhaul students’ worldview, religion and all.

What we are seeking is for students to believe something like:

“Evolution, which says x, is the best supported scientific way of understanding the origins of various species, the way species adapt to their environment, etc etc.” (A conclusion similar to Smith 2004.)

And this requires an understanding of evolution, in the strong sense of understanding, which encompasses comprehension of justification. One may even argue that this type of belief follows necessarily from strong understanding: that is, if you understand the mechanism of and scientific basis for evolution, and the comparative paucity of scientific explanation for other theories of species’ origins, then you will necessarily believe that “Evolution, which says x, is the best supported… etc, etc.” This could be a neat logical maneuver to employ, because it means that we can avoid talking about the need for students to “believe in” evolution – which carries a lot of nasty cultural baggage – and just talk about understanding instead.

While several empirical studies have demonstrated that students can easily demonstrate knowledge of evolution without belief in evolution, understanding is a much more slippery eel. As previously alluded to, understanding encompasses a wide spectrum, starting from a state barely stronger than plain knowledge. But I would argue that understanding evolution, in its strong form, encompasses an understanding of the scientific justification for the theory of evolution – and that necessitates an understanding of the nature of science (NOS) itself.

Nature of science: the path to strong understanding of evolution

The best tactic for accomplishing this right kind of belief in evolution, or strong understanding – and, happily, a key to solving much else that is wrong with science education today – is to place much more emphasis on the scientific method and the epistemology of science. This includes addressing what sorts of questions can be addressed by science, and what can’t; and also the skeptical, doubtful tension within science, in which things are rarely “proven” yet are for good reason “believed.” Crucially, this involves helping students to understand the true meaning of “scientific theory,” whose misunderstanding often underpins further misconceptions about evolution’s truth status.

This effort also involves exploring the tension between self-discovery and reliance on authority – acknowledging that it is important for students to learn to operate and think like scientists, and that we want as much as possible for them to acquire knowledge in this way, but that the world is far too complex for us all to gather our own data on everything. So students must learn how to judge the studies and reasoning of others, how to determine what evidence from others is applicable to what conclusions or situations, and how to judge who is a credible expert.

Misunderstandings of the nature of science (as well as certain broad scientific concepts) often lie at the heart of disbelief in evolution, as Hermann illustrates. In his qualitative study, both students showed a poor understanding of the methods and underlying philosophy of science, displaying a need for truth and proof – despite their good science knowledge performance.

Smith, rather inadvertently, gave another example of this problem. He cites a student who wrote to Good (2001):

“I have to disagree with the answers I wrote on the exam. I do not believe that some millions of years ago, a bunch of stuff blew up and from all that disorder we got this beautiful and perfect system we know as our universe… To say that the universe “just happened” or “evolved” requires more faith than to believe that God is behind the complex organization of our solar system…”

Good uses this passage to justify making belief a goal of science education. Smith takes a contrary view, that “meaningful learning has indeed occurred when our four criteria of understanding outlined above have been achieved – even if belief does not follow” (emphasis in original). Instead I would argue that the student does not understand evolution in a meaningful way, having false impressions of underlying scientific and philosophical concepts such as entropy, order, and Occam’s razor.

Will nature-of-science education work with all students?

The research outlined above gives a mixed prognosis for our ability to overcome these issues and foster belief in the evolution proposition. Everhart’s work with Muslim doctors suggests that most participants recognized subtly different meanings of the theory of evolution, and could consider evolution in relation to different contexts, such as religion and practical applications, with attitudes to evolution changing when the relative weights of these meanings were shifted. These meanings include a professional evaluation of the theory that could be held distinct from other evaluations. This suggests that participants may recognize the truth of evolution within a science epistemology framework – which should be sufficient for belief in our proposition – while not giving evolution the same status within other, more personal epistemologies.

But Hermann suggests that students ultimately fail in integrating science and religion, which creates a fear of losing religious faith, causing the student to cling to the religious view while further compartmentalizing science concepts. This drives at the hard, hard problem at hand: even with a perfect understanding both of evolution and of the nature of science, religious students are likely to run into areas of conflict that create psychological discomfort. This is because the epistemic boundaries of science and religion are neither hard nor perfect. Some of the areas that science claims as well within its remit to explain – such as the age of the earth – run into competing claims from religion.

One way out of this conundrum is for a student to redraw the boundaries – to say, OK, I accept the scientific method where it does not conflict with my faith; but on this matter I must reject it. Hermann’s subjects appear to have done this to a certain extent, but run up against limits. I would hypothesize that this line-drawing process itself leads to further discomfort, especially among students who are brighter and/or show greater understanding of the nature of science, because they would consciously or unconsciously recognize the arbitrary nature of line-drawing. And unfortunately, one good way to resolve that discomfort would then be to discredit the scientific method.


Credit: UK Government

Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week nine is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week’s (well, last week’s) reading focused on synthetic biology. Dan invited us to imagine that the White House Office of Science and Technology Policy had asked us to study the public’s likely reaction to this emerging technology. What kind of studies would we do?

The readings were:

Presidential Commission for the Study of Bioethical Issues. New Directions: The Ethics of Synthetic Biology and Emerging Technologies (December 2010).

Pauwels, E. Review of quantitative and qualitative studies on U.S. public perceptions of synthetic biology. Syst Synth Biol 3, 37-46 (2009).

Dragojlovic, N. & Einsiedel, E. Playing God or just unnatural? Religious beliefs and approval of synthetic biology. Public Understanding of Science 22, 869-885 (published online 2012, version of record 2013 – for convenience’s sake, I will refer to this as “Dragojlovic 2012”)

Dragojlovic, N. & Einsiedel, E. Framing Synthetic Biology: Evolutionary Distance, Conceptions of Nature, and the Unnaturalness Objection. Science Communication (2013)

I want to start off by taking stock: listing what we appear to know already, based on this week’s readings, and then figure out what outstanding questions remain.

What we know(ish)

Here’s a summary of findings from the readings (roughly organized from strongest evidence base to weakest):

  • Most people know little or nothing about synthetic biology (Pauwels)
  • The familiarity argument – that as people become more familiar with a technology, their support for the technology will increase – is not well supported (Pauwels, others)
  • For many people, synthetic biology provokes concerns about “playing God” and who has the right to “create life” (Pauwels, Dragojlovic 2012)
  • Framing for synthetic biology is similar to that for cloning, genetic engineering and stem cell research (Pauwels)
  • Domain of application has an effect on framing (Pauwels)
  • Acceptance of risk-benefit tradeoff depends on oversight structure that can manage unknowns, human and environmental concerns, and long-term effects (Pauwels)
  • Belief in God increases disapproval of synbio through two mechanisms – the idea (among weak believers) that genetic manipulation interferes with nature, and the idea (among strong believers) of encroachment on divine prerogative (Dragojlovic 2012)
  • Framing synbio as “unnatural” leads to negative perceptions only when characteristics of the particular technological application – eg, evolutionary distance between DNA donor and DNA host – increase perceived relevance of such arguments (Dragojlovic 2013)
  • Individuals who view nature as sacred or spiritual are most responsive to unnaturalness framing (Dragojlovic 2013)


Now, to answer the questions – via a little additional reading.

 

Part 1: Single study

The question:

“Imagine you were asked by the White House Office of Science and Technology Policy to do a single study to help forecast the public’s likely reaction to synthetic biology. What sort of study would you do?”

At this juncture, it is probably more useful to model the general reactions people have and the associations they make when they learn about synthetic biology, rather than simply polling their support for the technology. (As we previously discussed, there’s little external validity to questions asking for opinions on something that most respondents don’t understand.)

I think the starting point would have to be more qualitative studies (or – cheating a bit – a mixed-methods study that starts with a qualitative phase). There seems to be little sense in creating a quantitative study in which the choices of responses are simply sentiments that we guessed people would entertain – far better to convene focus groups and see what sentiments people actually entertain. This would lay the groundwork for more informed quantitative studies.

Among the reading for this week, the only qualitative research was the pair of Woodrow Wilson International Center for Scholars studies discussed in Pauwels. These produced some insights – but as Pauwels points out, “The most important conclusion of this article is the need for additional investigation of different factors that will shape public perceptions about synthetic biology, its potential benefits, and its potential risks.”

Some of this work has now been carried out.

Looking beyond the week’s reading, I see that the Wilson Center has continued to carry out both qualitative and quantitative studies, some of which Pauwels summarized in a 2013 paper, “Public Understanding of Synthetic Biology.”

Her major findings were:

  • Before hearing a definition of synthetic biology and learning about its applications, participants tended to describe synbio through comparisons to other biotechnology, such as cloning, genetic engineering and stem cell research. This could be crucial to understanding the ways that public debate about synbio might evolve, Pauwels contends.
  • Participants – even some of those generally positive about synthetic biology – expressed concerns about unintended consequences. (Interesting to note that some of these concerns came up when discussing genetically modified mosquitoes, a topic from a previous week in this class.)
  • Participants’ value judgment about synthetic biology varied depending on the technology’s proposed application. If the proposed application was in an environment that appeared more contained, participants were less concerned about risks.
  • Participants expressed ambivalence about engineering life. These attitudes take the form not only of the much-discussed unease at “creating life” and “playing God,” but also much more generalized anxiety – “this term makes me feel scared.”

This is a very good start, but I feel there’s a bit more unpacking a qualitative study could do.

For example, under “ambivalence toward engineering life,” Pauwels includes the following reactions from participants:

“It could also be dangerous if we do not research it enough to find out any long-term effects.”

“This could lead to huge scientific advances, but it can also lead to countries or people using it for their own ‘evil agendas.’ It reminds me of Jurassic Park.”

“It seems exciting but makes me somewhat uncomfortable. Where are the limits?”

“It sounds like we are playing God. Who are we as humans to think [that] we can design or redesign life? It might be nice to be able to do so, but is it right? It seems [that] there are many ethical and moral issues. Perhaps we are getting too arrogant.”

“I feel concerned because, not being perfect, we believe we know what is best in creating life. As in science-fiction movies, when we do—in time—it goes in a direction we didn’t think about… I believe [that] when life is created, it is meant to be created that way for a purpose we may not even know right now.”

There are many underlying fears and concerns there, expressed in various combinations. These include concerns about unknowables (to borrow a phrase, both known unknowns and unknown unknowns), long-term effects, human and scientific hubris, immoral applications by bad actors, security, unnaturalness, and violations of nature or of God’s dominion. There’s also an implied recognition (“where are the limits?” “many ethical and moral issues”) of the need to prevent technological applications that exceed society’s moral norms, and of the potential of technological advances to change the very locus of our morality.

I’m particularly concerned with the need to explore the public’s feelings on moral limits. So far, studies of the public’s moral objections to synthetic biology have focused on intrinsic moral objections (it is wrong to usurp God’s position as creator) rather than extrinsic moral objections (certain applications would be morally problematic). This seems strange given that as a society we have already collectively recognized some biotech applications as unethical – most notably, human cloning. It therefore seems imperative to explore public opinion on the subject, and to try to separate measures of intrinsic and extrinsic moral objection.

With this preliminary information at hand, the most useful question to ask next is which of these attitudes, or general sets of attitudes, is most responsible for a negative predisposition to synthetic biology.

Part 2: More studies

The question:

“Imagine you conducted the first study and the OSTP said, ‘wow, that’s a great start, one that convinces us that we need to do a lot more. We’d like you to outline a comprehensive program of empirical study—consisting of as many individual studies, building progressively on each other, as you like—with a budget of $25 million and a lifespan of 36 months.’ What might such a program look like?”

I would propose a series of quantitative studies that would seek to model a situation in which citizens learn about synthetic biology, and then seek to establish the frequency of the ideas and opinions expressed in the qualitative study.

Participants would be given a basic description of synthetic biology, and would then be asked to agree or disagree with the following (or perhaps, indicate their level of agreement on a multi-point scale):

  • Synthetic biology is unnatural.
  • Those who practice synthetic biology are playing God.
  • Synthetic biology scares me.
  • Synthetic biology just feels wrong.
  • If we start using synthetic biology, we may not be able to control the consequences. (With variations for environment, human health, security.)
  • I’m concerned that we don’t know what the long-term effects of synthetic biology will be. (With variations for environment, human health, security.)
  • Synthetic biology holds great promise.
  • Synthetic biology is exciting.
  • Synthetic biology could improve people’s lives.
  • Etc.

Potentially a great deal could be learned just from the correlations between these responses. For example, are there many respondents who say synthetic biology “just feels wrong,” but don’t agree with any of the usual-suspect statements about why it feels wrong? This would indicate either that synthetic biology taps into a deep-seated fear to which people find it difficult to give voice or attribute a cause – or perhaps that there is an expressible reason for their misgiving that we haven’t yet succeeded in drawing out of qualitative study participants.

Another hypothesis to explore: perhaps there is a strong correlation between unnatural/playing God responses and fear of unintended consequences. This may indicate that expressions such as “playing God” are sometimes used less to express a religious or spiritual conviction, and more to express a sense of humanity’s hubris.

It would be useful to pair these questions with a five-point measure of respondents’ support for synthetic biology, to try and determine the relationship between support strength and various attitudes.
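As a concrete sketch of this analysis step, here is what the correlation screen might look like in Python. The item names and responses are hypothetical stand-ins for real survey data – an illustration of the approach, not a finished instrument:

```python
import numpy as np
import pandas as pd

# Simulated 5-point Likert data (1 = strongly disagree, 5 = strongly agree).
# Item names are hypothetical stand-ins for the survey statements above.
rng = np.random.default_rng(0)
items = ["unnatural", "playing_god", "scares_me", "feels_wrong",
         "uncontrollable", "longterm_unknown", "support"]
df = pd.DataFrame(rng.integers(1, 6, size=(500, len(items))), columns=items)

# 1. Inter-item correlations, e.g. to test whether "playing God" responses
#    track fear of unintended consequences.
print(df.corr(method="spearman").round(2))

# 2. Flag respondents who agree that synbio "just feels wrong" (4-5) while
#    rejecting every usual-suspect reason (1-2 on each).
reasons = ["unnatural", "playing_god", "scares_me",
           "uncontrollable", "longterm_unknown"]
puzzling = df[(df["feels_wrong"] >= 4) & (df[reasons] <= 2).all(axis=1)]
print(f"{len(puzzling)} of {len(df)} feel wrongness but endorse no stated reason")

# 3. How each attitude item relates to overall support for the technology.
print(df[reasons + ["feels_wrong"]].corrwith(df["support"], method="spearman"))
```

With real data, the group flagged in step 2 is the interesting one: a large count would suggest a misgiving our qualitative work has not yet surfaced.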

I think it could also be useful to ask a series of questions that attempt to get at the way people make risk-benefit analyses about synthetic biology. This may also have an interesting bearing on their level of support. (As Dragojlovic (2012) points out, a key further question to arise from that study was: how do we consider risk-benefit trade-offs in a way that accommodates value-based risks?) Participants could be asked to agree or disagree (on a five-point scale) with the following:

  • The risks of synthetic biology outweigh the benefits.
  • The benefits of synthetic biology outweigh the risks.
  • There is no acceptable level of risk for a technology or product. (Perhaps ask variations on this tailored to human health, environment, etc.)
  • The best way to judge whether we should use a technology is to weigh the benefits against the risks.
  • It doesn’t matter what the benefits or risks of a technology are; if it’s unethical, we shouldn’t use it.
  • The “rightness” or “wrongness” of synthetic biology depends on how it’s used.

Etc, etc – that’s an imperfect start, for sure, but I think with the right questions we could get into an interesting area of psychology.

Outstanding questions

There is, of course, much more that can be investigated. Here are the major areas that Pauwels and Dragojlovic highlighted as ripe for future research – along with a few extra thoughts of my own.

  • We need further investigation of factors that will shape public perceptions about synthetic biology, and its benefits and risks (Pauwels). I think this is key – several of the studies we read followed up on “playing God”/”creation of life” concerns, but these concerns are probably only responsible for a small proportion of objections to synthetic biology. In Dragojlovic 2013, the baseline model, which included only the experimental manipulations (unnaturalness framing, evolutionary distance and so on), explained about 5% of variance in attitudes. This, Dragojlovic says, shows that most attitude variance is due to other factors.
  • Pauwels asks about the nature of claims raised by “playing God”/“creation of life” concerns: “does it refer to polarization involving broad cultural/philosophical dimensions or to polarization strictly linked to religious values?” Dragojlovic 2012 illuminates some aspects of this but leaves further questions on the table. Intriguingly, the Presidential Commission says it “learned that secular critics of the field are more likely to use the phrase ‘playing God’ than are religious groups.” This may hold true only for organizational leaders and not for the populace at large, but it still neatly points out the importance of separating the religious and philosophical/cultural dimensions of the “playing God” objection.
  • Note that Dragojlovic 2012 was carried out in Europe – so a similar study of religious objections carried out in the US could yield quite different results.
  • What constitute effective counter-arguments to the unnaturalness objection? (Dragojlovic 2013)
  • Identify conditions under which advocates and opponents of emerging technology can use rhetorical frames to shape how citizens perceive the technology (Dragojlovic 2013)
  • Who is more or less likely to be swayed out of the unnaturalness objection – the religious or the irreligious?
  • What is the relationship between the “unnaturalness” and “playing God” objections? It seems like there is a lot of overlap, but an effective communications strategy would surely depend on understanding how each interacts with personal identity, which objections are simply immutable and which are more finely shaded, etc.


Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week eight is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week, two interlocking sets of questions have arisen for me:

  1. Is it problematic that many tests of opinion reflect the ad-hoc opinion people form about something they don’t know about or don’t understand?
  2. How are we to weigh the relative importance of the opinions of know-nothings, know-slightly-somethings, know-quite-a-lots, etc?

Nanotech vs. GM foods vs. fracking

These questions came about because the various emerging technologies under discussion – nanotechnology, genetically modified food and fracking – seem to have different profiles in terms of how much people know about them, versus people’s opinion of the risks involved or the advisability of the technology. (We’ll put aside discussion of GM mosquitoes for now, as they’re a bit more of an isolated case.)

From a completely unsystematic review of the literature I happened to have at my fingertips, I drew up this rough approximation:

  • People know the least about nanotechnology, and their feelings about it are pretty neutral.
  • People know a bit more about GM foods, though roughly half the population still knows close to nothing. On GM foods, average opinion ranges from neutral to very negative, depending on the question being asked.
  • People’s knowledge of fracking is roughly equal to their knowledge of GM foods. Opinion tends to the negative but I don’t have a strong sense – one study I looked at had only 22% in favor but another 58% undecided. Pew found 41% in favor of expanding fracking.

(By the way, where you really see the big risk-perception differences is when you compare the polarization on these issues – that is, how risk perception correlates with ideological outlook. That’s one more variable than my brain can really handle in this early stage of theory formation, so for now let’s just put it in a nearby cubby, as a reminder to come back and visit later.)

Is know-nothing opinion data meaningless?

Now here’s the point where you might expect me to say, “Hang on – let’s get into the numbers, and let’s disaggregate them. If we want a true sense of public opinion, let’s only look at the favorability among those familiar with the technology – because if they don’t know what they’re judging, how can they judge?”

That certainly seems the tack taken by many social scientists. George Bishop’s book The Illusion of Public Opinion discusses the many ways that the public’s lack of knowledge confounds opinion polls, especially when paired with bad survey design. Good researchers word their questions carefully to try to elicit a true opinion – though there are arguably limits to what they can do.

Dan Kahan has called out a Pew poll on GM food as one example of bad survey design producing meaningless “opinion” data – and I think he’s mostly right. But I would argue it is actually quite important that we measure the opinion of the “know-nothings” (or at least, “know-next-to-nothings”) and “know-littles.”

This is because people do hold opinions about stuff they don’t understand. They do it all the time!

From a purely logical point of view, of course, this makes no sense. A proposition needs a clear reference to have meaning, you might say. But people aren’t very rational. They don’t make a lot of sense. They have limited time for learning about the world around them, and somehow are expected to produce opinions on that world. (A nasty pairing that Walter Lippmann observed back in 1922, but which is all the more true today due to increasing technological complexity and the demands of social media.)

A philosophy of Subway

Take this example. The website I Fucking Love Science posted a manufactured meme on Facebook – a “dihydrogen monoxide” water hoax – to which a few people reacted in alarm.

Just on the basis of this one hoax meme, some people started to proclaim their intention to boycott Subway. Whether they’d really follow through, I don’t know. But what’s interesting is the object of their concern.

A philosopher might say that for these commenters, the reference of “DHMO” has been displaced. The true reference of “dihydrogen monoxide” is the substance water, which ordinarily could be understood through use of various names, or “senses” – such as “water,” “H2O,” and so on. But for these commenters, the reference of “DHMO” is something like “this chemical that has all these bad properties.” The commenters then form their opinion using their own reference for DHMO.

But if a pollster came and asked them for an opinion such as “should we ban dihydrogen monoxide from our food,” he probably wouldn’t probe that deeply – and would just be measuring their opinion about the true reference, water.

That’s wrong and it’s also right. It is wrong in the sense that if you want to know what people truly think about water, you’ll have failed. But if you want to know what policy action they want taken about water, it’s relevant. People will spread their misconceptions to others, have them in mind when thinking about and voting for politicians, and draw on them when grocery shopping. Probably when it comes to DHMO, they won’t get very far before someone corrects them. But other misconception-based opinions, whose errors are more subtle, have real power to shape policy.

Kahan encountered a variant of this when his colleague briefly defined fracking for a woman who hadn’t previously heard of it:

“It’s a technique by which high pressure water mixed with various chemicals is used to fracture underground rock formations so that natural gas can be extracted.”

“Oh my god!” the receptionist exclaimed. “That sounds terrifying! The chemicals—they’ll likely poison us. And surely there will be earthquakes!”

The receptionist doesn’t know all the ins and outs of fracking. She probably has some misconceptions – for example, thinking that the chemicals make up a large proportion of the injected fluids. But now “fracking” has a reference for her, one that may have inaccuracies, and she’ll use that to shape her opinion. (In fact, clearly she already has.)

GM food sells like crazy – so what?

People don’t always run with their misconceptions, of course. Sometimes, a misconception can actually keep one from acting on an opinion. As Kahan says of GM foods, “People consume them like mad.” That’s because people’s bundle of misperceptions includes the idea that GMOs aren’t already widespread in our food supply – which they are. In a survey by Hallman et al of 1,148 Americans, only 43% knew that food with GM ingredients is currently for sale in supermarkets, and only 26% thought they had ever eaten GM food.

I would warn against drawing too much inference from people’s food consumption. The fact that “people consume them like mad” doesn’t tell us that people are OK with GMOs, because if you don’t know that the thing you fear is in your food, you don’t know not to eat that food. People could still be anxious about GMOs, and in fact, they appear to be: in Hallman’s study, only 45% agreed that it was safe to eat GM foods, 59% said it was very or extremely important for food with GM ingredients to be labeled, and 73% said such labels should be required.

Know-nothings and know-somethings

Of course, there are shades of ignorance, and maybe we can begin to distinguish the ignorance levels for which we are interested in attitudes, from those where attitudinal data is just plain useless.

One key instance: if you literally have not heard of something before, then any data purporting to measure your attitude is invalid. The poll is only capturing your attitude towards something of which you are being informed in a highly artificial environment. This might give some indication of “how you would feel about thing X, had someone just happened to tell you about it in the real world” – but probably not a very good indication, and in any case we’re not interested in “what would people say if told X.” In this paper I’m genuinely only interested in “what people think about X” – measured in a way that acknowledges that people’s knowledge is almost always incomplete, or wholly or partially wrong.

This component, the “know-absolute-zeros,” seems to form a larger or smaller component depending on the technology involved, and I wonder whether that can account for some of the variation in average opinion and, potentially, polarization levels. I promise no answers, but let’s at least look in that cubby before we call it a day.

Polarization: what’s normal?

Kahan asks the question of why nanotechnology didn’t end up polarizing public opinion. My proposal: nanotechnology simply didn’t get enough media coverage to make people fear it.

There are several mechanisms by which media coverage – even if it does not exaggerate the risks of a technology – could heighten concern among those inclined to be fearful. When coverage is scant, people don’t receive the signals they need to categorize or prioritize an issue as one for possible concern.

On the other hand, GM foods and fracking are more frequent subjects of media coverage. And the difference between these two issues is, I think, the anomaly to explore, rather than nanotechnology.

These two technologies seem to have similar rates of familiarity (ie, about half of Americans have no idea about them) and yet different levels of concern. For GM foods, I’d say the level of concern appears high, as cited above. With fracking, levels of concern appear lower. In a survey of 1,061 Americans, Boudet et al found a mean position of 2.6 – between “somewhat oppose” and “somewhat support.” More than half were undecided about whether to support fracking or oppose it.

It gets weirder, though. GM foods, while they elicit a lot of concern among the population as a whole, aren’t very polarizing at all. Science comprehension reduces concern among both right-wingers and left-wingers – very unlike the pattern for say, climate change. But for fracking, polarization increases with science comprehension – a pattern one would normally only expect for a much more mature technology.
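To make that distinction concrete, here is a minimal regression sketch of how one would test whether polarization grows with science comprehension – i.e., whether there is an ideology × comprehension interaction. The data and coefficients are simulated for illustration; this is not the actual model or effect sizes from Kahan’s or Boudet’s studies:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
ideology = rng.choice([0, 1], size=n)   # 0 = left-leaning, 1 = right-leaning
scicomp = rng.normal(size=n)            # standardized science comprehension

# Fracking-like pattern: comprehension pushes the two camps apart.
concern = (3 + 0.4 * (1 - ideology) * scicomp
             - 0.8 * ideology * scicomp
             + rng.normal(size=n))
df = pd.DataFrame({"concern": concern, "ideology": ideology, "scicomp": scicomp})

# A significant ideology:scicomp coefficient means polarization grows with
# comprehension; for the GM-food pattern, one would instead expect a negative
# main effect of scicomp and little or no interaction.
fit = smf.ols("concern ~ ideology * scicomp", data=df).fit()
print(fit.summary().tables[1])
```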

Reflecting on that receptionist, Kahan says, “It turns out that even though people don’t know anything about fracking, there is reason to think that they — or really about 50% of them — will react the way she did as soon as they do.” Indeed. The key now is to figure out: 1. Does that make fracking normal or abnormal? 2. What does that tell us about how people form opinions? and 3. What does that tell us about how we should be communicating with the public about emerging technologies?


Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week seven is below. Previous responses are here.

I will also be participating in the discussion on Kahan’s own blog.


Here was our assignment for week 7:

Imagine you were

  1. President Obama about to make a speech to the Nation in support of your proposal for a carbon tax;
  2. a zoning board member in Ft. Lauderdale, Florida, preparing to give a presentation at an open meeting (at which members of the public would be briefed and then allowed to give comments) defending a proposed set of guidelines on climate-impact “vulnerability reduction measures for all new construction, redevelopment and infrastructure such as additional hardening, higher floor elevations or incorporation of natural infrastructure for increased resilience”;
  3. a climate scientist invited to give a lecture on climate change to the local chapter of the Kiwanis in Springfield, Tennessee; or
  4. a “communications consultant” hired by a billionaire, to create a television advertisement, to be run during the Superbowl, that will promote constructive public engagement with the science on and issues posed by climate change.

Would the CRED manual be useful to you? Would the studies conducted by Feygina, et al., Myers et al., or Kahan et al. be? How would you advise any one of these actors to proceed?

The readings 

First, some thoughts on these four readings.

The CRED Manual: well-intentioned, but flawed

Source material: Center for Research on Environmental Decisions, Columbia University. “The Psychology of Climate Change Communication: A guide for scientists, journalists, educators, political aides, and the interested public.”

When I first read the CRED manual, it chimed well with my sensibilities. My initial reaction was that this was a valuable, well-prepared document. But on closer inspection, I have misgivings. I think a lot of that “chiming” comes from the manual’s references to well-known psychological phenomena that science communicators and the media have tossed around as potential culprits for climate change denialism. But for a lot of these psychological processes, there isn’t much empirical basis showing their relevance to climate change communication.

Of course, the CRED staff undoubtedly know the literature better than I do, so they could well know of empirical support that I’m not aware of. But the manual’s authors often don’t support their contentions with research citations. That’s a shame, because much of the advice given is too surface-level for communications practitioners to apply directly to their work, and the missing citations would have helped practitioners look more deeply into and understand particular tactics.

Let’s not talk about it

In particular, I would put to one side much of the CRED manual’s advice to do with:

Framing: Some of these framing suggestions seem like untested assumptions. “College students are concerned with green jobs” – how do we know? In addition, Myers’ work (see below) suggests that recommending a “national security” frame is ill-advised – as is this:

“Communicators may find it useful to prepare numerous frames ahead of time, including climate change as a religious, youth, or economic issue.”

The method should not be to try whatever framing seems plausible and see what sticks – unless we’re doing that as part of a controlled field experiment.

Correcting misconceptions. The CRED manual says communicators should discover what misconceptions their audience has about climate change, and “replace” them “with new facts.” Is this doable? How would one replace erroneous information with new facts? The reasoning here sounds a little too close to the discredited information deficit model.

The authors go on to cite an example from some of their own research, concluding that communicators should try to correct misapprehensions because they lead the public to support inappropriate solutions, such as banning aerosols. Does this matter? I’d argue quite possibly not, because the most pressing science communication concern is arguably just getting people to believe in climate change, thus giving a mandate to policy makers (who will choose from more viable solutions – there’s no suggestion that anyone is lobbying for them to ban aerosols).

What’s missing?

It’s highly surprising that the CRED manual doesn’t talk about ideological polarization and the types of messaging that might appeal to different ideological groups. This seems to me to be the area of climate communication research with the strongest empirical backing.

What’s left?

Not having read the underlying research, I am not sure how much credence I should give to the rest of the CRED recommendations – and there’s a lot of them. Notably:

  • Talk about avoiding losses rather than seeking gains
  • Choose a promotion or prevention focus for your messaging (although the above advice suggests we should focus on prevention!)
  • Work to prevent the single-action bias
  • Be careful what words you use to communicate uncertainty
  • Invoke the precautionary principle
  • Focus on immediate threats
  • Frame climate change as a local issue (CRED doesn’t give a citation, but Myers cites Hart and Nisbet 2011, O’Neill and Nicholson-Cole 2009)
  • Tap into emotion: CRED essentially advises climate communicators to appeal to both reason and emotion – but also to be aware of the pitfalls of appealing to emotion too much. It’s not clear how communicators are supposed to dig their way out of this conundrum.

Accordingly, I’m going to cheat a bit on the assignment and just make the following blanket statement: I won’t recommend that any of the speakers in this thought experiment read the CRED manual. There are, for me, too many uncertainties about its advice. But a more widely read communications researcher could probably go through the manual and revise it in a way that would be useful for our speakers.

Feygina’s system justification thesis

Source material: Feygina, Jost and Goldsmith. “System Justification, the Denial of Global Warming, and the Possibility of ‘System-Sanctioned Change.’”

The authors found that much of the effects of political conservatism and gender on environmental denialism can be explained by the subjects’ tendency to defend the societal and economic status quo. They also concluded that it is possible to eliminate the negative effect of this “system justification” by providing statements that frame environmental protection as patriotic and consistent with protecting the status quo.

I had some qualms with this paper’s findings – in particular Study 3, which examined the effect of presenting a system-preservation message (“being pro-environmental allows us to protect and preserve the American way of life,” etc.). The study used a sample size of just 41 and seems vulnerable to demand effects.
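A rough back-of-the-envelope calculation illustrates the sample-size worry. This is my own sketch, assuming a simple two-group comparison with roughly 20 subjects per cell, which may not match the study’s actual design:

```python
from statsmodels.stats.power import TTestIndPower

# Power to detect a "medium" standardized effect (Cohen's d = 0.5) in a
# two-sided t-test with ~20 subjects per group at alpha = 0.05.
power = TTestIndPower().power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"power = {power:.2f}")  # ~0.34, well short of the conventional 0.80
```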

Myers’ public health framing

Source material: Myers, Nisbet, Maibach and Leiserowitz. “A public health frame arouses hopeful emotions about climate change.”

The authors studied the effects of three climate change-related messages that framed the problem variously in terms of the environment, health and national security. Disaggregating the subjects into segments according to climate change knowledge, attitudes and behavior (with the six segments dubbed Alarmed, Concerned, Cautious, Disengaged, Doubtful and Dismissive), Myers found that a public health frame created the most hopeful response in a majority of these populations. She also found that the national security frame was most likely to generate anger, especially among the Dismissive and Doubtful.

Kahan: geoengineering and polarization

Source material: Kahan, Jenkins-Smith, Tarantola, Silva and Braman. “Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication.”

The researchers found they could offset cultural polarization over the validity of climate change by replacing a message advocating a lower atmospheric CO2 threshold with one in which scientists called for greater investment in geoengineering – applied technologies directed at combating climate change. Contrary to a competing hypothesis, Kahan et al found that subjects receiving information about geoengineering were slightly more concerned about climate change than were those in a control condition.

My main concern here: why would geoengineering defuse the polarizing effect of climate communication when renewable energy and other green technologies have not previously achieved this? The method – as Kahan puts it, “valorizing the use of human ingenuity” – is the same.

I also have serious reservations about the advisability of putting too much emphasis on geoengineering in the public discourse. The more airtime we give to this idea, the more legitimacy we lend it. And while geoengineering is certainly something that scientists should explore, right now it seems like it should be very far down our list of policy and funding priorities. There are many technologies for energy generation, improved electricity distribution and energy storage that are much closer to fruition than any proposed geoengineering technology, without the very serious risk of unknown side effects that geoengineering poses.

What to say?

Now, on to the assignment proper – my suggestions for our speakers:

President Obama

Some of the study results suggest Obama should modify his message to appeal to voters not already on his side. 

Myers’ work suggests President Obama could try to emphasize the public health benefits of his proposal, and the administration already seems to have got the memo on that. Obama should not, however, use a national security angle, which is likely to anger those most skeptical about anthropogenic climate change. Feygina’s work additionally suggests that Obama could talk about his proposal as a means of protecting the “American way of life,” i.e. the status quo – reframing it as a form of system maintenance rather than radical change. Perhaps he could present his proposal as a natural extension of the previous cap and trade system introduced by a Republican president. Not surprisingly, Obama has tried this too, although perhaps he hasn’t stressed the point enough.

Kahan’s findings could be applicable on a broad scale – not to suggest that Obama should speak about geoengineering specifically, since that’s not his policy aim, but that part of his reframing effort could include talk of human ingenuity. Once again, I think this has been tried, in the context of renewable technologies.

By his very role, and by public perceptions, Obama is rather hamstrung: he can’t really de-politicize his message. Feygina’s study notes (the abstract is actually a bit misleading on this point) that system justification did not fully account for political orientation’s effect on environmental attitudes, and suggests that “top down” factors such as official party platforms are also at work. There’s also the possibility that when Obama engages in re-framing (such as talking about making the US more secure by reducing dependence on foreign oil), conservative voters will see it as a transparent ploy. Myers notes that important factors in real-world communication, not reflected in her experiment, include the congruence between messenger and frame.

Zoning board member

The key for this official is that he doesn’t really have to mention “climate change” at all. I’m not suggesting that he suppress such talk, but it’s really not necessary to get the adaptation measures passed. The term “climate change” is inherently polarizing, and people can recognize the need to protect infrastructure from storms with or without a belief in man-made global warming.

Myers’ study suggests it may be useful for the board member to use a public health frame for the discussion, which would be natural when one is talking about the need to safeguard against flooding, etc. Feygina’s recommendations would also be easy to accommodate, as climate change adaptation on a broad scale involves protecting the “status quo” (i.e., protecting the city against the forces of nature), although property owners and politicians may in reality have to start doing things very differently. It probably wouldn’t hurt to emphasize the human ingenuity and industry aspects of the official’s approach, but this may not strictly be necessary: without talk of “climate change,” there may be no polarizing language in need of neutralization.

Scientist

Kiwanis International is a service club that emphasizes efforts to improve children’s lives. Feygina’s recommendations may or may not be necessary here, depending on the system-protection beliefs of the participants – but putting them into practice probably wouldn’t hurt. Myers’ work would point towards using a health frame here, perhaps focusing on preserving environmental quality to reduce childhood asthma, etc., and I see little drawback to doing so. Kahan’s work suggests that making reference to human ingenuity could help neutralize some of the polarization that talk of climate would have on the more hierarchical/individualist members of the organization, though I have concerns about over-emphasis on geoengineering, as discussed above.

Super Bowl ad consultant

Feygina’s work would be useful because the ad must appeal to a broad spectrum of Americans, including those averse to changing the status quo. Again I see health framing as useful and don’t see any obvious drawbacks to such an approach; likewise an emphasis on human ingenuity. My concerns about geoengineering, outlined above, are even stronger for the ad than for a one-off talk at a Kiwanis club, since the message would reach many millions of people and be repeated often, thereby greatly exaggerating the importance of geoengineering within the range of climate change approaches.


Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week six is below. Previous responses are here.

I will also be participating in the discussion on Kahan’s own blog.


Graphic from home page of the Consensus Project, led by researcher John Cook. Skeptical Science Graphics (Skeptical Science) / CC BY 3.0

Since the publication of John Cook’s 2013 study confirming climate scientists’ 97 percent consensus on humans’ responsibility for climate change, many science communicators have vigorously argued the importance of “teaching the consensus.” On a common-sense level, teaching the consensus seems like an obviously good idea. If you tell someone that 97 percent of experts on a subject agree, how could he carry on maintaining the minority position?

But science communication isn’t that simple. It’s much more frustrating, and fascinating.

Evidence for “teaching the consensus”

From Lewandowsky et al, “The pivotal role of perceived scientific consensus in acceptance of science,” Nature Climate Change, Oct. 2012.

Let’s have a brief look at some of the evidence for teaching the consensus – which is backed not just by common sense but by several studies. Stephan Lewandowsky, in particular, has been a strong proponent of this approach. In “The pivotal role of perceived scientific consensus in acceptance of science,” he and his colleagues found that subjects told about the 97 percent scientific consensus expressed a higher certainty that CO2 emissions cause climate change – 4.35 on a 5-point Likert scale, versus 3.96 for members of a control group not exposed to the consensus message.

In addition, the consensus message appeared to have effectively erased ideology’s influence on global warming opinions. Those exposed to the message had a high level of agreement that CO2 causes climate change, regardless of their free-market ideology; whereas in the control condition, free-market endorsement was associated with a marked decrease in acceptance of human-caused climate change (see chart above).

Meanwhile, back in the real world…

But Dan Kahan points out that these findings don’t seem borne out by real-world evidence. From 2003 to 2013, the proportion of the US public who said human activities were the main cause of global warming declined from 61 to 57 percent.

During this period researchers published at least six studies quantifying the consensus, and there were also several notable efforts to publicize the consensus, including:

  • prominent inclusion in Al Gore’s documentary film and book “An Inconvenient Truth”;
  • prominent inclusion in the $300 million social marketing campaign by Gore’s Alliance for Climate Protection;
  • over 6,000 references to “scientific consensus” and “global warming” or “climate change” in major news sources from 2005 to May 1, 2013.

What accounts for this discrepancy? According to Kahan, “The most straightforward explanation would be that the NCC [Lewandowsky] experiment was not externally valid—i.e., it did not realistically model the real-world dynamics of opinion-formation relevant to the climate change dispute.”

What should consensus publicity look like?

I think there’s another possible explanation: that Lewandowsky did realistically model the changes in opinion that might happen with a concerted and well-designed consensus-publicity effort – but that from 2003 to 2013, we did not actually see such an effort.

Kahan implies that messaging during this period was widespread and well-funded. But was it as widespread as we would need such a campaign to be? And were the campaigns carried out in the best manner possible? For example, did the communicators use the best dissemination methods, the best language and the best graphical representations? Should they have targeted different populations with different, tailored messages?

I would like to see a more comprehensive analysis asking the questions:

  • What did communication of the climate change consensus from 2003 to 2013 consist of? and
  • Did it meet certain criteria that we should require of such a campaign?

Whether the actual consensus messaging carried out from 2003 to 2013 had the same characteristics that made Lewandowsky’s messaging effective, I could not say. But it certainly seems worth investigating what those characteristics might be. Of course, a prime concern is to discover whether those characteristics include or depend on the artificial psychology-lab environment – which would suggest that consensus messaging cannot be counted on to shift climate change opinions in the real world.

An aside on sample size

I also note that Kahan doesn’t question the validity of Lewandowsky’s sampling. I can’t help wondering if Lewandowsky’s findings might not be, in some part, an artifact of small sample size.

The researchers compared a control group of 47 to a consensus condition group of 43. This means they were not literally testing the effect of consensus messaging on individual participants, but concluding that the difference in opinions between the two groups was due to the consensus messaging that one group received.

While this approach is advisable (a literal “before and after” set-up presents the problem of demand effect, as our class saw in its examination of Ranney et al), it also depends on a large enough sample size to minimize the possibility that uncontrolled and unseen variables are affecting results. I’m not convinced that Lewandowsky’s sample size was large enough for that – a back-of-the-envelope power calculation, sketched below, suggests why.
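
To make the concern concrete, here is a minimal power check. This is a sketch, not a re-analysis: the within-group standard deviation is my guess (about 0.9 on the 5-point scale), not a figure from the paper, and under that assumption the reported means of 4.35 and 3.96 imply a standardized effect of roughly d ≈ 0.43.

```python
# Rough power estimate for a two-group comparison of 47 vs. 43 subjects.
# ASSUMPTION (mine, not from the paper): within-group SD ~0.9 on the
# 5-point Likert scale, so means of 4.35 vs. 3.96 give Cohen's d ~0.43.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().solve_power(
    effect_size=0.43,      # assumed standardized difference
    nobs1=47,              # control group
    ratio=43 / 47,         # consensus-message group of 43
    alpha=0.05,
    alternative="two-sided",
)
print(f"estimated power: {power:.2f}")  # ~0.5 under these assumptions
```

If the power really is in the neighborhood of 50 percent, a true effect of this size would be missed about half the time – and the comparisons that do clear the significance bar under such conditions tend to overstate the effect.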


Note: I have joined the “virtual class” component of Dan Kahan‘s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my response for week five is below. Week 1 is here, and week 2 is here. (I was away for weeks 3 and 4.)

I will also be participating in the discussion on Kahan’s own blog.


This week I seek to examine several myths that Kahan and Discover blogger Keith Kloor say news media have perpetuated in the wake of the Disneyland measles outbreak.

Trends in MMR immunization

Chart: MMR immunization rates.

The most easily disprovable myth is that measles, mumps and rubella (MMR) immunization is falling. (This myth has been perpetuated by, among others, the Los Angeles Times.) The best source for this data is the CDC’s National Immunization Survey, which shows that since 1999, the percentage of children aged 19-35 months who have received the MMR vaccination has held roughly steady, between 90 and 93 percent (see chart). The slight fluctuations in that range have shown no particular trend.

While national immunization rates are important, more localized rates are crucial because pockets of low immunity can allow outbreaks to take hold. As the CDC notes, MMR coverage was below the 90 percent threshold in 17 states in 2013, and that definitely presents a problem. But is it a growing problem? I used CDC data to put together a quick spreadsheet of five- and ten-year trends, state by state, and at a glance found many states whose rates have dropped – but not by a statistically significant margin; the confidence intervals here are pretty wide. (If anyone has done a serious trend analysis, I’d love to see it. The sketch below shows the kind of rough check I mean.)
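
For anyone curious, below is a minimal sketch of the informal check behind that spreadsheet. The state figures are hypothetical, invented purely for illustration, and the test assumes the two survey years are independent samples.

```python
# Is a drop between two survey estimates statistically significant?
# Recover each standard error from the 95% CI half-width (half-width / 1.96)
# and run a two-sided z-test on the difference.
from math import sqrt
from scipy.stats import norm

def drop_p_value(rate_then, half_ci_then, rate_now, half_ci_now):
    """All arguments in percentage points; returns a two-sided p-value."""
    se_then = half_ci_then / 1.96
    se_now = half_ci_now / 1.96
    z = (rate_then - rate_now) / sqrt(se_then**2 + se_now**2)
    return 2 * (1 - norm.cdf(abs(z)))

# HYPOTHETICAL state: 93.1% coverage (±3.2) ten years ago, 90.4% (±3.5) now.
p = drop_p_value(93.1, 3.2, 90.4, 3.5)
print(f"p = {p:.2f}")  # ~0.26: a 2.7-point drop, but nowhere near significant
```

With half-widths of three points or so, even a drop of a few percentage points falls well short of significance – which is why eyeballing state-level trends isn’t enough.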

Are parents concerned about vaccine safety? What does that mean?

In a January 22 piece on the Washington Post’s Wonkblog, Christopher Ingraham blames “the anti-vaccine movement” for the worrying rise in measles cases, citing an AP-GfK survey that found, as Ingraham puts it, “only 53 percent of Americans were confident that vaccines are safe and effective.” For a start, that’s a pretty big misrepresentation of the survey, in which 53 percent were very or extremely confident that childhood vaccines are safe and effective. Another 30 percent were somewhat confident.

In any case, Kahan argues that the AP-GfK survey isn’t a good measure: “Indeed, no public opinion survey of the general public can give anyone useful information on vaccine risk concerns. The only valid evidence of that generally is the National Immunization Survey, which uses actual vaccine behavior to determine vaccination rates,” he told Kloor.

I think we can agree that the NIS represents the best way to measure what proportion of parents are influenced by concerns or other factors strongly enough to substantially alter their children’s immunization program. After all, if a concern is strong enough to actually affect vaccination outcomes (and surely those are the concerns we’re most interested in), then we should see it in a measure of vaccination outcomes!

However, there are several important things the NIS can’t tell us. Notably, it doesn’t give insight into the reasons behind non-vaccination. We are right to ask what economic, psychological and social factors are behind parents’ failure to immunize against measles, particularly in geographic or socioeconomic pockets that fall below target immunity, because knowing the causes of missed immunizations will help us formulate the best science communication response.

One study that has engaged with this question is Dempsey et al, “Alternative Vaccination Schedule Preferences Among Parents of Young Children.” The results of this study have been misrepresented by the Advisory Board Company and subsequently by the Post’s Petula Dvorak – the latter, for example, says “one in 10 parents are avoiding or delaying vaccines in their children because of safety concerns.”

Dempsey’s findings are more complex. She found that 13 percent of parents of young children reported using an alternative vaccination schedule (that is, they reported that they did not completely adhere to the CDC vaccination schedule). Of these, a strong majority of 82 percent – but by no means all – agreed that, “Delaying vaccine doses is safer for children than providing them according to the CDC-recommended vaccination schedule.” (Note that 0.82 × 13 percent ≈ 11 percent – perhaps the source of the “one in 10” figure – but agreeing with a statement after the fact is not the same as safety having been the reason.) Some parents who followed an alternative vaccination schedule may have done so because of difficulty accessing medical care, or because they simply failed to immunize on time.

I would note Dempsey’s small-ish sample size of under 800 (compared to over 13,000 in the CDC’s survey), and also potential motivated reasoning on the part of her respondents. That is, a parent whose original reason for delaying vaccine was a lack of time or simply forgetfulness might convince himself retroactively that the reason was “delaying vaccine doses is safer for children.” But, even given all these caveats, I think Dempsey makes a decent case for the power of safety concerns to drive non-vaccination – so I would be interested to hear any rebuttals.

Who doesn’t vaccinate, and why?

Having said that, safety concerns are but one part of a complex set of drivers – and some of the most destructive myths around non-vaccinators are about what drives them and, even more potently, who they are.

Many commentators have described non-vaccinators as “anti-science” or lacking “trust” in science and medicine. For the most part these are labels that the non-vaccinators would not themselves recognize. I could write another post entirely on whether that matters, but let me sum up for now by saying it does matter, at the very least, because such language is polarizing and alienating. For example, an NPR caller challenged Paul Offit on what she saw as his presumption that non-vaccinators are “yahoos who just don’t look at the scientific process at all.” Did she show a less than scientific mindset when she rejected Offit’s explanations as “pap,” arguing that “a two-year-old cannot accept this kind of chemical onslaught”? Perhaps. But she doesn’t see herself as rejecting science. She sees herself as a critical thinker – indeed, as she says, “an educated adult.”

As if that weren’t polarizing enough, we now have several stereotypes emerging about who non-vaccinators are: either rich hippies, religious nuts, or conspiracy theorists. As Dvorak writes: “The fringe who didn’t believe in medicine for religious and other reasons has exploded into a 10 percent, largely yuppie epidemic.” In the same article, George Washington University public health professor Alexandra Stewart says the non-vaxxers are primarily “white, educated populations of people with computers.”

The reality is much less homogeneous – and less ideological. First, on a broad scale, MMR vaccination rates are roughly equal across most racial lines (91.5 percent for whites, 90.9 percent for non-Hispanic blacks, 92.1 percent for Hispanics) and a little higher for Asian-Americans (96.7 percent) and American Indians (96.3 percent). Second, as Kahan found in his study of 2,316 adults, “Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Assessment,” “There is no meaningful correlation between vaccine risk perceptions and the sorts of characteristics that usually indicate membership in one or another cultural group.” These group measures included a sliding scale of political orientation (liberal-conservative, Democrat-Republican) as well as two latent measures of risk perception: tendencies to perceive risk to public safety and to social deviancy. (These risk-perception measures were generated from subjects’ perception of risk from a variety of other issues, including climate change, marijuana legalization and gun ownership, and correlate with the common Hierarchy-Egalitarianism and Individualism-Communitarianism worldview scales.)

I would also point to some interesting work done by Yvonne Villanueva-Russell, who in extensive interviews found a variety of motivations among parents who did not vaccinate their children according to the CDC schedule. Some of these parents seemed to fit the stereotypes (“crunchy mommas,” “ideologues”) but others simply put off making a decision for too long, or had children with health issues. Her sample size of 67 parents is small, but Villanueva-Russell’s work gives us some idea of the range of motivations we should take into account when designing further research – and before we speak out of hand about “anti-vaxxers.”

