Burzynski outside his SOAH hearing last November. (Credit: Tamar Wilner)

Two Texas judges released their findings Oct. 12 in the case of Stanislaw Burzynski, the controversial Houston cancer doctor who’s been accused of deceptive advertising, making medical errors, and treating patients with untested combinations of drugs that amounted to “experimentation.” On most of these charges, the judges failed to find enough evidence of wrongdoing to discipline Burzynski.

The State Office of Administrative Hearings did find, however, that Burzynski is subject to sanctions for:

  • Aiding and abetting the unlicensed practice of medicine by one practitioner;
  • Failing to properly supervise three unlicensed physicians who misrepresented themselves as doctors;
  • Inaccurately reporting the size of one patient’s tumor, causing a misclassification of the patient’s response to treatment;
  • Failing to gain properly informed consent from several patients;
  • Failing to disclose his ownership interest in the pharmacy that dispensed patients’ drugs;
  • Failing to maintain adequate records to support charges made to four patients.

The conclusions from the judges at Texas’ State Office of Administrative Hearings (SOAH) are not binding, but come in the form of recommendations to the Texas Medical Board. It’s the usual practice for the board to accept and implement SOAH recommendations.

A hot take

I honestly didn’t know what conclusion to expect in this case, despite the many months I spent researching it for my Newsweek article last February. Burzynski has faced sanctions many, many times before, and it’s never impeded his practice much. So I don’t find myself particularly shocked by this result.

From my quick skim of the findings, it seems that SOAH failed to find against Burzynski on some of the most important charges because of missteps by the Texas Medical Board (who acted as the prosecution).

The board’s expert witness on the standard of care allegations, such as the use of untested drug combinations, was Dr. Cynthia Wetmore of Children’s Healthcare of Atlanta. But the judges found her testimony “troubling” when she accused Burzynski of violating the standard of care for Patient D — a patient who never received treatment at the clinic. The judges also noted that Wetmore is a pediatric oncologist in an academic, rather than private practice, setting.

They therefore concluded, “Dr. Wetmore’s expertise about the treatment of adult cancer and the standard of care used by physicians in private practice is limited and will be given little weight.”

That must have been a harsh blow for the medical board.

Has Burzynski “saved lives”?

The other surprising finding was this: “Respondent’s [Burzynski’s] treatments have saved the lives of cancer patients, both adults and children, who were not expected to live.”

With the caveat, again, that I have only skimmed the decision, it’s hard to see what reliable scientific evidence the judges had for this claim. Certainly I’ve never come across any. Yes, there are dozens of patients who claim that Burzynski did exactly that — save their lives. But did the judges really base this conclusion on anecdote alone?

The judges further listed, among “mitigating factors” meant to inform eventual sanctions against Burzynski:

If Respondent is unable to continue practicing medicine, critically ill cancer patients being treated with ANP [antineoplastons, the drug that has been the focus of Burzynski’s research] under FDA-approved clinical trials or a special exception will no longer have access to this treatment.

Amazing that after decades of research, with not a single randomized controlled trial demonstrating the effectiveness of ANP, the judges still see the loss of such drugs as some kind of harm to public health.

Note: The decision can be viewed by visiting http://www.soah.texas.gov/, clicking “Electronic Case Files,” then “Search Public Case Files,” then searching under Docket Number 503–14–1342.

Cross-posted from Medium, with minor alterations.

Stanislaw Burzynski, the Houston doctor who has treated thousands of cancer patients with unproven medications, is set to take the stand to defend his medical license today.

As I wrote for Newsweek recently, Burzynski is a celebrated figure in the alternative medicine world, and credited by some with saving their lives. The Texas Medical Board, however, says Burzynski overbilled, misled patients and made numerous medical errors – and the board’s expert witness said Burzynski’s use of untested drug combinations amounted to “experimenting on humans.”

The hearings started last November, when the Texas State Office of Administrative Hearings (SOAH) heard the board’s case. Now Burzynski’s lawyers will bring their own witnesses, beginning with the doctor himself. A document filed with SOAH estimates that Burzynski’s testimony will take two days.

According to the document, Dr. Burzynski will testify about the standard of care at his clinic, the use of antineoplastons, compliance with the FDA and charges of false advertising, among other issues. He will reply to “allegations of non-therapeutic treatment,” “inadequate delegation,” and “the aiding and abetting of the unlicensed practice of medicine.”

A number of patients and Burzynski Clinic employees will testify on Dr. Burzynski’s behalf, as will several outside physicians who treated clinic patients, and Mark Levin, a board-certified oncologist.

The hearing resumes just weeks after the Burzynski Research Institute announced that it has started patient enrollment in an FDA-approved phase 2 study of antineoplaston use in diffuse intrinsic brainstem glioma, a cancer that mainly attacks children.

The hearing is scheduled to run until May 12.

Hear more about the Burzynski story in my interview with the Point of Inquiry podcast.

Too often, we talk about fact-checking as an elite activity. First, it was a job for the magazine’s in-house fact-checker (back when there were such things). Then it was a job for non-partisan fact-checking organizations or dedicated journalistic operations like FactCheck.org and PolitiFact.

As important as those organizations are, fact-checking is also something we all need to practice each day as news consumers. It’s really akin to media literacy: a critical reflex, backed up with a few modern tools, that we must bring to bear when we consume modern media in all its forms (but especially social media).

With that in mind, I put together this round-up of fact-checking tools that we all can use. It started as a talk for the Skeptic’s Toolbox, a workshop held two weeks ago in Eugene, Ore., and grew with feedback from participants. I hope you’ll find some useful suggestions in these lists. Please let me know anything you think I’ve missed!

Picture credit: NOAA.

Note: I have joined the “virtual class” component of Dan Kahan’s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week 11 is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week’s focus:

What is/should be the goal of climate science education at the high school or college level? Should it include “belief in” human caused climate change in addition to comprehension of the best available scientific evidence?

I started off thinking I had not changed my mind since writing my evolution education post two weeks ago. I planned to contend that, as with evolution, there is a reason that we are not satisfied for students to simply acquire knowledge about climate change. If they were to cogently describe what the theory of anthropogenic global warming (AGW) entails, but flat-out deny the truth of the theory, that would leave us unsatisfied – not just because global warming is a pressing issue which requires political will and thus voter backing to tackle (though that’s certainly true) but because we’d be left with the feeling that on some level the student still doesn’t “get it.”

Unpacking my argument from last week – which proposed that we should aim for students to believe the following…

(proposition p) Evolution, which says x, is the best supported scientific way of understanding the origins of various species, the way species adapt to their environment, etc etc.

… I can identify three reasons for this to be our aim:

  • First, because evolution *is* the best scientific explanation for these phenomena, and thus by knowing this, students know a true fact about the world;
  • Second, because armed with that knowledge, they are better equipped to apply the theory of evolution to scientific and other real-world problems; and
  • Third, (as I outlined in my comment to Cortlandt on the next post) because we wish students to understand the scientific justification for the theory of evolution, and if they understand that, then belief in proposition (p) necessarily follows. (It occurs to me now, however, that this is not the most terrific argument, because necessity does not flow in the other direction. Believing that p does not necessarily mean the student understands the scientific justification for evolutionary theory; he could take (p) on faith.)

The consensus problem

The climate equivalent of proposition (p) might be something like:

(q) The theory of anthropogenic climate change is the best scientific explanation we have for observed increases in the mean global temperature, and the theory predicts that if man continues to produce greenhouse gases at a similar rate, the temperature will continue to rise.

Proposition (p) could have included a stipulation about predictive power – indeed, to be a valid scientific theory, the theory of evolution must have predictive power. But while I didn’t think that needed to be spelled out for (p), I have done so for (q), because climate change is a subject whose vital importance – and whose controversy – truly rests on its predictions.

But there’s a problem here, and maybe a mismatch. In proposing that we aim for student belief in proposition (p), I figured we were disentangling identity from knowledge. Any student, taught well enough, could come to see that proposition (p) is true – and still choose not to believe in evolution, because their identity causes them to choose religious explanations over scientific ones.

For climate change, however, we may not get that far. There seems to be mixed evidence for the effectiveness of communicating scientific consensus on AGW.

As previously discussed, Lewandowsky et al found that subjects told about the 97 percent scientific consensus expressed a higher certainty that CO2 emissions cause climate change. Dan Kahan counters that this finding seems to bear little external validity, since these are not the results we’ve seen in the real world. From 2003 to 2013, the proportion of the US public who said human activities were the main cause of global warming declined from 61 to 57 percent.

In Cultural Cognition of Scientific Consensus, Kahan finds that ideology, i.e. “who people are,” drives perceptions of the climate change consensus. While 68% of egalitarian communitarians in the study said that most expert scientists agree that global warming is man-made, only 12% of hierarchical individualists said so.

From Kahan, Jenkins-Smith and Braman, Cultural Cognition of Scientific Consensus. Journal of Risk Research, Vol. 14, pp. 147-74, 2011.

On the other hand, as Kahan said in a lecture at the University of Colorado last week (which I live-streamed here – unfortunately I don’t think they’ve posted the recording), most people who dismiss AGW nonetheless recognize that there is a scientific consensus on the issue. At least on the surface this seems at odds with Kahan’s previous findings, so I’d like to look further into these results. (I think the difference may come down to what Kahan describes, in Climate-Science Communication and the Measurement Problem, as the difference between questions that genuinely ask about what people know and those that trigger people to answer in a way that aligns with their identity. Why one of Kahan’s consensus questions fell in the former camp and one in the latter, I do not yet know.)

How is it possible that someone can recognize the scientific consensus on AGW, but still dismiss the truth of AGW? The most natural answer is that such people can readily dismiss the scientific consensus, perhaps arguing that the scientists are biased and untrustworthy. This, by the way, strongly suggests that we should have always expected consensus messaging to fail!

So, if the aim is not consensus…?

Returning to education, I think this warning about consensus messaging points to the importance of creating a personal understanding of the science – i.e., exposing students to the reasoning and evidence behind climate change theory, and walking them through some of the discovery processes that scientists themselves have used. There may be serious limits to what this can achieve, because smart students may perceive that the arguments being used in the classroom have been developed by the scientists that they distrust. But undecided students may be persuaded by the fundamental soundness of the scientific arguments.

There is another danger: conservative students (especially the smart ones) may also reject the scientific arguments advanced in class because they will perceive that at a certain point they must take things on authority; that the processes involved are too complex and the amount of data too large for a non-specialist to come to a solid independent judgment on. Furthermore, students can entertain the idea that there is a viable alternative scientific theory, because many prominent voices back up that view.

Back to evolution

Again looking back at last week, I realize now that the same problem exists for evolution. The genius of “intelligent design” and “creation science” is that they allow an exit from the scientific-religious conflict in what many of us would call the wrong direction. Students can use this “out” to accept the science they like, reject what they don’t, and view it all as a “scientific theory.” Rather than accept (p) and then be forced to either choose religion over science, or somehow partition these parts of themselves (which Hermann, as well as Everhart, indicates is how many people cope), students may use religion *as* science and reject (p) altogether.

So now I’m beginning to doubt whether my aim in that essay really was achievable. It’s probably still a good idea to aim for beliefs of type (p), because this is a means of encouraging scientific literacy and nature of science understanding. But religious students with a good grasp of the nature of science will probably still find that “out” and will not agree with the evolution proposition. And other, less scientifically oriented students will simply say, “OK, this is the best science, but I trust religion over science.”

Note: I have joined the “virtual class” component of Dan Kahan’s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week ten is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week’s “questions to consider” (a reading list is here):

1. What is the relationship—empirically—between “accepting”/“believing in” evolution and “understanding”/“comprehending” it? Are they correlated with one another? Does the former have a causal impact on the latter? Vice versa?
2. What is the relationship—psychologically—between “accepting”/“believing in” evolution and “understanding”/“comprehending” it? Is it possible to “comprehend” or “know” without “believing in” evolution? Can someone “disbelieve” or “not accept” evolution and still use knowledge or comprehension of it to do something that knowledge of it requires? Are there things that people are enabled to do by “belief” or “disbelief in” evolution? If the answers to the last two questions are both “yes,” can one person use knowledge or comprehension to do some things and disbelief to do others? What does it mean to “believe in” or “disbelieve in” evolution? Is it correct to equate the mental operation or state of “believing” or “disbelieving” in evolution with the same mental state or operation that is involved in, say, “believing” or “disbelieving” that one is currently sitting in a chair?
3. What—normatively—is (should be) the aim of teaching evolution: “belief,” “knowledge,” or “both”?
4. If one treats attainment of “knowledge” or “comprehension” as the normative goal, how should science educators regard students’ “beliefs”?
5. If one treats attainment of “knowledge” or “comprehension” as the normative goal of science education, how should one regard political or cultural conflict over belief in evolution?

My response:

The empirical relationship between knowledge, understanding and belief

The evidence points strongly towards a distinction between knowledge and belief, for the simple reason that so many students have been able to demonstrate the former without the latter (or vice-versa):

  • In Hermann’s (2012) study, there was no statistical difference in understanding of evolution concepts between two extreme sub-groups of students, one believing in evolution and the big bang theory, and one not.
  • Sinatra 2003 (cited in Hermann) similarly suggests no relationship between belief and understanding of evolution.
  • Blackwell found what could be termed a disconnect between understanding/application and belief: although an overwhelming majority of students selected appropriate answers when asked to categorize examples of specific evolutionary processes, only 34-35 percent considered evolution the primary basis for the progression of life on earth (with slightly different percentages for the two classes surveyed). The percentage considering evolution compatible with their belief system was 27-29 percent, but only 6-9 percent said they could never believe in evolution.
  • Other studies found that it is common for students to believe in evolution without understanding it (Lord and Marino 1993, Bishop and Anderson 1990, Demastes-Southerland et al 1995, Jakobi 2010, all cited in Hermann).

A section from Blackwell’s study, in which students had to categorize examples of various evolutionary processes.

On the other hand, according to Hermann, several studies (Lawson and Worsnop 1992, Sinclair and Pendarvis 1997, Rutledge and Mitchell 2002) found that adherence to a religious belief system influenced the extent to which evolution was understood.

Defining our terms: the psychological relationship between knowledge, understanding and belief

The vagueness of the terms “belief,” “understanding” and “knowledge” obviously should give us pause when we are trying to make sense of these empirical findings.

I think we should try to define the terms in the way that is most fruitful to the problem at hand (while also seeking as much as possible not to create conflict with existing common usage, and to allow the above empirical findings to be applied). That problem is often put thusly, “Many students understand evolution, or demonstrate knowledge of evolution, without believing in it. Does this matter, and if so, what should we do about it?”

With this in mind we can come up with some rough working definitions:

  • Knowledge: Retained and retrievable true facts about physical properties and processes.
  • Understanding: A deeper form of knowledge accomplished by forming multiple connections between true facts on the subject at hand (evolution) and between the subject and others. (Similar to Gauld 2001’s definition, cited by Smith 2004.) An example of one such connection may be between a scientific theory and its supporting evidence (as suggested by Shipman 2002, cited by Hermann).
  • Belief: A committed, often emotional attachment to a proposition, which itself may be true or untrue, falsifiable or not. I would argue that it is sensible to talk both of faith-based belief in religious precepts, and of belief in scientific theories, which can be driven by thorough understanding of their scientific basis, or by blind faith in scientists. (I subscribe to the summary by Gess-Newsome 1999 [cited by Smith], of knowledge as “evidential, dynamic, emotionally-neutral”, and belief as “both evidential and non-evidential, static, emotionally-bound.”)

These definitions mesh well, I believe, with most of the empirical findings we read about this week, including Hermann, Smith and Everhart (2013). Hermann, for example, builds on Cobern’s partitioning concept to conclude that religious students view science ideas as “facts,” categorized differently than beliefs, which have a stronger emotional attachment. This helps students compartmentalize because they have created an “emotional distance” between scientific conceptions and religious beliefs.

In creating these definitions I have had to dismiss definitions that I think are unhelpful for the problem at hand. For example, according to Hermann, Cobern (1996) stated that knowing “is the metaphysical process by which one accepts a comprehended concept as true or valid.” But this definition is actually much more like belief, as most of this week’s reading understands it.

I’ve also had to discard the philosophical convention that belief is a necessary condition of knowledge (Smith). When describing the way that people learn, and knowledge acquisition’s interaction with existing belief systems, this stipulation just doesn’t make sense (given the evidence we have of knowledge without belief). By casting off the impractical philosophical definition, I resolve a problem that Smith recognized – that if knowledge is dependent on belief, science education must foster belief.

There will always, I think, be messy edges and overlap between these realms. For example, it is hard to think of much useful knowledge that we can retain as utterly isolated “facts.” Facts that are part of a coherent schema are easier to retain or retrieve. We do, however, remember groups of facts that are connected in greater or lesser degree, both to each other and to other facts and schema in our brains. The difference between knowledge and understanding is thus one of degree.

Is lack of belief a problem? Or is it lack of understanding?

It should be noted that the issue with religious students’ non-belief in evolution is not merely one of semantics or a confusion of terms. The problem is we are not satisfied with students merely believing evolution in the way that they believe in discredited Lamarckian or Ptolemaic ideas. We don’t want them simply to believe “that evolution says x”: that implies that evolution has no special empirical status and it may as well be false, as those outdated scientific theories are. A student who can say only “that evolution says x” is merely parroting scientific language. She is in truth only a historian of science rather than truly a scientist herself – and I think that’s what so bothers us about the learning outcomes exhibited by students like Aidan and Krista, in Hermann’s study. We come away with the sense that their knowledge falls short of true scientific understanding.

I agree with Smith, however, that we should not go so far as to seek or require belief – or perhaps, I might say, “complete belief.” It is not and should not be the goal of a science class to completely overhaul students’ worldview, religion and all.

What we are seeking is for students to believe something like:

“Evolution, which says x, is the best supported scientific way of understanding the origins of various species, the way species adapt to their environment, etc etc.” (A conclusion similar to Smith 2004.)

And this requires an understanding of evolution, in the strong sense of understanding, which encompasses comprehension of justification. One may even argue that this type of belief follows necessarily from strong understanding: that is, if you understand the mechanism of and scientific basis for evolution, and the comparative paucity of scientific explanation for other theories of species’ origins, then you will necessarily believe that “Evolution, which says x, is the best supported… etc, etc.” This could be a neat logical maneuver to employ because it means that we can avoid talking about the need for students to “believe in” evolution – which carries a lot of nasty cultural baggage – and just talk about understanding instead.

While several empirical studies have demonstrated that students can easily demonstrate knowledge of evolution without belief in evolution, understanding is a much more slippery eel. As previously alluded to, understanding encompasses a wide spectrum, starting from a state barely stronger than plain knowledge. But I would argue that understanding evolution, in its strong form, encompasses an understanding of the scientific justification for the theory of evolution – and that necessitates an understanding of the nature of science (NOS) itself.

Nature of science: the path to strong understanding of evolution

The best tactic for fostering this right kind of belief in evolution, or strong understanding – and, happily, a key to solving much else that is wrong with science education today – is to place much more emphasis on the scientific method and the epistemology of science. This includes addressing what sorts of questions can be addressed by science, and what can’t; and also the skeptical, doubtful tension within science, in which things are rarely “proven” yet are for good reason “believed.” Crucially, this involves helping students to understand the true meaning of “scientific theory,” a term whose misunderstanding often underpins further misconceptions about evolution’s truth status.

This effort also involves exploring the tension between self-discovery and reliance on authority – acknowledging that it is important for students to learn to operate and think like scientists, and that we want them, as much as possible, to acquire knowledge in this way, but that the world is far too complex for us all to gather our own data on everything. So students must learn how to judge the studies and reasoning of others, how to determine what evidence from others is applicable to what conclusions or situations, and how to judge who is a credible expert.

Misunderstandings of the nature of science (as well as certain broad scientific concepts) often lie at the heart of disbelief in evolution, as Hermann illustrates. In his qualitative study, both students showed a poor understanding of the methods and underlying philosophy of science, displaying a need for truth and proof – despite their good science knowledge performance.

Smith, rather inadvertently, gave another example of this problem. He cites a student who wrote to Good (2001):

I have to disagree with the answers I wrote on the exam. I do not believe that some millions of years ago, a bunch of stuff blew up and from all that disorder we got this beautiful and perfect system we know as our universe… To say that the universe “just happened” or “evolved” requires more faith than to believe that God is behind the complex organization of our solar system…

Good uses this passage to justify making belief a goal of science education. Smith takes a contrary view, that “meaningful learning has indeed occurred when our four criteria of understanding outlined above have been achieved – even if belief does not follow” (emphasis in original). Instead I would argue that the student does not understand evolution in a meaningful way, having false impressions of underlying scientific and philosophical concepts such as entropy, order, and Occam’s razor.

Will nature-of-science education work with all students?

The research outlined above shows a mixed prognosis for our ability to overcome these issues and foster belief in the evolution proposition. Everhart’s work with Muslim doctors suggests that most participants considered subtly different meanings of the theory of evolution, and could consider evolution in relation to different contexts, such as religion and practical applications, with attitudes to evolution changing when the relative weights of these meanings were shifted. These meanings include a professional evaluation of the theory that could be held distinct from other evaluations. This suggests that participants may recognize the truth of evolution within a science epistemology framework, which should be sufficient for belief in our proposition, and not give evolution the same status within other, more personal epistemologies.

But Hermann suggests that students ultimately fail in integrating science and religion, which creates a fear of losing religious faith, causing the student to cling to the religious view while further compartmentalizing science concepts. This drives at the hard, hard problem at hand: even with a perfect understanding both of evolution and of the nature of science, religious students are likely to run into areas of conflict that create psychological discomfort. This is because the epistemic boundaries of science and religion are neither hard nor perfect. Some of the areas that science claims as well within its remit to explain – such as the age of the earth – run into competing claims from religion.

One way out of this conundrum is for a student to redraw the boundaries – to say, OK, I accept the scientific method where it does not conflict with my faith; but on this matter I must reject it. Hermann’s subjects appear to have done this to a certain extent, but run up against limits. I would hypothesize that this line-drawing process itself leads to further discomfort, especially among students who are brighter and/or show greater understanding of the nature of science, because they would consciously or unconsciously recognize the arbitrary nature of line-drawing. And unfortunately, one good way to resolve that discomfort would then be to discredit the scientific method.

Credit: UK Government

Note: I have joined the “virtual class” component of Dan Kahan’s Science of Science Communication course at Yale University. As part of this I am endeavoring to write a response paper in reaction to each week’s set of readings. I will post these responses here on my blog – my paper for week nine is below. Previous responses are here. I will also be participating in the discussion on Kahan’s own blog.


This week’s (well, last week’s) reading focused on synthetic biology. Dan invited us to imagine that the White House Office of Science and Technology Policy had asked us to study the public’s likely reaction to this emerging technology. What kind of studies would we do?

The readings were:

Presidential Commission for the Study of Bioethical Issues. New Directions: The Ethics of Synthetic Biology and Emerging Technologies (December 2010).

Pauwels, E. Review of quantitative and qualitative studies on U.S. public perceptions of synthetic biology. Syst Synth Biol 3, 37-46 (2009).

Dragojlovic, N. & Einsiedel, E. Playing God or just unnatural? Religious beliefs and approval of synthetic biology. Public Understanding of Science 22, 869-885 (published online 2012, version of record 2013 – for convenience’s sake, I will refer to this as “Dragojlovic 2012”)

Dragojlovic, N. & Einsiedel, E. Framing Synthetic Biology: Evolutionary Distance, Conceptions of Nature, and the Unnaturalness Objection. Science Communication (2013)

I want to start off by taking stock: listing what we appear to know already, based on this week’s readings, and then figuring out what outstanding questions remain.

What we know(ish)

Here’s a summary of findings from the readings (roughly organized from strongest evidence base to weakest):

  • Most people know little or nothing about synthetic biology (Pauwels)
  • The familiarity argument – that as people become more familiar with a technology, their support for the technology will increase – is not well supported (Pauwels, others)
  • For many people, synthetic biology provokes concerns about “playing God” and who has the right to “create life” (Pauwels, Dragojlovic 2012)
  • Framing for synthetic biology is similar to that for cloning, genetic engineering and stem cell research (Pauwels)
  • Domain of application has an effect on framing (Pauwels)
  • Acceptance of risk-benefit tradeoff depends on oversight structure that can manage unknowns, human and environmental concerns, and long-term effects (Pauwels)
  • Belief in God increases disapproval of synbio through two mechanisms – the idea (among weak believers) that genetic manipulation interferes with nature, and the idea (among strong believers) of encroachment on divine prerogative (Dragojlovic 2012)
  • Framing synbio as “unnatural” leads to negative perceptions only when characteristics of the particular technological application – e.g., evolutionary distance between DNA donor and DNA host – increase perceived relevance of such arguments (Dragojlovic 2013)
  • Individuals who view nature as sacred or spiritual are most responsive to unnaturalness framing (Dragojlovic 2013)


Now, to answer the questions – via a little additional reading.

Part 1: Single study

The question:

Imagine you were asked by the White House Office of Science and Technology Policy to do a single study to help forecast the public’s likely reaction to synthetic biology. What sort of study would you do?

At this juncture, it is probably more useful to model the general reactions people have and the associations they make when they learn about synthetic biology, rather than simply polling their support for the technology. (As we previously discussed, there’s little external validity to questions asking for opinions on something that most respondents don’t understand.)

I think the starting point would have to be more qualitative studies (or – cheating a bit – a mixed-methods study that starts with a qualitative phase). There seems to be little sense in creating a quantitative study in which the choices of responses are simply sentiments that we guessed people would entertain – far better to convene focus groups and see what sentiments people actually entertain. This would lay the groundwork for more informed quantitative studies.

Among the reading for this week, the only qualitative research was the pair of Woodrow Wilson International Center for Scholars studies discussed in Pauwels. These produced some insights – but as Pauwels points out, “The most important conclusion of this article is the need for additional investigation of different factors that will shape public perceptions about synthetic biology, its potential benefits, and its potential risks.”

Some of this work has now been carried out.

Looking beyond the week’s reading, I see that the Wilson Center has continued to carry out both qualitative and quantitative studies, some of which Pauwels summarized in a 2013 paper, “Public Understanding of Synthetic Biology.”

Her major findings were:

  • Before hearing a definition of synthetic biology and learning about its applications, participants tended to describe synbio through comparisons to other biotechnology, such as cloning, genetic engineering and stem cell research. This could be crucial to understanding the ways that public debate about synbio might evolve, Pauwels contends.
  • Participants – even some of those generally positive about synthetic biology – expressed concerns about unintended consequences. (Interesting to note that some of these concerns came up when discussing genetically modified mosquitoes, a topic from a previous week in this class.)
  • Participants’ value judgment about synthetic biology varied depending on the technology’s proposed application. If the proposed application was in an environment that appeared more contained, participants were less concerned about risks.
  • Participants expressed ambivalence about engineering life. These attitudes take the form not only of the much-discussed unease at “creating life” and “playing God,” but also much more generalized anxiety – “this term makes me feel scared.”

This is a very good start, but I feel there’s a bit more unpacking a qualitative study could do.

For example, under “ambivalence toward engineering life,” Pauwels includes the following reactions from participants:

“It could also be dangerous if we do not research it enough to find out any long-term effects.”

“This could lead to huge scientific advances, but it can also lead to countries or people using it for their own ‘evil agendas.’ It reminds me of Jurassic Park.”

“It seems exciting but makes me somewhat uncomfortable. Where are the limits?”

“It sounds like we are playing God. Who are we as humans to think [that] we can design or redesign life? It might be nice to be able to do so, but is it right? It seems [that] there are many ethical and moral issues. Perhaps we are getting too arrogant.”

“I feel concerned because, not being perfect, we believe we know what is best in creating life. As in science-fiction movies, when we do—in time—it goes in a direction we didn’t think about… I believe [that] when life is created, it is meant to be created that way for a purpose we may not even know right now.”

There are many underlying fears and concerns there, expressed in various combinations. These include concerns about unknowables (to coin a phrase, both known unknowns and unknown unknowns), long-term effects, human and scientific hubris, immoral applications by bad actors, security, unnaturalness, and violations of nature or of God’s dominion. There’s also an implied recognition (“where are the limits?” “many ethical and moral issues”) of the need to prevent technological applications that exceed society’s moral norms, and of the potential of technological advances to change the very locus of our morality.

I’m particularly concerned with the need to explore the public’s feelings on moral limits. So far, studies of the public’s moral objections to synthetic biology have focused on intrinsic moral objections (it is wrong to usurp God’s position as creator) rather than extrinsic moral objections (certain applications would be morally problematic). This seems strange given that as a society we have already collectively recognized some biotech applications as unethical – most notably, human cloning. It therefore seems imperative to explore public opinion on the subject, and try to separate measures of intrinsic and extrinsic moral objection.

With this preliminary information at hand, the most useful question to ask next is which of these attitudes, or general sets of attitudes, is most responsible for a negative predisposition to synthetic biology.

Part 2: More studies

The question:

Imagine you conducted the first study and the OSTP said, “wow, that’s a great start, one that convinces us that we need to do a lot more. We’d like you to outline a comprehensive program of empirical study—consisting of as many individual studies, building progressively on each other, as you like—with a budget of $25 million and a lifespan of 36 months.” What might such a program look like?

I would propose a series of quantitative studies that would seek to model a situation in which citizens learn about synthetic biology, and then seek to establish the frequency of the ideas and opinions expressed in the qualitative study.

Participants would be given a basic description of synthetic biology, and would then be asked to agree or disagree with the following (or perhaps, indicate their level of agreement on a multi-point scale):

  • Synthetic biology is unnatural.
  • Those who practice synthetic biology are playing God.
  • Synthetic biology scares me.
  • Synthetic biology just feels wrong.
  • If we start using synthetic biology, we may not be able to control the consequences. (With variations for environment, human health, security.)
  • I’m concerned that we don’t know what the long-term effects of synthetic biology will be. (With variations for environment, human health, security.)
  • Synthetic biology holds great promise.
  • Synthetic biology is exciting.
  • Synthetic biology could improve people’s lives.
  • Etc.

Potentially a great deal could be learned just in the correlation between these responses. For example, are there many respondents who say synthetic biology “just feels wrong,” but don’t agree with any of the usual-suspect statements about why it feels wrong? This would indicate either that synthetic biology taps into a deep-seated fear that people find difficult to attribute or give voice to – or perhaps that there is an expressible reason for their misgiving that we haven’t yet succeeded in drawing out of qualitative study participants.

Another hypothesis to explore: perhaps there is a strong correlation between unnatural/playing God responses and fear of unintended consequences. This may indicate that expressions such as “playing God” are sometimes used less to express a religious or spiritual conviction, and more to express a sense of humanity’s hubris.

It would be useful to pair these questions with a five-point measure of respondents’ support for synthetic biology, to try and determine the relationship between support strength and various attitudes.
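To make that analysis concrete, here is a minimal sketch, using pandas, of how responses to the items above could be examined for inter-item correlations and for their relationship to an overall support measure. The file name, column names and coding scheme are all hypothetical placeholders; the real instrument would come out of the qualitative phase.

    # Illustrative sketch only: hypothetical Likert-scale responses (1-5) to the
    # attitude statements above, plus a five-point "support_synbio" measure.
    import pandas as pd

    responses = pd.read_csv("synbio_survey.csv")  # hypothetical data file

    attitude_items = [
        "unnatural", "playing_god", "scares_me", "feels_wrong",
        "uncontrollable_consequences", "unknown_longterm_effects",
        "great_promise", "exciting", "improves_lives",
    ]

    # Inter-item correlations: e.g., do respondents who say synbio "just feels
    # wrong" also endorse any of the usual-suspect explanations for that unease?
    item_corr = responses[attitude_items].corr(method="spearman")
    print(item_corr.round(2))

    # How each attitude item relates to the five-point support measure.
    support_corr = responses[attitude_items].corrwith(
        responses["support_synbio"], method="spearman"
    )
    print(support_corr.sort_values().round(2))

Spearman correlations are used here because the responses are ordinal; with real data one would also want to handle missing responses and perhaps group the items into broader attitude clusters before relating them to support.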

I think it could also be useful to ask a series of questions that attempt to get at the way people make risk-benefit analyses about synthetic biology. This may also have an interesting bearing on their level of support. (As Dragojlovic (2012) points out, a key further question to arise from that study was: how do we consider risk-benefit trade-offs in a way that accommodates value-based risks?) Participants could be asked to agree or disagree (on a five-point scale) with the following:

  • The risks of synthetic biology outweigh the benefits.
  • The benefits of synthetic biology outweigh the risks.
  • There is no acceptable level of risk for a technology or product. (Perhaps ask variations on this tailored to human health, environment, etc.)
  • The best way to judge whether we should use a technology is to weigh the benefits against the risks.
  • It doesn’t matter what the benefits or risks of a technology are; if it’s unethical, we shouldn’t use it.
  • The “rightness” or “wrongness” of synthetic biology depends on how it’s used.

Etc, etc – that’s an imperfect start, for sure, but I think with the right questions we could get into an interesting area of psychology.

Outstanding questions

There is, of course, much more that can be investigated. Here are the major areas that Pauwels and Dragojlovic highlighted as ripe for future research – along with a few extra thoughts of my own.

  • We need further investigation of factors that will shape public perceptions about synthetic biology, and its benefits and risks (Pauwels). I think this is key – several of the studies we read followed up on “playing God”/”creation of life” concerns, but these concerns are probably only responsible for a small proportion of objections to synthetic biology. In Dragojlovic 2013, the baseline model, which included only the experimental manipulations (unnaturalness framing, evolutionary distance and so on), explained about 5% of variance in attitudes. This, Dragojlovic says, shows that most attitude variance is due to other factors.
  • Pauwels asks about the nature of the claims raised by “playing God”/”creation of life” concerns: “does it refer to polarization involving broad cultural/philosophical dimensions or to polarization strictly linked to religious values?” Dragojlovic 2012 illuminates some aspects of this but leaves further questions on the table. Intriguingly, the Presidential Commission says it “learned that secular critics of the field are more likely to use the phrase ‘playing God’ than are religious groups.” This may hold true only for organizational leaders and not for the populace at large, but it still neatly points out the importance of separating the religious and philosophical/cultural dimensions of the “playing God” objection.
  • Note that Dragojlovic 2012 was carried out in Europe – so a similar study of religious objections carried out in the US could yield quite different results.
  • What constitute effective counter-arguments to the unnaturalness objection? (Dragojlovic 2013)
  • Identify conditions under which advocates and opponents of emerging technology can use rhetorical frames to shape how citizens perceive the technology (Dragojlovic 2013)
  • Who is more or less likely to be swayed out of the unnaturalness objection – the religious or the irreligious?
  • What is the relationship between the “unnaturalness” and “playing God” objections? It seems like there is a lot of overlap, but an effective communications strategy would surely depend on understanding how each interacts with personal identity, which objections are simply immutable and which are more finely shaded, etc.

One of the first companies to try and automate fact-checking now says “there is no market for fact-checking” — at least, not as you and I know it.

Paris-based Trooclick launched its plug-in last June, promising to check the facts in IPO stories against Securities and Exchange Commission filings, and against other articles. The original business plan was to make traders the prime audience, and eventually transform the plug-in into an add-on for Bloomberg or Dow Jones terminals. Trooclick was one of just a handful of efforts to automate the fact-checking process, some of which I highlighted for the Columbia Journalism Review.

The plug-in worked well, CEO Stanislas Motte says — so well that, in a way, it killed itself off. “The algorithm worked and didn’t find any errors,” Motte says. The reason, he says with hindsight, is that companies know their words are being scrutinized by regulators, and don’t dare to make misstatements.

As a result, Motte now concludes, “There is no market for fact-checking, especially on financial and business news.” That savvy business audience already knows to trust a limited number of sources — and they can usually spot the important errors themselves, Motte says.

From errors to omissions

After the success-cum-failure of the plug-in, Trooclick began thinking about where the problem really lies. Its conclusion: “The real problem is not on errors but on omission… Big speakers prefer to use omission rather than errors,” Motte says. The way to combat that problem is by presenting different points of view and facilitating debate, he says.

So in December, Trooclick announced a new product, with quite a different tack. The Opinion-Driven Search Engine uses the same natural language processing as its predecessor — technology due to receive a US patent on March 27 — to scour news articles, blog posts and tweets. But instead of comparing facts against a reference, the new site categorizes quotes and paraphrases attributed to executives, analysts and journalists. (Trooclick describes all these statements as “quotes,” but in reality they do include paraphrasing too.) These “quotes” are designated either positive, negative or neutral, and the site displays lists of the positive and negative statements, side by side. Soon Trooclick hopes to move beyond “positive” and “negative” to perhaps three or five points of view on a given topic.
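To make the classification task concrete, here is a deliberately minimal sketch of how attributed statements might be bucketed as positive, negative or neutral using a tiny hand-built word list. This is purely illustrative and is not Trooclick’s technology; the word lists and example quotes are invented, and a production system would need far richer natural language processing.

    # Toy illustration of sorting "quotes" into positive / negative / neutral.
    # The lexicons and examples below are invented for demonstration only.
    POSITIVE = {"growth", "profit", "record", "success", "optimistic", "strong"}
    NEGATIVE = {"strike", "loss", "decline", "lawsuit", "risk", "weak"}

    def classify_quote(text: str) -> str:
        # Normalize words and count lexicon hits on each side.
        words = {w.strip(".,!?\"'").lower() for w in text.split()}
        pos = len(words & POSITIVE)
        neg = len(words & NEGATIVE)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "neutral"

    quotes = [
        "The airline expects record profit and strong growth this year.",
        "Pilots will go on strike on Wednesday over a pay decline.",
        "The CEO will speak at the conference next month.",
    ]
    for q in quotes:
        print(classify_quote(q), "->", q)

Even this toy version hints at why the error rate is so hard to drive down: as the examples below show, whether a statement reads as “positive” or “negative” often depends on whose perspective you take, which no word list can capture.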

A sample Trooclick story page.

A viewpoint summary will be another key ingredient in Trooclick’s new recipe. With a huge chunk of readers never making it past the headline, Trooclick sees it as important to quickly summarize the major viewpoints on an issue in the first couple lines of each entry.

“Everything will be focused to give you the synthesis very quickly,” Motte says. “Today… on our website you can find 20, 30, 40 quotes [on an issue]. This is boring and maybe no one reads it. But this is only the beginning.”

The company, which has about $2 million in funding from its founders and France’s Banque Publique d’Investissement (Bpifrance), is considering two business models for the product. One is a white-label offering to social media or search giants, such as LinkedIn or Yahoo. The second is a b-to-b-to-b approach, in which a customer could use Trooclick technology to provide its own client companies with easily digestible media monitoring.

Moving fast

The company is aiming for some major advances in a very short time frame. In about three weeks the website will add the ability to filter stories by the person being quoted — a key move, Motte says, because he wants to start emphasizing speakers over news outlets. By June, Motte says he’s “80% confident” that Trooclick will have developed a capability to reliably detect and categorize three to five families of opinion for each topic, along with the functionality to summarize those opinions in a couple sentences.

And then it’s on to politics: by the end of this year, Motte wants Trooclick gearing up to tackle the 2016 US presidential election. By early 2016, Trooclick aims to analyze 50,000 news articles a day, on business, politics and other topics.

That seems a big leap for a product that still stumbles at times with classifying “positive” and “negative.” For example, here are some of the quotes Trooclick catalogued for the story, “Ryanair plans to offer low-cost flights between Europe and the U.S.”:

The circled comment is not exactly positive… just sort of informational.

Here’s one from the story, “Lufthansa pilots to go on strikes on Wednesday”:

That might be positive if you side with the pilots and want to see their strike having an effect. For a lot of parties, I’d call this a negative.

I have no way of knowing how widespread the errors are, and I do see a lot of quotes that Trooclick has catalogued correctly. But seeing these errors does make me wonder if the company’s timetable is a little optimistic.

Motte acknowledges, “One of the biggest challenges for us is error rate,” though he won’t say what the site’s rate is. “If you are at 80% it’s great. The objective is to be more than 80%.”

Is Trooclick right for politics?

The move into politics is also surprising, given Motte’s views about fact-checking. “Speakers, companies, even politicians prefer omission [to making misstatements],” he says.

A map of fact-checking operations around the world, by Duke Reporters’ Lab.

I just can’t buy that, given the 64 active fact-checking operations around the world, 22 of them in North America, and the frequency with which they find politicians making statements worthy of “Pants on Fire” or “Four Pinocchio” ratings. (Full disclosure: I’m a consultant for the American Press Institute’s Fact-Checking Project, so I arguably have a stake in seeing political fact-checking succeed.)

Where does this leave automated fact-checking?

Trooclick sales and marketing assistant Darcee Meilbeck says she does still think of the company’s work as fact-checking:

“In the last seven to eight months, yes, we have gone through a pivot… We realized that fact-checking is more than just true and false. That’s the story I’ve told people — we realize fact-checking isn’t just black and white. There is bias elimination that comes into it as well. That’s, I think, where we fit in at the moment.”

I’d say Trooclick’s new direction is an intriguing play at helping people to graze on news more intelligently — but I’d hesitate to use the phrase “fact-checking” when no actual facts are being checked.

I must admit I was disappointed to see the company’s shift away from an automated tool that compared news reports to official reference sources. That disappointment could well be driven by my own over-optimism rather than any realistic sense of what such a tool could today achieve, technologically or financially, and it’s not meant as criticism of the interesting work that Trooclick has turned to.

But while I may have been slightly dewy-eyed about what automated fact-checking can achieve now, I still think that for the long term, this target is both achievable and necessary. The battle against misinformation is going to require a combination of automation, leveraging of big data and some kind of social media or browser add-on, for the simple reason that most of us don’t go looking for verification, and even those of us who are verification junkies can’t possibly verify everything we read. So the media ecosystem needs fact-checks that seek their readers out, rather than the other way round; and even better, seek out the lies that human journalists don’t have the bandwidth to catch.

In my CJR piece I very briefly highlighted a few tools and research projects that might fill that role. I didn’t know which would pan out and I’m not sure anyone does yet. But in the demise of Trooclick’s fact-checking plug-in, there’s an opportunity to formulate a couple hypotheses:

  • Business journalism isn’t crying out for fact-checking, in the way that political or science journalism is. Reasons include the less contentious nature of the content and lower personal and ideological investment by readers.
  • Automated fact-checking — especially the natural language processing component — is really, really hard. Maybe too hard, given the current state of the art, for a small start-up to handle. It’s possible such technology just isn’t ready for commercial roll-out yet — and the volume of research required to fine-tune it would be easier to carry out in academia or at huge companies like Google.

I welcome my readers’ thoughts on these theories, as well as their own prognoses for the future of automated fact-checking.

Post-script:

Stepping away from automated fact-checking for a moment, it’s also worth considering the role of crowdsourced verification — if only because of two high-profile launches in recent weeks. They are Fiskkit, a platform for commenting on the news, which won the Social Impact award at the 2015 Launch Festival startup conference; and Grasswire, a platform that invites the public to fact-check breaking news stories.

I wouldn’t close off crowdsourcing as an avenue to explore. If nothing else, I think Fiskkit’s combination of in-line annotation, logical fallacy tags, “respect” button (an outcome of the University of Texas’s Engaging News Project) and comments makes a good bid to be the forum for civil discourse that Facebook never was, and probably never could be. What I’m not sure it adds up to is good fact-checking. Wikipedia has shown us how far crowd-sourcing facts can take you, which is pretty far indeed — up to a point. I’ll be very surprised to see any crowd-sourced effort beat that track record.

Cross-posted to Medium.