When it comes to pharmaceuticals, there are three types of science communication:
Direct (drug ads):
$4 billion a year is spent on these ads (even though they used to be illegal). They make assertions of benefit without data to support them, or with data that says nothing about the benefit (for instance, how many people take the drug), or with misleading data (“cuts the risk of stroke by half” – but half of what?). Ads are intended to sell drugs, not to help people make informed decisions. This kind of communication is common and can be very misleading.
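To see why “half of what?” matters, here is a worked example with invented numbers (a hypothetical illustration, not data from any actual trial). Relative risk reduction (RRR) and absolute risk reduction (ARR) are:

\[
\mathrm{RRR} = \frac{r_{\text{without}} - r_{\text{with}}}{r_{\text{without}}},
\qquad
\mathrm{ARR} = r_{\text{without}} - r_{\text{with}}
\]

If 2 in 100 untreated people have a stroke (\(r_{\text{without}} = 2\%\)) and 1 in 100 treated people do (\(r_{\text{with}} = 1\%\)), an ad can truthfully say the drug “cuts the risk of stroke by half” (RRR = 50%), yet the absolute reduction is one stroke per 100 people treated (ARR = 1 percentage point).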
Public (FDA approval documents):
What the FDA knew at the time of the application for approval. “A gold mine,” available free on the FDA Website, and a good source of data, especially the medical and statistical reviews. The documents are poorly organized and hard to get to, but if you’re persistent, they’re there. What usually reaches the public, though, is a distillation by pharmaceutical company writers (the package insert), or the ad, with a tremendous loss of information. “Dense, frustrating but really useful.”
A new type of drug communication, proposed by the speakers:
Modeled on the nutrition labels on food packaging, they developed a one-page document featuring a simple tabular display of benefit and side-effect data, which could be posted in an easy-to-look-up format on the FDA Website or even incorporated into package labeling. They imagined going to the FDA, presenting the idea and getting it adopted. They met with FDA officials in 2002 and the concept received support, but the officials said it would take an act of Congress to get that kind of labeling on ads; the idea of presenting benefit data was new to them. The speakers launched a series of studies to test the proposed labeling. Study participants overwhelmingly liked it, were able to use it and were able to make sense of it, regardless of their level of education. Consumers want these facts and can understand them.
They presented the proposal at an Institute of Medicine/FDA conference, using an example drawn from the FDA approval documents for a cancer drug specified by the agency (and described how they selected the data). Their proposed table, among other things, organized side effects into “life-threatening” and “symptom” categories (a rough sketch of the layout appears below). The information table has been peer reviewed, and the FDA risk advisory committee has recommended that it be used. That did, in fact, generate an act of Congress, which was passed and signed into law. But Congress gave the FDA a year to report back on whether it intended to implement the labeling, and in that report the agency said it needs at least four more years of study.
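A rough sketch of the kind of layout described (purely illustrative – the drug name, outcomes and figures below are placeholders, not taken from any FDA document):

    FACTS BOX: [drug] for [condition]
                                        People on [drug]    People on placebo
    Benefit: [primary outcome]          [x] per 100         [y] per 100
    Life-threatening side effects
      [effect 1]                        [x] per 100         [y] per 100
    Symptom side effects
      [effect 2]                        [x] per 100         [y] per 100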
Take-home: Developing better science communication is one thing; getting it implemented is another thing entirely.
If research and science communication are to shape policy and practice, then communication needs to meet the same criteria and rigor as the research itself.
While educators support an inquiry approach to science education, education researchers are not so sure.
He used the example of science kits designed for students: the outcomes of using such kits are not being evaluated scientifically. Scientists often abandon their own evidentiary criteria for effectiveness when they step into science education. Why?
Another example: various teaching/learning “styles” are popular in various educational settings, but too little rigor has been applied to whether – and how much – those styles work in educating children. He conducted a study of three styles – Direct Instruction, Socratic Inquiry and Discovery Learning – which looked rigorously at how groups of children learned a specific scientific principle under the different instructional modes, and found that the students who received direct instruction from their teachers learned more and retained it better than the other groups.
The study drew a lot of media attention and stirred controversy among educators who advocate specific learning styles.
Klahr said that a fundamental criterion for good science is the operational definition – the specific set of operations and procedures that describe the thing being studied. When science education lacks such definitions, instead using fuzzy, ill-defined terms that are open to debate (e.g., “direct instruction,” “adaptive instruction,” “constructivism”), it becomes difficult to assess whether a given approach is effective.
Dietz recounted the fable of the blind men and the elephant, which suggests that human observations are partial and subject to argument. In reality, he said, people with different observations and understandings can talk to each other and come to consensus, and accurate descriptions can result. Science is based on an effective method of communication; novelty is encouraged, but strong selective pressure separates the sound ideas from the unsound.
We face consequential decisions about the environment, emergent technologies, etc. We need facts to make decisions and we have to attend to uncertainty – but we also have to take account of our diverse values. Public participation research examines how we can make better decisions about such complex challenges. (See the NRC report Public Participation in Environmental Assessment and Decision-Making – http://www.nap.edu/openbook.php?record_id=12434 – for which Dietz was an editor.) The literature is substantial and draws on the methodology of the social sciences.
Public participation is defined as any effort to influence public policy or decision-making. By “public,” he means everyone interested in, or affected by, a decision.
When done well, public participation “improves the quality and legitimacy of decisions and builds the capacity of all involved … in the process.”
Public participation processes are successful only when certain conditions are met:
Ideally, the process should be co-designed by participants. The science has to address issues of concern to researchers – and issues of concern to the public.
Such work requires multiple kinds of expertise – about the subject, the process, the community, the politics, and people’s values. Everyone has legitimacy with regard to values, but good process and research can help articulate values and reduce value conflict.
One can’t design an effective public participation process in the abstract; it has to be highly context specific. In particular:
Agencies need to have clarity of purpose, a commitment to use the process to inform actions, adequate funding and staff, appropriate timing, a focus on implementation and a commitment to self-assessment and learning from the process.
The process must be inclusive, collaborative in problem formulation and design, transparent and based on good faith among parties.
Conflicts can be driven by scientific uncertainty. Transparency of decision-relevant information and analysis is critical; one must pay attention to facts and values, be explicit about assumptions and uncertainties, include independent review or collaborative inquiry, and be open to reconsidering past conclusions in light of new information.
Communication is by definition two-way and iterative. Trust is critical, hard to develop and easy to erode. Local-to-regional scale processes are well understood; national processes are not. Effective communication must take account of beliefs and values.
What remains to be done: learn more about addressing values, scale up from micro to macro, and invest in better data infrastructure to support these processes.