Talking Loud and Saying … Mostly the Wrong Thing

It doesn’t take that much to give people the wrong impression. You can leave out a few pertinent facts that would otherwise give people the chance to form a full opinion. You can make sure to compare almost-like with almost-like and say they’re the same. The way you select what you’ll mention frames everything that follows.

Framing works the same way in reverse, though. When you want to discuss a topic and leave the “right impression”, the things you highlight, the studies you choose to quote, and the way you present them to the next person who doesn’t have the time to go through every reference will be a big part of the message they take with them.

The obvious spot where this comes into a PhD is in the thesis. Perhaps particularly in the literature review, where the stage for everything that follows is dressed and lit. There’s an awful lot of time in the PhD spent obsessing over what the evidence says, where the balance lies and how to present that fairly. It matters.

What Can You Say?

This all came to mind this last week. Around these parts there’s a show which likes to tout itself as the apex of all things science. It’s called “Catalyst” (I’m going to let you speculate on what particular reactions it might speed up). This week they chose to talk about the risks to your health posed by everything that includes “Wi” and “Fi” in its tech specifications.

This is a pretty legitimate thing to talk about. It touches on lots of interesting stuff about how you assess and communicate risk. Except this is the frame they built to hang their pretty picture: “… ‘no evidence of established health risk’ is not the same as saying it’s safe. Sadly, guaranteeing safety is something not even our safety authority is willing to do.”

This is a fundamental misrepresentation of how evidence can be applied to assess risk and it sets an impossibly high bar. When I chat about anaesthetic risk with patients or families I can’t promise zero risk for the healthiest kid that turns up starving, waiting for their operation. I can only ever say that the risk of bad things happening is very, very low. I can’t say that nothing has ever been shown to happen under anaesthesia.

But that’s what they were after with this program. A complete and total guarantee of safety. Not just a heavy preponderance of multiple studies suggesting no measurable change in risk. Not just a lack of reproducibility of those few studies suggesting we should worry. Not just a lack of a biologically plausible mechanism but some sort of guarantee etched in a rare element more precious than humanity itself.

Which awful consumer-hell version of critical thought have we reached where that’s the only acceptable option? Next time you walk through the city you’d better take precautions against being seriously injured by a falling baby hermit crab. Sure, there’s no plausible evidence that it’s a risk to worry about, but no one’s given you a guarantee, right?

Looking Sideways

What followed was a sequence of failures. The reporting was clearly heavily influenced by a single researcher. It opened by suggesting the heart has electrical activity just like a pacemaker. It blurred the difference between ionising radiation, which can definitely alter DNA, and radiofrequency radiation, which doesn’t. There were claims the industry was pushing back, just like when people tried to stop smoking on planes.

The suggestion was made that the flatlining rate of brain cancer was just a result of the long latency of the disease, which would extend many decades from now. Just look at the delay after the atomic bombs, we were reminded.

Except that the evidence on that front was misrepresented. There was no mention of the steady rise in cases up to a peak, or of the fact that some of the original research excluded all the cases of cancer from the first 13 years after those events. Not much latency.

They referred heavily to a particular study, the Interphone study, which suggested a possible link with one type of cancer in those who reported heavy use. They could have made more of the fact that this study relied on self-reported mobile use up to a decade earlier, including in those already diagnosed with a malignancy. They could have made more of the fact that the conclusion of the study reads: “Overall, no increase in risk of glioma or meningioma was observed with use of mobile phones. There were suggestions of an increased risk of glioma at the highest exposure levels, but biases and error prevent a causal interpretation. The possible effects of long-term heavy use of mobile phones require further investigation.” It’s all about the frame you choose to hang.

When we got to the section of the program where the featured researcher, Dr Devra Davis, took us through the all-important featured image of a child holding a mobile phone, complete with terrifying colour bands, and it turned out to be a stock image available online with some lurid bits added and no real discussion of the quality of the data behind it, the whole thing was fairly cooked.

[Image: a capybara putting on a display.]

Go ahead, put on a display. Don’t assume we’ll be impressed.

It was a fail. It was the sort of fail that would have had Fox Mulder slowly bleeding from an ear if he wasn’t busy having placebo-driven ‘magic mushroom’ trips to higher planes of consciousness so he could communicate in Arabic with brain injured coma patients (actual story line from the same week, no exaggeration).

Of course it’s also well established that the audience takes from any reportage the stuff they’re inclined to believe in the first place. Which means plenty of people would have been very persuaded by this coverage, rather than by the holes in it.

The most distressing thing about all of this is that the biggest fail might not even be with that TV program. The biggest fail is possibly with the scientific groups who should be making themselves useful to people.

Tell Us About Risk

The International Agency for Research on Cancer (IARC) is the body that puts out the classifications causing so much confusion, the ones perceived as being all about risk. The really absurd bit is that everybody thinks it’s about risk, but it’s not really about risk. It’s about how strong the evidence is.

This group is charged with looking at the evidence that any particular thing is related to cancer and sorting it into one of five groups. Group 1 is “carcinogenic to humans”, meaning “we are pretty sure it has the potential to cause cancer”. Group 2A gets “probably carcinogenic to humans”, which in this case means “well, there’s some evidence but we just can’t be sure”.

Group 3 is where you put all the substances that can’t be classified due to a lack of data and group 4 is for “probably not carcinogenic”. There’s one substance, caprolactam, in group 4.

So we come back to group 2B, which is for those things that are “possibly carcinogenic to humans”. This is somewhere between “there’s some evidence but we just don’t have enough to know where to put it” and “there’s not enough data to even guess”.

So it’s a dumping ground, a rubbish tip for over 250 things they’ve considered. Things like coffee. Or pickled vegetables. And that’s where WiFi comes in. About as dangerous as pickles.

To make it worse they’re not even mentioning the level of exposure that might actually matter, or the strength of associations, in any of the groups. It’s really just about how much evidence is out there. Which means something with pretty much no evidence gets called a possible carcinogen. And everyone quite reasonably thinks that risk has been assessed, when really it’s just papers and words that have been assessed.
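To make the distinction concrete, here’s a minimal sketch of the scheme as described above (in Python, purely as an illustration; the group labels are IARC’s, the paraphrases are this post’s, and none of this is an official data format):

```python
# Illustrative only: the IARC labels sound like risk ratings,
# but each group really encodes the strength of the evidence.
IARC_GROUPS = {
    "1":  ("carcinogenic to humans",
           "we are pretty sure it has the potential to cause cancer"),
    "2A": ("probably carcinogenic to humans",
           "there's some evidence but we just can't be sure"),
    "2B": ("possibly carcinogenic to humans",
           "somewhere between 'some evidence, not enough to know' "
           "and 'not enough data to even guess'"),
    "3":  ("not classifiable",
           "can't be classified due to a lack of data"),
    "4":  ("probably not carcinogenic to humans",
           "one lonely entry: caprolactam"),
}

def describe(group: str) -> str:
    """Answers 'how strong is the evidence?' and says nothing about
    the size of the risk or the level of exposure that might matter."""
    label, paraphrase = IARC_GROUPS[group]
    return f"Group {group} ({label}): {paraphrase}"

# Radiofrequency fields (the WiFi kind) sat in 2B when this was written,
# alongside coffee and pickled vegetables.
print(describe("2B"))
```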

They just haven’t bothered to make an effort to communicate that properly.

Isn’t that an even bigger fail? Who are they trying to help or inform? What’s the point if the logic is inaccessible?

What’s the Conclusion?

The “flagship” science show failed every which way. The IARC fails to make things clear, over and over after each deliberation, probably fuelled by “possibly carcinogenic” coffee. The show’s producers failed because of the way they framed everything they found. The IARC failed because they don’t seem to bother even thinking about the frame.

In later responses, the ABC mentioned that a couple of experts who would have dismissed the links between cancer and WiFi were invited and declined to appear. The implication was that those experts passed up an opportunity to have the alternative view heard.

That might be true but the program ended up with a single voice presenting the view that more faithfully represents the consensus position. Not much effort there. It might just be that Catalyst, having previously had a program around statins and cardiovascular disease pulled offline for its lack of adequate representation of the evidence, has burned its credibility when reporting science. They’ve discouraged researchers from going on. A flagship, huh?

So what lessons do I take away when reflecting on how to present the evidence around a PhD? Perhaps the best advice comes from another science journalist.

Rose Eveleth writes and podcasts all over the joint. In this post at Last Word on Nothing, she describes a story on looting in archaeology that grabbed her interest. The author was very convincing on the subject. It seemed like time to pursue it.

Then she found someone else who flat out laughed at the idea. This second person disputed lots of the facts in a coherent fashion and highlighted the complexity of looking in enough depth on the ground to actually represent the truth of the story. The story appealed; the evidence didn’t back it up. So the story got left behind.

Perhaps that’s the key. As it says in the final line of that post “always look for the person who will laugh at your story”.

If you can explore all the issues raised by that laughter and then communicate the research faithfully, that might be how you get there.

 

More Reading:

Here’s another one of the responses to the initial program.

The National Cancer Institute has an information page that seems pretty useful too.

Here’s a pretty useful thing on those IARC categories as well as a better way of showing the information, this time as it relates to meat.

That image was from the flickr Creative Commons area and is unaltered from the post by Heartlover1717.


8 thoughts on “Talking Loud and Saying … Mostly the Wrong Thing”

      • I had not seen Catalyst for a while (it was on at a time I was otherwise engaged) then happened upon an episode and settled down with every expectation of edification and enjoyment – only to find my gob hanging open within the first five minutes. When did Catalyst become a half-baked popularist magazine show, rather than a science show?

      • I rarely drop by now after previous episodes left me underwhelmed. I think each reporter/producer works on their own stories though so I don’t know that it is universal. I did try to allude to the fact that you take away what you were looking to take away and I sat down with this one thinking it might not be great. Was trying to be honest enough to make that clear.

      • I did get that! 🙂 I do think you might have a point about the communication though. Yes, the media loves a story and they love to use the juiciest angle and to hell with the actual science. But, knowing that, does that mean that scientific bodies have a responsibility to word things more carefully, so they are less vulnerable to misrepresentation? I think this might be one of those, ‘should they have to? No, but it might be a good idea if they did.’ situations.

      • I think perhaps that part of the problem is that, once you’ve been in research for a while, you don’t have a clear idea of how people think outside of that bubble, which makes communication problematic.
