Lessons from ‘Not the John West who Rejects’

Everyone loves new stuff. Superhero movies have clearly found new ways to spawn. There are teams all over the world working on new ways to restore vision. Of course there’s new Coke I guess. Maybe not all new things are great.

People in the online space sometimes focus on the great examples of new medicos and researchers out there doing things in what seem like new ways. Communicating in new ways. Open access. Taking up new opportunities.

The thing is, sometimes I’m not sure any of this is new. It’s just clever people using the means available to them that weren’t available to generations of particularly clever people before. There are plenty of examples of people who have been around a long time who already showed the way in a manner that looks very familiar.

Which gives me the chance to mention a personal favourite of mine that most people don’t know: John B West.

Not the Fish Guy

This particular John West is probably pretty much known to medical types and not many others. That’s because he is responsible for a tiny little book which pretty much covers all of the physiology of breathing. But more on that later.

To understand why John West provides so many inspiration points for a lowly researcher slogging away at a PhD, it’s worth knowing a little of his CV.

He started out in Adelaide way back in 1928. Obviously I wasn’t around but I’ve been to Adelaide and I am pretty confident it wasn’t a huge city back then. It’s certainly pretty low key now.

While in Adelaide he picked up his undergraduate medical degree (back in 1952). He headed across to London to work and casually picked up a PhD in 1960 (he’d already picked up his MD in 1959).

The 1960s led him to find a new height to climb – a bit literally actually. He joined an expedition to the Himalayas with some Hillary guy to be one of the trip physiologists. Posts in Buffalo and back in London followed, then he spent 1968 with the NASA Ames Research Center in California. In 1969 he joined the faculty at the University of California, San Diego as professor of medicine and physiology.

He headed up a trip to Everest in 1981 and was chairman of the Science Verification Committee for Spacelab 4 in 1983 for NASA.

And along the way he knocked out a classic of the medical literature: his respiratory textbook.

I’ve actually left out huge slabs of his achievements, posts and awards.

As a novice researcher, I take a glance at this guy who wrote one of my favourite physiology nerd books and pretty quickly think ‘well that can’t be done’.

And it might not be possible to match him as a researcher because a) he’s obviously very good at it; and b) he hasn’t been doing so much of the clinical work, as near as I can tell.

There’s still a bunch of things to inspire a young researcher though and plenty of them pretty much mirror things I’ve learned from people who impress me from slightly more contemporary times.

So here follows a list of lessons not to reject from John B West:

1) There’s more than one way forward

It can be easy to think there’s only one path to a research career. And while the landscape might have changed, John B West describes his research training as “extremely haphazard”, further adding “in terms of formal research training I grew like Topsy”. This is reassuring if say, you end up trying to learn how to do research in prehospital helicopter work.

2) Get out and try stuff

West went to London as a doctor partly to see the world. Being in London bumped him up against the Postgraduate Medical School, where respiratory research was under way. The first cyclotron designed for medical research also happened to be just opening there. Getting out there got him somewhere he didn’t necessarily plan.

3) Find teams

The breathing guy further describes the team around London as providing “a very stimulating intellectual atmosphere with chemists, physicists, and engineers all working in the same unit”. Maybe it’s the team around you and the stimulation they offer that can really drive you. No man is an island of heaving lung tissue etc. etc.


He really isn’t the fish guy. But here is a majestic bear running with a salmon anyway.

4) Take up chances

It’s hard to know when they might strike and it’s probably easier to say ‘yeah … nah’ than ‘sure’ but serendipity can work out sometimes. West: “I happened to be sitting next to someone at a meeting of the Physiological Society in England who told me of plans for the Himalayan Scientific and Mountaineering Expedition, which was to take place in 1960 and 1961. At that time I had no special interest in high altitude but was selected by Sir Edmund Hillary to go as one of the physiologists, and the expedition was a great success… I helped make the first measurements of maximal oxygen uptake at an altitude of 7,440 m on Mount Makalu.” He would later end up leading a team to Everest in the early ’80s and describing the first measurements of oxygen uptake at the summit. That was a pretty good chance to take way back in 1960.

5) Be generous

John West could probably rest on his laurels and relax a little now. Played hard. Done good. That sort of stuff. He could probably spend his time turning a buck from his accumulated knowledge and expertise too. Yet back in 2011 he posted a series of lectures covering his accumulated wisdom on respiratory physiology online. Look, you can go and watch right here.

This is actually a pretty generous thing to do. Sharing his knowledge as much as possible probably gives it more of a chance to get out and help more people too. It’s enough to make me forgive the awful title music. And even the bow tie.

6) Don’t waste a word

Well, this one is aspirational rather than something I can claim I do. I fling words around like a toddler with confetti at a wedding. A long time ago the good Prof. West condensed his encyclopaedic knowledge of all things breathing into a text. It’s still the best text out there on matters respiratory. And it’s small. Not much over 150 pages.

If you actually sit down and read it though, every page is packed with profound stuff. When I finished my anaesthesia training my copy was so chewed up I replaced it. It’s shiny.

Really shiny.


The Rest

Actually I don’t know the rest. I’m still getting things from this guy. And I’ll still be getting things from him for a while yet. Even in person. Because John B West, who must be around 87 years old, is keynote speaker at a conference in Australia this year. He’ll be telling the crowd about research up on Everest. And maybe wearing an awful bow tie even though it’s Darwin.

Still getting out there at 87. I may not take his example on work-life balance.



A lot of the stuff about John B West’s career came from this profile at the American Physiological Society site. Oh, yeah. He was President of that group too.

The image of the bear with the salmon is from flickr’s Creative Commons area under CC 2.0. It was posted by Lake Clark National Park and is here in an unaltered form.




Doing Research That Might Not Work

When I was a little guy I thought research science type people were pretty much all about power. Big hair, white coat, potions that bubble and a maniacal laugh. That was the cardboard cutout version of a science person I would have pinned on my wall.

Then I got a tiny bit older and got a tiny bit more of a clue and I realised that mostly these science people who do research were on a mission to discover massive things that would change everything about everything. Or at least everything about one thing. It seemed like something as cool as magic that wasn’t actually magic. Well, I did say it was a tiny bit more of a clue.

Decades later and I find myself trying to do research and I finally understand more that researchers are generally just people trying to answer questions. In a way that involves a bit of method to get you there. And that also involves all the magic of repeatedly walking into a rake.

And that occasionally means researching stuff that might not work.


Simple Questions

Just in case you haven’t read all the background on this research project (I’m guessing that’s everyone) it’s about brains and lights.

It started with an observation. Whether I’m doing work in prehospital medicine or helping kids snooze at the hospital I am very interested in the brain. When we go to accidents in particular we see our fair share of people who have injuries to the brain. This is a big deal.

With minor injuries to the brain, it might well be that you are briefly unwell and then you recover. Some brain injuries leave you with permanent problems in just about any function you can name. Speech, thought, movement, sensation, anything.

What might be less obvious is that the injury doesn’t just happen when your head gets rattled. The injury triggers off evolving processes that can be worsened significantly by further insults over the following period. Things like low oxygen or not enough blood flow to tissues or too much pressure in the head.

So when we are looking after these patients you can probably imagine we’d like to make sure we give the brain all the things it wants to start healing (or at least not keep getting worse) as quickly as possible. We put as many monitors as we can on to do this. Strangely, this doesn’t include anything to look at the bit we care so much about – the brain.


Before My Time

Back in 1977, a researcher by the name of Franz Jöbsis described a technique where you could shine a light through brain tissue, look at the light that made it out the other side and figure out stuff about the levels of oxygen and metabolism happening deep in that brain tissue. This was the start of tissue spectroscopy.

Sounds like the perfect fit for the problem, right? So why isn’t this standard 38 years later? Well, actually there are all sorts of challenges that held it up.

For starters, Jöbsis first tried it out on cats. Cat lovers might tell you many things about the miraculous brilliance of those brains, but it is fair to say they are smaller than human ones. So in humans what we tend to do is shine light into the brain tissue and pick up what bounces back. But it doesn’t all bounce back.

Picture it like this. You give a big group of squirrels a whole lot of speed (well this analogy is struggling already). Then you release them into the woods and tell them to come back and tell you what they saw. Some will just head in and come pretty much straight back and tell you their story. But some of them run and never stop running (well just imagine how twitchy a squirrel on speed would be). And some bounce off 10, 20 or 30 trees in all directions before they finally stumble back with their own unique story about the woods. Now you have to put all the stories together and say something sensible about the bit between the trees. Messy.

Is he pre or post stimulants? Who can tell? [Photo by Corey Seeman via flickr and shared unchanged under CC 2.0]

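The squirrel story is basically a random walk. Here’s a toy sketch of that idea – purely illustrative, nothing like a real photon-transport model, with completely made-up absorption numbers – showing how some ‘photons’ come pretty much straight back, some wander off for good, and some take long scattered paths before returning:

```python
import random

def release_squirrels(n=10000, absorb=0.02, max_steps=200, seed=42):
    """Toy 1D random walk standing in for photons scattering in tissue.

    Each 'squirrel' (photon) starts just under the surface, takes unit
    steps deeper or shallower at random, and may be absorbed at each
    step. We count how many make it back to the surface and how far the
    returners wandered. A cartoon of diffuse reflectance, not physics.
    """
    rng = random.Random(seed)
    returned, lost, path_lengths = 0, 0, []
    for _ in range(n):
        depth, steps = 1, 0
        while 0 < depth and steps < max_steps:
            if rng.random() < absorb:      # absorbed in the tissue
                break
            depth += rng.choice((-1, 1))   # scatter deeper or back up
            steps += 1
        if depth == 0:
            returned += 1                  # came back with a story
            path_lengths.append(steps)
        else:
            lost += 1                      # never stopped running
    mean_path = sum(path_lengths) / len(path_lengths) if path_lengths else 0.0
    return returned, lost, mean_path
```

Running it shows the messy mix the text describes: plenty of short direct trips, a long tail of meandering ones, and a chunk of squirrels you never see again.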

And that’s part of the reason it has taken so long to figure out what to do with that technique from Jöbsis. To get to here we’ve ended up using devices that use a few particular wavelengths of light in the near-infrared range, tested under different conditions, so we hopefully know a bit about how the light is absorbed and reflected in the tissues (how many squirrels come back and how many run). Most systems then display a number between 0% and 100% which is supposed to tell you a bit about the oxygen delivery and use under the sensor.
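The multi-wavelength trick can be sketched with the modified Beer-Lambert idea: oxygenated and deoxygenated haemoglobin absorb differently at different wavelengths, so two measurements let you solve for both and report a saturation. The extinction coefficients and path lengths below are made-up illustrative numbers, not any manufacturer’s real calibration:

```python
# Toy extinction coefficients for oxy- and deoxy-haemoglobin at two
# near-infrared wavelengths. Representative shape only (Hb absorbs more
# at the shorter wavelength, HbO2 at the longer), not real values.
EXT = {
    760: {"HbO2": 0.6, "Hb": 1.5},
    850: {"HbO2": 1.1, "Hb": 0.8},
}

def toy_rso2(atten_760, atten_850, path_760=1.0, path_850=1.0):
    """Solve the two-wavelength Beer-Lambert equations for the two
    haemoglobin concentrations, then report HbO2 / (HbO2 + Hb) as a %.

    atten_* are attenuations (optical densities) with scattering losses
    assumed already subtracted; path_* are effective path lengths.
    """
    # Each wavelength gives: A = ext_HbO2 * c_oxy * L + ext_Hb * c_deoxy * L
    a1, b1 = EXT[760]["HbO2"] * path_760, EXT[760]["Hb"] * path_760
    a2, b2 = EXT[850]["HbO2"] * path_850, EXT[850]["Hb"] * path_850
    det = a1 * b2 - a2 * b1
    c_oxy = (atten_760 * b2 - atten_850 * b1) / det
    c_deoxy = (a1 * atten_850 - a2 * atten_760) / det
    return 100.0 * c_oxy / (c_oxy + c_deoxy)
```

The catch the next paragraph gets to is exactly here: the real devices all make different assumptions about those coefficients and path lengths, and they don’t publish them, so the single percentage each one spits out isn’t directly comparable to anyone else’s.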

But even that isn’t that simple. There are different manufacturers, and each one tries to figure out how best to do it slightly differently so you can’t really directly compare any of them. And they won’t tell you exactly how they do it.

And the number between 0 and 100%? Well it’s not really measured against a gold standard. They make some assumptions there too. Plus you’re only sampling a small area so what about the rest of the brain?



Trying Things

So why bother? Well we’re going to try something slightly different and see how we go. For starters, we won’t be using the sensors on a single site, but a couple of sites on the head and a comparison sensor checking circulation in the body to try and put more information together.

More importantly we’re not just going with the number. Part of the analysis will look at the number that’s all about oxygen stuff, as well as another one that’s about how much blood is in the area. But it will look at the patterns in how these change in all 3 spots over time and compare them to the other observations we already take. Because if there’s hope for this monitoring to show something new, it would seem like the best bet is to think that it might pick up a change as it is happening, rather than relying on a single number not tested against any real gold standard.
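One hedged sketch of what ‘looking at the patterns over time’ could mean in practice is checking whether traces from two sensor sites move together – say, with a rolling correlation. The window size and helper names here are purely illustrative, not the study’s actual analysis plan:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def rolling_corr(site_a, site_b, window=5):
    """Correlation between two sensor traces in each sliding window.

    Values near +1 suggest the sites are changing together; values near
    0 or below suggest one site is doing its own thing, which might be
    the interesting moment to line up against the other observations.
    """
    return [
        pearson(site_a[i:i + window], site_b[i:i + window])
        for i in range(len(site_a) - window + 1)
    ]
```

Two traces that rise and fall in lockstep score near 1 in every window; a divergence between head and body sensors would drag the correlation down right where the divergence happens.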

And that might work. Or it might not and that would be absolutely fine. There are already companies out there telling people these monitors add vast amounts of knowledge to managing a brain injury. That’s not really true. But they have to sell units and saved brains sound like a pretty good story.

Of course it might just work and tell us new things about what is happening in the brain in real time. Which would be fairly sensational since we could then start figuring out how to treat patients using that information to hopefully stop all those evolving injuries. And fewer brain injuries would mean more people getting back to the lives they planned.

Or we might find something cool that’s unexpected and that would be a bonus. It’s just as likely though that the story from the speed-addled squirrels will be pretty confusing and we’ll find it’s not useful. Which would also be a great result since ruling something out of the calculations still brings us closer to finding things that work and not exposing patients to things that don’t.

So now there’s one more challenge if I’m to use this post to inspire me back into it. We don’t have squirrels in Australia.

The Discussions to Have So We Can Ask the Questions We Need

There is not much that makes me feel older than marking the birthday of my first son. Last week was eight years since his birth and I remember that day in almost too much detail. The drive to hospital. The scans on arrival. The long twilight of labour. The delivery and the last wisps of hope lost as those scans were proved correct. Alexander never took a breath.


My wife and I were deep into our specialty training in paediatrics and anaesthesia respectively but we never contemplated seeing a still life of his heart on the ultrasound. In my memory, I feel my wife’s howl more than I hear it.


That day we were probably just one of six families in Australia going through a similar grief. Yet with all our training we’d barely given it a thought. Stillbirth has been so neglected until recently that even definitions shift between the death of a baby after 20 or 28 weeks (and that’s just the two most common ones). Using the lower number, the Australian Institute of Health and Welfare tells us that in 2011 there were 2,220 foetal deaths. The national road toll for the same year was 1,310. Did you see any news bulletins highlighting a stillbirth rate that hasn’t fallen in decades?


At the time, one source of distress was the lack of answers. Not just the lack of answers either, but the sense that no one was looking for them. Since then there have been major efforts by organisations such as the Stillbirth Foundation Australia and the Australian and New Zealand Stillbirth Alliance to make research happen. However, a lot of this appears to be on the basics – agreeing on definitions; testing standardised investigation programs; describing the epidemiology. The Stillbirth Alliance lists a total of six research projects under way.


This is not to say these aren’t good projects, or that work isn’t being done elsewhere. There have been recent updates on potential risk factors such as maternal weight through pregnancy and sleeping position during late pregnancy. A Victorian team seems optimistic that they’ll have a screening blood test for low foetal oxygen levels within five years. It’s just that these reports feel like occasional telegrams from a frontier left mostly to itself.


Perhaps the reason researchers are left to work quietly alone is our discomfort at confronting the mess of the bereaved. Just this month a team from Oxford University released results of the first national survey in the UK asking families about their care. It revealed tremendous variability in aftercare, describing some institutional experiences as “unacceptable”.


After Alexander’s death we saw the full range of people’s compassion, ignorance and sometimes fear. I have no doubt that there were times when the grim chaos of our loss made those nearby tell themselves we were better left alone. Some realised that saying they had no idea what to say captured a lot. A few disappeared.


But when we didn’t talk we were left to grapple with despair and the guilty aftermath. I would have been better off had those around me let me express my blinkered rage at those for whom it all went well, or my disdain for those coping with the minor hiccups of life.


More than anything, I just wanted people to talk about Alexander. My greatest fear was not the sorrow his name would bring, but the thought that he would be forgotten by everyone, left only as a secret burnished tale for a married couple.


But with a few notable exceptions, we don’t share these stories. We need everyone to face up to this and engage with families’ grief. Then people might feel the urgency to find means of preventing these deaths and there’d be support for bright researchers to tackle their questions.


Back in that first pregnancy like so many others we read aloud around my wife’s growing belly. We read The Little Prince. In it, that serious little boy talks of his return to the skies and tells the aviator that his gift will be the laughter of the stars, as he’ll be able to look up and know that his young friend has returned there and is laughing on one of them. And so I listen when I look up at the sky.


Eight years later, my first hope is that we’re closer to bereaved parents knowing that they can search the stars with those around them understanding why. Then I hope we’ll shift things, so that no one else has to look with me.


A spot we go to be together



PS This post is a very slightly altered version of an article The Guardian were kind enough to put up here. There’s some very generous people who share stories that matter to them in the comments (yes, you can actually read the comments).


It may seem a bit left field given the other posts on the blog. The loss of Xan has many ongoing impacts and one of those is a passion to support researchers asking questions that are vital to preventing people collecting their own stories, in whatever area, that they’d do well without. 


Finally, I have the space here to mention that there is a hero to this story and that’s my amazing wife. Actually she’s the hero of most of the stories that get to the blog, there’s just never enough space to do that justice. 

Research Without Test Tubes

When I first mention to people that I’m trying to do a PhD, they assume I mean something with labs and coats and magic liquids. I have no idea why they would get that idea from media portrayals of people who do research.

All science liquids are also brightly coloured.


What they don’t appreciate is that not only do I miss out on the PhD showbag with the white coat, protective eyewear and cocktail set, but a lot of my PhD is not (so far) about “researchy” stuff like that. In the 2 years of study set-up a lot of it has been about a bunch of skills I didn’t expect to pick up (as mentioned here and here). I’m not developing pipetting skills for stock photo shoots, I’m doing other bits to make the project work.

The Messy World of Prehospital Research

For so many projects, data is king. Without data, there’s no real stuff to measure. And without real stuff to measure I might as well work on writing homeopathy manuals and horoscopes. Collecting clinical data in an environment where cars might be driving by and medical teams might be sweating into small puddles in the summer sun has the potential to be a little challenging.

To add to the disastrous set-up for sober clinical research, I can’t collect all the data myself. If you were going to choose to conduct research, would you do it where you can’t control when you can do it and you can’t control the data capture? If so, we can do a joint run on T-shirts saying “I’m not with stupid, I am stupid”.

Mind Control

The only thing I can do is make things as uniform as possible. Which brings me to the latest chapter of my PhD – the education bit. What I’m aiming for is some type of control by getting everyone on the same page.

The set-up of this project involves other people collecting data (our trial observers). Thanks to turnover, we won’t necessarily have the same trial observers the whole way through. Add in the fact that we’re trying to get overseas recruiting sites involved, and there’s a bit more complexity. A conservative estimate over about 3 years of total study time is that we might have 20 or so trial observers and I need to aim for them to capture consistent data every time (as I can’t control how often we see patients, I have to make sure we’re not losing data through sloppiness).

So we’ve set out to produce a trial education system that can be replicated and that we can demonstrate ensures consistency. Here’s our version of how, ready for robust criticism or enthusiastic blowing of annoying plastic trumpets (figuratively only – no one should blow those things).

1. Break it Down

The first thing we did was break down every component of the trial observer’s job. We started at the moment they turned up in the morning to set up, went through every bit of the job, and finished at the point they had to leave the hangar at night. Each bit of the day has a learning package, which also means each package is pretty small.

2. Build on Prior Knowledge

We have the advantage of having engaged people who know a bit already. We’re not trying to sell skis in Kiribati. If we assumed people were completely unaware, we figured they’d switch off in the bits that felt patronising.

3. Don’t Assume They Know What You Think They Know

The catch was to recognise the bits they actually wouldn’t know. Once you’ve lived inside the project for a bit it can be easy to forget that other people don’t know the details of what you’re up to like you do (like this inventor). So everything gets run past someone who should have no idea what I’m talking about, to see if it kind of makes sense.

4. Multiple Media

There’s no telling what will work for people when you’re trying to get stuff through. Some people at the base don’t even like coffee, so allowances are necessary. So each bit that we’ve developed learning material for has a written bit, a video bit and a walk through with someone who knows the system.

The biggest challenge of this was probably learning how to make a video. It didn’t take that long to use the really simple software (I’ve used iMovie, and it has a few flaws, but it got stuff done). It’s a quick skill to get better at too, particularly if you start planning in advance what you want to put in and how you’ll tell the story while keeping people engaged.

5. Standards and Documentation

The last bit (which we’re just getting to) is the roll out. So each person doing the trial observer job will go to a site (we’re using Moodle because the organisation had that already) where the written packages can be found. We’ll be able to tell when they’ve read the material. They’ll also be able to see the videos that go with different sections so they can check out how things work wherever they are.

We’ll then put them through face-to-face chats going over everything and get them to demonstrate they know how it goes. This whole process will have something for the trainer to document they’ve demonstrated how awesome they are before we finish by putting them through a simulated job where they collect the data and bring it back. Easy, right?

A Sample

Here’s a sample of one of the videos where we’ve simulated a job to demonstrate how things might go when working with the crew of doctor (in red here), paramedic, pilot and aircrew. Keep in mind that some of the stuff leading up (like preparing the monitor in the morning, how to link the trial monitor to a standalone iPad and why we mark certain things that happen, like patient movement) is covered in other videos. There’s also a couple of internal gags there and a bit of jargon (“packaging” for example is shorthand for wrapping everything up with a neat little bow for patient transport).

The aim though was to give our potential observers an idea of how things will run, while stopping them from getting so bored they pulled out their eyelashes.

I’ll leave it to the chorus to tell me if we’re close or not – here’s the computer version and the mobile version (this is a quirk of iMovie that I might have to address).

So I started a PhD. Then I ended up making videos. And now I do education packages. I’ll get the shock of my life when I actually have some data as part of this “research”.

Fraud and the Research Police

I would not be much of an actor. Acting seems to require the ability to shed the self-consciousness you collect in adolescence at least momentarily and let rip with an unfiltered version of humanity. To do less strikes a viewer immediately as all too artificial or false. Once you spot the lie, the whole mirror is cracked. That takes chutzpah, and I’ll happily confess that I couldn’t remove myself beyond the inhibiting “I am ridiculous right about now” mindset, the feeling of selling a lie. I’d be living in constant fear that I’d be pointed at for my mimicry.

Well, why go to the effort of making poison when you can just don the makeup?


Mimicry in Research

Research is not without its mimics. As a pretty fresh PhD type I have some of the zeal of the born-again convert, but anyone can see that fraud and research misconduct are all around, with anaesthesia particularly blessed with the dodgy attempting to mimic good researchers. I’m not quite sure if I should be astonished people do this, or that there isn’t more of it being called out.

The case study provided by Yoshitaka Fujii is one of the more recent impressive examples. Fujii’s area of research centred around approaches to decrease nausea and vomiting after anaesthesia (a noble cause worthy of pursuit). In 2011 an episode of plagiarism emerged and an investigation ensued. This eventually revealed that not only had ethics approval not been a feature for much of his work, data had been falsified in 172 of 212 papers (there’s a good summary in Retraction Watch here and a superb effort to look at it with even more rigour by Dr John Carlisle here). For his trouble he was sacked.

While there’s plenty that’s disturbing about that tale, at least post-operative nausea and vomiting is a bit of a niche. Maybe as long as prevailing views are appropriately updated and there’s little harm, that’s enough. What if the potential harm is more though?

The Heart of It

Don Poldermans was heavily involved in research promoting a particular class of drugs, beta-blockers, to prevent cardiovascular events around the time of surgery. Another noble cause. His work in the DECREASE trial was a big deal when I was training in anaesthesia. Finally anaesthesia had made it to the big stage of medical research. He ended up helping draft the guidelines on managing peri-operative risk.

His findings weren’t matched by others. Then it emerged that there were issues with his data: problems with consent, potential data fabrication and submission of papers based on false data. Poldermans was sacked, although he states his problem was only one of sloppiness, particularly in an underling, and disputes the accusations relating to data. His former employer reached the conclusion that no one was harmed.

Well, as this article explains, there’s potential that lots were harmed, because bad research presented as an authoritative source can change medical care. When Poldermans’ work was removed from the meta-analysis, the risk of death was 27% higher for those who got the treatment compared to those on placebo. If the drugs were given as per the guidelines, up to 10,000 patients in the UK alone may have died. So where do you draw the line as to when research misconduct resulting in real-world badness elicits more than sacking?

Nobody Expects the Research Police

If you’re a Monty Python fan, those words will trigger a fond recollection. One of the contributing factors (covered here, as an example) might just be that no one expects to get caught. That would be because it’s hard to see the real penalties flowing from the discovery of the misconduct.  Is sacking and embarrassment enough?

The journal Anaesthesia has laid out its approach, covering both prevention through automatic plagiarism checking and the steps involved in responding. They list the following steps:

* Seek an explanation from the author.

* Correct or reject manuscripts.

* Contact the researcher’s institution where things look egregious and ask for an investigation.

It is noted that the last of these doesn’t always do much (indeed Fujii had been flagged as far back as 2000). Where publication has already happened, there can be corrections or retractions. All in all, it pretty much amounts to writing some letters. It’s not even clear if they get a particularly grumpy pensioner to draft the missive.

Should there be more than this? One earlier example in anaesthesia, involving Scott Reuben, resulted in 6 months jail time but not for the ills of the research. It was for fraud relating to the use of federal grant money.

So here’s my question: where are the research police? Where’s the squad wearing the tricked out white coats brandishing particularly scary scientific calculators and especially pointy clicky pens? Because until there’s more than letters to the editor, is there really enough incentive for the otherwise tempted to stay on the straight and narrow?

Jerry Seinfeld – Research Mentor

For a couple of annoying years I did the wedding MC circuit. Always the MC, never the embarrassing interloper. This is not something I enjoy. The MC gig is the one you get when you’re not so important to the couple that they want you involved for the truly meaningful bits, but they know you well enough to be fairly certain you’ll stick to the rules. You know, not too dark, not too insulting, light on the nudity. I’ve recited bits of self-written sonnetry and memorised bits of foreign-language diatribes. And in my pursuit of the level of “drawing an occasional smirk and no walk-outs” I’ve come to greatly respect actual comedians. Be they improv superstars or super-scripted performers, I stand in awe of all. And I have come to this conclusion:

Everything I need to know about my work, I can learn from Jerry Seinfeld.

Building Cricket Cages

He’s not everyone’s cup of herbal strained foliage dregs, but I’ve been a fan since the early days of the sitcom. A while back I came across this profile from the NY Times Magazine (bit of a longread). The thing I found most interesting is not just that he keeps bashing away at it to feed an obsession. The fascinating element is his obsession with approaching perfection, be it in the door mechanism of a 1957 Porsche or continual whittling away to create the perfect bit. Seinfeld crafts his jokes over more than just a couple of sessions – he keeps at it over years. Take as an example the joke about the marriage game of chess with the board made of flowing water and the pieces made of smoke. The key to the joke delivering was drawing a board in the air – something he worked out some years after he started performing it.

It turns out that he’s describing pretty much the process of being an anaesthetist (alright, there’s a bunch of really easy gags to make right about there, so I’m just going to wait patiently while you run through them…

… done? OK, moving on).

I’m not talking so much about the patter most of us work on to try and win the patient over. We only have a few brief minutes before patients, in a place of exquisite vulnerability, put their trust in a stranger. The routine for kids can be particularly challenging and confronting to hopes of retaining dignity in the workplace. Of course, most comedians probably don’t have the option of turning up the sleepy gas to make the heckling stop.

The real similarity is in the pursuit of incremental steps towards the perfect anaesthetic. Anaesthetists can obsess over the smallest detail of every element of what they do to try and produce the perfect parameters. A discussion of taping in a cannula can take up a leisurely lunch on a day off. It’s a slow march towards small moments of perfection. To build the cricket cages that Mr Seinfeld reveres.

Actually, it’s really impressive, but I’d just let the bug go and be a bug.

Building Perfect Research for the Side of the Road

I had another MC gig a while back. This time at a conference (a whole different type of angry after that one, but that’s for another day). One of the speakers in the session was the principal investigator for a trial of an automatic chest compression machine for giving CPR to patients suffering cardiac arrest. What followed was a seriously impressive presentation of how to strive for the cricket cage when doing prehospital research.

The prehospital environment is by its nature messy. It isn’t a pristine lab where elements are easy (well, easier) to try and control. There’s often a bit of dodging the stuff that’s flying while trying to get the job done. Trying to manage this is a big challenge for a researcher trying to perform high quality work in a place defined by chaos.

The investigators for this study, the CIRC trial, published their study design a while back in Resuscitation.

CIRC Design

A big flaw they’d identified in previous research comparing the machines with people doing CPR was the possibility that those getting compressions hadn’t received good quality CPR, particularly as measured by the amount of time actually spent receiving the vital compressions. So they set about addressing it. Across the 3 participating countries, they put more than 5000 prehospital providers through a standardised 4-hour training program. They then had them undertake exams to prove they were up to scratch. Then each centre had a period where their ability to deliver on the protocol was assessed before they were allowed to recruit. Follow-up assessments of quality and regular re-training were also part of the script. It’s not in here, but in the presentation the good Dr Wik also mentioned that every included patient had the duration and depth of CPR measured directly (including by transthoracic impedance).

The result – more than 80% of the time that patients were being treated, in either the machine or manual compression group, they were receiving effective compressions. As compressions are vital to the success of CPR, this is really important. And it’s around 20% better than any equivalent prehospital study in the area has ever demonstrated. It’s staggering. It’s the sort of result anyone prior would have said was impossible.

I’m actually not going to get into the results (basically equivalent between the two groups, with lots of reasons given). The standout feature was the level of effort required to overcome the challenges of the setting. If we want to build an excellent research project, looking cool in flight suits doesn’t remove the need to be absolutely rigorous in getting the data.

So now we’re building our cricket cages, or examining the door of the Porsche. Before we even get going we know we need to simulate our jobs on ovals and in upturned cars, design our education plans and test our ability to collect the data reliably. It couldn’t be more vital to get our script right, to test it out and to whittle away until it’s astounding. We might not reach perfection, but we should at least aim to make the Norwegian guy jealous. And if anyone feels like chipping in a 1957 Porsche Speedster for the simulations, we will make good use of it.

The Classification of Ministers

To save myself 1000 words, this was my Twitter stream yesterday.

[via parents.wfu.edu and a HT to @JulieLeask]

This was because I follow many researchers and scientists and the absence of anyone with the word “Science” in their title amongst the many merry cabinet ministers was enough to unleash much despair, incredulity and generalized wailing and gnashing of teeth. I think I even saw a bit of alternate career planning and proposals to set up new reality programs for the times (“I’m a Scientist! Get Me Out of Here!”).

Now, Twitter gets good traffic spreading outrage. Those who’d been suspicious that a new government would take things back to a 1950s era desperate for the arrival of Marty McFly to liven things up were ready to pounce with the evidence at hand. (They were mistaken of course. The 1950s may have had patchy sewerage connections and smallpox, but it had a science minister.)

All this stuff about the support for science is of pretty vital interest to someone like me who is just trying to get into research and would like to think there’s a bright future. Perhaps with the benefit of a little time, it’s worth asking a couple of simple questions:

1. What does it really mean?

2. What’s to be done?

What’s In A Name?

Any onlooker could well have wondered why the angry blare of vuvuzelas erupted from some in the science community. The issue was surely that it tapped into deeper fears about this government’s attitude to science and research. Maybe those expressing fears are worried by a political party with more than a few climate deniers. Maybe the prospect of them interfering with grants and “redirection” made people skittish. Perhaps health researchers were skeptical of the level of engagement of a health minister with responsibility for the NHMRC who didn’t feel the need to develop much policy or ask a question on his portfolio for 3 years. It could have even been the realization that the teams working on stupendous bionic vision might lose the brilliance of the team at NICTA, who are part of restoring vision but have had their funding reduced. It all certainly squeezed a nerve right against the collective bony bit.

The concern is evidently that banishing the word “science” from polite ministerial conversation is an attempt to downplay its significance. It’s certainly a little hard to claim that retaining the name is just the handwringing claptrap of symbolism when you’re happy to latch yourself to the ANZAC legacy by including a minister to put on a truly excellent ANZAC march or promote the “border protection” label into the cabinet.

On the emotion, it’s possible that the other variegated disappointments of the cabinet, most particularly the gathering of the Bratwurst Brethren (+ Significant Other) just in time for an Oktoberfest party, spilled onto this turf. They are separate though.

There is scope for the new government to clarify its relationship with science, research and higher education over the coming weeks and months. In opposition you don’t get to put much of that into practice. There is nothing stopping an opposition from actively grooming and mentoring the next generation. To make no effort to meaningfully engage with the challenge to strengthen the role of committed and able women within their ranks is rather pathetic. The cabinet announcement should have shown the fruits of that labour, not a demonstration of an entrenched old boys attitude they couldn’t be bothered with.

The scorn heaped on the cabinet selections for including just one woman is entirely appropriate. The science issue is separate. The sensible response is probably to take on board all of this and retain a skeptical eye to further developments.

And next?

After a day, some more nuanced discussion has emerged from people far cleverer than me. Will J Grant and Rod Lamberts cast about some excellent pearls here, most particularly that there may be some trolling going on and mounting outrage may be counterproductive. Certainly groups such as the Australian Association of Medical Research Institutes can see that setting up a wailing circle is probably not a constructive way forward.

Prof Brian Schmidt (super Nobel laureate) had already pointed out that the label matters far less than the actions of the government. There’s more in this video, including some useful critique of the previous government. He even provides a little leeway – 8 weeks for the government to nail its colours to the mast in a manner more effective than nailing jelly to a wall.

Chief Scientist Ian Chubb has again advocated a comprehensive, whole-of-government approach. This will be a space worth watching because responsibility for science and research will fall across multiple areas, a not entirely new arrangement. To make that a strength, a lot of coordinated effort will be required. The risk is a fragmented approach to innovation, with researchers interacting with different groups under different rules. That seems like a recipe for an ongoing administrative nightmare for researchers rather than the efficient streamlining hoped for.

So maybe the next bit is to keep talking, but not howling. Continuing interested discourse and active engagement should be standard behaviour. It may well turn out to be business as usual, and the research community has always been pretty effective at producing great stuff in interesting circumstances. If the business changes of course, I hope there’s more to the response than a Twitter storm and some consolatory animal gifs and “Keep Calm” memes.