Sense and simplicity in science

I recently finished Atul Gawande’s book The Checklist Manifesto, which I highly recommend. It’s all about how very simple measures can have profound outcomes in fields as diverse as aviation, construction, and surgery.

What struck me most about it wasn’t the author’s endorsement of using basic checklists to ensure things are done right in complex scenarios. Instead, it was Dr Gawande’s insistence on testing the influence of everything, including a piece of paper with 4 or 5 reminders stuck to his operating theatre wall, that I found inspiring.

Why bother collecting evidence for something so apparently simple, so clearly useful, at all?

Talk of the town

Ischemic stroke, caused by the blockage of an artery in the brain by a blood clot, is as complex as anything in medicine. In fact, for such a common and debilitating illness, we have surprisingly few treatments at hand. Until recently, only two had been proven to help patients who suffered a stroke: giving them a drug that dissolves the clot and keeping them in a “stroke unit” where they receive specialised care that goes beyond what is offered in a general neurology ward.

But that all changed last year. The lectures and posters at the 2015 European Stroke Organisation conference in Glasgow, which I attended, were dominated by one thing. A new treatment for acute ischemic stroke had emerged – mechanical thrombectomy.

In the four months leading up to the conference, a number of large clinical trials had proven that this intervention worked wonderfully. Literally everyone at the conference was talking about it.

Isn’t that obvious?

Mechanical thrombectomy involves guiding a tube through a blood vessel (usually an artery in the groin) all the way up through the neck and into the brain, finding the blocked artery, and pulling out the clot. Just let that sink in for a moment. In the midst of stupendous amounts of research since the mid-90s into convoluted pathways leading to brain damage after stroke, fancy molecules that supposedly protect tissue from dying, and stem cells that we’ve banked on repairing and replacing what’s been lost, the only thing that’s worked so far is going in there and fishing out the clot. That’s all it takes.

After returning to Berlin, I told a former student of mine about the news. “Well, duh?”, she responded, just a bit sheepishly. My first instinct was to roll my eyes or storm out yelling “You obviously know nothing about how science works!”. But is this kind of naïveté all that surprising? Not really. Somehow we’re wired to believe that if something makes sense it has to be true (here’s a wonderful article covering this). As a scientist, do I have any right to believe that I’m different?

Science is not intuitive.

To paraphrase part of a speech given recently by Dr Gawande, what separates scientists from everyone else is not the diplomas hanging on their walls. It’s the deeply ingrained knowledge that science is not intuitive. How do we learn this? Every single day common sense takes a beating when put to the test of the scientific method. After a while, you just kind of accept it.

The result is that we usually manage to shove aside the temptation to follow common sense instead of the evidence. That’s the scientific method, and scientists are trained to stick to it at all costs. But we don’t always – I mean if it makes such clear and neat sense, it just has to be true, doesn’t it?

Never gonna give you up

The first few clinical trials showed that thrombectomy had no benefit to patients, which just didn’t make sense. If something is blocking my kitchen pipes, I call a plumber, they reach for their drain auger and pull it out, and everything flows nicely again. Granted, I need to do so early enough that the stagnant water doesn’t permanently damage my sink and pipes, but if I do, I can be reasonably sure that everything will be fine. But in this case, the evidence said no, flat out.

It works, I’ve seen it work and I don’t care what the numbers say.

Despite these initial setbacks, the researchers chased the evidence for the better part of a decade and spent millions of dollars on larger trials with newer, more sophisticated equipment. I wonder whether what kept them going after all those disappointing results was this same flawed faith in common sense. It works, I’ve seen it work and I don’t care what the numbers say – you hear such things from scientists pretty often.

Another important flaw in the way researchers sometimes think is that we tend to explain the outcomes of “negative” studies in retrospect by looking for mistakes far more scrupulously than we did before the studies started. I don’t mean imperfections in the technique itself (there’s nothing wrong with improving on how a drug or surgical tool works, then testing it again, of course). I’m talking about things that are less directly related to the outcome of an experiment, like the way a study is organised and designed. These factors can be tweaked and prodded in many ways, with consequences that researchers rarely fully understand. And this habit, in my opinion, tends to propagate an unjustified faith in the authority of common sense.

There’s good evidence to suggest that the earlier mechanical thrombectomy trials were in some ways indeed flawed. But I still think this example highlights nicely that the way scientists think is far from ideal. Of course, in this case, the researchers turned out to be right – the treatment made sense and works marvellously. It’s hard to overemphasise what a big deal this is for the 15 million people who suffer a stroke each year.

Deafening silence

More than a year has passed since the Glasgow conference, and this breakthrough has received little attention from the mainstream media. Keep in mind, this isn’t a small experiment on some obscure and outrageously complex intervention that showed a few hints here and there of being useful. It is an overwhelming body of evidence proving that thrombectomy is by far the best thing to happen to the field of stroke for almost two decades. And not a peep. In fact, if you’re not a stroke researcher or clinician, you’ve probably never even heard of it.

Now, if you read this blog regularly, I know what you’re thinking: I rant a lot about how the media covers science, and now I’m complaining that they’re silent? But doesn’t it make you wonder why the press stayed away from this one? I suppose it’s extremely difficult to sell a story about unclogging a drain.

 The best thing to happen to the field of stroke for almost two decades.

 

Signing the Dotted Line: Four Years of New Beginnings

Yesterday marked four years since I moved to Europe and started my master’s. I tend to forget these kinds of milestones, but was reminded just a few days ago by one of my best friends, who admirably always seems to be on top of such things (she wrote a wonderful blog post about it).

It’s a bit ironic that I hadn’t realized this was coming up – just last week I wrapped up the latest issue of our graduate program’s newsletter. It’s a celebration of the fifteenth anniversary of the program, and my editorial team and I spent a great deal of time reflecting on the past decade-and-a-half in preparation.

In retrospect, I suppose there was nothing exceptionally momentous about the move itself for me. Compared to some of my fellow students, I hadn’t travelled particularly far, nor was Europe very unfamiliar to me. But the 21st of August 2012 is when I started, as the internet would put it, learning how to adult.

I tell this anecdote all the time, but perhaps it’s worth mentioning one more time. During our master’s program’s orientation week, we were given contracts to sign for the scholarships we were about to receive. They essentially stated that we’re committed to seeing out the whole two years of the program. I stared at that thing for what seemed like an eternity, taking it with me on a walk around the Université Bordeaux Segalen campus.

Two years, I thought – two whole years. That’ll feel like ages – not that I wasn’t going to sign it anyway, it was an incredible opportunity and I knew it. A week and a fourteen-hour train journey later, I was sitting on a bench waiting for the staff to prepare my hostel room. The street, Schönhauser Allee, has since become one of my favourite places in Berlin. I sat there thinking about how the next few years would pan out. I didn’t realize at the time how, like most things in the city, the train whizzing past was paradoxical, an “underground” line running half a dozen metres above street level.

Since then, time has hurried on just like that train – studying in two different cities (three if you count Edinburgh, which, as the initial inspiration for this blog, I definitely do) and simultaneously adapting to both a new career and a life outside my childhood comfort zone.

Almost two years ago, I once again signed a similar piece of paper for my PhD without so much as batting an eyelid.

The Fault in Our Software

Although the vast majority of scientific articles fly well below the radar of the mainstream media, every once in a while one gets caught in the net. A few weeks ago, a research paper attracted a lot of public attention. It wasn’t about some fancy new drug running amok and causing dreadful side-effects or a bitter battle over the rights to a groundbreaking new technology. It was a fairly math- and statistics-heavy paper that found a flaw in an analysis program used in neuroimaging research.

Soon after the article came out and the media took hold of the situation (with gusto), I received a flood of emails, messages, and tags on Facebook and Twitter. These came from well-meaning friends and colleagues who had read the stories and were concerned. So what was all the fuss about?

The headlines were along the lines of (I’m paraphrasing here but if anything my versions are less dramatic, just google it) “Decades of brain research down the drain”. Several scientists have already come out to explain that the whole thing has been blown out of proportion. In fact, it’s a typical example of irresponsible science reporting (see this previous post). After all, people love a good story. And that’s often all that matters.

Inaccurate reporting of science is nothing new.

The “damage” is exaggerated.

Not to state the obvious, but I feel like it’s worth emphasizing that it’s not all brain research that is affected by this bug. Brain imaging is a great tool, and over the past few decades its use in neuroscience has flourished. But neuroscientists use many, many other techniques to investigate the brain. This bug has nothing whatsoever to do with most brain research.

It’s not even all imaging research that’s affected by the bug. We have so many different neuroimaging techniques – like PET, CT, NIRS, SPECT – that I’m expecting we’ll run out of palatable acronyms soon. MRI is just one of them, and functional MRI (fMRI) is a single application of this imaging technique.

A new take on an old problem.

Not since the infamous Case of the Undead Salmon (2009) has fMRI attracted so much criticism and attention. In fact, the salmon study and the paper describing the bug are similar in one respect: the flaws they highlight mainly pertain to what is known as task-based fMRI.

Here, what essentially happens is that a subject is presented with a stimulus or instructed to perform a task. The resulting tiny changes in blood flow and oxygenation are disentangled from the brain’s massive “background” activity and all kinds of other (for these purposes) irrelevant signals from inside and outside the body. In fMRI, the brain is divided up into many small units of space called voxels. To find out whether the tiny changes caused by the stimulus are distinguishable from the background, a statistical test is applied to each voxel (and there are tens of thousands of them).

However, every time you run a statistical test you have a certain chance of getting a false positive, and the more times you run the test the higher that chance becomes. Some form of correction for running the test so many times is needed. In a nutshell, the Undead Salmon paper showed that if you don’t apply any correction at all, you’ll see things going on in the brain that should definitely not be there (because the salmon is … well, dead).
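To get a feel for the numbers, here’s a quick back-of-the-envelope sketch in Python. It’s purely illustrative: it has nothing to do with the paper’s analysis, it assumes the tests are independent (which neighbouring voxels certainly are not), and it uses the simple Bonferroni correction as an example rather than the cluster-based approach the paper actually examined.

```python
# Toy illustration only: how the chance of at least one false positive grows
# with the number of independent tests, with and without a simple correction.
alpha = 0.05  # per-test false positive rate

for n_tests in (1, 10, 100, 10_000):
    # No correction: the family-wise error rate balloons as tests accumulate
    fwer_uncorrected = 1 - (1 - alpha) ** n_tests
    # Bonferroni correction: test each voxel at alpha / n_tests instead
    fwer_bonferroni = 1 - (1 - alpha / n_tests) ** n_tests
    print(f"{n_tests:>6} tests: uncorrected FWER = {fwer_uncorrected:.3f}, "
          f"Bonferroni FWER = {fwer_bonferroni:.3f}")
```

Run 10,000 independent tests without any correction and you’re all but guaranteed at least one false positive, which is exactly why searching across tens of thousands of voxels demands some form of correction.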

The new paper showed that one approach used to limit the number of false positives, implemented in several commonly used fMRI analysis programs, doesn’t work. The failure had two causes – a bug in the code of one of the programs, and the fact that, as the paper showed, fMRI data violates an important statistical assumption needed for the approach to be valid (basically, because the characteristics of the data do not fit the analysis strategy, the result is unreliable).
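If you want a rough intuition for that second point, here’s another toy simulation (again in Python, and again just something I sketched to illustrate the general idea; it is not the paper’s method, and the real problem concerns the exact shape of the spatial autocorrelation rather than its mere presence). It thresholds two “brains” made of pure noise, one with independent voxels and one smoothed to crudely mimic the spatial correlations in real fMRI data, and reports the largest cluster of “significant” voxels in each.

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)

def largest_cluster(stat_map, p_thresh=0.01):
    """Apply a voxel-wise threshold and return the size (in voxels)
    of the largest connected cluster of supra-threshold voxels."""
    z_cutoff = stats.norm.isf(p_thresh)                # one-sided z threshold
    labels, n_clusters = ndimage.label(stat_map > z_cutoff)
    if n_clusters == 0:
        return 0
    return int(np.bincount(labels.ravel())[1:].max())  # skip background label 0

shape = (40, 40, 40)  # a toy "brain" of 64,000 voxels, all of them null

# Independent (white) noise: false positives occur but stay scattered.
white = rng.standard_normal(shape)

# Smoothed noise: a crude stand-in for the spatial autocorrelation of fMRI data.
smooth = ndimage.gaussian_filter(white, sigma=2)
smooth /= smooth.std()  # rescale back to unit variance

print("largest cluster, white noise:   ", largest_cluster(white))
print("largest cluster, smoothed noise:", largest_cluster(smooth))
```

Because smoothed noise clumps into blob-like clusters, any cluster-size threshold has to be calibrated against an accurate model of that clumping; get the model wrong and perfectly null data can sail past the threshold looking like an activation.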

Both a bug in the code and an inherent problem with the analysis are to blame.

The reality in my case.

After reading the news, I read the actual paper. Several times, in fact, and I’m not completely sure if I fully understand it yet. It’s not really my research focus. Although I do use fMRI, I do it in an entirely different way. My project actually repurposes fMRI – which is one of the reasons why I like it so much, because I get to do a lot of creative/innovative thinking in my work.

It also comes with the seemingly obvious yet still underestimated realization that making something new – or putting an existing technique to new use – is very, very hard. In my field, my peers and I rely heavily on people far smarter than us (this isn’t humility, I’m talking objectively smarter here). These are the biomedical engineers, physicists, statisticians, computer scientists, and bioinformaticians who develop the tools used in neuroimaging research. Ask any imaging researcher – these people are not “para-researchers”; their role is not “just” supportive, it’s fundamental.

Hoping the hyperbole brings about change.

The trouble is, most of the time we use these tools to test hypotheses without thinking much about how they’re doing the things that they do. That’s understandable in a way – these things can be very, very complicated. It’s just not what biomedical researchers (especially those with a medicine/biology background) are trained to do.

The stakes are high for research software developers.

But incidents like these give us reason to stop and think. It’s a fact that people make mistakes, and if your role is as important as developing a tool that will be used by thousands of researchers, the stakes are much higher. When I mess up, the damage is usually far less widespread and hence easier to contain.

But that doesn’t mean we can’t do something to help. As the authors of the article pointed out, this problem would probably have been discovered much earlier if people shared their data. Had the data been accessible, someone would have realized that something was amiss much sooner – not after the software had been in use for a whole 15 years and thousands of papers had been published.

Data sharing would have limited the damage.

Many research software developers encourage people to use their tools and openly share their experience with others so that bugs can be identified and fixed (all 3 software programs assessed in the paper are open source). Sadly, the way we currently do science is stubbornly resistant to that kind of policy.

Reflections on the Mechanics of Research

As I sit in my room on a lazy Sunday afternoon, I start to think about my last blog post – it’s been three days already. “I need to keep up the momentum”, I tell myself – the drive that, less than a week ago, relaunched this blog after I had procrastinated for ages. If I don’t write something now, I won’t again for months, so here I am. But this isn’t a post reflecting on my writing habits; it’s about me and my peers – specifically, why we do things the way we do.

I’m writing this because I realize that I’m fortunate to work with incredible people who know many other incredible people and who enjoy collaborating and discussing ideas. Which means that, in my PhD, things are constantly in motion, new ideas pop up almost every day and there is always lots and lots to do.

That brings me back to a ubiquitous concept – momentum. In scientific research, after an idea is proposed, a plan is made to explore this idea. But every scientist knows the familiar feeling – weeks, months or years later the initial excitement fizzles out. Over the past few years of doing research, I’ve come to realize that this loss of momentum can’t be blamed solely on failure, it’s a combination of several different things.

Looking back, the idea just doesn’t seem as exciting as it used to.

Not because you later realize that someone has already tried it or that it’s generally a useless idea. It’s just not novel anymore – neither to you nor to the people with whom you’re working. Simply, it stops being new and therefore stops being attractive.

When the idea is first proposed, you know little about it – the possibilities are endless. Then during the research process, you (hopefully) learn more. And that should be enough to keep us going – as scientists we like to think that we find pleasure and motivation in the pursuit of knowledge. But the more familiar the topic becomes, the less drive we have to follow that particular line of research.

This is counter-productive of course because, although the idea is now familiarly boring to you, it is probably still novel and interesting to the scientific community. So the idea itself hasn’t lost its merit, but your valuation of it has diminished.

Is novelty more important to scientists than the pursuit of knowledge?

We lose confidence in the goals we set out to achieve.

It’s not that we lose confidence in our ability to achieve these goals. Although that’s also something that is extremely common and important, it’s a more gradual process that affects some people more than others. What I mean is that the goals themselves seem less within reach because of trivial and often illogical reasons.

Every setback – however small or easy to overcome – leaves a lasting mark on how “achievable” we judge a goal to be. It seems absurd that a reagent turning out to be faulty, and the three weeks you wasted running Western blots in vain, should affect how likely it is that protein X protects against disease process Y. But this rather subtle logical fallacy (not sure if there’s a specific term for it in psychology, but there should be) commonly affects researchers and can be devastating.

This cumulative process sneaks up on you – seemingly harmless assumptions about your data pile up and little workarounds coalesce into the stuff of nightmares. Until finally, a swarm of bees invades the lab because someone left the window open, and you just go home and give up on the entire project.

To me, proof that these small frustrations are responsible for destroying good ideas is that the people “in the trenches” are most prone to this loss of momentum. The undergraduate, MSc, and PhD students doing the hands-on work. I don’t think it’s because they’re less experienced and therefore less resilient. It’s just that they’ve seen things during the course of a project that their seniors, preoccupied with “bigger picture” thoughts, have not. Things that eventually have these poor souls perpetually repeating “Yeah, it was a nice idea – but it’s just not that simple“.

Give us a little push, and we’re off.

The problem may be that researchers, myself included, have low inertia – it’s easy to get us excited. Indeed, it’s very easy for us to excite ourselves as well (“What a great idea, if this works it could be groundbreaking!”). At least in the beginning when an idea is still fresh. This intellectual enthusiasm is a fundamental characteristic of a scientist, and I’m not saying it’s a bad thing.

What’s obviously bad is not pausing to think – really think – about an idea before diving into testing it. The sad thing is, the vast majority of researchers do stop and think. We list every possible outcome and every setback we can think of, until we’re convinced that we’re prepared for what’s to come. But often it’s still not enough to keep the momentum going down the road.

At this point in my career, I’m not sure if our low inertia is the gift/curse that will bless/haunt researchers forever no matter what we do, or if we can learn to be better at taking advantage of its perks and avoiding its drawbacks.

Scientists have low inertia.

 

I’d really like to hear what other scientists think about all this – because it could be just a matter of personality. Perhaps I’m an impatient defeatist with low self-confidence (trust me, I’m at least a little bit of each of those things) and thus, everything I mentioned applies to people like me but not the vast majority of researchers.

I have no clue – all I know is, even as I’m writing this post, it’s starting to look less like something that should be read by intelligent people and more like the incoherent ramblings of a frustrated graduate student.


Featured image: “Blurred motion Seattle Wheel at dusk 1” by Brent Moore, Flickr http://bit.ly/1UikTFI 

Scientists’ Creative Conundrum

It’s a great privilege of mine to be one of the editors of our graduate program’s Charité Neuroscience newsletter (CNS – http://bit.ly/1HglVvV). One of the things I’ve noticed while editing the past seven issues is that, in general, scientists are great at journalism. We do our research diligently and excel at fact-checking and carefully interpreting what we find. But one thing that has stood out for me is that, in our quest to be accurate and factual, truly “scientific” if you will, creativity often takes a beating.

In journalistic style, billboards are short excerpts from an article, usually placed somewhere on the page in large letters and designed to draw the reader’s attention as they flip through the pages. They’re meant to be concise and catchy, but informative. Many of our writers, myself included, struggle with these little snippets – they don’t tell the whole story and can be outright biased or easily misleading (intentionally or otherwise). They’re also often fairly abstract and, as scientists, we just find them plain shady. A lot of substance is lost when brevity and attractiveness are the main goals – and we can’t really handle that – it feels … icky.

I think our reluctance to season our writing with creativity stems from the relationship between science, scientists, and the media. It’s no secret that scientific studies are sometimes outrageously misconstrued by the mainstream media. I’m not talking about when the media reports on bad science as if it were good – that’s ignorance, and it’s forgivable – but when good science is misrepresented for the sake of attracting readers. This happens on a daily basis, and any scientist will tell you that it makes our jobs much more difficult. Not only does it spread false or inaccurate information, it also erodes people’s trust in science, especially when these stories are eventually debunked. So that’s why, when scientists write, our (somewhat condescending) attitude is: we should know better (than to do what they do)!

Of course, one difference between journalists’ and scientists’ writing is the intended audience. If, like the former, you intend to reach everyone then sensationalizing and exaggerating (at least just a little) might be unavoidable. Perhaps scientists need to accept that they’ll always have a limited readership (some call it the “wider scientific community”, whatever that means). That might help us write in a more attractive way without undermining our principles. The question is then, is it really “science journalism” if we know, even as we write it, that it’s not meant for everyone?

I want to write!

Probably not something you hear very often from a PhD student. Too reminiscent perhaps of that troublesome manuscript gathering proverbial dust for the past twelve months deep in the recesses of your hard drive, writing isn’t something a typical scientist yearns for. But after an embarrassingly long hiatus from posting on this blog, I realize – just now, tonight, as I’m writing this – how much I miss it. And how important it is to me.

Not that I haven’t tried – over the past (gulp) two years or so, I’ve told myself repeatedly that I want to get back on the blogging bandwagon – revamp this site, give it some flair to reflect the “new” me. The now seasoned (hah!) doctoral student – my countless meetings and discussions outnumbered perhaps only by the ensuing failures and frustrations.

All I’ve managed to do in that time is accumulate a dozen or so drafts of different blog posts (I truly despise that word now, draft, errgh). Sitting there on my computer, waiting. For what? Inspiration, I often tell myself – typical scientist behaviour.

So here it is – no long-winded comeback speech, promises of inspiring life lessons or anything of the sort. Just this – a short post to kick it off. Like everything in my life at the moment, the rest will come slowly – piece by piece.

C’est fini!

I finally wrapped up a short but eventful semester in France. The term ended with a brutal two weeks of exams followed by a presentation of my master’s thesis proposal in front of a jury from the Université Bordeaux Segalen. Overall, the three and a half months I spent in France were pleasant. The exams were some of the most challenging I’ve had – I want to say ever, but at least that I can clearly remember. I must say I learnt a lot about psychopharmacology and addiction in Bordeaux (also, some French – enough to buy a baguette or ask for directions). Despite the linguistic challenges (see my ‘Douleur dans le pouce’ post), my time in France was a worthwhile experience.

As for Edinburgh, I’m about to finish the second module of the year before the Christmas holidays. Since I posted last, we’ve been focusing on acute medicine and clinical decision making. Surprisingly, I’ve found this module to be closely linked to neuroscience – cognitive neuroscience and psychology to be specific. We learned about all sorts of theories relevant to decision making in clinical practice, many of them related to theories of basic cognition and working memory (another area of neuroscience I learned a lot about in Bordeaux). I’m also writing up a review of a neurological emergency for this module’s assessed assignment, somewhat bringing my two worlds together.

Juggling both my neuroscience MSc and my MSc at Edinburgh has been particularly challenging this semester. Exams overlapped, assignment deadlines for both programs often fell within the same week, and I found myself stretched quite thin at times. This might be due to the fact that lectures in Bordeaux are long – four hours apiece, twice a day! Now I have a month (until mid-January) away from all my studies to relax – well, kind of (I’ll be working on PhD and specialization applications a bit – but how bad can that be?). I’ll be spending this precious time at home in Sudan with my family. Also, blogging (I promise) – about all the little ideas I’ve had since October that I haven’t had time to put up here.