Head-first into the research abyss

The beginning of October officially marked the start of my career as a scientific author. The project was a small epidemiological study I conducted in my second year of medical school, in collaboration with a colleague of mine and a visiting physiology professor. The survey took months of hard work, including hours of data collection from randomly approached subjects on the streets of Khartoum. This was before I had received any formal education in research methodology (although I had taken a course in biostatistics), and I have learned a lot since then. The process of publication itself turned out to be an even lengthier and more tedious task than the data collection, analysis and writing-up of the article – which I had not expected, this being my first experience with scientific journals. I finally breathed a sigh of relief a couple of weeks ago when I saw the article in print. Below is a link to the full-text version of the paper.

http://www.jpma.org.pk/full_article_text.php?article_id=3712

Inspiration at Edinburgh

I have only been a student at the University of Edinburgh for five weeks, but it's not hard to see why its style of education has produced so many visionaries and pioneers. Just last week I read a paper provided as part of my statistics course – it claimed that the majority of scientific research findings are false. Well, it actually did more than that. Being a scientific paper itself, it set out to prove that most findings are false. And I'm talking proved in ways that medical people rarely understand – mathematics-ey ways! The first thing that crossed my mind was: this guy (the author) must be some untrustworthy, unknown person trying to make a name for himself by claiming something radical, and attempting to prove it with complex mathematics that he knows us poor maths-illiterate medical folk would never comprehend. It turns out the author is one of the most respected authorities on the subject in the world, and the maths makes sense (I know, because after trying to wrap my head around the first equation, I gave up and asked a statistician).

So I thought – this guy makes some really good points. I had been taught in medical school (in my biostatistics and research methodology courses) that the reliability of any given research result depends on p-values, sample sizes, the presence of bias and so on – and that if all of these are 'good enough', I can assume the finding itself is correct. But I wasn't so sure anymore. The core of the argument, as I eventually understood it, is that a 'significant' p-value is not enough on its own: the chance that a positive finding is actually true also depends on how plausible the hypothesis was before the study was ever run, and most hypotheses we test turn out to be long shots. The paper's conclusions surprised me, and I quite honestly didn't know what to believe anymore! In medicine, decisions that affect a person's health and well-being (which drug to use, what test to run) depend on research results being valid. The idea that the anti-hypertensive medication I prescribed to that pleasant 65-year-old lady a few months ago may not have been a good choice is frightening. As doctors, we rely on evidence-based decisions every day – they are what we fall back on. The days of 'I chose drug A because I read it in a book/so-and-so told me it's the right choice' passed long before I could even say the word 'anti-hypertensive'. Then came the days of 'I chose drug A because this study proved it to be better than drug B'. Now it seems that this phase of medical practice may soon be drawing to an end as well, at least if we don't change the way we interpret research.
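
A toy back-of-the-envelope simulation makes the point concrete. To be clear, this is purely my own sketch of the style of argument, not the paper's actual maths, and the numbers are illustrative assumptions: suppose only 10% of the hypotheses we test are actually true, our studies have 80% power, and we declare significance at the usual p < 0.05.

    import random

    # My own illustrative sketch of the argument, NOT the paper's maths.
    # Assumed numbers: 10% of tested hypotheses are true, studies have
    # 80% power, and results are called significant at p < 0.05.
    PRIOR_TRUE = 0.10   # fraction of tested hypotheses that are actually true
    POWER = 0.80        # chance a study detects a real effect
    ALPHA = 0.05        # chance a study 'detects' an effect that isn't there

    true_positives = 0
    false_positives = 0
    for _ in range(100_000):              # simulate 100,000 studies
        if random.random() < PRIOR_TRUE:  # the hypothesis really is true...
            true_positives += random.random() < POWER
        else:                             # ...or it isn't, but p < 0.05 anyway
            false_positives += random.random() < ALPHA

    significant = true_positives + false_positives
    print(f"'Significant' findings: {significant}")
    print(f"Fraction actually true: {true_positives / significant:.0%}")

With these (fairly generous) assumptions, only about two-thirds of the 'significant' findings are real – and the fraction falls quickly once the hypotheses being tested are long shots, or bias creeps in.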

After the initial shock subsided, it dawned on me that this was a good thing – it's important to know that what we think works may not work as well as we believe it does! It gets you thinking of ways to improve, whether that means being more critical when reading a scientific paper or trying to develop more reliable statistical methods for future use. I now realize that only a truly inspirational academic institution like the University of Edinburgh could get someone like me, so fixed in the ways hammered into him during medical school, to challenge the status quo and think outside the box.