Tuesday, July 23, 2013

Examining a PhD

I was talking to a colleague at another university recently about a candidate she had just examined as internal examiner. Like many internal examiners she didn't know much about the subject - a fairly technical one which non-specialists feel, perhaps erroneously, that they can cope with. So she was reassured to meet the external and realize that he was a genuine expert - he definitely knew what he was talking about.

From then on, my colleague's sense of reassurance started to disappear. First the external asked if there was any reason why the candidate must pass. He was obviously referring to financial ties with the sponsoring organization. The university administrator mumbled no, of course not, in a rather embarrassed way, and the viva got under way.

It was obvious that the candidate knew little about the topic, and his research seemed to consist of little more than the application of a computer program to his case study. Strangely, some of the outputs from this program were negative, in a context where negative numbers made little sense. It was a bit like estimating the age of some fossils and getting a negative answer, indicating that the fossils were laid down in the future! The candidate was asked for an explanation. He did not know. He was also asked about the computer program. What models was it based on? Where did the answers come from? Again the candidate obviously did not know.

At the end of the viva the candidate was asked if he had any questions or comments. The candidate's supervisor, sitting listening to the viva, then put his hand up and said, yes, he had something to say. He explained that the reason for negative numbers was that the program was comparing two things. So it was a bit like saying that the fossil was a million years younger than another fossil, which of course made sense. But the candidate did not understand this well enough to explain it himself during the viva.

What to do? My colleague's view was that the candidate should fail, or perhaps be asked to do some extra work and resubmit for an MPhil. At the very least, as well as explaining the negative numbers, she thought the candidate should explain and evaluate the model on which the program was based.

The external, however, disagreed. He thought the candidate was not capable of doing this and so should not be asked. He was the expert. My colleague had no real expertise in the area, and was supporting the home team, so she agreed. The candidate was asked to do a few simple things, tailored to what he was thought to be capable of. He was awarded his PhD a few months later, despite the fact that he really did not know much about the topic.

Does this PhD really mean anything?

Tuesday, March 26, 2013

Winning an Oscar, living longer, and the strange idea of a p value



“Win an Oscar, live longer” said the Sunday Times headline on 27 February 2011. Oscar-winning actors, apparently, live 3.9 years longer than other actors. Presumably Daniel Day-Lewis, with his three Oscars, has booked himself an extra 12 years or so to savour his success!

This was based on an article, Survival in Academy Award–Winning Actors and Actresses, published in the Annals of Internal Medicine in 2001. How can we be sure this is right? The statistic given in the article to answer this question is p = 0.003. This is the so-called p value and is the standard way of describing the strength of evidence in statistics. 

The p value tells us that the probability of observing data as extreme as this (from the perspective of winners surviving longer than non-winners), on the assumption that winning an Oscar actually conferred no survival advantage at all, is 0.003, so there must be something about winning an Oscar that makes people live longer. Obviously, the lower this p value is, the more conclusive the evidence for winners living longer.

Confused? Is this really obvious? The p value is a measure of the strength of the evidence that does not tell us how likely the hypothesis is to be true, and has the property that low values indicate high levels of certainty. But this is the system that is widely used to report the results of tests of statistical hypotheses. 
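To make the definition concrete, here is a minimal simulation sketch. The survival figures (group sizes, mean lifespan, spread) are invented for illustration and are not taken from the Annals study; the point is only to show what a p value actually measures - how often pure chance, under the assumption of no real advantage, would produce a gap as large as the one observed.

```python
import random

random.seed(42)

# Hypothetical illustration: the survival figures below are invented,
# not taken from the study. Under the null hypothesis, winners and
# non-winners are drawn from the same survival distribution, so any
# observed gap in mean lifespan is pure chance.
observed_gap = 3.9  # years, the advantage reported in the article

def simulated_gap(n_winners=100, n_others=1000, mean=76, sd=10):
    """One draw from the null world: both groups share one distribution."""
    winners = [random.gauss(mean, sd) for _ in range(n_winners)]
    others = [random.gauss(mean, sd) for _ in range(n_others)]
    return sum(winners) / n_winners - sum(others) / n_others

# The p value is the fraction of null worlds producing a gap at least
# as extreme as the one actually observed.
trials = 10_000
p = sum(simulated_gap() >= observed_gap for _ in range(trials)) / trials
print(p)
```

With these made-up numbers the simulated p value comes out very small: a 3.9-year gap almost never arises by chance alone, which is exactly the sense in which a low p value counts as strong evidence.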

Another way of analyzing the result would be to say that the evidence suggests we can be 99.85% confident that Oscar winners do, on average, live longer – as suggested in “P values, confidence intervals, or confidence levels for hypotheses?”. This seems far more straightforward, but nobody does it this way. P values dominate, despite, or perhaps because of, their obscurity.
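The arithmetic behind that reinterpretation is simple: a two-sided p value converts to a one-sided confidence level as 1 - p/2. A one-line sketch:

```python
# Converting a two-sided p value into a one-sided confidence level,
# as in the reinterpretation suggested above: confidence = 1 - p/2.
p_value = 0.003
confidence = 1 - p_value / 2
print(f"{confidence:.2%}")  # 99.85%
```

The same evidence, expressed as "99.85% confident that winners live longer", at least says directly what most readers assume a p value is telling them.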

There is another big problem with this research. In 2006 the journal published another article, Do Oscar Winners Live Longer than Less Successful Peers? A Reanalysis of the Evidence, pointing out a major logical flaw in the research design. Actors who live a long time obviously have more chances to win an Oscar than those who die young. The authors cite an 1843 study pointing out “the greater longevity of persons who reached higher ranks within their professions (bishops vs. curates, judges vs. barristers, and generals vs. lieutenants).” The original study failed to take account of this; when this factor is taken into account, the additional life expectancy is only one year, and the confidence that winners will live longer is 93% (which is conventionally not considered statistically significant).

This is obviously a separate problem from the p value problem, but it does make me wonder whether obscure statistics, of which the p value is just a minor part, can help researchers hide the logical flaws in their study, perhaps even from themselves.

Even more worryingly, the Sunday Times article claiming Oscar winners live longer was published five years after the article challenging the original research, and included a quote from the author of the original research saying that they get “more invitations to cool parties. Life is better for Oscar winners.” Why let truth get in the way of a good story?

The curse of deep skepticism



My problem is that I don’t believe in most of the things I am supposed to hold dear as an academic. Few of the usual assumptions about what is worth teaching and researching make sense to me. Values like the importance of striving for the truth (what’s that?), and of preserving academic “standards” seem, to me, misguided, meaningless, or at best, over-rated. I can’t get worked up about plagiarism like most of my colleagues – why should using the exact phrase used by someone else be such a sin, and if your first language isn’t English, isn’t this the obvious thing to do? I have never been able to join groups with common, comfortable assumptions about what is worthwhile, and to get on in academia that is what you need to do. I pay lukewarm lip service to some of it – like the conventional methods of statistics as an approach to research, like papers published in top journals being a gold standard for research, like the value of a first class degree, and so on – but I can never actually believe in much of it. 

This blog is an attempt at therapy for my condition. It is intended to articulate some of my skeptical thoughts, to help me understand what I really think. Obviously, I am not expecting many readers, but if you are reading this then I am pleased, and would appreciate your comments, even, perhaps especially, critical ones. (Incidentally, many of the details, like the project on leadership and the third age, are fictional, but the spirit of my comments is deeply felt.)