Saturday, May 14, 2005

The death of a university

I’ve just come across an interesting tale from a land far away where a major university went out of business a few weeks ago. It used to have an outstanding international reputation: researchers from all over the world came to ply their trade, and the competition for undergraduate places allowed fees to rise to levels never before seen. Life in these ivory towers used to be very good indeed.

But then, the dream turned sour. The main factor which put the university out of business was the success of a small software company started by academics from the university itself - deceptively named EasyWise. In its early days this company went along with the prevailing fashion in its field: knowledge engineering. They tried to develop so-called expert systems, which embodied traditional academic expertise. This wasn't a threat to the university because the systems never really worked; it turned out that the academic experts didn't understand their theories well enough to explain them to the software experts. And besides, EasyWise and other companies in the field needed experts trained by the universities to develop the software which never worked. Far from being competition, this was actually creating more demand for the university's products.

The second phase of the knowledge engineering game was even more in the university's favour. All pretence of software systems replacing the experts was now gone; the experts now needed the software to do their business, and the software industry needed the universities both to supply the experts to write the software and to provide a demand for the software. Business boomed all round.

Up to this point EasyWise had been a very minor player. Then they saw their chance. Much of the expertise which the experts possessed was unnecessarily complex. It was like using Roman numerals instead of the modern place-value system, or using a DOS operating system rather than Windows. The dream was to simplify quantum mechanics and analysis of variance so that they could be understood, in a useful sense, by any moderately intelligent person with a bit of patience. EasyWise aimed to be at the forefront of marketing this new brand of "knowledge which is so simple you don't need to attend lectures or fail exams" - as their promotional literature had it.

The academics, of course, all knew that this wasn't possible. They were so sure that they hardly bothered to acknowledge the challenge. But they were wrong. Quantum mechanics has yet to succumb, but analysis of variance has been largely superseded by computer simulation methods, and many other areas used by academics to fail their students are either ignored or replaced by far more straightforward perspectives. Many EasyWise projects, following the slogan "if people don't understand it, change it", were startlingly successful. (In a few areas the simple version was so obvious and vacuous that even EasyWise couldn't sell it.) The reason it seemed impossible was that no one had tried it because everyone knew it was impossible.

EasyWise was seen as the local Microsoft or Google. Their business boomed, but the university's slumped as employers realised that someone armed with some "easy wisdom" (to quote the sales literature again) was far more use than a university-trained expert. They increasingly came to see that conventional courses encouraged students to focus on assessment to the exclusion of almost everything else. Successful students passed their exams with flying colours, but could not apply their ideas in different contexts. Everybody had always known this, of course, but now that EasyWise was providing a genuine alternative, the sheer uselessness of examination-oriented training was becoming clear for everyone to see. Prospective students quickly realised that employers realised all this, and that the "easy way" was actually more fun than sitting in lectures and failing exams. EasyWise "graduates" even started to get research jobs in universities.

But then, gradually, EasyWise's dominance lessened as their role became less and less necessary. Their product was simple ideas supported by very simple software. Simple ideas couldn't be copyrighted or patented, and simple software was easy to mimic. Soon the web was full of copies of everything EasyWise had striven so hard to achieve. All of it was free, and much of it was better than the original. They may have been the catalyst for the new idea, but EasyWise's business was now on just the same downward slope as the university's.

At this point the university could probably have got itself back into the market. Simple ideas were by no means easy to develop, and their use did benefit from training of some kind. Here were two useful roles the university could have stepped into.

However, they didn't. They chose to compete on two very different fronts, with disastrous consequences. First, they ran a campaign to reinforce the prestige of university learning, and to deride the "simpletons of the easy way" and the "dumbing down" of everything. And to ram the point home they made their wares even more esoteric and complex than before. Second, they were forced to lower their exam pass marks to keep potential students enrolling, despite the work being made more difficult.

Both tacks were disastrous. The lowering of standards was obvious to all, and their reputation descended to that of peddlers of pretentious twaddle, understood by nobody, not even many of those supposedly teaching it.

This story is little known outside the land in question. The university's decline is obvious, but the reason - competition from the simple way - is hidden by deeply-held assumptions that this could not be a real substitute for traditional learning. Those in the knowledge business who do understand what happened are grateful that their customers don't: they understand the common interest of universities and all the other players in the business in keeping things difficult so that the rot of the simple way cannot take hold.

Sunday, April 17, 2005

A research proposal on risk management in a college of bungee jumping science

Reading lots of masters students’ research proposals at this time of year. Some of them are commendably thorough – like one on risk management at a college of bungee jumping science. I’ve summarised this below; the full version gave much more detail and was peppered with references, mainly to justify the obvious and the stupid.

The aim of the proposed research project is “to review the effectiveness of the college’s risk management strategy, and to recommend any necessary improvements”. The methods proposed were “qualitative, because this will enable the researcher to investigate the issues in depth and generate insights into contextual meaning for the situational actors.” Quantitative methods were rejected on the grounds that they are “positivist” and “superficial” and that they “ignore the social construction of reality”. The proposal then went into more detail about the selection of a “purposive” sample of “key stakeholders” within the organisation, about “in-depth, structured interviews”, and about how the data was to be “triangulated” (in ordinary language, checked against other sources). This was to be backed up by documentary analysis of key internal documents, and a benchmark study of another college recognised as “best in class” for risk management. One possibility specifically rejected was looking at any other colleges: restricting the study to the student’s college and the best-in-class college would make the project more “focused” and, besides this, was necessary because the college was “unique”, so “cross college generalisations of a statistical nature would be demonstrably meaningless”.

This was all very impressive and pressed all the right academic buttons, and so I gave it an excellent mark. Despite this, of course, in the real world it’s a complete waste of time because all it will do is recycle the prejudices and biases of the “key stakeholders”. I made a few gentle comments about extending the database with data from other sources within the college and from other colleges, about statistical information having a useful place in this sort of research, and about casting the empirical net as wide as possible so as to find out about as many risks and risk management strategies as possible.

Even this really misses the point because it’s all based on what’s actually happened. The problem is that the real disasters to come have probably not occurred anywhere yet. Any research into risk management – particularly for a college of bungee jumping – needs a way of exploring possibilities which have not yet occurred. The research needs to consider what might happen as well as what has happened.

But research is based on facts, so I can’t say this.

In fact all these criticisms largely miss the point. Projects like this, and probably risk management strategies too, are really just games. Nobody seriously expects them to deliver anything useful, so all that matters is that they score highly by the conventional rules of the game.

Saturday, March 26, 2005

Is academic knowledge really any use?

I frequently notice reports in the media about people who have achieved a lot despite lacking a university education: successful millionaires, authors, media people, chefs, inventors, criminals, artists who’ve managed to get where they are without the benefits of a formal grounding in academia.

Obviously, this should not be allowed, and in some fields it isn’t. The universities largely run the show in science and sociology, medicine and mathematics, so in fields like these you do need your degree.

To my way of thinking there are three possibilities. There are (or there may be?) fields – like perhaps rocket science – where you really do need a university education to do anything serious. The knowledge is cumulative in the sense that to design your rocket you need to be able to do A and B, and in order to do A you need to do C and D, and in order to do B you need to do E and F, and in order to do C you need to be able to do J and H … No short cuts: you really do need to understand Z. You can’t do it by browsing the web and finding instructions for A and B, because you won’t understand C and D and so on. You need the discipline of a structured course and the support of experts to succeed.

Or do you? Einstein didn’t.

At the other extreme are fields which the universities have not colonised. You wouldn’t normally enrol at your local university if you wanted to be a pop star or a TV presenter.

Which leaves an enormous grey area in the middle. Running a business, writing a novel, teaching – the universities would like you to think that courses on business studies, creative writing or education are essential here, but experience and common sense often suggest the opposite. Does an academic background here really help, or does it just help you translate the obvious into jargon?

Tuesday, March 15, 2005

Why am I writing this?

When I was considering setting up this blog a colleague asked me why I wanted to do it. The implication was that it would not score in the RAE (the UK universities’ Research Assessment Exercise) and so was not worth doing. We only think and write stuff for brownie points which can be cashed in for more money.

I find this very depressing. I’ve just been to a meeting to try and get some research moving. It was all about how to find out which journals would score highly and how to organise co-authorships to everyone’s advantage. But almost nothing on the actual topics of our research.

Similarly, there’s no culture of reading and commenting on each other’s work. No real interest in the power of thought, and in how the world can be improved. Just scoring points for a meaningless competition. If we get lots of points we’ll be able to hire more people to do more research to score more points and get more exhausted. But what’s the point of these points?

This is where this blog comes in. It will score no points but I can say what I think.

Monday, March 07, 2005

Marking the exams

Marking now finished. What a relief. I find marking very difficult to stick at. I will mark one question, then get up for a coffee. Or enter the mark on a spreadsheet. Or count the number of questions yet to be marked, or the percentage of the job I’ve completed. Not sure why it is so difficult to force myself to mark exams. But the task is now 100% complete. Just the second marking to come – about which more later.

It isn’t as though I do the job particularly conscientiously. In fact it feels pretty arbitrary. One part of one question on one script gets 6 out of 20 marks. Why? Because they haven’t got the main point, but have scribbled down something vaguely relevant. The next script gets just 3 for the same question. The scribbling seems a little less relevant. Or is it? What do I mean? I should check back. In fact I should be cross checking all the time. But I don’t, of course. The lure of the end of the pile is too strong.

Again, I am struck by how silly some of my questions are. One part asks candidates to explain how to do something. The next part asks about difficulties and how to get round them. But this should be part of a good answer to the first part. So a good candidate will have little to say here. I try to remember to fudge the marking scheme, but have a nagging worry that some candidates might have given such a good answer to the first part that they had nothing to say for the second.

Another question is about a formal method of making decisions, applied to menu planning for a person allergic to carrots (they do turn some people red). One of five menu options, according to the question, is carrot soup. The first part asks for recommendations which can be made on the basis of the information given. The answer I intended as the right one was simply not to serve carrot soup. Few candidates got this right!

Instead they came up with the recommendation to use the formal method, or they thought about their tastes and the tastes of their friends and came up with a recommended menu (usually the chicken with fried chocolate ice cream) on this basis. Either answer got no marks. They almost certainly dismissed the right answer as too obvious to be worth writing down. As would any sensible person.

The next part asked candidates to describe the process for eliciting taste preferences and health data from the customer. Many thought they had answered this in the first part, and others made up data about typical customers instead of describing how to get this data.

There are real confusions here. The obvious common-sense answer is right for the first part – don’t serve carrot soup. But not for the second part: the obvious answer that everyone is going to like chicken with fried chocolate ice cream is deemed unacceptable.

I really must set clearer questions next time which make it clear what can be assumed and what can’t be, and what type of answer is expected. But then the questions may turn into philosophical treatises and the difficulties may well increase. Help!

Wednesday, March 02, 2005

Oh what rubbish a lot of this research is

I’ve just had to dip into the research literature again. Always depressing.

It’s easy to mock a lot of the qualitative style of research. Talk to a few friends and write it up with a liberal sprinkling of long words like social constructionism, hermeneutics, ethnography, phenomenology etc. Then it counts as more than the prejudices of your friends, or an excuse to visit gastro pubs (the focus of one colleague’s research); rather, it’s empirical data, and every word is recorded and transcribed and taken very seriously.

But at least the reader (a rare beast in the research game, but we have to believe they exist) gets to know something about the topic. They can read about the beer and the food. The so-called positivist style of research is usually far worse, partly because positivists assume it’s so much better. I’ve just come across one article which is fairly typical. It’s on the newly discovered broccoli-carrot ratio as a predictor of gym membership.

According to the article the researchers looked at lots of possible variables to see if they could relate them to gym membership. Number of children, marital status, education level, and so on and so forth. The only significant relationship was with this ratio between the amount of broccoli consumed and the amount of carrot. The p value cited was less than 0.01% - so they obviously think it’s pretty important.

It isn’t, of course. The significance level just tells you that the relationship is unlikely to be down to chance; reading the detailed statistics, the relationship is actually a very small one. People who eat more broccoli than carrot are ever so slightly more likely to belong to a gym. That’s it.

They make no comments on why it might be so, or on what use knowing about this relationship might be, or on whether there’s some causal mechanism driving those who eat more broccoli than carrot to the gym. And no mention at all that this is a tiny correlation. They assume that because it uses esoteric statistical methods which show it’s statistically significant it must be important.
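To see how significance and importance come apart, here is a back-of-the-envelope simulation. It is my own illustration, not taken from the paper: the sample size, the 20% membership baseline and the tiny 0.008 effect are all invented for the sake of the example. With a big enough sample, a correlation of roughly 0.02 produces a p-value far below the 0.01% cited, while remaining practically worthless.

```python
# Hedged sketch: a tiny effect can look "highly significant" in a large sample.
# All numbers below are invented for illustration, not from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000  # a large survey sample

# Hypothetical broccoli-to-carrot consumption ratio (standardised, mean 0)
ratio = rng.normal(loc=0.0, scale=1.0, size=n)

# Gym membership: 20% baseline probability, nudged up very slightly by the ratio
prob_member = 0.20 + 0.008 * ratio
member = (rng.random(n) < np.clip(prob_member, 0.0, 1.0)).astype(float)

r, p = stats.pearsonr(ratio, member)
print(f"correlation r = {r:.3f}, p-value = {p:.1e}")
# Typical output: r is roughly 0.02, yet p is far below 0.0001 --
# statistically "significant", practically negligible.
```

The arithmetic behind it is simple: with n this large the standard error of a correlation is about 1/√n ≈ 0.003, so even an r of 0.02 sits many standard errors away from zero and the p-value collapses, regardless of how trivial the effect is in practice.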

Not so. And unfortunately this paper is typical of the stuff we’ve got to force our students to read in the name of academic rigour.

Wednesday, February 23, 2005

What to tell the students about the exam?

Exam time approaches. There’s always a fine line to be drawn between telling the students the answers and finding they don’t know them. One of my colleague’s courses last year collapsed amid complaints of bad teaching and unfair assessment. And to cap it all off he lost a lot of prime time in the summer setting another exam paper, organising revision sessions and marking. Not a good idea. I must make sure the students know how to do the questions in the exam paper.

Telling them the questions and how to do them would obviously be cheating, but to play fair to everyone (especially me) I need to make sure I show them similar questions.

I’ve left the last session for a review of the course. The students take this to mean a preview of the exam and turn up in large numbers. They sit waiting expectantly. I’ve told them I will respond to questions only. I would like questions on some interesting aspect of the subject. Instead, the first question is

“How do you do Q4 on the 2003 paper?”

I try to answer it but without writing it out word for word, because if I do that they will memorise what I write and it will appear in their answers to the real exam. So I write some bullet point notes and talk in rough terms about the answer.

But even with this modest aim, I find two other problems. First, bits of the question are silly. What other forms of analysis does the package provide – for twelve marks! The straight answer is just a simple list of the names of four features of the software, which is trivial. Perhaps I wanted more, but the question doesn't ask for it. And even these four things should have been included in good answers to the previous part.

I try to skate over this by mentioning the four features quickly and moving on. Seems to work. Nobody says anything. But then, their agenda is to see what I say, and the more they say, the less they get to learn about what I say. Which is obviously the key to finding out what’s in the exam.

The second problem is that I don’t really know some of the answers myself. I tell them about the importance of two dimensions being separate in a fairly technical sense. This is illustrated by a convincing-sounding story about meals and holidays. But this has nothing to do with the question at hand, which is about people being appointed to jobs. How to explain the connection? I’m not at all sure, but I have a sneaking suspicion that the issue is either too obvious to be worth mentioning, or too complicated for mentioning it to be feasible. (This suspicion is confirmed later after a few hours mulling over the issue.) Either way, I’m bullshitting, so I talk quickly and move on.

This all seems to work. Last year only one student failed, although the external examiner thought the paper fair and challenging. The reason they all passed was that they’d practised on the previous year’s paper, which was very similar. All I have to do is go on setting very similar papers. The fact that some questions are silly, and others are impossible, doesn’t matter because all the students need to do is absorb the criteria I use for judging good answers. And they manage to do it wonderfully well, and the course is a marvellous success.

It isn’t really of course. The questions I asked are just a tiny proportion of the sort of questions the students should be facing. But if I did have the ability (a dubious assumption) to make the papers less stereotyped and set more varied questions, the students would fail their exams. No, I have to go on setting the same questions year after year. To do otherwise would be breaking the rules of the game and would be unfair to everyone except students who want a genuine challenge. But then such characters won’t be on our course, will they? We have to respond to the market.