Reading lots of master’s students’ research proposals at this time of year. Some of them are commendably thorough – like one on risk management at a college of bungee jumping science. I’ve summarised this below; the full version gave much more detail and was peppered with references, mainly to justify the obvious and the stupid.
The aim of the proposed research project is “to review the effectiveness of the college’s risk management strategy, and to recommend any necessary improvements”. The methods proposed were “qualitative, because this will enable the researcher to investigate the issues in depth and generate insights into contextual meaning for the situational actors.” Quantitative methods were rejected on the grounds that they are “positivist” and “superficial” and that they “ignore the social construction of reality”. The proposal then went into more detail about the selection of a “purposive” sample of “key stakeholders” within the organisation, about “in-depth, structured interviews”, and about how the data was to be “triangulated” (in ordinary language, checked against other sources). This was to be backed up by documentary analysis of key internal documents, and a benchmark study of another college recognised as “best in class” for risk management. One possibility specifically rejected was looking at any other colleges: restricting it to the student’s college and the best-in-class college would make the project more “focused” and, besides this, was necessary because the college was “unique”, so “cross college generalisations of a statistical nature would be demonstrably meaningless”.
This was all very impressive and pressed all the right academic buttons, and so I gave it an excellent mark. Despite this, of course, in the real world it’s a complete waste of time, because all it will do is recycle the prejudices and biases of the “key stakeholders”. I made a few gentle comments about extending the database with data from other sources within the college and from other colleges, about statistical information having a useful place in this sort of research, and about casting the empirical net as wide as possible so as to uncover as many risks and risk management strategies as possible.
Even this really misses the point, because it’s all based on what’s actually happened. The problem is that the real disasters to come have probably not occurred anywhere yet. Any research into risk management – particularly for a college of bungee jumping science – needs a way of exploring possibilities which have not yet occurred. The research needs to consider what might happen as well as what has happened.
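To make that concrete, here is a minimal sketch of the sort of forward-looking analysis I have in mind: a toy Monte Carlo simulation in Python. Every failure mode, probability and cost in it is invented for illustration; the point is only that this style of analysis forces you to write down possibilities that have never yet been observed.

    import random

    # Toy Monte Carlo sketch for a bungee jumping college.
    # All probabilities and loss figures below are invented for
    # illustration; a real study would elicit them from experts
    # and stress-test the assumptions.
    FAILURE_MODES = {
        "cord_snap":        (0.0005, 2_000_000),  # rare, catastrophic
        "harness_misfit":   (0.0100,    50_000),  # common, cheap
        "platform_failure": (0.0010,   500_000),
        "novel_unknown":    (0.0002, 5_000_000),  # the disaster nobody has seen yet
    }

    def simulate_year(jumps=10_000):
        """Total simulated loss for one year of operation."""
        loss = 0.0
        for _ in range(jumps):
            for prob, cost in FAILURE_MODES.values():
                if random.random() < prob:
                    loss += cost
        return loss

    # Distribution of annual losses over 1,000 simulated years.
    losses = sorted(simulate_year() for _ in range(1_000))
    print(f"median annual loss: {losses[500]:,.0f}")
    print(f"95th percentile:    {losses[950]:,.0f}")  # the tail risk

Even a toy like this makes you put a number on the “novel_unknown” row – which is precisely the conversation that in-depth interviews with key stakeholders will never produce.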
But research is based on facts, so I can’t say this.
In fact all these criticisms largely miss the point. Projects like this, and probably risk management strategies too, are really just games. Nobody seriously expects them to deliver anything useful, so all that matters is that they score highly by the conventional rules of the game.