Tuesday, August 12, 2014

The cult of the truth

Everyone seems to believe in the truth. By which, of course, they don’t mean the truth in which other, misguided souls believe, but their own truth, which is obviously the right one. The devout Christian has a different version from the devout Muslim, and the devout atheist will think they are both mad.

It is not just religious maniacs who believe in the truth. It is deeply embedded in the world view of science, of common sense, and even of fields of academic inquiry which see themselves as hostile to what they perceive as science. The truth rules supreme everywhere, or so it seems.

But what is truth? When we say something is true we usually mean, I think, that it corresponds to reality – the so-called correspondence theory of truth. But what is reality, and how can human ideas “correspond” to it? Surely human ideas are a completely different type of thing from reality, so what sense does the idea of correspondence make? Perhaps what we see as the truth is part of a dream, or part of a way of seeing the world we make up in collaboration with other people – as the social constructivists would have us believe? This latter perspective seems obviously true (whatever that may mean!) to me - but this may just be the dream into which I've been socialized.

However, let’s accept the idea of truth and try to guess where it might have come from. If we accept the theory of evolution by natural selection, the answer is simple: the idea of truth helped our ancestors survive. A belief in the truth about lions and cliffs helped our ancestors avoid being eaten by the former and falling off the latter. The idea of a fixed reality, which we can apprehend and see as the truth, is obviously a very powerful tool for living in the everyday world. People who did not believe in the reality of lions and cliffs would not have survived to pass on their genes.

This implies that the idea of objective reality and the assumption that we can apprehend the truth about it is merely a human convenience. Frogs or intelligent aliens would almost certainly view the world in very different ways; what we see as truth and what they see as truth would, I think, be very different.

Most statements in ordinary language presuppose the idea of truth. When I say that Sally was at home at 10 pm on 1 August 2014, I mean that this is a true statement about what happened. Further, if Sally is suspected of murdering Billy 50 miles away at 10 pm on 1 August 2014, then if it's true that she was at home then it can't be true that she murdered Billy. She can only be in one place at one time - "obviously". Ideas of truth, and the "objective" reality of objects in time and space, and the fact that one object can only be in one place at one time, are all bound up in our common sense world view. It is almost impossible to talk in ordinary language without assuming the truth of this world view - it is just "obviously" true.

However, the concept of truth is often taken far beyond the comings and goings of everyday objects. So we might say that it is true that God exists, that all water molecules comprise two hydrogen atoms and one oxygen atom, that married people are happier than unmarried people, and that the solutions of the equation x² + 1 = 0 are x = +i and x = -i.
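To spell out the mathematical example, the claim amounts to nothing more than the standard complex-number manipulation

$$x^2 + 1 = 0 \;\Longleftrightarrow\; x^2 = -1 \;\Longleftrightarrow\; x = \pm\sqrt{-1} = \pm i,$$

and it is "true" only relative to the rules mathematicians have agreed for the symbol i.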

The difficulty is that, outside of the realm of everyday experience, the notion of truth is actually rather vague, may be difficult to demonstrate conclusively, and may come with implications that are less than helpful. Short of taking the skeptic to meet God, demonstrating his existence is notoriously difficult.  We can't "see" molecules of water in the way we can see Sally at home, so the truth about water molecules needs to be inferred in other ways. Saying that the married are happier than the unmarried is obviously a statement about averages - there will be exceptions - and it also depends on what is meant by "happier". And mathematical statements are statements about concepts invented by mathematicians: applying the word true is obviously stretching the concept considerably. It is all much less straightforward than the truth that Sally was at home at 10 pm on 1 August 2014.

The idea of truth has a very high status in many circles. Saying you are seeking the truth sounds unquestionably praiseworthy. If you say something is true, then obviously you can't argue with it. Truth is good, so we like to apply the concept all over the place. I'll refer to this assumption - that the idea of truth, and the inevitably associated idea of an objective reality with solid objects persisting through time, apply to everything - as the cult of the truth. This notion is rather vague in terms of the assumptions about reality that go hand in hand with the idea of truth, but this is inevitable as the idea of truth gets extended further and further from its evolutionary origin. Cults, of course, depend on vagueness for their power, so that the cult's perspective can be adjusted to cater for any discrepancies with experience.

Does the cult of the truth matter? Does it matter that the idea of truth is extended far beyond its original focus? Let's look at some different areas of knowledge.

Some of the conclusions of modern physics contradict the implicit assumptions of the cult of the truth. At very small scales things can be in two places at once, and reality only makes sense in relation to an observation; for observers moving at high speeds measurements of physical processes are different, and the notion of things happening at a particular time depends on the motion of the observer. This all does considerable violence to everyday assumptions about reality, but physicists would simply say that these assumptions are outdated and that their notion of reality is more sophisticated. It seems to me, as a non-physicist, that these theories have sabotaged the idea of the truth about an objective reality beyond repair. I am reading Brian Greene's book, The Hidden Reality, about parallel universes, but I can't take the idea of truth seriously in relation to universes hovering out of reach which we will never, ever, be able to see in any sense. The hypothesis that the book seems to be driving towards is that we are living in a simulated world devised by Albert Einstein, whose theory of general relativity seems to underpin everything.

Does this matter? Probably not for physics. The illusion of the quest for the truth about everything is probably necessary to keep physicists motivated. But in the wider sphere it is worrisome if naive and outdated ideas of physics underpin other disciplines.

The idea of truth is best regarded as a psychological convenience - usually necessary, often useful, but occasionally a nuisance. Am I claiming this statement itself is true? Of course not! My argument obviously undermines itself. But I do think it’s a useful perspective.

Beyond the rarefied world of modern physics the cult of the truth does create problems. Perhaps the most serious is that the status of truth (and of science and the study of objective reality) undermines important areas which can't easily be incorporated into the cult of the truth. The ultimate aim of many social sciences is to make the world a better place in the future. We might, for example, be interested in making workplaces happier. The idea of truth fits comfortably with the obvious first stage of such a project - doing a survey to find out how happy workers are at the moment, and what their gripes are. The obvious things to do next would be to look at what the workers want, at what they value, and to try to design workplaces to fit these requirements. This seems a more important part of the research than the initial survey, but value judgments, and the design of possible futures, do not fit neatly with the cult of the truth. So they are not taken as seriously as they should be. Most of the thought and work goes into studying the past, while the more important issues - working out what people want and how to design a suitable future - tend to get ignored. OK, so the idea of truth could be extended to include these, but only by bending it until it becomes silly; the cult of the truth tends to deflect our attention from the problem of designing futures.

In fact the situation is even worse than this, because the truths studied in many social sciences tend to be of rather limited scope. So we study how happy people are in particular organizations at particular times. So what? Everyone knows the situation may be very different elsewhere. The truths studied by physicists are assumed to apply everywhere and throughout time (although this can be challenged over billions of years or light-years), but the truths of many social sciences are very parochial. The cult of the truth restricts our attention to trivial questions, and dismisses the big questions because the idea of truth does not apply to them.

There are further unfortunate side effects from taking truth too seriously. If we have one theory which is deemed true, this may be taken to imply that other theories covering the same ground must be false. This may be too restrictive: there could be different ways of looking at the same thing, some of which may perhaps be more aesthetically appealing, or easier to learn about or use. Truth is not the only important criterion. This is particularly true of statistical truths, which may sometimes be so fuzzy as to be almost useless.

So, to recap, truth is best treated as an illusion - often, but by no means always, a necessary one: it should not be taken too seriously outside the realm of statements about the comings and goings of everyday objects. The last sentence is itself close to asserting a truth whose validity it denies: a fully coherent argument here is not possible, but does this matter? Incoherence gives us more flexibility.

Friday, May 30, 2014

Cambridge University closed to undergraduates

I'm lucky enough to have a friend who has solved the knotty problem of travelling backwards through time. She sent me this news report from the Mumbai-based World News, dated 1 January 2050:

Cambridge University in the UK has finally bowed to the inevitable and closed its doors to new undergraduates. The last cohort started in October last year: their final ceremonial dinner in the historic dining halls was on Christmas day, and they will formally receive their degrees in the New Year. For the last two years Cambridge has been the only university in the world offering degree courses. This new move brings to an end an era which has lasted for centuries.

Until about 2010 a university degree was regarded as proof of the bearer's competence, knowledge, or expertise in some domain. Doctors and engineers with degrees were considered safe to practise; any degree was treated as giving the holder the status necessary to teach their subject. Even degrees in disciplines without any obviously useful or fundamental knowledge at their core, such as English Literature or Golf Studies, were treated as valid, and marketable, evidence of general competence. If someone had a degree then they could be trusted to do a good job. Or so the assumption went until about 2010.

Then things changed, gradually at first, but then faster, so that now the idea that a university degree is evidence of any kind of competence is frankly as quaint and old-fashioned as the idea that serious sport could be drug free.

For some time it had been obvious that many really successful people did not have university degrees - they either never went or dropped out. And most important developments did not seem to require or use university expertise. It was the geeks (Bill Gates, Mark Zuckerberg et al) who hit the headlines, but there was more to it: the stuff taught in degree courses was becoming increasingly old-fashioned and irrelevant.

But the thing that lit the fuse that destroyed degree courses was less obvious. It was the obsession with detecting and punishing "plagiarism" (I've omitted the rather lengthy explanation of this, and other terms in quotes which are not familiar to 2050 readers). Rules were drawn up, software was developed to detect the crime, and there was a strict culture of intolerance to any hint of illicit copying.

From a 2050 perspective this is very odd indeed. Culture depends on copying, maintaining clear links to individual ownership of intellectual property is often difficult, and is now generally agreed to hinder progress. But old-style degrees were based on the assumption that acquiring wisdom is hard, and incentives and measures of attainment are necessary, so individual students need to be "assessed" on the basis of work they have done on their own without any illicit help. It was a sort of sport: the degree material was kept deliberately difficult and often unpleasant, and students had to demonstrate their competence by "assignments" and "exams". Plagiarism was simply a way of cheating, like taking drugs in sports competitions in the early years of the century.

(Students in the last Cambridge cohort did take exams, but their original purpose, and the fuss over plagiarism, was long forgotten - students bought standard answers from the university to copy out in the exam ceremony. This year there was a surge in demand for third class answers, which cost three times as much as answers that would yield a first class degree.)  

This obsession with plagiarism led to two big problems. First, more and more assessments were designed primarily to prevent cheating. So instead of a sensible piece of work which students could have completed with any relevant technological aids, the focus was on short exams where technological aids, even books and notes, were banned. Which, of course, meant that the expertise which was taught and assessed became more and more useless.

The second problem was less predictable, and it took the universities a long time to acknowledge it. Plagiarism detection had developed into an arms race, with progressively more sophisticated methods and software on both the university and the student side. Many of the students treated it as a game which the brightest did very well at. Then employers started to realize what was going on, and that the brightest students were those who were guilty of plagiarism but had not been caught. This meant that the best CVs had two components: a certificate from the university stating that the student's studies had been plagiarism free, and some clear evidence from the student that, in fact, they had plagiarized extensively but not been detected.


The end for the universities came when a university-sponsored study demonstrated conclusively that students with this type of CV were more successful than students with good degree classifications.

Tuesday, February 11, 2014

Being a student in the twenty-first century: challenging the consensus

Just been to a seminar on being a student in the twenty-first century. Lots of clichés - increasing complexity and "supercomplexity" of the world, inadequacy of knowledge and skills, "lifewide" education, etc, etc. The world is changing and the student experience needs to change too. Obviously.

The speaker encouraged all comments from the floor, so the clichés were interspersed with a random selection of comments as everyone got on their own particular hobby horse. The seminar leader contrived to turn every comment into a platitude he could agree with - we must treat students as people, there are no right answers, things are getting progressively more complex, and so on and so forth.

There are two general sets of assumptions behind this sort of discussion - mutually contradictory, and both unhelpful. The first is that students and teachers, or facilitators, are always engaged in a collaborative, consensual process with no right answers, and the teacher does not possess superior expertise. This was certainly the philosophy espoused and practised by the seminar leader. He did not set himself up as the expert, and all contributions were accepted and valued. However, it's probably more accurate to say that there were no wrong answers, because all suggestions were accepted as right.

The second is that learning is hard, often unpleasant, and requires incentives, which means that it is inevitable that many learners will fail, and certification is required to distinguish the successful from the failures. Failure obviously implies that the learners' answers are wrong, and that the teachers' answers are right: the teacher is the expert and the teacher and the learner do not agree about right and wrong. This is never made explicit, but is implicit in the talk about dealing with learners' anxieties. Assessment in some form is always assumed, and this makes little sense without clear definitions of right and wrong.

This prompts two thoughts. First, the contradiction between the two sets of assumptions needs to be faced. The first set of assumptions is actually too silly to be worth probing in detail: experts obviously do have some expertise (although usually not as much as they think they do), and some answers are obviously wrong. The second set of assumptions is less obviously flawed, but I think that overturning it, which would mean redefining education, would be hugely beneficial. If the system could be redesigned so that there is more success, and so that the blame for a lack of progress is not laid at the door of the poor anxious student, this would surely be a good thing. I have briefly outlined some thoughts along these lines in this article, and in more detail at http://woodm.myweb.port.ac.uk/nothard.pdf.

The second thought is about the sterility of this kind of session. The introductory ideas proposed by the seminar leader were really platitudes: the sort of things you couldn't disagree with without feeling like an idiot or a villain. And then the interjections were mostly along the same lines, and any that weren't were either ignored, or redefined so that they were consistent with the dominant mood.


Sessions like this would be more productive if they had more of an edge, if they incorporated some negative or disruptive thoughts to challenge the cosy consensus. But for this to work, we need to learn to suspend our initial distrust of uncomfortable ideas and give them a chance, so that we can see where they lead.