Marking now finished. What a relief. I find marking very difficult to stick at. I will mark one question, then get up for a coffee. Or enter the mark on a spreadsheet. Or count the number of questions yet to be marked, or the percentage of the job I’ve completed. Not sure why it is so difficult to force myself to mark exams. But the task is now 100% complete. Just the second marking to come – about which more later.
It isn’t as though I do the job particularly conscientiously. In fact it feels pretty arbitrary. One part of one question on one script gets 6 out of 20 marks. Why? Because they haven’t got the main point, but have scribbled down something vaguely relevant. The next script gets just 3 for the same question. The scribbling seems a little less relevant. Or is it? What do I mean? I should check back. In fact I should be cross-checking all the time. But I don’t, of course. The lure of the end of the pile is too strong.
Again, I am struck by how silly some of my questions are. One part asks candidates to explain how to do something. The next part asks about difficulties and how to get round them. But this should be part of a good answer to the first part. So a good candidate will have little to say here. I try to remember to fudge the marking scheme, but have a nagging worry that some candidates might have given such a good answer to the first part that they had nothing left to say for the second.
Another question is about a formal method of making decisions, applied to menu planning for a person allergic to carrots (they do turn some people red). One of five menu options, according to the question, is carrot soup. The first part asks for recommendations which can be made on the basis of the information given. The answer I expected and intended as the right answer was simply not to serve carrot soup. Few candidates got this right!
Instead they came up with the recommendation to use the formal method, or they thought about their tastes and the tastes of their friends and came up with a recommended menu (usually the chicken with fried chocolate ice cream) on this basis. Either answer got no marks. They almost certainly dismissed the right answer as too obvious to be worth writing down. As would any sensible person.
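The intended answer is really just constraint elimination: drop any option that violates the allergy constraint. A minimal sketch of that step, assuming a simple ingredient-based check (the two menu names come from the question; the ingredient lists and the `safe_menus` function are my own invention for illustration):

```python
# Hypothetical sketch: eliminate any menu option containing an allergen.
# Menu names echo the exam question; ingredients are made up.

MENUS = {
    "carrot soup": {"carrot", "onion", "stock"},
    "chicken with fried chocolate ice cream": {"chicken", "chocolate", "cream"},
}

def safe_menus(menus, allergies):
    """Return only the options whose ingredients share nothing with the allergy set."""
    return [name for name, ingredients in menus.items()
            if not (ingredients & allergies)]

print(safe_menus(MENUS, {"carrot"}))
```

For a customer allergic to carrots, carrot soup is filtered out and only the remaining option survives, which is exactly the "too obvious to write down" recommendation.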
The next part asked candidates to describe the process for eliciting taste preferences and health data from the customer. Many thought they had already answered this in the first part. Others made up data about typical customers, instead of describing how to obtain such data.
There are real confusions here. The obvious common-sense answer is right for the first part – don’t serve carrot soup. But not for the second part: the obvious answer that everyone is going to like chicken with fried chocolate ice cream is deemed unacceptable.
I really must set clearer questions next time, ones which make clear what can and cannot be assumed, and what type of answer is expected. But then the questions may turn into philosophical treatises, and the difficulties may well increase. Help!