Both instructors and students are reluctant to adopt machine marking (MM) for assessment because of its lack of granularity: under MM, an answer is either completely right at each point where it is compared with the model solution, or wrong and scores zero. An ideal human marker (HM), by contrast, will catch algebraic slips and apply an error-carried-forward marking scheme, or allocate partial marks for methods that had some degree of validity. These are justifiable concerns, to be weighed against the efficiency and consistency of MM. In this paper we seek to quantify the depth of feedback as a function of the mark achieved, and to construct a blended marking model that delivers comparable individual feedback while retaining much of the efficiency of MM.
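The error-carried-forward idea mentioned above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name, the tolerance, the two-part question, and the one-mark penalty are all hypothetical, not taken from the paper. Each part of a question is modelled as a function of the previous part's answer; a student whose later step correctly applies the method to their own earlier (wrong) value receives method marks rather than zero.

```python
# Minimal sketch of error-carried-forward (ECF) marking, assuming a
# question whose parts each transform the previous part's answer.
# All names and the example question are hypothetical.

def mark_with_ecf(student, model, steps, marks_per_part=2):
    """student / model: lists of numeric answers, one per part.
    steps: steps[i] computes part i's answer from part i-1's answer
    (steps[0] ignores its argument).  Full marks when the answer
    matches the model; one mark deducted but method credited (ECF)
    when it is correct given the student's own previous answer."""
    total = 0
    prev_student = None
    for i, step in enumerate(steps):
        if abs(student[i] - model[i]) < 1e-9:
            total += marks_per_part           # fully correct
        elif prev_student is not None and abs(student[i] - step(prev_student)) < 1e-9:
            total += marks_per_part - 1       # ECF: right method, wrong input
        prev_student = student[i]
    return total

# Hypothetical two-part question: (a) compute 3*4, (b) add 5 to the result.
steps = [lambda _: 3 * 4, lambda prev: prev + 5]
model = [12, 17]
print(mark_with_ecf([12, 17], model, steps))  # → 4 (both parts correct)
print(mark_with_ecf([11, 16], model, steps))  # → 1 (slip in (a), ECF credit in (b))
```

A pure MM comparison against the model solution would give the second student zero for both parts; the ECF pass recovers the method credit that an HM would normally award.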
[Full paper: 3.7 Watkins.pdf]