In the last couple of weeks there have been two news stories concerning the development of the TEF. The most obvious is the announcement of Chris Husbands as the first Chair of the TEF panel. The other is the announcement that HEFCE has commissioned the University of Cambridge's examinations arm to run a pilot around learning gain.
The appointment of Chris Husbands is clearly to be welcomed. He is Vice-Chancellor of Sheffield Hallam University and has a strong academic track record in the field of education policy and practice, having held senior leadership positions at Warwick, East Anglia, the Institute of Education and University College London. It is really good news that we have somebody in this position who is clearly rooted in the sector. Sheffield Hallam is a University which has been developing a strong reputation for balancing teaching excellence with growth in research, and Chris also has considerable experience of institutions from across the sector. Hopefully he will bring all his experience and academic insight to the role and will act both as an advocate of the TEF to the sector and as a voice of the sector within the machinery of the TEF.
The other news is perhaps more difficult to assess. Learning gain is a concept that has been developing alongside the TEF. It was being investigated by HEFCE before the White Paper and has been picked up in the political narrative as the 'thing' that, above all, the TEF is meant to measure. The difficulty, of course, is that nobody really knows what learning gain is; or, more accurately, there are probably as many understandings of learning gain as there are people talking about it around the sector. In its simplest form, 'learning gain' is that added something that students have when they leave University compared with what they came in with. Final degree outcomes do not measure learning 'gain'; they simply provide a snapshot of knowledge and ability at the point when the assessment is made, traditionally at the end of the programme.
HEFCE set up a funded programme in 2015 to undertake 13 pilots across the sector to see whether it is possible to measure learning gain. Very loosely, there are five methodologies being trialled. The first two use metrics to assess learning gain, or perhaps the best proxy for learning gain, either through standardised tests or by reference to grades. The next two look at qualitative data, either devising questionnaires that ask students to provide a narrative of their own learning gain over their time at the institution or using other qualitative methods. The fifth group is exploring some combination of these methodologies.
Once the question of 'learning gain' had been raised, it was picked up by the TEF as something that would be really good to measure and compare across institutions. This cannot be done within the first couple of rounds, however, because the pilots are still ongoing and the outcomes are already looking problematic. The problem with the qualitative methods is that they are difficult to code and almost impossible to compare across institutions. The problem with the quantitative methods is that many do not appear to measure actual 'gains' (most assessments are level specific and get harder with each level, so a student who is getting the same mark each year is 'gaining' in learning, but we cannot say by how much, and certainly cannot compare one institution with another). The mixed methods simply add complexity where the government, and other supporters of the TEF, want simplicity and clarity.
Learning gain as a concept arrived, as with so many other things, from the States. There the emphasis is placed not so much on subject-specific knowledge and skills as on core critical competencies that all students are expected to 'gain', or at least improve in, over their time at University. This leads to a process whereby students are tested on arrival and then tested again, with what is essentially the same test (not raised by level), at one or more other points during their career as a student. These tests are controversial. It is not always clear what is being tested (or whether it is really useful). There is also some evidence that some subject areas do better at these generalised tests than others (although the testers are careful to remove overt subject bias). The scores appear to have little value to the students and are essentially a measure of the institution's ability to prepare its students for the test. Brazil is one country which has taken to these tests in a big way and rolled them out nationally, and it will be interesting to observe its experience over time.
The news this week, as reported in the Times Higher, is that HEFCE has now commissioned Cambridge Assessment (owned by the University of Cambridge) to begin devising and piloting a series of tests of this kind across the sector, based on its current test of critical thinking and problem solving. This clearly indicates that the government, and HEFCE, believe this might be the way of measuring 'learning gain', and it suggests that somewhere down the line this kind of universal competency testing – at entry, at the end of each year, and on exit – might become a part of University life, and a key element of how each institution is measured. If such testing does eventually get into the TEF then, from the current perspective, I personally think this will be a bad thing. It will inevitably lead most institutions (or at least those interested primarily in league tables) to teach to the tests rather than concentrating on what we should be doing: producing fully rounded graduates with in-depth knowledge and expertise in their own particular discipline.