Challenges to some assumptions about the proposed Module Evaluation Questionnaire to be introduced at the University of Birmingham

February 1, 2012

School and College education committees at the University of Birmingham have been told that a university-wide Module Evaluation Questionnaire is to be introduced in the very near future. Aspects of both the format and administration of this ‘MEQ’ suggest that the procedure rests on a number of assumptions which can be readily inferred from various documents in circulation. These include an ‘Action Plan’ from the Business School, produced in response to the National Student Survey, which states that ‘all programmes and staff must comply’ with ‘new minimum standards’, including ‘minimum module questionnaire performance (with the cessation of contracts of PTVL’s and performance management of permanent staff with scores below 3.5 out of 5 unless there are clear extenuating circumstances)’. The purpose of this brief paper is to make explicit some of the assumptions underlying the introduction of this MEQ and to challenge them with reference to relevant research evidence.

 

Assumption 1

  • By its name alone, it is obvious that the proposed Module Evaluation Questionnaire is an instrument whose purpose is to evaluate the modules taught at UoB.

‘Module Evaluation Questionnaire’ is a misnomer; the survey can only generate information about some aspects of students’ feelings and attitudes. Both teaching and evaluation are complex processes, and no single questionnaire survey completed by only one of the groups involved – students – can produce the kind of data about teaching and learning which will support genuine improvements in either (Light, Calkins & Cox 2009: 238; Hand & Rowe 2001). The MEQ is clearly an instrument for controlling employees at UoB.

While SETs [student evaluations of teaching] are promoted with certain connotations of virtue (in claims that they serve institutional objectives of excellence and accountability), the mechanism itself can conflict with both the instructional objectives of faculty and the university’s goals to educate. The rhetoric of quality appears as a legitimating device, but it actually obscures its own implication in monitoring and controlling the conditions and practices of academic work. (Titus 2008: 414)

 

Assumption 2

  • The MEQ is a fair and unbiased way to get a picture of the quality of teaching.

The questions provide no evidence at all of what students taking the module have learned. It is not an instrument for measuring the effectiveness of teaching. ‘An evaluation of teaching effectiveness … must be based on outcomes. Anything else is just rubbish.’ (Emery, Kramer & Tian 2003: 45)

… students’ enjoyment gains a distorted level of importance in SETs. Because their sense of enjoyment is so widely used by students as the sole criterion by which they rate every item on the form, their level of pleasure becomes conflated with teaching quality. The ratings these students give are not considerations of specific teaching behaviors; instead, their ratings represent their general opinion of the instructor’s acceptability. (Titus 2008: 403)

 

Assumption 3

  • Using a single evaluation form across the institution produces valid and reliable information.

‘[T]he use of a standard instrument throughout an institution implies that it is applicable to all modes and types of teaching’ (Kember & Wong 2000: 70), leading to a bias against teaching styles other than the most traditional. Gender bias has been found in students’ evaluations of their tutors (e.g. Young, Rush & Shaw 2009; Weinberg, Hashimoto & Fleisher 2009), while subject discipline, module type and class size can lead to differential evaluations (Emery, Kramer & Tian 2003). ‘Using the same evaluation system for everyone almost guarantees that it will be unfair to everyone’ (Emery, Kramer & Tian 2003: 44). In addition, the repeated use of a single instrument across numerous modules leads to ‘survey fatigue’ (Porter, Whitcomb & Weitzer 2004), when respondents are likely to pay minimal attention to the questions. Both validity and reliability are challenges in any sort of survey:

… careful consideration of the exact aims of the study is essential. Too often little care is taken in the development of good measures, use being made of existing instruments for convenience sake rather than because they accurately reflect the outcomes the researchers are seeking to measure. (Muijs 2006: 54 – 55)

Truly useful evaluations of teaching require in-depth (and therefore usually costly) explorations of professional practice in context, triangulating the findings from a range of research instruments. Skilled evaluators of teaching receive training, and their judgements are subject to scrutiny in relation to the consistency of their interpretations of the elements under review. Without including such steps in this evaluation process, these surveys will be of extremely doubtful reliability and validity.

 

Assumption 4

  • The best time to evaluate a module is around the end of the formal teaching period.

The MEQ has been designed to be used at the same time for all modules. In addition to the problem of inviting ‘survey fatigue’ (see above), this pattern disregards the evidence that students’ evaluation of their teaching works well when their responses can be incorporated – if appropriate, using academics’ expertise and judgement – into later phases of the teaching on which they are being asked to comment. As with assessment, formative evaluations have a place in demonstrating that both teaching and learning are continuing processes, and not units of ‘experience’ to be given scores like performances on a television talent show. Likewise, reflection after a period of consolidation – e.g. after the teaching in focus has been completed, examined and students’ achievements reported on – may be a more useful way of identifying how teaching has led to learning; in any case, students should not be asked, for example, about summative feedback before they have received it (Rowley 2003: 147). Like assessment, evaluation can be used as an aspect of the students’ learning in itself, if programmes incorporate it – with trust, respect and understanding – into the teaching cycle (Hand & Rowe 2001). The proposed MEQ has none of these features.

 

Assumption 5

  • Good teachers have nothing to fear from the introduction of this MEQ.

The survey encourages students to respond to teaching in terms of what they recognise and like. This is both a mechanism for making teaching practices at UoB more uniform, and a discouragement to teaching which breaks new ground and/or presents challenges to the students. ‘[S]tandard feedback questionnaires do not relate to innovative teaching’ (Kember & Wong 2000: 71). By thus constraining academic staff, the exercise threatens innovation, development and improvement, and is therefore actually counter-productive.

[L]imited instructor autonomy may curtail the impact instructors have on student learning and student satisfaction with learning. (Abrami et al 1990: 228)

 

References

Abrami, P.C., d’Apollonia, S. and Cohen, P.A. (1990) Validity of student ratings of instruction: what we know and what we do not. Journal of Educational Psychology 82 (2), 219 – 231.

Emery, C.R., Kramer, T.R. and Tian, R.G. (2003) Return to academic standards: a critique of student evaluations of teaching effectiveness. Quality Assurance in Education 11 (1), 37 – 46.

Hand, L. and Rowe, M. (2001) Evaluation of student feedback. Accounting Education 10 (2), 147 – 160.

Kember, D. and Wong, A. (2000) Implications for evaluation from a study of students’ perceptions of good and poor teaching. Higher Education 40, 69 – 97.

Light, G., Calkins, S. and Cox, R. (2009) Learning and Teaching in Higher Education: the reflective professional. London: Sage.

Muijs, D. (2006) Measuring teacher effectiveness: some methodological reflections. Educational Research and Evaluation 12 (1), 53 – 74.

Porter, S.R., Whitcomb, M.E. and Weitzer, W.H. (2004) Multiple surveys of students and survey fatigue. New Directions for Institutional Research 121, 63 – 73.

Rowley, J. (2003) Designing student feedback questionnaires. Quality Assurance in Education 11 (3), 142 – 149.

Titus, J.J. (2008) Student ratings in a consumerist academy: leveraging pedagogical control and authority. Sociological Perspectives 51 (2), 397 – 422.

Weinberg, B.A., Hashimoto, M. and Fleisher, B.M. (2009) Evaluating teaching in higher education. The Journal of Economic Education 40 (3), 227 – 261.

Young, S., Rush, L. and Shaw, D. (2009) Evaluating gender bias in ratings of university instructors’ teaching effectiveness. International Journal for the Scholarship of Teaching and Learning 3 (2), 1 – 14.

You are currently reading Challenges to some assumptions about the proposed Module Evaluation Questionnaire to be introduced at the University of Birmingham at Birmingham UCU.
