University of Sydney Papers in HMHCE

Relationship between blogs and high-stakes examinations in pre-service teacher understanding of the Teaching Games for Understanding approach to physical education

DEAN DUDLEY
ERIC DRINKWATER
RACHEL KELLY
Charles Sturt University

ABSTRACT

Background: Research has shown that pre-service teachers have difficulty comprehending and implementing the Teaching Games for Understanding (TGfU) model if they do not have a deep understanding of the theory and practices underpinning it. Purpose: This study examined the relationship between blogging and examinations when assessing TGfU knowledge, to ascertain whether difficulties in understanding TGfU could be detected prior to a summative assessment. Methods: Participants were 26 pre-service teachers enrolled in a physical education teaching degree. Understanding of TGfU was assessed through weekly blogs and a high-stakes examination. Pearson correlations were conducted with the weekly blog grades as the independent variables and the high-stakes examination grades as the dependent variable. Results and Discussion: The subject coordinator's grading of the blogs strongly predicted the exam grade (r = 0.59), but this was not the case for the casual marker (r = -0.05). The correlation between the casual marker and the subject coordinator on the exam was moderate (r = 0.40). These correlations suggest that levels of student achievement on the blogs and the examination are related, and that the two assessments produce similar outcomes.
INTRODUCTION

The TGfU curriculum instruction model is a learner- and game-centred approach to sport-related games learning that challenges the traditional teacher- and 'skills and drills'-centred approaches to teaching physical education. Traditional approaches to physical education place a strong focus on the skill level and ability of the learner, whereas TGfU focuses on developing deep knowledge and understanding, critical thinking and problem solving during game play (Bunker & Thorpe, 1982). However, research has shown that pre-service teachers have difficulty comprehending and implementing the TGfU curriculum instruction model without a deep understanding of its theory and practical application (Randall, 2003; Baxter & Dudley, 2008; Dudley & Baxter, 2009; Robinson, 2011). Specifically, fundamental misunderstandings exhibited by pre-service teachers include the dual nature of TGfU (i.e. teaching skills and tactics concurrently in a game context rather than through the traditional Skill/Practice/Game approach) (Randall, 2003, pp. 54-56) and the role of planning (i.e. viewing lesson planning as unnecessary) (Randall, 2003, p. 56). Teachers inexperienced in implementing the TGfU approach can also resort to simply facilitating game play and neglect to plan lessons, as they perceive their own knowledge of games to be sufficient. Effective questioning and planning are essential when implementing a TGfU approach to physical education in order to facilitate the deep understanding that produces both game skill and tactical aptitude (Turner, 2005). The primary method of assessing deep understanding in university students worldwide has been the high-stakes examination (Trigwell, 1992; Biggs, 2003; Killen, 2005). However, as higher education becomes accessible to a wider range of students (Biggs & Tang, 2007), this assessment type may not be optimal on its own for promoting higher-order thinking.
Higher-order thinking occurs when a student is able to think and apply knowledge using both surface and deep learning approaches (Boulton-Lewis, 1998; Biggs, 2003; Killen, 2005). Surface and deep understanding are equally important concepts in learning: both are vital for acquiring new knowledge and building upon existing knowledge, and it is important that both are assessed (Entwistle, 1998). High-stakes examinations (any tests used to make important decisions about whether a student has earned a grade or award) are a widely used assessment method in university settings and have been used for many years largely without question (Trigwell, 1992)
because they are able to sample a range of learning abilities within the same task. Examinations have disadvantages, however: they can encourage students to memorise information rather than learn and understand the content of the entire course, thereby assessing surface rather than deep understanding (Trigwell, 1992). This has led to suggestions that exams work best when complemented by alternative assessment methods for ascertaining whether a grade has been earned or progression toward an award has been made (Trigwell, 1992). These alternatives include low-stakes assessment, which is used to measure academic achievement, identify learning problems, or inform instructional adjustments. What distinguishes a high-stakes examination from a low-stakes assessment is not its form (how the assessment is designed) but its function (how the results are used). Low-stakes assessments generally carry no significant consequences, so their results typically matter far more to an individual teacher or student than to anyone else. Assessment tasks such as a reflective weekly web log (blog) allow teachers to provide formative assessment with less pressure than an exam, and with a focus on drawing out all the knowledge a student gains during a subject rather than relying on a single high-stakes exam. A blog is a discussion or informational site published on the World Wide Web, consisting of discrete entries ('posts') typically displayed in reverse chronological order. Blogs are becoming increasingly popular in assessment, though they have not yet been researched extensively in education and their relationship to undergraduate performance is not yet known. By using a weekly blog, teachers who regularly review and analyse their students' posts are able to monitor each student's progression of learning.
This is vital when university assessment tasks measure the depth of knowledge acquired in a given area of study (Boulton-Lewis, 1998; Biggs, 2003). By setting several smaller assessment tasks such as a blog, a teacher is able to analyse when and how a student is learning, and at what level of understanding. The highest level of understanding a student displays can be assessed using cognitive assessment tools such as the Structure of Observed Learning Outcomes (SOLO) taxonomy (Biggs & Collis, 1982). SOLO is a versatile meta-cognitive tool, as it can be applied to a range of assessment tasks and has been used in Australian universities across a variety of courses in Queensland (Teaching and Educational Development Institute, University of Queensland, 2009), Tasmania (University of Tasmania, 2007) and South Australia (Centre for Learning and Professional Development, University of Adelaide, 2009). The SOLO taxonomy has not gone without its criticisms and
was found to have large differences within each SOLO level (Chan et al., 2002). This prompted Chan et al. to modify the model by introducing sub-levels within each SOLO level. While this modification was found to reduce ambiguity, it had low inter-rater reliability compared with the original SOLO model (Biggs & Collis, 1982). Pegg and Panizzon (1997) developed a two-cycle SOLO model based on the original SOLO taxonomy (Biggs & Collis, 1982). This two-cycle SOLO model was thought to be a better alternative than splitting the levels as attempted by Chan and others (2002). Pegg and Panizzon (1997) decided a second cycle was appropriate because learning and developing understanding pass through two distinct phases: surface and deep understanding (see Figure 1).

[Figure 1. Two-Cycle SOLO Model (after Pegg, 2003). The first cycle (U1, M1, R1) represents narrative and surface understanding in the concrete-symbolic mode; the second cycle (U2, M2, R2, EA) represents conceptual and deep understanding in the formal mode.]

The two-cycle SOLO taxonomy has recently been used in Australian universities to evaluate pre-service teachers' understanding of the Teaching Games for Understanding (TGfU) curriculum instruction model as assessed in a high-stakes examination (Baxter &
Dudley, 2008; Dudley & Baxter, 2009). They found that the two-cycle SOLO taxonomy can effectively discriminate student understanding of the TGfU curriculum instruction model with good inter-rater reliability: simple percentage agreement was 90.3% (Cohen's kappa = 0.849, p = 0.001). Dudley and Baxter (2009) have used the TGfU instruction model and the SOLO taxonomy together in a study examining how deep understanding and metacognition of TGfU can be assessed through a high-stakes exam. They also recently examined the qualitative strength of blogs as a formative assessment tool in higher education (Dudley & Baxter, 2013). There are, however, no known studies that have examined the statistical relationship between undergraduate metacognitive performance in a blogging task and metacognitive performance in a university high-stakes examination. Continuous evaluation is a valuable method of assessment in higher education, but the relationship between high-stakes examinations and low-stakes assessment (such as a weekly blog) is unclear. Therefore, the purpose of this study was to evaluate the relationship between blogs and examinations when grading student performance against the two-cycle SOLO taxonomy.

METHODS

Setting and Participants

The sample for this study comprised 26 third-year undergraduate pre-service teachers (16 male and 10 female) from an Australian university, enrolled in a Health and Physical Education teaching degree. The subject these students were enrolled in was a 12-week course investigating the theory, research and practice of the TGfU instruction model in a physical education teaching context. Each week consisted of a two-hour practical workshop and a one-hour lecture.
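The reliability figures reported above combine simple percentage agreement with Cohen's kappa (chance-corrected agreement). As a minimal sketch of how these two statistics relate, the following uses invented marker scores, not data from this study or the cited studies:

```python
from collections import Counter

def percent_agreement(a, b):
    """Simple percentage agreement between two markers' gradings."""
    return sum(x == y for x, y in zip(a, b)) / len(a) * 100

def cohen_kappa(a, b):
    """Cohen's kappa: agreement between two markers, corrected for chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical SOLO gradings from two markers (illustrative only).
marker1 = [4, 5, 5, 6, 3, 7, 4, 5]
marker2 = [4, 5, 6, 6, 3, 7, 4, 4]
print(percent_agreement(marker1, marker2))        # 75.0
print(round(cohen_kappa(marker1, marker2), 3))    # 0.686
```

Note that kappa is lower than raw percentage agreement because some agreements would be expected by chance alone, which is why both figures are usually reported together.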
Students were required to complete three assessment tasks during the subject: a) a weekly blog post of at least 500 words reflecting on their experiences with TGfU during the two-hour practical workshop; b) a TGfU teaching presentation and teaching resource package; and c) a three-hour high-stakes examination on TGfU theory and research. These students were identified as a convenience sample for the purpose of this study. The study was verbally explained to participants, who were then asked to read and sign a record of informed consent. It was explained that students were under no requirement to participate
and could withdraw their consent at any time. This study was approved by the institutional Human Research Ethics Committee. For eight non-consecutive weeks, students attended a one-hour lecture and a two-hour workshop focused on the principles and techniques of the TGfU curriculum instruction model, and all students kept a blog. This model of delivery was conducted over a four-month period from July to October and was designed to give pre-service teachers exposure to TGfU lessons in context. The workshops and lectures covered TGfU as a curriculum instruction model (Butler & McCahan, 2005), applying the fundamental rules of games (Grehaigne, Richard, & Griffin, 2005), the role of assessment (Olson, 2005), and integrating other curriculum instruction models (Collier, 2005).

Measurement

Pre-service teachers were required to complete a weekly blog over eight weeks as an assessment task for their subject, with one week to complete and submit each entry. The blogs were designed to capture their thoughts and feelings about how the classes were taught and their experiences of learning to teach through a TGfU approach. Students were required to address the following questions in each blog entry:

1. What did we cover in this week's workshop?
2. What are my thoughts on teaching games in this way?
3. What impact will this week's workshop have on my own teaching?
4. What did I like, OR what would I do differently if I taught this lesson?

The student responses to the blog questions were graded by the subject coordinator and an independent (casual) marker according to the two-cycle SOLO taxonomy, and each student was given a score reflecting their highest level of thinking and understanding each week. At the completion of the course, after all the blog entries were completed, the pre-service teachers sat a three-hour high-stakes exam.
The exam contained three questions: two tested surface understanding and the third tested deep understanding. The question requiring students to show their level of deep understanding was chosen for examination in this study. The exam question was
purposefully open, so that it probed for the deepest level of understanding and gave students ample opportunity to showcase the higher-order conceptual understandings of the TGfU model they had reached. Specifically, the exam question graded on the two-cycle SOLO taxonomy was: "In as much detail as possible, justify whether TGfU is a sound pedagogical approach to teaching physical education." The written response was assessed using the two-cycle SOLO model. The variables for this study were therefore the weekly blog responses, as defined by their two-cycle SOLO level, and the responses to the high-stakes examination question, as defined by their two-cycle SOLO level. The exam was independently graded twice: once by the subject coordinator and once by an independent casual marker.

Efforts to Minimise Bias

All student work was de-identified, for both the blogs and the high-stakes exams, so that the independent marker was blind; the subject coordinator performed the de-identification by replacing all student numbers and names on each task with a novel coding system (phonetic alphabet coding). The subject coordinator could not be blinded, as they were required to provide grading information for the university subject. However, both markers performed their grading independently and without any collaboration.

Statistical analysis

Exam scores for the independent marker and the subject coordinator are expressed as mean ± standard deviation. The difference in means was compared by independent-samples t-test and expressed as a mean difference with 95% confidence limits (Cohen, 1988). A Pearson product-moment correlation coefficient was used to establish the relationship between the scores for all eight weeks of blog entries combined and the high-stakes exam.
The calculated Pearson correlation coefficient was used to make inferences about the correlation between two variables, in this case the blogs and the high-stakes exam (Berg & Latin, 2008). Several separate Pearson correlation analyses were conducted: for the independent marker (weekly blogs to exam, and highest blog to exam), for the subject
coordinator (weekly blogs to exam, and highest blog to exam) and finally between the independent marker and the subject coordinator on the exam. Correlation coefficients were interpreted as trivial (r = 0.00 to 0.09), low (0.10 to 0.29), moderate (0.30 to 0.49) or high (0.50 or greater) (Cohen, 1988). Summary results are expressed as Pearson correlation coefficients with a standard error of estimate and 95% confidence limits.

RESULTS

Difference between independent marker and subject coordinator

The independent marker scored the exam higher (5.54 ± 1.39) than the subject coordinator (4.65 ± 1.47) by 0.89 on an 8-point scale (95% confidence limits: 0.09 to 1.68; d = 0.60). See Figure 2.

Influence of blog scores on exam score

The correlation between the independent marker's highest blog score and exam score was trivial and slightly negative (r = -.049, 95% confidence limits: -.436 to .353). The model is summarised in Table 1. In contrast, the correlation between the subject coordinator's highest blog score and exam score was high (r = .592, -.169 to .764). Only the Week 4 blog score significantly predicted the exam grade (r = .431, .011 to .722). The model is summarised in Table 2.
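The analyses described in the Methods can be sketched as follows. This is an illustrative implementation with hypothetical SOLO scores (not the study's data); the Fisher z-transform is one standard way to obtain 95% confidence limits for r, though the paper does not state which method its limits were derived from:

```python
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson r with confidence limits via the Fisher z-transform."""
    r, _ = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)                       # Fisher z-transform of r
    se = 1.0 / np.sqrt(n - 3)               # standard error of z
    zcrit = stats.norm.ppf(1 - alpha / 2)   # e.g. 1.96 for 95% limits
    return r, np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

def cohens_d(a, b):
    """Cohen's d between two groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Hypothetical 8-point SOLO scores for illustration only.
blog = [3, 4, 5, 5, 6, 4, 7, 5, 6, 3]
exam = [4, 4, 6, 5, 7, 3, 7, 6, 5, 4]
r, lo, hi = pearson_with_ci(blog, exam)
print(f"r = {r:.2f} (95% CL: {lo:.2f} to {hi:.2f})")
print(f"Cohen's d (exam vs blog) = {cohens_d(exam, blog):.2f}")
```

A confidence interval that crosses zero (as for most weekly blog correlations below) indicates the correlation is not statistically distinguishable from zero at that level.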
Figure 2. Mean and standard deviation of blog scores based on two-cycle SOLO grading.
Note (Y-axis): 1 = Prestructural; 2 = Unistructural, first cycle; 3 = Multistructural, first cycle; 4 = Relational, first cycle; 5 = Unistructural, second cycle; 6 = Multistructural, second cycle; 7 = Relational, second cycle; 8 = Extended abstract.
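The 8-point scale in the note above can be written as a simple lookup, pairing each grade with its two-cycle SOLO level and with the surface/deep distinction described in the introduction (a sketch; the function name is illustrative, not from the study):

```python
# Two-cycle SOLO levels mapped to the 8-point grading scale (from the figure note).
SOLO_SCALE = {
    1: "Prestructural",
    2: "Unistructural (first cycle)",
    3: "Multistructural (first cycle)",
    4: "Relational (first cycle)",
    5: "Unistructural (second cycle)",
    6: "Multistructural (second cycle)",
    7: "Relational (second cycle)",
    8: "Extended abstract",
}

def understanding_depth(score: int) -> str:
    """Classify a score: first cycle = surface, second cycle = deep understanding."""
    if score <= 1:
        return "prestructural"
    return "surface" if score <= 4 else "deep"
```

This mirrors Pegg and Panizzon's two-cycle structure: scores 2-4 sit in the first (surface) cycle and scores 5-8 in the second (deep) cycle.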
Table 1. Pearson correlation analysis of independent marker's blog-to-exam marks

                      Pearson r    95% confidence limits
Week 1 blog - Exam      -.202        -.553 to .210
Week 2 blog - Exam      -.230        -.586 to .202
Week 3 blog - Exam      -.185        -.562 to .257
Week 4 blog - Exam       .000        -.422 to .422
Week 5 blog - Exam      -.395        -.713 to .058
Week 6 blog - Exam      -.478        -.766 to .030
Week 7 blog - Exam      -.326        -.672 to .136
Week 8 blog - Exam       .125        -.336 to .538
Highest blog - Exam     -.049        -.436 to .353

Table 2. Pearson correlation analysis of subject coordinator's blog-to-exam marks

                      Pearson r    95% confidence limits
Week 1 blog - Exam      -.010        -.404 to .386
Week 2 blog - Exam      -.093        -.486 to .332
Week 3 blog - Exam      -.167        -.550 to .274
Week 4 blog - Exam       .431         .011 to .722
Week 5 blog - Exam       .115        -.345 to .531
Week 6 blog - Exam      -.348        -.693 to .126
Week 7 blog - Exam      -.097        -.517 to .361
Week 8 blog - Exam       .078        -.377 to .504
Highest blog - Exam      .592        -.169 to .764

DISCUSSION

The purpose of this study was to evaluate blogs as a formative assessment tool and their relationship to a high-stakes examination of the TGfU curriculum instruction model. This
study demonstrates that there are substantial relationships between the two tasks, and that there was a substantial difference between the examination grades awarded by the independent marker and the subject coordinator. The main finding was the substantial relationship (r = 0.59) between the subject coordinator's exam marks and the highest blog marks. This relationship is strong enough to indicate that high-stakes examinations and weekly blogs test similar knowledge, and that blog performance can predict likely achievement in an exam. The relationship is, however, not strong enough to conclude that the two assessment methods are so similar that only one of the two tasks need be used. The two tasks therefore assess similar thinking about TGfU, but in different ways. This conclusion has implications for how academics might conceptualise the assessment of pre-service teachers. Specifically, the concepts 'formative assessment' and 'assessment for learning' (AfL) refer to formal or informal types of assessment used throughout a topic or unit of work to give feedback and contribute to student learning. The importance of this for student learning is acknowledged, but it is not well understood in higher education (Yorke, 2003). Formative assessment helps students to appreciate and understand the standards expected of them and gives them something to work towards in final assessments or examinations. It also allows educators to see what students have been able to produce and to tailor their teaching accordingly. Based on the data presented in this paper, blogs are a useful formative assessment method that allows an educator to provide feedback before a formal task and assist a student's learning.
In this study, blogs were useful as a low-stakes, formative method of assessment. The results indicate that different types of assessment task may elicit different manifestations of learning in how pre-service teachers conceptualise the TGfU approach to physical education. This is important because, if the knowledge captured by a high-stakes examination and by a formative task such as a blog differs, university assessment should include a variety of tasks in order to capture these different expressions of understanding, even when they exhibit the same metacognitive depth. In this light, the results also indicate a similarity which demonstrates that formal examinations remain vital for assessing knowledge as a
complete whole, and that there is a relationship between the depth of understanding captured by the two tasks. This study demonstrates that neither task, high-stakes examination or low-stakes weekly blog, captures all the learning acquired by pre-service teachers in their understanding of TGfU. It is therefore logical that university assessment protocols should include both low-stakes formative and high-stakes summative assessment tasks. In education, Dudley and Baxter (2013) demonstrated the qualitative and formative potential of blogs in higher education assessment. There are, however, no known statistical studies investigating the use of blogs as a formative assessment tool, or providing evidence of their ability to predict performance in high-stakes assessment in higher education. The findings from this study highlight the need for varied assessment tasks in universities in order to capture the depth of knowledge and thinking exhibited by different students in their understanding of the TGfU approach. A variety of assessment strategies may ensure students are given the opportunity to retain knowledge and build on it, in order to later apply it in a variety of physical education contexts. The substantial correlation between the two types of assessment suggests that students who excel in one assessment strategy will not be penalised by the inclusion of another. There was also a substantial difference between the independent marker and the subject coordinator when marking the exam responses. The mean difference of 0.89 points between the markers was moderate (d = 0.60). This difference demonstrates a clear inconsistency between the two markers and raises issues around ensuring that marking within universities is moderated when sessional or multiple teaching staff are employed on a subject.
A recent study highlighted that there are very different concepts of moderation in Australian universities (Wallace et al., 2009). The authors concluded that there is great variability in moderation practices and that it is still too early to determine best practice in university settings. While best practice has yet to be established, pilot marking is suggested as a way to increase inter-marker reliability of examination grading. Pilot marking refers to markers collaboratively grading a sample of assessments before marking of the task begins. It is undertaken by all those involved in the marking process, both to practise marking and to ensure they know exactly how the task is to be graded. This process enables the
generation of a reliable marking standard that is honest and consistent, regardless of the marker. As a result of pilot marking, changes may be made to marking guidelines to ensure that they adequately capture evidence of the outcomes and content that individual questions are designed to assess. Pilot marking should be conducted in order to produce fair and equitable student outcomes. Pilot marking, or any similar process, is generally not written into university marking policy; however, according to the results of this study, the margins of difference between markers can be substantial, so pilot marking interventions may need to be implemented to reduce inter-rater error in interpreting depth of understanding in high-stakes examinations.

CONCLUSION

The purpose of this study was to examine the use of blogs as an assessment task in comparison with a high-stakes examination for assessing pre-service teacher understandings of TGfU. The two-cycle SOLO taxonomy was used as the framework for examining the assessment tasks. The main finding was that, when the two-cycle SOLO taxonomy was used as the basis for grading matrices, a substantial relationship existed between the subject coordinator's exam marks and the highest observed blog marks. The relationship between blog performance and exam performance is strong enough to demonstrate that the two tasks evaluate similar thinking, but weak enough to conclude that the low-stakes blog and the high-stakes examination are not testing identical thinking. The findings support the view that neither exams nor blogs alone sufficiently assessed TGfU understanding, but that in combination they were able to assess the learning outcomes in both a formative and a summative way. Using both assessment tasks in a pre-service teacher program focusing on TGfU increases the chance of capturing the understanding participants have exhibited during their studies.
REFERENCES

Baxter, D., & Dudley, D. (2008). Assessing for deeper understanding in tertiary examinations in physical education using a SOLO taxonomy. Australian College of Educators Online Refereed Articles, No. 52. Retrieved February, 2009, from http://www.austcolled.com.au/articlepurchase/assessing-deeper-understanding-tertiaryexaminations-physical-education-using-solo-t

Berg, K. E., & Latin, R. W. (2008). Essentials of research methods in health, physical education, exercise science, and recreation (3rd ed.). Baltimore: Lippincott Williams & Wilkins.

Biggs, J. (2003). Teaching and learning at university (2nd ed.). United Kingdom: Open University Press.

Biggs, J., & Collis, K. F. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York: Academic Press.

Biggs, J., & Tang, C. (2007). Teaching for quality learning at university: What the student does (3rd ed.). Maidenhead, Berkshire: Open University Press.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7-74.

Boulton-Lewis, G. (1998). Applying the SOLO taxonomy to learning in higher education. In B. Dart & G. Boulton-Lewis (Eds.), Teaching and learning in higher education (pp. 201-221). Melbourne: The Australian Council for Educational Research.

Bunker, D., & Thorpe, R. (1982). A model for teaching games in secondary schools. Bulletin of Physical Education, 18(1), 5-8.

Butler, J., & McCahan, B. (2005). Teaching Games for Understanding as a curriculum model. In L. Griffin & J. Butler (Eds.), Teaching Games for Understanding: Theory, research and practice. Champaign, IL: Human Kinetics.

Centre for Learning and Professional Development, University of Adelaide. (2009). Assessment design and rubrics. Retrieved March, 2009, from http://www.adelaide.edu.au/clpd/resources/assessment/design/

Chan, C., Tsui, M. S., Chan, M., & Hong, J. H. (2002).
Applying the structure of the observed learning outcomes (SOLO) taxonomy on students' learning outcomes: An empirical study. Assessment and Evaluation in Higher Education, 27(6), 511-527.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Collier, C. (2005). Integrating tactical games and sport education models. In L. Griffin & J. Butler (Eds.), Teaching Games for Understanding: Theory, research and practice. Champaign, IL: Human Kinetics.
Dudley, D., & Baxter, D. (2009). Assessing levels of student understanding in pre-service teachers using a two-cycle SOLO model. Asia-Pacific Journal of Teacher Education, 37(3), 283-293.

Dudley, D., & Baxter, D. (2013). Metacognitive analysis of pre-service teacher conception of Teaching Games for Understanding (TGfU) using blogs. Asia-Pacific Journal of Teacher Education, 41(2), 186-196.

Entwistle, N. (1998). Approaches to learning and forms of understanding. In B. Dart & G. Boulton-Lewis (Eds.), Teaching and learning in higher education (pp. 72-101). Melbourne: The Australian Council for Educational Research.

Grehaigne, J. F., Richard, J. F., & Griffin, L. L. (2005). Teaching and learning: Team sports and games. London: Routledge.

Killen, R. (2005). Programming and assessment for quality teaching and learning. South Melbourne: Thompson Social Science Press.

Olson, J. L. (2005). The role of assessment in Teaching Games for Understanding. In L. Griffin & J. Butler (Eds.), Teaching Games for Understanding: Theory, research and practice. Champaign, IL: Human Kinetics.

Pegg, J. (2003). Assessment in mathematics: A developmental approach. In M. Royer (Ed.), Mathematical cognition (pp. 227-254). Greenwich, CT: Information Age.

Pegg, J., & Panizzon, D. (1997). Investigating students' understanding of diffusion and osmosis: A post-Piagetian analysis. Australian Association for Research in Education Annual Conference, Brisbane, Queensland, 30 November - 4 December.

Randall, L. (2003). Preservice teachers' understanding of teaching games for understanding approach to content delivery. Avante, 9(1), 49-61.

Robinson, D. (2011). Possibilities for physical education teacher education: Service learning and TGfU. PHENex Journal, 3(1), 1-16.

Teaching and Educational Development Institute, University of Queensland. (2009). Biggs' structure of the observed learning outcome (SOLO) taxonomy.
Retrieved March, 2009, from http://www.tedi.uq.edu.au/downloads/biggs_solo.pdf

Trigwell, K. (1992). Information for UTS staff on assessment. Sydney: UTS Working Party on Assessment.

Turner, A. P. (2005). Teaching and learning games at the secondary level. In L. L. Griffin & J. L. Butler (Eds.), Teaching games for understanding: Theory, research, and practice (pp. 71-90). Champaign, IL: Human Kinetics.

University of Tasmania Assessment Working Group. (2007). Guidelines for good assessment practice. Retrieved March, 2009, from http://www.assessment.utas.edu.au/docs/guide-for-good-assessment.pdf

Wallace, M., Yeo, S., Sanderson, G., Hukam-Singh, P., & Thuraongsam, T. (2010). Moderation of assessment in transnational higher education. In G. Sanderson
(Ed.), Collaborative approaches to moderation of assessment in transnational education, 16 October 2009 (pp. 1-11). Melbourne: RMIT University, Australian International Education Conference 2010.

Yorke, M. (2003). Formative assessment in higher education: Moves towards theory and the enhancement of pedagogic practice. Higher Education, 45, 477-501.