Assessment, testing and evaluation are critical to fostering coherence and collaboration in the education sector. These areas are essential to understanding and improving learning outcomes, yet they are often explored in fragmented ways across disciplines and contexts.
By uniting researchers under this shared collective, we aim to address pressing issues like equity in assessment, the validity and reliability of testing methods, and the alignment of evaluation with curricular goals.
This collective approach encourages the pooling of expertise, resources, and data, leading to richer insights and more impactful outcomes than isolated efforts.
Translating theory into practice
Our unified research theme bridges the gap between theory and practice. We enable educators, policymakers, and researchers to work collaboratively on evidence-based strategies that improve teaching and learning.
For instance, studying how assessments influence student motivation or how standardised testing impacts marginalised groups requires a multidisciplinary lens. Our research ensures these questions are examined holistically, promoting innovative approaches that are responsive to diverse educational contexts.
Driving innovation and change
Establishing a collective research theme also helps drive systemic change. It fosters a shared vision for improving assessment practices to meet the evolving demands of education in the 21st century.
This includes addressing challenges like integrating technology into testing, ensuring fair evaluation practices, and preparing students for real-world competencies. Such coordination ensures that research findings translate into practical reforms that enhance educational equity and effectiveness globally.
HDR students
Mr James Toohey (HDR)
Mr Luke Beck (HDR)
Our work
Building foundations
Our work establishes a solid base of evidence that highlights the importance of fairness, validity, and reliability in assessment, testing, and evaluation.
This includes:
- exploring innovative assessment models
- refining standardised testing practices
- addressing biases that affect diverse learners, and
- laying the groundwork for meaningful, data-driven insights that guide educational reforms and inform policy decisions.
Publications
Toohey, J. F., Grainger, P., & Carey, M. D. (2023). Measuring Australian preservice teachers' Asia capability and perceived readiness to teach the Asia cross-curricular priority. Australian Journal of Teacher Education, 48(6).
Grainger, P., Carey, M., & Johnston, C. (2024). Why not rubrics in doctoral education? Assessment and Evaluation in Higher Education. Q1
Carey, M. D., & Szocs, S. (online first). Revisiting raters' accent familiarity in speaking tests: Evidence that presentation mode interacts with accent familiarity to variably affect comprehensibility ratings. Language Testing. doi:10.1177/026553222312008 Q1, IF: 5.60
Carey, M. D., Rugins, O., & Grainger, P. R. (2023). Investigating the effects of varying Accelerative Integrated Method instruction on spoken recall accuracy: A case study with junior primary learners of French. International Review of Applied Linguistics in Language Teaching, 61(3). doi:10.1515/iral-2023-0001 Q1, IF: 1.97
Martin, D., Carey, M. D., & McMaster, N. (in press). Assessing primary school preservice teachers' confidence to apply their TPACK in specific categories of technologies using a self-audit survey. The Australian Educational Researcher. Q1, IF: 2.38
Interested in research or collaboration?
Contact the theme leads.