In an age of accountability, a considerable amount of school performance data is released directly to the public, and parents rightfully ask of schools, “Why is this so?”
In response to this type of interrogation, schools and school leaders develop a pool of potential positive and negative responses, which are then displayed in school newsletters following the release of publicly available data such as the NAPLAN results.
When schools introduce new programs, which at times require a considerable financial investment, the leadership teams often seek evidence of the program’s success to justify implementation and continued investment. However, relying on intuitive causal attribution, the process Attribution Theory describes when explaining how individuals infer the causes of outcomes, can lead to biased evaluations: correlation does not always equate to causation. These biases may cause leadership teams to attribute positive outcomes exclusively to the program, while overlooking other critical factors that influence school improvement. Thus, “the danger for those keen to follow these recipes for success is that the historical educational contexts are in a constant state of flux, and the uncontrolled variables often get in the way of promised success” (MacNeill & Boyd, 2020).
Knowing: Controlling the Variables
In the early development of research methodology in agriculture and psychology, validity and inference were identified as key factors. Donald Campbell and Julian Stanley (1963, pp. 5-6) identified 12 factors that jeopardise internal and external validity, all of which are still relevant to decisions made in schools today. Operationally, schools are places where dozens of variables are at play at any one time, which is far removed from the discipline of action research and quasi-experimentation. As a result, the school community is forced to make the best decision it can, which often fits the political intents of the key decision-makers.
Attribution Bias in Action
Attribution Theory suggests that people tend to credit successes to specific actions or programs and to downplay, or simply ignore, broader contextual influences. For example, if a school introduces a literacy intervention program and reading scores improve, the leadership may attribute the success solely to the program without examining the multitude of contributing variables. This singular focus, known as the Fundamental Attribution Error, parallels theories such as the Matthew Effect, of which Boyd and MacNeill (2020a) noted that “success in schooling is a multi-factored affair, with a kaleidoscope of conflicting influences impacting on students’ performances daily”, and the Pygmalion Effect, of which Boyd and MacNeill (2020b) suggested that “factors that influence students’ educational success may be the result of either personal qualities or external influence, but some success may be a combination of both”. The Matthew Effect highlights how initial advantages compound over time, while the Pygmalion Effect emphasises the power of expectations in driving outcomes. Both theories suggest that attributing success exclusively to a program may ignore pre-existing advantages or the influence of positive perceptions and high expectations, which can significantly amplify results.
Education is no different from any other arena of human judgement, and we can be certain that the beliefs and values of the decision-makers will influence their decisions. Decisions that do not match the observers’ expectations are usually labelled, pejoratively, as “bias”; where there is consensus between the decision-makers’ judgements and those of the observers, the same decisions are seen as “wise”, “rational” and “unbiased”.
Considering Other Factors
Leadership teams must recognise that many variables contribute to school performance. By ignoring these factors, they risk oversimplifying the narrative around success and failing to make data-informed decisions. This can also lead to survivorship bias, in which leadership focuses only on visible successes and ignores instances where similar efforts did not yield positive results, and in which we are complicit “each time we positively accepted many of the predetermined judgements made by those who have experienced idiosyncratic pathways to success” (MacNeill & Boyd, 2020). Key overlooked factors include:
1 Enrolment Variations: An influx of new students, particularly those from higher socioeconomic backgrounds, can significantly impact school performance. New families might bring additional resources, parental involvement, and higher prior achievement levels, which could skew outcomes in ways unrelated to the new program (the short simulation after this list sketches this effect).
2 Coaching Practices: Improved instructional coaching and professional learning may lead to enhanced teacher performance and student outcomes. If coaching practices have been strengthened concurrently with the new program, attributing success solely to the program would ignore the impact of professional development.
3 Teacher Movements: Changes in teaching staff, such as hiring highly effective educators or redistributing experienced teachers, can drastically influence student achievement. Stronger teaching capacity may align with program implementation but is often an independent driver of success.
4 Demographics and Socioeconomic Status: Shifts in the school’s demographics, such as an increase in students from more advantaged backgrounds or changes in migrant intake, can affect overall performance. The school’s testing results reflect the educational standards of the incoming students and the time taken to embed the new students in the school’s learning culture.
5 Community Engagement and Culture: Enhanced collaboration among teachers, parents, and the broader school community can create an environment conducive to learning. These cultural shifts are rarely accounted for when attributing success to a specific program.
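To make the first of these factors concrete, the toy simulation below uses invented numbers (not real PAT or NAPLAN results, and no particular school) to show how a modest enrolment shift can lift a whole-school average between two testing rounds even when continuing students make no genuine gains. Comparing whole-school means would credit the rise to the new program; tracking the matched cohort of continuing students reveals no change.

```python
import random

random.seed(1)

# Invented, illustrative scores only: 180 continuing students whose true gain
# from the new program is zero (post-test = pre-test plus a little noise).
continuing_pre = [random.gauss(60, 10) for _ in range(180)]
continuing_post = [score + random.gauss(0, 3) for score in continuing_pre]

# Between the two tests, 20 new students enrol from higher-achieving backgrounds.
new_enrolments_post = [random.gauss(75, 8) for _ in range(20)]


def mean(scores):
    return sum(scores) / len(scores)


print(f"Whole-school mean, pre-test:    {mean(continuing_pre):.1f}")
print(f"Whole-school mean, post-test:   {mean(continuing_post + new_enrolments_post):.1f}")
print(f"Matched cohort only, post-test: {mean(continuing_post):.1f}")
```

The same arithmetic applies to any compositional change, such as staff movements or demographic shifts: the whole-school average moves even though no individual student’s learning has been altered by the program.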
Addressing Attribution Bias
To ensure fair and accurate evaluation of new programs, school leadership teams should:
• Conduct Quasi-Experimentation: Good testing material is available at the school level; PAT tests administered in pre-test and post-test form give an indication of each student’s learning and provide useful data to underwrite performance-management discussions aimed at improving teaching and learning throughout the school (a sketch of this comparison follows this list).
• Consider Longitudinal Data: Evaluate trends over time to determine whether improvements align with the program’s introduction or other changes in the school environment.
• Engage in Critical Reflection: Encourage leadership and staff to question assumptions about causality and actively seek alternative explanations for observed outcomes.
• Gather Stakeholder Input: Solicit feedback from teachers, students, and parents to gain insights into other factors contributing to success.
• Benchmark Against “Like” Schools: Compare results with schools of similar size, demographics, location, and resources to identify patterns that may not be tied to the programs. The political move towards comparing students of similar backgrounds, as determined by parental occupation and education, has obfuscated this measure of “likeness”.
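As a minimal sketch of the quasi-experimental and benchmarking suggestions above, the example below compares pre-test/post-test gains for a hypothetical program cohort against a hypothetical “like” comparison cohort. All scores are invented for illustration; a real analysis would use matched PAT (or similar) results, larger samples, and an appropriate statistical test. The difference between the two gains gives a rough, assumption-laden estimate of the program’s contribution once the shared background trend is removed.

```python
# Invented scores for two hypothetical cohorts; not real PAT or NAPLAN data.

def mean(scores):
    return sum(scores) / len(scores)


program_pre = [52, 48, 61, 55, 47, 58, 50, 63]      # program cohort, pre-test
program_post = [58, 55, 66, 60, 51, 64, 57, 68]     # program cohort, post-test
comparison_pre = [53, 49, 60, 56, 46, 59, 51, 62]   # "like" cohort, pre-test
comparison_post = [56, 53, 63, 59, 49, 62, 54, 65]  # "like" cohort, post-test

program_gain = mean(program_post) - mean(program_pre)
comparison_gain = mean(comparison_post) - mean(comparison_pre)

print(f"Program cohort gain:           {program_gain:.1f}")
print(f"Comparison cohort gain:        {comparison_gain:.1f}")
print(f"Gain beyond the 'like' cohort: {program_gain - comparison_gain:.1f}")
```

Even then, the result is only as trustworthy as the match between the cohorts; the threats to validity that Campbell and Stanley list, such as maturation and testing effects, still need to be considered before any gain is attributed to the program itself.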
Teacher-Student Relations and Attribution Theory
A second consideration, in the world of teaching, is that Attribution Theory is an important factor in the nature of teacher-student relationships. Wang and Hall (2018) acknowledged: “According to attribution theory, individuals are particularly motivated to seek specific explanations for negative educational outcomes, with these causal attributions, in turn, having important consequences for academic development.” This theory is not dissimilar to aspects of the Pygmalion Effect, and its effects in the classroom need to be kept in mind.
Case Study: The Cause of Gastric Ulcers
Medicine is replete with examples of Attribution Error, and a modern Australian example is the research of Dr Barry Marshall and Dr Robin Warren, who discovered and named the bacterium Helicobacter pylori, which lives in the human stomach and causes gastritis and gastric ulcers. This contradicted what generations of gastroenterologists had been taught: that nothing could live in the stomach’s highly acidic environment. Furthermore, there was push-back from the makers of acid-neutralising drinks and tablets, who attributed gastric ulcers to acidity. An early paper submitted to a prestigious medical journal suggesting that H. pylori might be the cause of gastric ulcers was rejected. Dr Marshall then drank a solution taken from an infected person’s stomach and developed gastritis, which he then cured with antibiotics. A Nobel Prize followed in 2005, and general practitioners now routinely order H. pylori tests for their patients. This was a classic case of Attribution Error persisting over time and becoming so embedded that new research was rejected.
Discussion
Attribution Bias can cloud judgment when evaluating new programs, leading to overestimation or underestimation of their effectiveness. School leadership teams must adopt a holistic view of evaluation, considering all potential influences on student outcomes. By doing so, school leaders ensure that decisions are grounded in evidence, paving the way for sustainable and meaningful improvements in education.
References
Boyd, R., & MacNeill, N. (2020a, May 22). The Matthew Effect: School Boundaries, School Funding and Resources, and School Staff. Education Today. https://www.educationtoday.com.au/news-detail/The-Matthew-Effect-4936
Boyd, R., & MacNeill, N. (2020b, July 5). How teachers’ self-fulfilling prophecies, known as The Pygmalion Effect, influence students’ success. Education Today. https://www.educationtoday.com.au/news-detail/How-teachers-4986
Campbell, D.T., & Stanley, J.C. (1963). Experimental and quasi-experimental designs for research. Rand McNally.
MacNeill, N., & Boyd, R. (2020, September). Redressing Survivorship Bias: Giving voice to the voiceless. Education Today. https://www.educationtoday.com.au/news-detail/Redressing-Survivorship-Bias-5049
Wang, H., & Hall, N.C. (2018). A systematic review of teachers’ causal attributions: Prevalence, correlates, and consequences. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2018.02305