Did you know the average salary for teachers in the US is $56,000/year according to the Bureau of Labor Statistics? While I trust the bureau’s information, that figure doesn’t quite match reality for teachers where I live. I’m not saying the bureau’s estimate is wrong; only that it has limitations. Since $56k is the average of teachers’ salaries across the entire US, it cannot simultaneously represent a veteran teacher in Silver Spring, Maryland (within 5 miles of one of America’s most expensive cities) and a newly minted teacher in small-town Athens, Tennessee.
I’ve just offered an example of something known as the tyranny of averages: the idea that group averages fail to impart the whole story of your data. That’s because, as explained by author and Harvard scientist Todd Rose, averaging flattens otherwise jagged information to a single point. In other words, averaging distills information such that we lose nuance within the data. William Briggs, a statistics professor at Cornell University, warns that we ask the average “to bear burdens far beyond its capability” when we draw firm conclusions based on this piece of data alone. To clarify his argument and discuss one of my favorite topics, allow me to belabor the point with a running example.
Let’s consider a run where I logged 4 miles. On the whole, I had an average pace of 9 minutes and 34 seconds per mile. I could draw some conclusions about my athletic ability based on that information (the overly critical part of myself has certainly offered her opinion on the matter), but I’d be better served by examining additional data. What could I learn, for instance, by considering just a little more detail like my average pace at each mile (in runner’s parlance, these are known as “splits”)? Since I’m a glutton for punishment, here’s a graph of my average pace at each split, with my overall average in red:
We can see right away that my splits aren’t even (i.e., my pace varies). We also see that I ran my slowest pace at mile 2 and my fastest at mile 4. We wouldn’t see any of this nuance from my overall average because it smooths over the peaks and valleys of my data, evenly distributing my pace across my total distance; it’s this behavior that Todd Rose is referencing when he says the average flattens otherwise jagged information. The outliers in the data get hidden, smoothed out of view.
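To make the flattening concrete, here’s a minimal sketch in Python. The split values are hypothetical (the post only gives the 9:34 overall average), chosen so they average to 9:34 while keeping mile 2 the slowest and mile 4 the fastest:

```python
# Hypothetical mile splits in seconds per mile; chosen so the overall
# average works out to the 9:34 pace mentioned above.
splits = [580, 600, 570, 546]  # miles 1-4

avg = sum(splits) / len(splits)  # 574 seconds

def fmt(sec):
    """Format a pace in seconds as M:SS."""
    return f"{int(sec // 60)}:{int(sec % 60):02d}"

print("overall:", fmt(avg))  # the single, flattened number
for mile, s in enumerate(splits, 1):
    print(f"mile {mile}: {fmt(s)}")  # the jagged detail the average hides
```

The overall 9:34 is real, but it describes none of the four miles exactly; the per-mile view is where the peaks and valleys live.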
Outliers get a bad rap for skewing information and account for why we report certain measures of central tendency in different contexts (e.g., the median when reporting average salary). In the realm of education, however, it’s often the outliers we want to better understand. Consider the data you’ll find most valuable if your goal is to determine which of your students are most likely to attend and benefit from an after-school tutoring program. Baron Schwartz, CEO of VividCortex, explains:
“Averages are troubling and unhelpful when it comes to monitoring. If you’re merely looking at averages, you’re probably missing the data that’s of greatest import to you and your system: in seeking to identify problems, the data you’re most interested in are likely, by definition, outliers.”
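The mean-versus-median point above is easy to demonstrate. Here’s a small sketch with made-up salary data, where a single high outlier drags the mean well above what a typical member of the group earns:

```python
# Hypothetical salaries: one outlier pulls the mean up, which is why
# salaries are conventionally reported as a median.
from statistics import mean, median

salaries = [42_000, 45_000, 47_000, 50_000, 120_000]

print(f"mean:   ${mean(salaries):,.0f}")    # dragged upward by the outlier
print(f"median: ${median(salaries):,.0f}")  # closer to the typical salary
```

The mean lands at $60,800 even though four of the five people earn $50,000 or less; the median ($47,000) describes the group far better. Yet if your question were “who is the unusual case here?”, the outlier itself, not either average, is the answer.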
Don’t misunderstand me. I’m not saying you shouldn’t use averages, or other convenience measures, to address your research questions. I just want to illustrate some ways you can make the most of your data. And, bear with me, because this is where things get controversial… I’m going to argue for using both qualitative and quantitative data.
It’s easy to get the impression that numeric data is the only data. Our culture is all about “big data” these days. Harvard Business Review claims that data scientists have the sexiest job of the 21st century. CNN Money and Forbes claim analytics-related jobs are slated for the most growth. However, it’s important to remember that data comes in all shapes and sizes. In other words, “data” is not synonymous with “quantitative data.” Getting a comprehensive sense of any situation requires examining information from as many angles as possible. In other words, you should be reviewing both students’ quantitative and qualitative data. The weakness of one is the strength of the other: Where quantitative data allows for easy comparisons and summations, it lacks the richness of meaning so characteristic of qualitative data.
By richness of meaning, I’m referencing the ways in which qualitative data reveals contextual clues to put your data in perspective and offer opportunities for connection. For instance, going back to the running example, your opinion about my pace might be completely different depending on the other details I provide. What if that was my pace on an indoor, temperature-controlled, flat track? Alternatively, what if that was my pace for a run in the smothering heat of August along a neighborhood route with an elevation gain of 340 feet? See how that detail or “jagged information” adds another layer to our story that we simply can’t get from, “I had an average pace of 9:34”?
Hopefully I’ve illustrated that qualitative and quantitative data are complementary to one another. At the very least, I hope it’s clear that there’s more than one type of data, and that each type offers unique perspective. Here are a couple of examples where educators can use both types of data to draw conclusions.
Recently, a district approached me for a report summarizing teachers’ performance evaluations. The request was pretty straightforward: send back data indicating how many times each teacher received a positive mark and/or negative mark on each performance standard. At first, I thought this was like most reports of teacher performance. Then, the requester added, “When you see that any teacher had 3 or more positive marks on any one performance standard, I want you to flag that area as a ‘strength’ for that teacher.”
Performance evaluations tend to be a controversial topic. Critics raise concerns about the validity of results due to their subjectivity. Love them or hate them, performance reviews are probably here to stay. The district in this example had a creative solution to the concern of subjectivity: Instead of disregarding performance evaluations due to their subjective nature, the district reframed what qualified as a strength. Quantifying their qualitative data, by establishing a standard threshold, allowed them to increase their confidence in the validity of the results.
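The district’s threshold rule is simple to express in code. This is only a sketch of the idea, not the actual report logic: the teacher names, standards, and records below are all invented, and only the “3+ positive marks on one standard = strength” rule comes from the example above.

```python
# Hypothetical evaluation records: (teacher, standard, mark).
# A standard counts as a "strength" once a teacher logs 3 or more
# positive marks on it -- the district's threshold.
from collections import Counter

marks = [
    ("Lopez", "Classroom Management", "+"),
    ("Lopez", "Classroom Management", "+"),
    ("Lopez", "Classroom Management", "+"),
    ("Lopez", "Lesson Planning", "+"),
    ("Chen", "Classroom Management", "-"),
    ("Chen", "Lesson Planning", "+"),
]

# Count positive marks per (teacher, standard) pair.
positives = Counter(
    (teacher, standard) for teacher, standard, mark in marks if mark == "+"
)
strengths = {pair for pair, n in positives.items() if n >= 3}
print(strengths)  # {('Lopez', 'Classroom Management')}
```

Subjective marks go in; an objective, reproducible flag comes out. That’s the quantifying move the district made.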
The professional development literature has convinced me of the value of teachers discussing their lesson plans, activities, and classroom formative assessment results during team meetings to inform their pedagogy. Through this discussion, they identify novel ways to engage their students and perhaps avoid pitfalls in the classroom by learning from a peer’s trial and error. Student achievement data could complement teachers’ efforts to identify peers who would make good mentors or mentees for professional development.
For instance, teachers could review students’ percentile rank on benchmark results to identify grade- or subject-level peers whose students performed best on summative assessments. Teachers who consistently have students in the top percentiles term over term and year over year may have strategies that could help other educators. Examining the results by demographic groups might reveal other strengths and weaknesses.
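One way to operationalize “consistently in the top percentiles” is to check whether a teacher’s students clear a cutoff in every term. The sketch below is one possible reading of that idea; the teacher names, percentile data, the use of the median, and the 75th-percentile cutoff are all illustrative assumptions:

```python
# Hypothetical benchmark data: each teacher's students' percentile
# ranks per term. Cutoff and aggregation choice are assumptions.
from statistics import median

percentiles = {
    "Garcia": {"fall": [82, 77, 91, 88], "spring": [85, 79, 90, 84]},
    "Patel":  {"fall": [60, 72, 55, 68], "spring": [80, 76, 81, 74]},
}

CUTOFF = 75  # illustrative bar for "top percentiles"

# A teacher is "consistent" if their students' median percentile
# clears the cutoff in every term on record.
consistent = [
    teacher
    for teacher, terms in percentiles.items()
    if all(median(ranks) >= CUTOFF for ranks in terms.values())
]
print(consistent)  # ['Garcia']
```

A teacher flagged this way term over term is a natural candidate mentor; the same query with the comparison reversed surfaces candidate mentees.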
The takeaway from these examples is this: data is most useful to us when it's robust. You’ll have more confidence in your results, and ultimately your decisions, if you’ve answered your research question with both qualitative and quantitative data. In closing, I’ll make one last remark about making the most of your data.
Due to the sheer volume of data in education, it’s easy to feel pressured into creating an endless ream of reports. We think that, because we have all this data, we should be analyzing all this data. We busy ourselves with one-off reports thinking that any analysis offers at least marginal benefit. At a certain point, though, all of that data just becomes noise. We can do better than marginal gains. Learn to distinguish information that’s merely interesting from information that will help you drive change in your school or classroom. High-performing schools don’t just randomly analyze data; they establish measurable goals and analyze data to see where they stack up in relation to those goals. Before crunching numbers, ask yourself if the results generated will help your classroom, school, or district achieve its goals. In other words, distinguish signal from noise by asking, “Is this information interesting, or is it helpful?”
Let’s consider an example for a school with an attendance problem that’s set a goal of reducing chronic absenteeism. Administration is contemplating two different reports, one that’s merely interesting and one that is actually helpful in relation to their goal.
- Goal: Reduce the number of students who are chronically absent by 5%
- Interesting: Report listing students with the most absences this year
- Helpful: Report listing all students, with indicator flagging those who were chronically absent last year; ongoing report tallying students’ weekly or monthly absences
A list of students with the most absences might be interesting, but how does it help you reach your goal of reducing chronic absenteeism? Sure, you get a list of kiddos to fret over, but now what? There’s no plan to do anything with that information. On the other hand, knowing who has been historically chronically absent gives you a hint regarding who is likely to miss a lot of school this year, and you have a built-in monitoring system with the ongoing report, which sets you up nicely to engage in intervention efforts.
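The “helpful” report is straightforward to prototype. Here’s a minimal sketch of the two pieces it described: a flag for students who were chronically absent last year, plus a running tally of this year’s absences. Every name, record, and the 18-day threshold (roughly 10% of a 180-day year, a common definition of chronic absenteeism) is a stand-in, not real district data:

```python
# Sketch of the "helpful" absenteeism report: a last-year flag plus a
# running tally. All names, records, and thresholds are hypothetical.
from collections import Counter

CHRONIC_THRESHOLD = 18  # ~10% of a 180-day school year

last_year_absences = {"Ava": 22, "Ben": 4, "Cara": 19, "Dan": 9}
this_year_log = ["Ava", "Cara", "Ava", "Ben", "Ava"]  # one entry per absence

tally = Counter(this_year_log)
report = [
    {
        "student": student,
        "flag_chronic_last_year": last_year_absences[student] >= CHRONIC_THRESHOLD,
        "absences_this_year": tally[student],  # Counter defaults to 0
    }
    for student in sorted(last_year_absences)
]
for row in report:
    print(row)
```

Run weekly or monthly, this pairs the predictive hint (who was chronically absent before) with the monitoring system (who is accumulating absences now), which is exactly what makes it actionable toward the 5% goal.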
Rather than expending energy on multiple reports that offer small gains, focus on generating high quality reports that help you drive results.
Joy Smithson is known around SchoolStatus HQ as Dr. Data. With a keen eye focused on the real-world needs of students and educators, Dr. Smithson conducts research and creates custom reports and analysis for data-driven educators. Check out how SchoolStatus serves data needs here.