Improvement Ratings: Which XC Programs Make the Most Progress from Year to Year?

For the last 5 years, I've been adding data to a formula I devised to measure the progress a cross country program makes from the end of one season to the end of the next. The resulting numbers, which I call Improvement Ratings, have helped shape my preseason top 25 rankings and added context to the yearly discussions about the best teams in the state. In theory, Improvement Ratings account for all the things that can make a program better or worse: transfers, injuries, coaching changes, etc. I've compiled the stat over 3-year, 5-year, and 7-year periods (and soon we'll have it for a whole decade, which will be fun!).

Here's how it works:

  • I start by comparing a team's returning 5-runner average time from the end of one season to the team's final top 5 average at the end of the next season. This gives me the team's raw improvement over that year, measured against a baseline of returning runners only. If the team loses or gains a top 5 runner in that span, it shows up in the improvement (or lack thereof).
  • The next step is to weigh that improvement against the competitive level of the team, under the principle that it is harder for top-flight teams to make big gains. If a team's returning runners are already well trained and successful, the law of diminishing returns kicks in - it's tougher for a 15:30 5K runner to drop 15 seconds than for a 17:00 runner to drop 30 seconds.
  • I make this adjustment for team strength by comparing the improvement to the team's finish at the state meet for the season in question. (Teams that don't make the state meet are given a place of 17.) This also helps adjust for school size - it's not fair to compare the improvement of a 4A school with 2,000 students to that of a 1A school with fewer than 200, simply because a larger enrollment means a higher likelihood of discovering new talent that inflates the team's improvement.
  • To determine the final rating over a given time period, I take the average improvement during those years and divide it by the average finish at the state meet (I actually use the square root of that second number, because it pulls the values into a tighter range and reins in the outliers). A worked sketch of the whole calculation follows this list.

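To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. Everything in it - the function names, the sample times, and the state-meet finishes - is my own hypothetical illustration, and I'm assuming times are tracked in seconds with a positive improvement meaning the team got faster.

```python
from math import sqrt

def yearly_improvement(returning_avg: float, next_final_avg: float) -> float:
    """Raw one-year improvement in seconds: the returning 5-runner average
    from the end of one season minus the final top-5 average at the end of
    the next season. Positive means the team got faster."""
    return returning_avg - next_final_avg

def improvement_rating(improvements: list[float], finishes: list[int]) -> float:
    """Average yearly improvement divided by the square root of the average
    state-meet finish (teams that miss the state meet count as 17th)."""
    avg_improvement = sum(improvements) / len(improvements)
    avg_finish = sum(finishes) / len(finishes)
    return avg_improvement / sqrt(avg_finish)

# Hypothetical 3-year run for one team (all top-5 averages in seconds):
seasons = [
    (1020.0, 1005.0),  # returning avg 17:00 -> final avg 16:45 (+15 s)
    (1010.0, 1002.0),  # +8 s
    (1008.0, 1000.0),  # +8 s
]
improvements = [yearly_improvement(r, f) for r, f in seasons]
finishes = [9, 5, 3]  # state-meet places in those same seasons

print(round(improvement_rating(improvements, finishes), 2))  # ~4.34
```

Note how the square root softens the denominator: a team averaging a 16th-place finish gets divided by 4 rather than 16, so the gap between state champions and back-of-the-pack qualifiers stays reasonable.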
It's certainly not a perfect measure of a program's ability to progress from season to season, which is why I always cross-check track times before I finish the preseason rankings. Still, I've found the Improvement Ratings useful for spotting patterns: what does it look like when a team is getting ready to break out to the next level? What does it look like when a long-time, successful coach leaves a program? What does it look like when a school just has a wave of talent pass through, versus a program that develops new talent consistently? I hope you find the stat as interesting as I have!


Size-Adjusted Improvement Ratings:

Girls 3-Year  -  Boys 3-Year

Girls 5-Year  -  Boys 5-Year

Girls 7-Year  -  Boys 7-Year