I gave it lots of thought; polls are meaningless.
But then again, so is the entire structure that is in place to determine the “national champion” in the top-tier of college football. They might as well hold a lottery and just pick two teams from a hat.
Polls are meaningless because it is all based on perception, something covered here at U.P. ad nauseam, based predominantly on which team is on television the most. In turn, the teams that are on television the most — at least in prime spots [not Tuesday nights] — are so-called power conference teams. Thus, you are going to have an abundance of big name teams at the top at the expense of strong, smaller-name teams. A recent table showing the over/undervaluation of teams backs up this point. And while teams like Boise State and TCU have begun to crack into the upper echelon, think of how long it took for that to happen. And, to be fair, this occurs in college basketball as well [see Gonzaga and Butler].
Now, over the past two years, we have maintained our own poll for college football. In 2009, we had actual pollsters. It was fun, but I could not get people to do it on a consistent basis [and the sampling was quite small]. So, in 2010, we ran a formula based on several factors, tweaking it as the season progressed. It worked well, but there still seemed to be something missing.
Consider the following: pre-season polls are as much a reflection of recent history (i.e. last season’s performance as well as the past couple of seasons) as they are a projection of the upcoming season. Remember Michigan in 2007 or Alabama in 2000: teams ranked in the top five in pre-season polls that ended up with dismal records. So, it is all a crapshoot to reshuffle the polls and guess which team should be number one.
The Harris Poll has the right idea by waiting a few weeks into the season before releasing its first poll. However, the flaw there is that it still appears to follow the herd of the other polls.
Since the “national champion” is just made up, and the polls reflect past seasons, why not create a ranking system that is a continuous process? Or, to put it another way, why not have a poll that directly takes into account recent history as well as the current season? I mean, since choosing the national champion in the FBS is no different than choosing Homer to drive the Monorail, why not do away with the poll system and have something more flexible?
Therefore, the new U.P. Top 23 will do just that. To hell with this arbitrary national championship. The poll seen here will take into account recent history, as well as the present season. It will weight conferences and recruiting and stats and bowl games evenly. No manipulation or perception.
Think about this for a moment. Because there is no playoff at the top tier of college football, determining a “champion” is similar to tennis or golf. Tiger Woods was the world’s number one player for years before his recent decline. Similarly, Roger Federer was the top dog in tennis year after year. The rankings did not start over every year; they were continuous.
Thus, the new Uncle Popov Top 23 will take into account the last three years of play. I chose three years over five years because a three-year span covers a full four years of a recruiting class [previous three years plus the current season]. Plus, it is not as cumbersome as five years.
The following measures are used in the formula:
- Winning Percentage
It makes sense that winning is everything. Thus, winning percentage carries the heaviest weight. The overriding portion here is winning percentage over the last three seasons, plus the current season’s winning percentage. In other words, by the end of the 2011 season, there will be four seasons’ worth of data to determine the number one team. After this season, the 2008 season will be dropped. The 2011 winning percentage will not become part of the equation until the first week in October [in order to give it time to balance out]; however, wins in 2011 will be added to the overall winning percentage beginning in week one.
There is also a balance among the three seasons’ winning percentages. While all three seasons are included (for this season: 2008, 2009 and 2010), more weight is given to the most recent season. I determined that it was best to lean more “what have you done for me lately” so that an isolated case of success in 2008 [e.g. Buffalo] does not overrate a team.
Additionally, a strength of schedule measure was implemented so that teams that feast on weak opponents do not become overrated as well. Part of this was used to remove FCS schools.
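To make the winning percentage measure concrete, here is a minimal Python sketch. The 0.5/0.3/0.2 recency weights and the ratio-style strength-of-schedule adjustment are my own illustrative guesses, not the actual U.P. values.

```python
# Hypothetical sketch of a recency-weighted winning percentage plus a
# strength-of-schedule adjustment. The specific weights are assumptions
# for illustration only.

def weighted_win_pct(season_pcts, weights=(0.5, 0.3, 0.2)):
    """Blend three seasons of winning percentage, most recent first,
    with more weight on the most recent season."""
    return sum(p * w for p, w in zip(season_pcts, weights))

def sos_adjusted(win_pct, opp_win_pct):
    """Scale a team's winning percentage by its opponents' combined
    winning percentage, relative to a .500 schedule."""
    return win_pct * (opp_win_pct / 0.5)

# A team that went .900 last year outranks one whose big year was 2008,
# and feasting on sub-.500 schedules gets discounted.
recent = weighted_win_pct((0.9, 0.6, 0.3))
stale = weighted_win_pct((0.3, 0.6, 0.9))
```

Under this sketch, the Buffalo-style team (great in the oldest season only) ends up well behind the team trending upward, even with identical raw records.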
- Conference Measure
A second measure is based on the conference. I wanted to be able to isolate conference power so that teams in “tougher conferences” can carry some of that weight. BUT, in the process, I also placed a measure on out-of-conference scheduling. So, while the SEC may have stiff competition within the conference, the same cannot be said about its OOC scheduling.
Again, the previous three seasons will be incorporated here. A team’s winning percentage within the conference will be measured against a conference weight; the same will be done for out-of-conference winning percentages. For example, the SEC had the highest conference weight while the Sun Belt had the lowest. So, even though Troy has a higher conference winning percentage than Georgia, Georgia’s conference weight is higher than Troy’s. The same applies to the OOC weight, where Air Force has a higher OOC weight than Alabama, despite the latter going undefeated over the past three seasons.
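The Troy/Georgia comparison can be sketched in a few lines. The weight values below are invented placeholders (the real ones are part of the secret formula); the point is only that a lower raw percentage times a higher conference weight can still come out ahead.

```python
# Illustrative conference measure: in-conference and out-of-conference
# winning percentages are each scaled by a strength weight for that
# conference. All numeric weights here are hypothetical.

CONF_WEIGHT = {"SEC": 1.00, "Sun Belt": 0.70}   # assumed values
OOC_WEIGHT = {"SEC": 0.80, "Mountain West": 0.95}  # assumed values

def conference_measure(conf_pct, conf_weight, ooc_pct, ooc_weight):
    """Combine weighted in-conference and out-of-conference records."""
    return conf_pct * conf_weight + ooc_pct * ooc_weight

# Troy at .750 in the Sun Belt vs. Georgia at .600 in the SEC:
troy_conf = 0.75 * CONF_WEIGHT["Sun Belt"]   # 0.525
georgia_conf = 0.60 * CONF_WEIGHT["SEC"]     # 0.600
```

So Georgia’s in-conference component still beats Troy’s despite the worse raw percentage, which is exactly the behavior described above.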
- Bowl Games
I felt it was necessary to separate the bowl games from the regular season. I opted NOT to separate out conference championship games since not every conference possesses one.
With the bowl games, teams are awarded a “bonus” simply for making it to a bowl game. There is weight given to winning a bowl game, but losing a bowl game will give a team some points as well [think of how the NHL awards points for overtime losses]. I decided against weighing “prestigious” bowl games more heavily because sometimes teams will not bother putting much effort into a bowl game [see Alabama v. Utah in the 2009 Sugar Bowl]. Thus, all bowl games — including the arbitrary BCS Championship Game — are initially weighed evenly.
The variance comes from a conference bowl weight. With this, a conference’s bowl winning percentage is taken into account. Thus, because the SEC does well in bowl games, its teams will get a boost while the Big Ten gets hammered. Hey, if conferences can be rewarded with multiple bowl games, then they should also be punished for consistently losing them!
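The bowl measure described above can be sketched like so. The point values (appearance bonus, win points, partial loss points) and conference weights are my own illustrative numbers, not the formula’s.

```python
# Hypothetical bowl scoring: a flat bonus for reaching a bowl, full points
# for a win, partial credit for a loss (like the NHL's overtime-loss
# point), then the total scaled by a conference bowl weight. All values
# are assumptions for illustration.

APPEARANCE_BONUS = 1.0
WIN_POINTS = 2.0
LOSS_POINTS = 0.5   # partial credit even in a loss

def bowl_score(made_bowl, won, conf_bowl_weight):
    """Points a team earns from its bowl appearance, if any."""
    if not made_bowl:
        return 0.0
    pts = APPEARANCE_BONUS + (WIN_POINTS if won else LOSS_POINTS)
    return pts * conf_bowl_weight
```

Note the conference multiplier at the end: an SEC team (high bowl winning percentage, so a weight above 1) gets more out of the same result than a Big Ten team with a weight below 1.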
- Recruiting
I nearly opted NOT to use this as a measure. Recruiting and the “star system” are no different than pre-season polls: guessing! But it is a glimpse into the future. So, even if it is the lowest weight in the formula, it was worthwhile to include it.
For this measure, I opted to use Rivals.com’s rating system. Also, instead of simply measuring all 120 teams against one another, I measured each team against its conference counterparts. For a multitude of reasons, it is unfair to measure Alabama and Texas against Boise State and TCU. Thus, it is best to measure Alabama against SEC teams and Boise State against WAC and now Mountain West teams.
The method I employed will not punish a team like Texas because of their high number of top-notch recruits — the conference weight will still push them up — but it brings teams like TCU to a much more level playing field when comparing the two; like examining per capita GDP rather than total GDP.
For my sanity, the recruiting measure will only be updated once a month.
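The per-conference comparison (the “per capita GDP” idea) amounts to normalizing each team’s recruiting points within its own conference. A minimal sketch, with made-up point totals and a standard z-score standing in for whatever the formula actually does:

```python
# Hypothetical per-conference recruiting normalization: each team's
# rating is expressed relative to its own conference's mean, so Alabama
# is compared to SEC teams and Boise State to its own conference mates.

from statistics import mean, pstdev

def conference_relative(ratings):
    """Map {team: recruiting points} to {team: z-score within the
    conference}. Scores are then comparable across conferences."""
    mu = mean(ratings.values())
    sd = pstdev(ratings.values()) or 1.0  # guard against a zero spread
    return {team: (pts - mu) / sd for team, pts in ratings.items()}

# Invented point totals, for illustration only:
sec = conference_relative({"Alabama": 2900, "Vanderbilt": 1400,
                           "Georgia": 2500})
```

A dominant recruiter still tops its own conference, but a mid-major that leads its conference is no longer buried under the raw totals of the power schools.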
- Team Statistics
Finally, team statistics are brought into play, just as they were last year. These are not straight stats; the offensive and defensive stats are measured against the stats of the opponents. Thus, while a team like Boise State may have dominant offensive stats, they come against comparatively inferior defenses. Ergo, they are adjusted accordingly.
This year, I expanded the statistics to include measures for passing and rushing rather than relying solely on total stats.
Also, just like last year, a measure is included for point differential.
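The opponent adjustment can be sketched as a simple ratio: scale a team’s output by how stingy the defenses it faced were, relative to the national average. The ratio form and the numbers below are my own illustration, not the actual adjustment.

```python
# Hypothetical opponent-adjusted offense: gaudy yardage against soft
# defenses is discounted; the same yardage against stout defenses is
# boosted. Point differential is included as its own simple measure.

def adjusted_offense(yards_per_game, opp_def_allowed_avg, national_avg):
    """Scale offensive yards by the quality of opposing defenses,
    measured against the national average yards allowed."""
    return yards_per_game * (national_avg / opp_def_allowed_avg)

def point_differential(points_for, points_against, games):
    """Average scoring margin per game."""
    return (points_for - points_against) / games

# 500 yards/game against defenses that give up 450 on average is worth
# less than 400 yards/game against defenses that give up only 350:
soft_schedule = adjusted_offense(500.0, 450.0, 400.0)
hard_schedule = adjusted_offense(400.0, 350.0, 400.0)
```

The same idea runs in reverse for defensive stats (points and yards allowed, adjusted for the offenses faced), and separately for the passing and rushing splits mentioned above.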
- THE RATING!!!
All of this is plugged into my “secret formula.” As I noted last year, I will anally protect my formula and not publicly release it, just like those other computer polls. There is some fun in the secrecy, anyway. It is actually a two-step process.
With all of the above, an initial rating is produced. That rating is then plugged back into a team’s schedule to determine quality wins and quality losses. Again, FCS teams are removed from the wins. However, a loss to an FCS team does carry weight…a lot of negative weight! After this, a new formula is run and the final ranking is produced.
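The second step of the process might look something like the sketch below: take the initial rating, walk the schedule, credit quality wins, dock quality losses, drop FCS wins entirely, and hammer FCS losses. The penalty value and the 0.1 scaling are invented for illustration.

```python
# Hypothetical second pass: adjust an initial rating using the ratings
# of the teams on the schedule. FCS opponents are represented as None;
# FCS wins are simply ignored, FCS losses cost heavily. All constants
# are assumptions, not the real formula.

FCS_LOSS_PENALTY = -5.0   # "a lot of negative weight"

def quality_adjustment(initial_rating, schedule):
    """schedule: list of (opponent_rating or None for FCS, won: bool)."""
    adj = 0.0
    for opp_rating, won in schedule:
        if opp_rating is None:          # FCS opponent
            if won:
                continue                # FCS wins are removed outright
            adj += FCS_LOSS_PENALTY     # an FCS loss hurts badly
        elif won:
            adj += 0.1 * opp_rating     # quality win: beat a good team
        else:
            adj -= 0.1 * (1.0 - opp_rating)  # worse to lose to a bad team
    return initial_rating + adj
```

Beating a highly rated team helps more than beating a doormat, and losing to a doormat costs more than losing to a contender, which is the whole point of feeding the initial ratings back through the schedule.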
All of this stated, no one really gives a shit about our poll! But, at least I explained from where I am coming. The first U.P. Top 23 will be released next week and will actually include all 120 teams!