Tuesday, December 7, 2010

We need more transparency in the BCS, computers and polls

This week's snafu, in which one of the computers missed a score in its initial week 14 rankings and as a result incorrectly placed LSU at #10 ahead of Boise State, has heightened calls for more transparency and validation of the BCS number crunching. You can add my voice to that chorus.

Interestingly, Jerry Palm caught this only because the one computer that does publish its algorithm, the Colley Matrix, was the one that made the error.  Kudos to Wes Colley for making his algorithm public, but how do we know that none of the other five computers made a similar error this week or in past weeks?  Some of the computers certainly produce strange results at times, but as this case shows, even small errors that don't look odd can affect the BCS rankings.  The BCS seems to simply trust the computer operators to get it right.

One of the oddities that doesn't make sense to me is in the Billingsley Report, specifically how it handles bye weeks.  For example, in week 14 many teams didn't play and, as you can see, their ratings didn't change.  This means he is seemingly not incorporating the results of past opponents after the head-to-head game in his ratings.  For example, Stanford did not play in week 14 but several prior opponents did, yet Stanford's rating is unchanged.  Does this mean that Boise State's rating was based on Virginia Tech as it stood at the time of their head-to-head game, and that Boise State's rating did not benefit from Virginia Tech turning out to be a very good team?  That seems to be a major flaw.
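To make that concrete, here's a toy sketch in Python.  Billingsley's actual formula is unpublished, so the schedule_credit function and the numbers below are purely hypothetical; the only point is the difference between reading an opponent's rating at game time and reading it now.

```python
# Hypothetical illustration -- NOT Billingsley's actual (unpublished) formula.
# Schedule credit = average rating of the opponents a team has beaten,
# under two policies for *when* that rating is read.

def schedule_credit(beaten_opponents, rating_snapshot):
    """Average rating of defeated opponents, given some rating snapshot."""
    return sum(rating_snapshot[opp] for opp in beaten_opponents) / len(beaten_opponents)

beaten = ["Virginia Tech"]  # Boise State's week-1 win

ratings_at_game_time = {"Virginia Tech": 0.45}  # looked mediocre in week 1 (made-up number)
ratings_now          = {"Virginia Tech": 0.80}  # proved very good by week 14 (made-up number)

print(schedule_credit(beaten, ratings_at_game_time))  # frozen policy:      0.45
print(schedule_credit(beaten, ratings_now))           # retroactive policy: 0.80
```

Under the frozen policy Boise State is stuck with the week-1 snapshot forever; under the retroactive policy its credit grows as Virginia Tech's season proves out.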

Billingsley does explain this somewhat, admitting that what happens to an opponent after you play them is "water under the bridge".  He uses an example where the makeup of a team changed due to injury, and that is valid, but it is certainly not the norm, so he has built a system that handles the exception at the expense of the common case, as most teams don't lose their star player to a major injury.  Secondly, particularly early in the season, where a team is rated and ranked may not yet be accurate, so basing another team's rating on that snapshot and never allowing it to change to reflect how the team really performs seems very iffy.  He goes on to say that he does have a special rule whereby an undefeated team on a bye week can have its rating change to make sure it stays ahead of the teams it has beaten.  This sounds nice on the surface, but a system built from special rules and exceptions isn't sound.  Why not a special rule for one-loss teams too, to keep them ahead of all the teams they've beaten?

Billingsley is also a system that uses a pre-season rating/ranking.  Fundamentally I have nothing against that; my system uses one too, but by the end of the season the pre-season rating is not a factor at all.  Because Billingsley doesn't factor in a team's performance after you play them, the pre-season ranking remains a huge factor all year.  For example, Alabama was his #1 last year, which is fine, but that means South Carolina not only got credit for beating an undefeated #1 team when they did it, but still gets that credit today even though Alabama ended up as a 3-loss team.  This seems to be another big flaw.
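For contrast, here is a sketch of how a pre-season prior can be phased out entirely.  The linear fade below is just an illustration of the general technique, not the exact schedule my system (or any of the BCS computers) uses.

```python
# One simple way to phase out a preseason prior: fade its weight linearly to zero.
# (An illustration of the general technique, not any specific system's formula.)

def blended_rating(preseason, results_based, games_played, season_length=12):
    """Weight the preseason prior by how little of the season has been played."""
    w = max(0.0, 1.0 - games_played / season_length)
    return w * preseason + (1.0 - w) * results_based

print(blended_rating(0.90, 0.60, games_played=1))   # early season: ~0.875, mostly the prior
print(blended_rating(0.90, 0.60, games_played=12))  # season's end: 0.60, the prior is gone
```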

Like Jerry Palm, I've looked at Wes Colley's algorithm and implemented it myself, and it is mathematically sound, with none of the exceptions or special rules Billingsley has.  It also allows a team's rating to change based on how prior opponents play the rest of the year.  So despite the administrative error that started this whole debate, I applaud him for having a sound algorithm and publishing it.  And when I get a chance, I'll put together a comparison of the six computers (based on the limited info available).
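For the curious, the published method boils down to solving one linear system, which is exactly why anyone can check the numbers.  Here's a minimal Python sketch; it skips real-world wrinkles such as how games against lower-division opponents are treated.

```python
import numpy as np

def colley_ratings(games, teams):
    """Solve the Colley system C r = b for a list of (winner, loser) games.

    C[i][i] = 2 + (total games played by team i)
    C[i][j] = -(number of games between teams i and j)
    b[i]    = 1 + (wins_i - losses_i) / 2
    """
    idx = {t: k for k, t in enumerate(teams)}
    C = 2.0 * np.eye(len(teams))  # the +2 on every diagonal entry
    b = np.ones(len(teams))       # the +1 in every entry of b
    for winner, loser in games:
        w, l = idx[winner], idx[loser]
        C[w, w] += 1; C[l, l] += 1   # each team played one more game
        C[w, l] -= 1; C[l, w] -= 1   # one more head-to-head meeting
        b[w] += 0.5; b[l] -= 0.5     # the (wins - losses)/2 term
    return dict(zip(teams, np.linalg.solve(C, b)))

# Tiny example: A beats B, B beats C.
print(colley_ratings([("A", "B"), ("B", "C")], ["A", "B", "C"]))
```

On this tiny example the ratings come out to 2/3, 1/2, and 1/3, and because everything flows through one linear solve, an opponent's later results automatically feed back into every team's rating.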

But the computer operators aren't the only place we need transparency.  The polls, too, should make their voting public each week.  Thankfully, the AP does, but it isn't used in the BCS anymore.  The coaches poll tried not to publish its final regular-season ballots but caved to the pressure.  Even that doesn't fully address the situation: where teams sit in the poll or the BCS influences where pollsters vote them in subsequent weeks, and the ballots from the formative early-season polls aren't public, so things can still be skewed by pollsters with agendas.  And as we saw in the final coaches poll, there is certainly bias.
