College Football: Understanding How the BCS Rankings Work


Following every exciting autumn Saturday of college football comes a confusing Sunday filled with questions about the Bowl Championship Series rankings and how they work. The questions most fans have center on how the BCS Standings are compiled and how those rankings affect their beloved team.

Putting logic to the BCS rankings can only make college football more confusing than it needs to be. The following is an attempt to explain how the BCS rankings are tallied week after week, for better or for worse.

What components contribute to the BCS rankings?

The BCS Standings are composed of three parts: the Harris Interactive College Football Poll, the USA Today Coaches’ Poll, and six computer rankings. After all the games of a given week have been played, the voter polls, which account for two-thirds of the BCS rankings, and the computer analysis, which accounts for the remaining one-third, are combined to rank the teams in the running for the BCS National Championship Game or an at-large Bowl Championship Series bid in any given week.

The Associated Press Poll has not been an active part of the BCS formula since 2005.


Understanding Each Part of the Process

The Harris Interactive College Football Poll (HICFP) is a weekly poll of what voters believe are the top 25 Football Bowl Subdivision (FBS) teams in the nation. The poll is cast by roughly 114 to 115 voters from across the nation in any given year. Once their votes are counted, their opinions account for one-third of the BCS Standings.

In 2011 there are 115 voters who actively participate week in and week out in the Harris Interactive Poll. The 115 members are randomly selected before the season begins from nominations put forth by every FBS team, including independent schools. The panelists include former administrators, players, and coaches along with current and former media members.

Some of the current voters of the HICFP include Tommie Frazier, Lloyd Carr, Jackie Sherrill, and Tommy Bowden.

The voting is calculated by awarding 25 points for each first-place vote, 24 for each second-place vote, and so on down to 1 point for each 25th-place vote. Each team’s point total is then divided by a perfect score for that year, 2,875 for 2011 (115 voters × 25 points), to produce its BCS share.

For example, the Arkansas Razorbacks’ HICFP share for week five is .7989. This number is calculated by dividing 2,297 (their total HICFP points) by 2,875 (a perfect score).

HICFP Formula: 2,297/2,875 = .7989 share of the votes

Investigating the totals a step further, dividing the Razorbacks’ 2,297 points by the 115 voters yields an average of 19.974 points per voter, good for an average ranking of 7th in the HICFP.
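The arithmetic above can be sketched in a few lines of Python (a minimal illustration: the point totals are the 2011 Arkansas example from the text, and the voter count and scoring scale are as described above):

```python
HICFP_VOTERS = 115   # 2011 panel size
MAX_POINTS = 25      # points awarded for a first-place vote

def poll_share(team_points: int, voters: int) -> float:
    """Divide a team's poll points by a perfect score (voters x 25)."""
    perfect_score = voters * MAX_POINTS  # 2,875 for the 2011 HICFP
    return team_points / perfect_score

arkansas_points = 2297
share = poll_share(arkansas_points, HICFP_VOTERS)
per_voter = arkansas_points / HICFP_VOTERS

print(f"{share:.4f}")      # 0.7990 (the article truncates to .7989)
print(f"{per_voter:.3f}")  # 19.974 points per voter
```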

The Coaches’ Poll

The USA Today Coaches’ Poll works in the same manner as the HICFP. The Coaches’ Poll is composed of 59 voting coaches drawn from the 120 FBS teams. Once the 59 ballots are counted, points are awarded for each ranking within the top 25, starting with 25 points for first place, 24 for second, and so on.

A perfect Coaches’ Poll score is 1,475 (59 voters × 25 points). Sticking with the Arkansas Razorbacks, their point total on November 13 was 1,170.

Coaches’ Poll formula: 1,170/1,475 = .793 share of the votes.

Breaking down the analysis of all six computers

The computer rankings are the third and final portion of the overall BCS equation. There are six computer systems that contribute to the rankings, provided by Peter Wolfe, Jeff Sagarin of USA Today, Kenneth Massey, Anderson & Hester, Richard Billingsley, and the Colley Matrix.

The weekly computer results vary because each system is programmed differently. For example, Peter Wolfe’s formula is not completely known to the general public, but what is known is that he weighs previous outcomes, game locations, common opponents, and the probability of winning versus losing.

Jeff Sagarin’s formula includes strength of schedule, location of the games, wins, and losses.

The Colley Matrix claims to be unbiased toward any team or conference. One of its more interesting attributes is that it applies no preseason rankings, so every team starts the season even in the computer.

Home field advantage is not a key component of the Colley Matrix rankings, but strength of schedule is highly regarded. A five-loss team like Texas A&M could be ranked higher than a two-loss team because of its schedule.

The Billingsley Report could be argued to be one of the most forgiving computer ranking systems. It is run by Richard Billingsley of the College Football Research Center. He awards teams points based on last year’s results as a starting point for each team’s ranking, then adjusts for wins and losses during the current season.

Strength of schedule is an important component of the Billingsley Report. If “Team A” loses to “Team B” in week two of the season, Team A could still bypass Team B the following week depending on the quality of its wins over Teams C and D.

Jeff Anderson and Chris Hester factor in wins, losses, home field advantage, records vs. Top 25 teams, records vs. non-Top 25 teams, and conference strength of schedule. When a friend starts talking about strength of schedule three teams removed, they are using the Anderson and Hester approach to college football rankings.

Kenneth Massey’s rankings place more emphasis on games later in the season than on games at the beginning of the season. He also takes into account location, wins, and losses.

Per BCS rules, none of the computer rankings factor in margin of victory. Margin of victory is thought to introduce into the computer results the same bias found in human polls.

Once the six computers have produced their Top 25, the top ranking and lowest ranking are removed for each team. If Oklahoma State received one first place ranking, four second place rankings, and one third place ranking, the first and third place votes would be removed leaving four second place finishes per the computer listings.

The point system awards each team 25 points for a first place ranking, 24 for second, etc… just like the HICFP and Coaches’ Poll.

In this example OSU has received 96 points after four second place rankings (4 x 24). The total, 96, is then divided by a perfect score of 100 for a final total.

96/100 = .96
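The trimming and scoring steps above can be sketched as follows (a minimal illustration of the rule as described, using the Oklahoma State example):

```python
def computer_score(rankings: list[int]) -> float:
    """Drop a team's best and worst computer rankings, convert the
    remaining four to points (25 for 1st, 24 for 2nd, ...), and
    divide by a perfect score of 100 (four first-place rankings)."""
    trimmed = sorted(rankings)[1:-1]             # drop best and worst
    points = sum(26 - rank for rank in trimmed)  # 1st = 25, 2nd = 24, ...
    return points / 100

# Oklahoma State example: one 1st, four 2nds, one 3rd.
print(computer_score([1, 2, 2, 2, 2, 3]))  # 4 x 24 = 96 -> 0.96
```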

The final totals from the HICFP, Coaches’ Poll, and computer rankings are added together then divided by 3 for a team’s weekly BCS Ranking.
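Putting the three components together looks like this (the Harris and Coaches’ shares are the Arkansas figures from the text; the computer share is a hypothetical stand-in, since the text does not give one):

```python
harris_share = 0.7989    # Arkansas HICFP share from the text
coaches_share = 0.793    # Arkansas Coaches' Poll share from the text
computer_share = 0.80    # hypothetical value, for illustration only

# The weekly BCS average is the simple mean of the three components.
bcs_average = (harris_share + coaches_share + computer_share) / 3
print(f"{bcs_average:.4f}")  # 0.7973
```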

Curiously missing from all of the input fed into the computers is each team’s overall offensive and defensive rankings week after week. All statistics can be skewed, but taking into account how well one team stops the run but not the pass, or how another team excels with its passing attack, would seemingly make the formulas more robust and interesting.

A Top 10 team may be able to run the table within their conference but could have trouble against another team outside of their conference due to schemes. Should this be factored into each team’s BCS rankings?

For example, the University of Houston is No. 10 in the current USA Today Coaches’ Poll. Should the Cougars’ offensive and defensive stats be included in their weekly rankings as part of their strength of schedule?

On a linear comparison, the computers should be able to calculate how well or poorly Houston would perform against other Top 10 teams such as LSU, Oklahoma State, and Alabama.

Part of what makes college football fun is the debatable stance each fan, pundit, or unbiased observer has toward the best and worst teams in college football. Through all of the highly thought-out ways to “correctly” calculate the rankings of the best teams, most fans are going to feel slighted one way or another each week.

Does the BCS always get it right? That’s another debate for the ages.