Between the Lines: Tournament Special

Once again this year, the NCAA used an objective selection process (the decision regarding Niagara notwithstanding) to pick the national tournament field. And once again, U.S. College Hockey Online’s Pairwise Rankings (PWR) correctly predicted the field that was selected, for the fourth straight year.

Why is this so accurate? Well, it’s not because anyone at USCHO is brilliant or clairvoyant, but because PWR is essentially the same system the men’s ice hockey selection committee uses.

I’m still not sure enough people understand this point, and it continues to frustrate me. Mind you, there are plenty of people who do, more and more each year, both among hockey insiders and fans. But too often I hear misconceptions about the process, not only from fans, but from media members and even some hockey people.

The misconceptions fall into two camps.

Some people have the impression that the Pairwise is the process. So, let’s be clear: the selection committee does not take USCHO’s PWR and use it to determine the field, and I don’t want to foster that misimpression. PWR works the other way around; it takes what the committee does and represents that process in an empirical way, for all to see.

The goal of PWR is not to drive the process. The process drives PWR. USCHO taking credit for picking the field would be like a newspaper printing the NHL standings, then gloating that it accurately predicted who would make the Stanley Cup playoffs. We’re glad PWR works, but, in theory, it’s simply the standings.

But, just as there is a mistaken impression by some that the committee uses PWR directly, there is also a misimpression by others that PWR is irrelevant — that it is merely USCHO’s own self-created “Power Rankings” system.

This is equally incorrect. PWR is relevant because it is a representation of the process … it is the “standings,” so to speak. And it is accurate.

Unfortunately, for some reason, some hockey people have a reluctance to acknowledge that PWR is, in the end, an accurate representation of the same system the committee uses. I’m not sure why that is. Is it because they are afraid that admitting so makes them look bad — which it doesn’t at all — or are they just unaware of what PWR is really doing?

Either way, distancing themselves from the PWR and failing to acknowledge what it does is a disservice to the process and to those trying to understand it. It only clouds the issue for those perhaps already confused by the often complicated and arcane objective criteria that lie beneath the selection process.

Unlike other sports, men’s ice hockey has preferred a completely objective process for picking its NCAA Tournament field. Obviously, if selection is done by a logical, step-by-step progression, there must be a way to define that process clearly.

A number of years ago, a fellow by the name of Keith Instone, who worked at Bowling Green, urged the NCAA to shed some light on the objective selection process it created. By doing so, he figured he could help educate fans, many of whom still believed that conspiracy theories or backroom deals determined who made the tournament. In exchange for this openness, Instone offered to create a computer program for the selection committee that would sort all of the data and quickly spit it out in a format that was easier to comprehend.

When USCHO came along, it took this one step further, quantifying the process with something it called the “Pairwise Rankings,” allowing everyone to see what was happening. Thus, PWR is simply a “crack,” or a piece of “reverse engineering” (to use cyber-geek terms), of a process that already existed.

That this information was now out in the open was tremendous for college hockey fans, coaches and players alike. But the NCAA did nothing to publicize the fact, and despite a lot of education, the misconceptions remain.

For one, there are those who don’t understand the selection process at all and still believe the committee makes subjective decisions. Three years ago, after Vermont lost in the ECAC quarterfinals, it took a week to convince school officials, media and fans that Vermont was still a lock to make the tournament. Some still didn’t believe, as selection day came, that the criteria had Vermont way in front of most teams and that the committee wouldn’t care that Vermont lost early in its playoffs.

We’re slowly educating these people, as even ESPN2 correctly identified the five selection criteria this year.

But then there are those who understand the PWR process and the criteria used, but either think that PWR is the process, or that PWR is irrelevant to the process.

One hockey person recently knocked the PWR, saying it was the RPI that decided the teams. In fact, the committee uses five criteria as a whole, not just RPI, to create a “comparison” between two teams. Sure, the RPI may enable the committee to quickly see the teams that are clearly in. But when it comes down to that “bubble,” the committee starts looking at those individual “comparisons.”
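To make the idea of a single “comparison” concrete, here is a minimal sketch in Python. The one-point-per-criterion scoring, and the way the criteria are passed in as functions, are illustrative assumptions on my part, not the committee’s exact recipe.

    # Illustrative sketch of one two-team "comparison." Each criterion is
    # a function that looks at the two teams and returns the better one
    # (or None on a tie); the team winning more criteria wins the
    # comparison. Which criteria to use, and how ties are broken, are
    # assumptions here, not the committee's actual recipe.
    def compare(team_a, team_b, criteria):
        """Return the team that wins the comparison, or None on a tie."""
        a_points = b_points = 0
        for criterion in criteria:
            better = criterion(team_a, team_b)
            if better == team_a:
                a_points += 1
            elif better == team_b:
                b_points += 1
        if a_points > b_points:
            return team_a
        if b_points > a_points:
            return team_b
        return None  # a real system would need a tiebreaker here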

If there’s an objective process, what’s wrong with acknowledging it’s an entirely objective process? And if there’s nothing wrong with acknowledging that, then there’s nothing wrong with acknowledging that PWR is accurately representing that process. That acknowledgment doesn’t make college hockey look bad, any more than acknowledging that the Boston Globe’s NHL standings are accurate makes the NHL look bad.

USCHO realizes that PWR isn’t exactly what the committee does. The committee doesn’t actually total all of the “comparison wins” and come up with a final number, as PWR does, and then take the top 12 and, presto, there’s the field. We know that. But when all is said and done, the same 12 teams are picked.

For those still confused or unconvinced, we can demonstrate this by breaking down the process. After the various conference tournaments ended this season, seven teams — Boston University, St. Lawrence, Michigan, Wisconsin, Maine, Michigan State and North Dakota — had earned automatic bids.

Then, the committee can usually identify some at-large teams from a cursory glance at the comparison numbers.

For example, this year — if we leave Niagara out of the discussion for the moment — Colgate, New Hampshire and Boston College won their comparisons with every other at-large team, making them obvious picks no matter what system you used to identify the “bubble” teams. That brings us to 10 teams already in the tournament, and leaves one sure at-large bid remaining.

The committee then takes the remaining Teams Under Consideration and makes the same kind of evaluation: a number of teams are clearly losing comparisons to all of the teams “above” them, and can be set aside. The teams you are left with form the bubble. In this year’s case, those were Rensselaer, St. Cloud and Mankato.

On the basis of the total set of criteria — what we call the individual “comparisons” — those three teams beat everyone below them; hence, these are the same three teams committee chair Bill Wilkinson identified as “the bubble” on ESPN2’s selection show. Specifically, when comparing those three teams to each other, St. Cloud defeats both other teams in head-to-head comparisons, making the Huskies the final team.
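Laid out in the same sketch form, that stepwise progression looks roughly like this. The compare() function is the illustrative one above, and the whittling-down of the Teams Under Consideration is simplified; this is a sketch of the logic described here, not the committee’s actual software.

    # Rough sketch of the committee's stepwise progression: automatic bids,
    # obvious at-large picks, then a bubble resolved by head-to-head
    # comparisons. Here compare(a, b) returns the winner of the two-team
    # comparison (the earlier sketch with the criteria already bound, e.g.
    # via functools.partial); the bubble-forming step is a simplification.
    def pick_field(auto_bids, at_large_pool, field_size, compare):
        field = list(auto_bids)
        pool = list(at_large_pool)

        # Obvious picks: candidates that win their comparison with every
        # other at-large candidate.
        obvious = [t for t in pool
                   if all(compare(t, u) == t for u in pool if u != t)]
        field.extend(obvious)
        pool = [t for t in pool if t not in obvious]

        # Set aside candidates that lose their comparisons to every other
        # remaining candidate; the teams left over form the bubble.
        losers = [t for t in pool
                  if all(compare(t, u) != t for u in pool if u != t)]
        bubble = [t for t in pool if t not in losers]

        # Last spot(s): a bubble team that wins its comparisons with the
        # other bubble teams gets in (St. Cloud, in this year's example).
        while len(field) < field_size and bubble:
            sweep = [t for t in bubble
                     if all(compare(t, u) == t for u in bubble if u != t)]
            if not sweep:
                break  # no clean sweep; dig deeper into the criteria
            field.append(sweep[0])
            bubble.remove(sweep[0])
        return field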

This is the objective, criteria-based process that all of the coaches have wanted, and that the committee adheres to.

Where the committee and PWR diverge is that PWR totals these comparison wins from the beginning, for all teams as a whole. But when you total the number of comparison wins, via PWR, you get the same result. If you step through the process, you’ll realize that PWR does in one step what the committee does in a number of steps. The end result, however, is the same, not by chance, but because PWR is designed to mimic the process. That’s the goal. It’s like saying 4*6=24 or 4+4+4+4+4+4=24. Same thing.
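In sketch form, that one-step version is just a tally: total each team’s comparison wins against every other team under consideration, sort, and take the top of the list. Again, compare() is the illustrative comparison from earlier, and this is a sketch of the idea, not USCHO’s actual code.

    # Sketch of the PWR-style tally: one pass that totals comparison wins
    # for every team under consideration, then takes the top N. As above,
    # compare(a, b) returns the winner of the two-team comparison.
    def pairwise_rankings(teams, compare, field_size=12):
        wins = {t: 0 for t in teams}
        for a in teams:
            for b in teams:
                if a != b and compare(a, b) == a:
                    wins[a] += 1
        # Rank by total comparison wins, highest first, and take the field.
        ranked = sorted(teams, key=lambda t: wins[t], reverse=True)
        return ranked[:field_size]

Fed the same teams and the same comparisons, this tally and the stepwise sketch above are meant to land on the same field; that is the 4*6 analogy in code form.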

Media members who ask about the selection process and PWR are sometimes told by hockey people that there is no connection between the two — that PWR doesn’t truly represent what happens. This helps no one: it makes the committee look bad to people who know better, makes USCHO look bad to people who think we’re making false claims, and only confuses everyone else further.

So there is still a strong need, I believe, for everyone to get on the same page. There is no need to give the PWR too much credit, and there’s no need to disregard it either. Once everyone can come to the same understanding, we can all do much better justice to the process.

(I’ve been told it’s possible to devise a complex and rare scenario whereby the PWR process and the committee process don’t exactly mesh, but it’s impossible to know because it’s never happened and, while we’re 99.9 percent sure, no one is 100 percent sure exactly what the committee does behind closed doors.)

As a side issue, my wish is that the committee would just take the total Comparison Wins — as PWR does — take the top 12, and use that to pick the field. Not because it would change the end result, but because it would make everyone’s life easier. Since the end result is the same, why not just take the numbers, list them as PWR does, and pick the teams that way? That would eliminate the mystery and save the committee a lot of time.

The seeding process would still be a bit complicated, because of the desire to avoid intraconference matchups and the potential crowd-maximization issues, but you’d be able to pick the field in about 10 seconds. And you could finally, once and for all, assure coaches around the country that the figures they see on that “gosh-darn Internet” in the weeks leading up to selection day are accurate, giving everyone a sense of just what they have to do to get in.

The committee created this process so that there would be no more subjectivity. This is what the coaches wanted, so they must all know there is no subjectivity, whether or not they understand the mechanics behind it. Some people, of course, do get it, as evidenced by those coaches, officials and fans who come to USCHO throughout the season for information on where their teams stand.

The PWR is merely trying to bring that process out of the closet. If the committee were to take the next logical step, and quantify the data in the same way PWR does, I believe that would be the final step in opening up the process for everyone to see once and for all. The NCAA could officially publish the data, and run them each week, just like they do for the Bowl Championship Series standings in football.

If the NCAA decides tomorrow to publish the criteria data, just like PWR does, I’ll be the first to applaud them and make believe this article never existed.

On a related issue, the committee’s process — and, consequently PWR — may be seeing its final days. As I wrote in last week’s BTL, the current set of criteria was a fair and just way of picking the tournament field when there were just four relatively equal conferences. But it is unable to handle the influx of minor conferences with wildly divergent strengths of schedule, making it necessary to create subjective exceptions to the rules — like the one this year that admitted Niagara while barring Quinnipiac. The problem will only get worse as hockey expands.

As a result, if the committee wishes to maintain its current goal of an objective system, it may be time to devise an algorithm that is able to handle all of the teams.

Such a system can be created; in fact, there are plenty of them out there, but I’ll leave that to the math Ph.D.s.