Sports rating system

A sports rating system is a system that analyzes the results of sports competitions to provide ratings for each team or player. Common systems include polls of expert voters, crowdsourcing non-expert voters, betting markets, and computer systems. Ratings, or power ratings, are numerical representations of competitive strength, often directly comparable so that the game outcome between any two teams can be predicted. Rankings, or power rankings, can be directly provided (e.g., by asking people to rank teams), or can be derived by sorting each team's ratings and assigning an ordinal rank to each team, so that the highest rated team earns the #1 rank. Rating systems provide an alternative to traditional sports standings which are based on win-loss-tie ratios.

UCF at the Texas goal line
College football players in the United States

In the United States, the biggest use of sports ratings systems is to rate NCAA college football teams in Division I FBS, choosing teams to play in the College Football Playoff. Sports ratings systems are also used to help determine the field for the NCAA men's and women's basketball tournaments, men's professional golf tournaments, professional tennis tournaments, and NASCAR. They are often mentioned in discussions about the teams that could or should receive invitations to participate in certain contests, despite not earning the most direct entrance path (such as a league championship).[1]

Computer rating systems can tend toward objectivity, without specific player, team, regional, or style bias. Ken Massey writes that an advantage of computer rating systems is that they can "objectively track all" 351 college basketball teams, while human polls "have limited value".[2] Computer ratings are verifiable and repeatable, and are comprehensive, requiring assessment of all selected criteria. By comparison, rating systems relying on human polls include inherent human subjectivity; this may or may not be an attractive property depending on system needs.

History

Sports ratings systems have existed for almost 80 years; the earliest ratings were calculated on paper rather than by computer, as most are today. Long-running computer systems still in use include Jeff Sagarin's systems, the New York Times system, and the Dunkel Index, which dates back to 1929. Before the advent of the College Football Playoff, the Bowl Championship Series championship game participants were determined by a combination of expert polls and computer systems.

Theory

Sports ratings systems use a variety of methods for rating teams, but the most prevalent method is called a power rating. The power rating of a team is a calculation of the team's strength relative to other teams in the same league or division. The basic idea is to maximize the number of transitive relations implied by game outcomes in a given data set. For example, if A defeats B and B defeats C, then one can safely say that A>B>C.

There are obvious problems with basing a system solely on wins and losses. For example, if C defeats A, then an intransitive relation is established (A > B > C > A) and a ranking violation will occur if this is the only data available. Scenarios such as this happen fairly regularly in sports—for example, in the 2005 NCAA Division I-A football season, Penn State beat Ohio State, Ohio State beat Michigan, and Michigan beat Penn State. To address these logical breakdowns, rating systems usually consider other criteria, such as the game's score and where the match was held (for example, to assess a home field advantage). In most cases, though, each team plays a sufficient number of games during a given season, which lessens the overall effect of such violations.
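The transitivity idea and its breakdown can be sketched in code. The following Python sketch (the function and its details are illustrative, not from any published system) builds a directed "beat" graph from game results and reports whether a ranking violation, i.e. a cycle, exists:

```python
from collections import defaultdict

def find_ranking_violation(results):
    """Return True if the win graph contains a cycle (A > B > ... > A),
    i.e. no fully transitive ranking is possible."""
    beats = defaultdict(set)
    for winner, loser in results:
        beats[winner].add(loser)

    WHITE, GRAY, BLACK = 0, 1, 2  # depth-first search colors
    color = defaultdict(int)

    def has_cycle(team):
        color[team] = GRAY
        for opponent in beats[team]:
            if color[opponent] == GRAY:          # back edge: cycle found
                return True
            if color[opponent] == WHITE and has_cycle(opponent):
                return True
        color[team] = BLACK
        return False

    return any(color[t] == WHITE and has_cycle(t) for t in list(beats))

# The intransitive triangle from the 2005 season:
games = [("Penn State", "Ohio State"),
         ("Ohio State", "Michigan"),
         ("Michigan", "Penn State")]
print(find_ranking_violation(games))  # True: no consistent ordering exists
```

When no cycle exists, the same graph could be topologically sorted to produce a ranking consistent with every result.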

From an academic perspective, linear algebra and statistics are popular among many of the systems' authors as tools for determining their ratings. Some academic work is published in forums like the MIT Sloan Sports Analytics Conference, other work in traditional statistics, mathematics, psychology, and computer science journals.

If sufficient "inter-divisional" league play does not occur, teams in an isolated division may be artificially inflated or deflated in the overall ratings due to a lack of comparison games against the rest of the league. This phenomenon is evident in systems that analyze historical college football seasons, such as when the top Ivy League teams of the 1970s, like Dartmouth, were calculated by some rating systems to be comparable with accomplished powerhouse teams of that era such as Nebraska, USC, and Ohio State. This conflicts with the subjective view that, while good in their own right, the Ivy League teams were not nearly as good as those top programs.

However, this effect may be considered a "pro" by non-BCS teams in Division I-A college football, who point out that rating systems have shown their top teams belong in the same stratum as the BCS teams. This is evidenced by the 2004 Utah team that went undefeated in the regular season and earned a BCS bowl bid thanks to the boost the computer ratings component gave its overall BCS rating. Utah went on to defeat the Big East Conference champion Pittsburgh in the 2005 Fiesta Bowl by a score of 35-7. A related example occurred during the 2006 NCAA Men's Basketball Tournament, when George Mason was awarded an at-large tournament bid on the strength of its regular season record and RPI rating and rode that opportunity all the way to the Final Four.

The goals of rating systems differ from one another. For example, a system may be crafted to provide a perfect retrodictive analysis of the games played to date, while another is predictive and gives more weight to future trends than to past results. This creates the potential for misinterpretation of rating system results by people unfamiliar with those goals; for example, a rating system designed to give accurate point spread predictions for gamblers might be ill-suited for selecting the teams most deserving to play in a championship game or tournament.

Rating considerations

Home advantage

France national team fans
France national basketball team fans

When two teams of equal quality play, the team at home tends to win more often. The size of the effect changes based on the era of play, game type, season length, sport, and even the number of time zones crossed. But across all conditions, "simply playing at home increases the chances of winning."[3] A win away from home is therefore seen more favorably than a win at home, because it was more challenging. Home advantage (which, for sports played on a pitch, is almost always called "home field advantage") also depends on the qualities of the individual stadium and crowd; in the NFL, the advantage can differ by more than 4 points between the stadium with the least advantage and the stadium with the most.[4]
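As a minimal illustration, a power-rating prediction might add a flat home-edge constant to the rating difference. The function name and the 2.5-point edge below are assumptions for the sketch, not figures from the cited studies:

```python
def predicted_margin(home_rating, away_rating, home_edge=2.5):
    """Expected home-team margin of victory: the rating difference plus a
    flat home-edge constant in points (2.5 is illustrative, not official)."""
    return home_rating - away_rating + home_edge

# Two equally rated teams: the home side is still favored by the edge.
print(predicted_margin(100.0, 100.0))  # 2.5
```

A stadium-specific system would replace the constant with a per-venue value, per the variation noted above.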

Strength of schedule

Strength of schedule refers to the quality of a team's opponents. A win against an inferior opponent is usually seen less favorably than a win against a superior opponent. Often teams in the same league, who are compared against each other for championship or playoff consideration, have not played the same opponents. Therefore, judging their relative win-loss records is complicated.

We looked beyond the record. The committee placed significant value on Oregon's quality of wins.

— College football playoff committee chairman Jeff Long, press conference, week 12 of the 2014 season,[5] after ranking 9–1 Oregon above 9–0 Florida State

The college football playoff committee uses a limited strength-of-schedule algorithm that only considers opponents' records and opponents' opponents' records[6] (much like RPI).
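The record-based approach can be sketched as a minimal RPI-style calculation: 25% a team's own winning percentage, 50% its opponents' winning percentage, and 25% its opponents' opponents' winning percentage. (The real RPI excludes a team's own games from its opponents' records; this sketch omits that adjustment.)

```python
from collections import defaultdict

def rpi(schedule):
    """RPI-style sketch over (winner, loser) results: 25% own winning
    percentage, 50% opponents' winning percentage, 25% opponents'
    opponents' winning percentage."""
    wins = defaultdict(int)
    games = defaultdict(int)
    opponents = defaultdict(list)
    for winner, loser in schedule:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
        opponents[winner].append(loser)
        opponents[loser].append(winner)

    def wp(team):                       # winning percentage
        return wins[team] / games[team] if games[team] else 0.0

    def owp(team):                      # opponents' winning percentage
        opps = opponents[team]
        return sum(wp(o) for o in opps) / len(opps) if opps else 0.0

    def oowp(team):                     # opponents' opponents' winning pct
        opps = opponents[team]
        return sum(owp(o) for o in opps) / len(opps) if opps else 0.0

    return {t: 0.25 * wp(t) + 0.5 * owp(t) + 0.25 * oowp(t)
            for t in games}

ratings = rpi([("A", "B"), ("A", "C"), ("B", "C")])
print(sorted(ratings, key=ratings.get, reverse=True))  # ['A', 'B', 'C']
```

Because only half the weight rests on a team's own record, a team can rank above another with the same record purely on schedule strength.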

Points versus wins

A key dichotomy among sports rating systems lies in the representation of game outcomes. Some systems store final scores as ternary discrete events: wins, draws, and losses. Other systems record the exact final game score, then judge teams based on margin of victory. Rating teams based on margin of victory is often criticized as creating an incentive for coaches to run up the score, an "unsportsmanlike" outcome.[7]

Still other systems choose a middle ground, reducing the marginal value of additional points as the margin of victory increases. Sagarin chose to clamp the margin of victory to a predetermined amount.[8] Other approaches include the use of a decay function, such as a logarithm or placement on a cumulative distribution function.
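Both approaches, clamping and a decaying transform, can be sketched briefly; the 21-point cap below is an arbitrary illustration, not Sagarin's actual value:

```python
import math

def clamped_margin(margin, cap=21):
    """Clamp the margin of victory: margins beyond the cap count as the
    cap (the 21-point cap is an illustrative choice)."""
    return max(-cap, min(cap, margin))

def log_margin(margin):
    """Logarithmic decay: each additional point of margin is worth less."""
    return math.copysign(math.log1p(abs(margin)), margin)

print(clamped_margin(45))        # 21: a 45-point blowout counts as 21
print(round(log_margin(45), 2))  # 3.83: ln(1 + 45)
```

Under either transform, running up the score past the cap or deep into the decay curve earns a team little or nothing, which blunts the incentive criticized above.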

In-game information

Beyond points or wins, some system designers choose to include more granular information about the game. Examples include time of possession of the ball, individual statistics, and lead changes. Data about weather, injuries, or "throw-away" games near season's end may affect game outcomes but are difficult to model. "Throw-away games" are games in which teams have already clinched playoff slots and seeding before the end of the regular season and choose to rest or protect their starting players by benching them for the remaining regular season games. These games usually have unpredictable outcomes and may skew the results of rating systems.

Team composition

Teams often shift their composition between and within games, and players routinely get injured. Rating a team is often about rating a specific collection of players. Some systems assume parity among all members of the league, such as each team being built from an equitable pool of players via a draft or free agency system as is done in many major league sports such as the NFL, MLB, NBA, and NHL. This is certainly not the case in collegiate leagues such as Division I-A football or men's and women's basketball.

Cold start

At the beginning of a season, there have been no games from which to judge teams' relative quality. Solutions to the cold start problem often include some measure of the previous season, perhaps weighted by what percent of the team is returning for the new season. ARGH Power Ratings is an example of a system that uses multiple previous years plus a percentage weight of returning players.
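A minimal sketch of such a blend follows, with a linear fade-out over ten games; the fade length and weighting scheme are illustrative assumptions, not the ARGH system's actual method:

```python
def preseason_blend(prior_rating, current_rating, games_played, fade_games=10):
    """Blend last season's rating with the current-season rating; the
    prior's weight fades linearly to zero over `fade_games` games."""
    w = max(0.0, 1.0 - games_played / fade_games)
    return w * prior_rating + (1.0 - w) * current_rating

print(preseason_blend(90.0, 70.0, 0))   # 90.0: all prior before any games
print(preseason_blend(90.0, 70.0, 5))   # 80.0: halfway through the fade
print(preseason_blend(90.0, 70.0, 10))  # 70.0: prior fully faded out
```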

Rating methods

Permutation of standings

Several methods offer some permutation of traditional standings. This search for the "real" win-loss record often involves using other data, such as point differential or identity of opponents, to alter a team's record in a way that is easily understandable. Sportswriter Gregg Easterbrook created a measure of Authentic Games, which only considers games played against opponents deemed to be of sufficiently high quality.[9] The consensus is that all wins are not created equal.

I went through the first few weeks of games and redid everyone’s records, tagging each game as either a legitimate win or loss, an ass-kicking win or loss, or an either/or game. And if anything else happened in that game with gambling repercussions – a comeback win, a blown lead, major dysfunction, whatever — I tagged that, too.

— Bill Simmons, sportswriter, Grantland[10]

Pythagorean

Pythagorean expectation, or Pythagorean projection, calculates a winning percentage based on the number of points a team has scored and allowed. Typically the formula involves the number of points scored, raised to some exponent, placed in the numerator. Then the number of points the team allowed, raised to the same exponent, is placed in the denominator and added to the value in the numerator. Football Outsiders has used an exponent of 2.37.[11]

The resulting percentage is often compared to a team's true winning percentage, and a team is said to have "overachieved" or "underachieved" compared to the Pythagorean expectation. For example, Bill Barnwell calculated that before week 9 of the 2014 NFL season, the Arizona Cardinals had a Pythagorean record two wins lower than their real record.[12] Bill Simmons cites Barnwell's work before week 10 of that season and adds that "any numbers nerd is waving a “REGRESSION!!!!!” flag right now."[13] In this example, the Arizona Cardinals' regular season record was 8-1 going into the 10th week of the 2014 season. The Pythagorean win formula implied a winning percentage of 57.5%, based on 208 points scored and 183 points allowed. Multiplied by 9 games played, the Cardinals' Pythagorean expectation was 5.2 wins and 3.8 losses. The team had "overachieved" at that time by 2.8 wins, derived from their actual 8 wins less the expected 5.2 wins, an increase of 0.8 overachieved wins from just a week prior.
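The Cardinals figures above can be reproduced with a short calculation; the exponent 2.37 is Football Outsiders' published choice for football:

```python
def pythagorean_pct(points_for, points_against, exponent=2.37):
    """Pythagorean expectation: PF^x / (PF^x + PA^x)."""
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

# Arizona Cardinals through 9 games of the 2014 season:
pct = pythagorean_pct(208, 183)
print(round(pct, 3))      # 0.575 expected winning percentage
print(round(pct * 9, 1))  # 5.2 expected wins over 9 games
```

Subtracting the 5.2 expected wins from the actual 8 gives the 2.8 wins of "overachievement" described above.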

Trading "skill points"

The Elo rating system was originally designed by Arpad Elo as a method for ranking chess players, and several people have adapted it for team sports such as basketball, soccer and American football. For instance, Jeff Sagarin and FiveThirtyEight publish NFL football rankings using Elo methods.[14] Elo ratings initially assign strength values to each team, and teams then trade points based on the outcome of each game.
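A single Elo exchange can be sketched as follows. The 400 in the denominator is the standard Elo scale constant; K=20 is the update size FiveThirtyEight has described using for the NFL, though any K would illustrate the mechanism:

```python
def elo_update(rating_a, rating_b, score_a, k=20):
    """One Elo exchange. `score_a` is 1 for an A win, 0.5 for a draw,
    0 for a loss. A's gain equals B's loss, so total points are conserved."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# An upset moves more points than an expected result.
a, b = elo_update(1400, 1600, 1)   # 200-point underdog wins
print(round(a), round(b))          # 1415 1585
```

Had the favorite won instead, only about 5 points would have changed hands, since the result largely matched expectation.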

Solving equations

Researchers like Matt Mills use Markov chains to model college football games, with team strength scores as outcomes.[15] Algorithms like Google's PageRank have also been adapted to rank football teams.[16][17]
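The PageRank adaptation can be sketched as follows: each loss acts as a "link" from the loser to the winner, so rating mass flows toward teams that beat strong teams. The damping factor and iteration count are conventional PageRank choices, not values from the cited work:

```python
def pagerank_ratings(results, teams, damping=0.85, iters=100):
    """PageRank-style sketch: each loss acts as a 'link' from loser to
    winner, so rating mass flows toward teams that beat strong teams."""
    n = len(teams)
    beat_counts = {t: {} for t in teams}   # winner -> {loser: games won}
    losses = {t: 0 for t in teams}         # games lost by each team
    for winner, loser in results:
        beat_counts[winner][loser] = beat_counts[winner].get(loser, 0) + 1
        losses[loser] += 1

    rating = {t: 1.0 / n for t in teams}
    for _ in range(iters):
        new = {}
        for t in teams:
            # Each loser splits its rating among the teams that beat it.
            inflow = sum(count / losses[loser] * rating[loser]
                         for loser, count in beat_counts[t].items())
            # Undefeated teams spread their rating uniformly (dangling nodes).
            inflow += sum(rating[u] / n for u in teams if losses[u] == 0)
            new[t] = (1 - damping) / n + damping * inflow
        rating = new
    return rating

ratings = pagerank_ratings([("A", "B"), ("B", "C"), ("A", "C")],
                           ["A", "B", "C"])
print(max(ratings, key=ratings.get))  # A: beat both other teams
```

A Markov-chain view reads the same matrix as transition probabilities, with team strengths given by the chain's stationary distribution.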

List of sports rating systems

Bowl Championship Series computer rating systems

In collegiate American football, the following people's systems were used to choose teams to play in the national championship game.

Further reading

Bibliographies

  • Wilson, David. "Bibliography on College Football Ranking Systems". University of Wisconsin–Madison. Retrieved 18 November 2014.

References

  1. ^ Fagan, Ryan (2011-03-09). "Sorting through teams on one big bubble". Sporting News. Retrieved 2011-03-24. This is a look at 20 of the teams (in alphabetical order) residing on this year’s big ol’ bubble. We’ve included three statistical rankings. The RPI (ratings percentage index, taken from collegeRPI.com) is considered the standard and is provided to committee members during the selection process. The two other ranking indexes include margin of victory in their formulas—the Pomeroy ratings (at kenpom.com) and Sagarin ratings (via USA Today)—aren’t new but have played an increased role in discussions about potential seeds during this college basketball season.
  2. ^ Ken Massey [@masseyratings] (3 Nov 2014). "@kenpomeroy human polls have limited value. Computer systems can objectively track all the teams. www.masseyratings.com/cb/compare.htm #all351" (Tweet). Retrieved 9 Nov 2014 – via Twitter.
  3. ^ Jamieson, Jeremy P. (2010). "The Home Field Advantage in Athletics: A Meta-Analysis" (PDF). Journal of Applied Social Psychology. 40 (7): 1819–1848. doi:10.1111/j.1559-1816.2010.00641.x. Retrieved 11 November 2014.
  4. ^ Barnwell, Bill (December 20, 2013). "Safe at Home". Grantland. Retrieved November 11, 2014.
  5. ^ Russo, Ralph D. (11 November 2014). "Oregon up to 2 in playoff rankings; TCU to 4th". Associated Press. Retrieved 12 November 2014.
  6. ^ Stewart Mandel [@slmandel] (12 Nov 2014). "Committee doesn't use an SOS ranking. It looks at opponents' record and opponents' opponents record" (Tweet). Retrieved 12 Nov 2014 – via Twitter.
  7. ^ Richards, Darryl (2001). "BCS removes margin-of-victory element". Fox Sports. Retrieved 12 November 2014.
  8. ^ Sagarin, Jeff (Fall 2014). "NCAAF Jeff Sagarin Ratings". USA Today. Retrieved 12 November 2014.
  9. ^ Easterbrook, Gregg (18 November 2014). "More flags on D spins scoreboards". ESPN. Retrieved 19 November 2014.
  10. ^ Simmons, Bill (24 October 2014). "Week 8 Picks: A Gambling Epiphany". Grantland. Retrieved 19 November 2014.
  11. ^ Schatz, Aaron; Alamar, Ben; Barnwell, Bill; Bill Connelly; Doug Farrar (2011). Football Outsiders Almanac 2011: The Essential Guide to the 2011 NFL and College Football Seasons. CreateSpace. p. xviii. ISBN 978-1-4662-4613-3.
  12. ^ Barnwell, Bill (November 5, 2014). "NFL at the Half: Breaking Down the Numbers". Grantland. Retrieved January 7, 2015.
  13. ^ Simmons, Bill (7 November 2014). "Revisiting the Y2K-Compliant Quarterbacks". Retrieved 10 November 2014.
  14. ^ Silver, Nate (4 September 2014). "Introducing NFL Elo Ratings". FiveThirtyEight. Retrieved 10 November 2014.
  15. ^ Mills, Matt (21 December 2014). "Using Continuous-Time Markov Chains to Rank College Football Teams". The Spread. Retrieved 21 December 2014.
  16. ^ "Ranking NFL teams using Network Science". LinkedIN. 17 March 2016. Retrieved 17 March 2016.
  17. ^ "Modifying Google's Page Ranking Algorithm to rank teams". Reddit. 21 December 2014. Retrieved 22 December 2014.
  18. ^ Weng, Ruby C.; Lin, Chih-Jen (2011). "A Bayesian Approximation Method for Online Ranking" (PDF). Journal of Machine Learning Research. 12: 267–300.
  19. ^ "Wayne Winston: Analytics in the World of Sports". Indiana University Bloomington - Kelley School of Business - Operations & Decisions Technologies. Nov 25, 2013. Retrieved 8 Nov 2014.
  20. ^ "Numbers game". Washington Times. April 13, 2004. Retrieved 8 Nov 2014.
ARGH Power Ratings

The ARGH Power Ratings are a sports rating system created by Stewart Huckaby in 1990 and designed to identify the best team from within a closed system. They are most closely identified with NCAA football. The system is designed to be both predictive and retrodictive in nature, although in practice it has proven to be more strongly predictive than retrodictive.

The ARGH Power Ratings evaluate teams taking into account a team's wins and losses, its points scored both for and against, its schedule, and the location of the team's games. It is unusual in that for the purposes of providing meaningful ratings throughout the season it seeds teams based on several stated criteria, including records from the previous two seasons, the previous year's strength of schedule, the number of returning starters, whether the head coach is returning, whether the starting quarterback is returning, and published recruiting rankings from several preseason publications. Preseason seedings disappear over the course of the season, and are completely absent from consideration by the time the season ends.

This system is also sometimes used for NCAA basketball, but due to the relative lack of available preseason information, ARGH basketball power ratings are strictly retrodictive in nature.

Chess rating system

A chess rating system is a system used in chess to calculate an estimate of the strength of the player, based on his or her performance versus other players. They are used by organizations such as FIDE, the US Chess Federation (USCF or US Chess), International Correspondence Chess Federation, and the English Chess Federation. Most of the systems are used to recalculate ratings after a tournament or match but some are used to recalculate ratings after individual games. Popular online chess sites such as chess.com and Internet Chess Club also implement rating systems. In almost all systems a higher number indicates a stronger player. In general, players' ratings go up if they perform better than expected and down if they perform worse than expected. The magnitude of the change depends on the rating of their opponents. The Elo rating system is currently the most widely used.

The first modern rating system was used by the Correspondence Chess League of America in 1939. Soviet player Andrey Khachaturov proposed a similar system in 1946 (Hooper & Whyld 1992:332). The first one that made an impact on international chess was the Ingo system in 1948. The USCF adopted the Harkness system in 1950. Shortly after, the British Chess Federation started using a system devised by Richard W. B. Clarke. The USCF switched to the Elo rating system in 1960, which was adopted by FIDE in 1970 (Hooper & Whyld 1992:332).

Colley Matrix

Colley Matrix is a computer-generated sports rating system designed by Dr. Wes Colley.

The site is one of more than 40 polls, rankings, and formulas recognized by the NCAA in its list of national champion major selections. In 2018, the Mountain West Conference moved away from using the poll, along with three others, to determine the host site for its conference championship game in football.

Pomeroy College Basketball Ratings

The Pomeroy College Basketball Ratings are a series of predictive ratings of men's college basketball teams published free of charge online by Ken Pomeroy. They were first published in 2003. The sports rating system is based on the Pythagorean expectation, though it has some adjustments. Variations on the Pythagorean expectation are also used in basketball by noted statisticians Dean Oliver and John Hollinger. According to The New York Times, as of 2011, the Pomeroy College Basketball Ratings have a 73% success rate, which is 2% better than the Ratings Percentage Index. Pomeroy is routinely mentioned on, or interviewed for, sports blogs, including ESPN's College Basketball Nation Blog, SB Nation, Basketball Prospectus, The Topeka Capital-Journal, Mediaite and The Wall Street Journal's Daily Fix. He has also been a contributing writer for ESPN's "Insider" feature. In addition, his rating system has been mentioned in newspapers and sites including the New York Daily News.

Rating system

A rating system can be any kind of rating applied to a certain application domain:

  • Motion picture rating system
  • Motion Picture Association of America film rating system
  • Canadian motion picture rating system
  • Television content rating systems
  • Video game rating system
  • Marvel Rating System
  • Elo rating system
  • Glicko rating system
  • Chess rating system
  • Rating system of the Royal Navy
  • Star rating
  • Sports rating system
  • Wine rating
  • Texas Education Agency accountability ratings system

Sonny Moore Power Ratings

Sonny Moore's Power Ratings are a sports rating system devised in 1974.

The ratings began as a hobby, intended to allow a comparison of any two teams in a given sport so as to indicate which team would win, and by how many points, if a game were played between them. The system's intent is predictive rather than retrodictive. It compares the strength of all the teams in a given sport in numerical order from best to worst, as if all the teams were to play against one another. A team's power rating reflects how the team has performed across all of its games, not just one or two, taking into account wins and losses, the opposing teams' power ratings, and the actual score difference of the games played. This explains why team A may have a higher power rating than team B, even though team B may have beaten team A or have a better won-lost record. The ratings are compiled using only statistical and historical data, with the most recent games weighted more heavily. Only games played against the teams in the ratings are used to formulate the power ratings. A diminishing-returns principle prevents higher rated teams from gaining power rating points and moving up in the rankings by running up the victory margin against a weaker team.
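The recency weighting described here can be illustrated with a simple half-life scheme; the formula and the four-game half-life are illustrative assumptions, not Moore's actual method:

```python
def recency_weighted_rating(margins, half_life=4):
    """Average scoring margin with recency weighting: a game's weight
    halves for every `half_life` games played after it (oldest first)."""
    if not margins:
        return 0.0
    weights = [0.5 ** ((len(margins) - 1 - i) / half_life)
               for i in range(len(margins))]
    return sum(w * m for w, m in zip(weights, margins)) / sum(weights)

# A team trending upward rates above its plain average margin.
margins = [-7, -3, 3, 10, 14]                      # oldest game first
print(round(sum(margins) / len(margins), 2))       # 3.4 unweighted average
print(round(recency_weighted_rating(margins), 2))  # 5.28 recency-weighted
```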

Sports analytics

Sports analytics are a collection of relevant historical statistics that, when properly applied, can provide a competitive advantage to a team or individual. Through the collection and analysis of these data, sports analytics inform players, coaches and other staff in order to facilitate decision making both during and prior to sporting events. The term "sports analytics" was popularized in mainstream sports culture following the release of the 2011 film Moneyball, in which Oakland Athletics General Manager Billy Beane (played by Brad Pitt) relies heavily on the use of analytics to build a competitive team on a minimal budget.

There are two key aspects of sports analytics: on-field and off-field analytics. On-field analytics deals with improving the on-field performance of teams and players, digging deep into aspects such as game tactics and player fitness. Off-field analytics deals with the business side of sports, helping a sport organisation or body surface patterns and insights through data that increase ticket and merchandise sales, improve fan engagement, and so on. Off-field analytics essentially uses data to help rightsholders make better decisions that lead to higher growth and increased profitability.

As technology has advanced in recent years, data collection has become more in-depth and can be conducted with relative ease. Advancements in data collection have allowed sports analytics to grow as well, leading to the development of advanced statistics and sport-specific technologies that let teams run game simulations before play, improve fan acquisition and marketing strategies, and even understand the impact of sponsorship on each team and its fans.

Another significant impact sports analytics have had on professional sports is in relation to sports gambling. In-depth sports analytics have taken sports gambling to new levels: whether in fantasy sports leagues or nightly wagers, bettors now have more information than ever at their disposal to aid decision making. A number of companies and webpages have been developed to provide fans with up-to-the-minute information for their betting needs.

The Hidden Game of Football

The Hidden Game of Football is an influential book on American football statistics published in 1988 and written by Bob Carroll, John Thorn, and Pete Palmer. It was the first systematic statistical approach to analyzing American football in a book and is still considered the seminal work on the topic.

This page is based on a Wikipedia article written by authors (here).
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses.