Showing results for tags 'elo'.



Found 9 results

  1. After a period of deep dormancy we welcome the 2018/19 season on our website with three major changes:

     1. MoreHockeyStats is now a network of sites: MoreHockeyStats.com - unusual NHL statistics for the league, the teams, the players, the coaches and the draft; HockeyEloRatings.com - Elo ratings for teams, coaches and players, overall and for particular stats and situations; NHLErrata.com - the errors we discovered while crawling and testing the NHL data.

     2. All our data is now available through a sort of API. When a table is displayed, a set of links shows how you can access its data: JSON - the data displayed in the table as JSON via a direct link; CSV - the data as a CSV table; Table - the data as a pure HTML table with a direct link; Link - a direct link to the displayed results. These links have a systematic structure and thus can be crawled by a bot.

     3. Most of our tables now feature links to personal cards for teams, players, coaches and even rinks. These cards visualize the changes over time in the stat displayed in the table for that particular team, player, etc. We welcome ideas for cards on the pages that do not have them yet. Here's a sample player card:

     Welcome, and have a great 2018/19 hockey season.
  2. I created a new metric on my website: Penalty Shot Elo Ratings, for both goalies and skaters. This time I went a little further and created personal player cards showing how each player has fared in penalty shots so far. I also added an option to evaluate the probability of scoring between a given skater and goalie. [Hidden Content] Here's a screenshot of the skaters' table: And here's an output of an evaluation:
  3. Original post. Now that we have a way to estimate players' performances for a season, we can move on to estimating their performances for a specific game.

     For the season of interest, we compute the averages against each team, just as we computed the season averages, i.e. how many goals, shots, hits, blocks and saves are made on average against each team. Thus we obtain the team averages against, T_avg. These averages are further divided by the number of skaters or goalies (for the respective stats) the team has faced. From them we calculate the "result" R_t of each season-average stat in the chess sense, i.e. the actual performance on a scale from 0 to 1:

     For goalie wins/losses: R_t_wins = 0.5 + T_avg_wins / (T_avg_wins + T_avg_losses)
     For plus-minus: R_t_+/- = 0.5 + (T_avg_+/- - S_avg_+/-) / 10 (10 skaters on the ice on average)
     For the rest: R_t_stat = 0.5 + (T_avg_stat - S_avg_stat) / K

     where K is a special adjustment coefficient explained in Part VI (which, as a reminder, describes the rarity of each event). From the result R_t we produce each team's Elo against in each stat, just as we computed the players' Elos. Then the expected result R_p of a player against a specific team in a given stat is:

     R_p = 1 / (1 + 10^((E_t - E_p) / 4000))

     where E_t is the team's Elo against and E_p is the player's Elo in that stat. From the expected result R_p we can compute the expected performance just as in the previous article:

     P_exp = (R_p - 0.5) * A * S_avg + S_avg

     (see there for the exceptions to that formula). Please note that we do not compute "derived" stats, i.e. the number of points (or SHP, or PPP), or the GAA given the GA and TOI, or the GA given SA and SV.
     Thus, if we want to project the expected result of a game between two teams - the expected number of goals on each side - we compute the sum of the expected goals over each lineup (12 forwards and 6 defensemen):

     S_home = Σ_F1..12 max(P_exp_G) + Σ_D1..6 max(P_exp_G) for the home team
     S_away = Σ_F1..12 max(P_exp_G) + Σ_D1..6 max(P_exp_G) for the away team

     filtering out the players marked as not available or on injured reserve. Note that we assume the top goal-scoring cadre is expected to play; if we knew the lineups precisely, we would substitute the exact lineup for the expected one. You can see the projections on our Daily Summary page. So far we have predicted the outcome of 408 of 661 games correctly, i.e. about 61.7%. Yes, we still have a long way to go.

     Now to a different side of the question. Given that a player's overall expectation is a vector [E_1, E_2, ..., E_n] over all the stats, what is the overall value of that player? The answer depends, first and foremost, on who's asking. If it's a statistician or a fantasy player, then the value V is simply:

     V = Σ_1..n (W_n * E_n)

     where W_n are the weights of the stats in the model you use to compare players. Fantasy points games (such as daily fantasy) even give you the weights of the stats - this is how we compute our daily fantasy projections. Now, if a coach or a GM is asking, the answer is more complicated. Well, not really, mathematically, because it is still something of the form

     V = Σ_1..n f_n(E_n)

     where f_n is an "importance function", which for a fantasy player is a simple weight coefficient. But what are these importance functions? They are the styles of the coaches, their visions of how the team should play, highlighting the stats of the game that matter more to them. These functions can be approximated well enough by surveying the coaches and finding which components they prioritize, for example by paired-comparison analysis.
Unfortunately, there are two obstacles that we may run into: the "intangibles", and the "perception gap". But that's a completely different story.
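The expectation and projection steps above can be condensed into a short Python sketch (the function and variable names are mine, not from the site; the 4000 divisor and the top-12/top-6 lineup assumption follow the formulas in the post):

```python
def expected_result(player_elo, team_elo_against):
    # R_p = 1 / (1 + 10^((E_t - E_p) / 4000)), per the post's formula.
    return 1.0 / (1.0 + 10 ** ((team_elo_against - player_elo) / 4000.0))

def expected_stat(player_elo, team_elo_against, season_avg, adjustment):
    # Unwind the chess-style result into a per-game stat expectation:
    # P_exp = (R_p - 0.5) * A * S_avg + S_avg
    r = expected_result(player_elo, team_elo_against)
    return (r - 0.5) * adjustment * season_avg + season_avg

def projected_goals(goal_elos, opp_goal_elo_against, season_avg_goals):
    # Sum expected goals over the expected lineup: the top 12 forwards and
    # top 6 defensemen by expected goals. goal_elos is a pair of lists of
    # per-player goal Elos, assumed pre-filtered for availability/IR.
    forwards, defensemen = goal_elos
    exp = lambda e: expected_stat(e, opp_goal_elo_against, season_avg_goals, 9)
    top_f = sorted(map(exp, forwards), reverse=True)[:12]
    top_d = sorted(map(exp, defensemen), reverse=True)[:6]
    return sum(top_f) + sum(top_d)
```

An average player (Elo 2000) facing an average opponent projects to exactly the season average, since R_p = 0.5.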
  4. I ran the team Elo predictions for the playoff series from 1988 through 2016. Scores is the number of series whose exact outcome (winner and score) was predicted; Winners is the number of series where the winner was predicted correctly but not the exact score; Missed is the number of series where the winner prediction missed.

     Season Scores Winners Missed
     1988   3      9       3
     1989   2      11      2
     1990   1      8       6
     1991   3      5       7
     1992   4      8       3
     1993   3      4       8
     1994   4      5       6
     1995   3      5       7
     1996   3      8       4
     1997   3      5       7
     1998   2      7       6
     1999   3      9       3
     2000   1      7       7
     2001   3      7       5
     2002   3      5       7
     2003   5      6       4
     2004   7      4       4
     2006   2      7       6
     2007   3      5       7
     2008   3      7       5
     2009   4      7       4
     2010   2      9       4
     2011   3      6       6
     2012   3      8       4
     2013   4      5       6
     2014   6      5       4
     2015   3      5       7
     2016   3      9       3

     Over 420 series in total (28 seasons × 15 series), the model predicted about 66% of the winners correctly (275). Remarkably, in the wild 2012 season (the 8th-seeded Kings defeated the 6th-seeded Devils in the finals) it got 11 of 15 right. Only once were fewer than half predicted correctly (7 of 15 in 1993). The best season was 1989 (13 of 15), and the last one was pretty decent at 12 of 15. The best exact-score result was 7 of 15 in 2004.
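The headline totals can be checked mechanically against the per-season numbers. A small Python snippet (data transcribed from the post; the 2005 playoffs are absent because of the lockout) reproduces them - note a correct exact score also implies a correct winner:

```python
# (season, exact-score hits, winner-only hits, misses) from the table above
results = [
    (1988, 3, 9, 3), (1989, 2, 11, 2), (1990, 1, 8, 6), (1991, 3, 5, 7),
    (1992, 4, 8, 3), (1993, 3, 4, 8), (1994, 4, 5, 6), (1995, 3, 5, 7),
    (1996, 3, 8, 4), (1997, 3, 5, 7), (1998, 2, 7, 6), (1999, 3, 9, 3),
    (2000, 1, 7, 7), (2001, 3, 7, 5), (2002, 3, 5, 7), (2003, 5, 6, 4),
    (2004, 7, 4, 4), (2006, 2, 7, 6), (2007, 3, 5, 7), (2008, 3, 7, 5),
    (2009, 4, 7, 4), (2010, 2, 9, 4), (2011, 3, 6, 6), (2012, 3, 8, 4),
    (2013, 4, 5, 6), (2014, 6, 5, 4), (2015, 3, 5, 7), (2016, 3, 9, 3),
]
total_series = 15 * len(results)                # 28 seasons x 15 series = 420
correct = sum(s + w for _, s, w, _ in results)  # exact score counts as a winner hit
print(total_series, correct, round(100 * correct / total_series, 1))
# 420 275 65.5  (the post rounds this to 66%)
```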
  5. Original Post. The most important conclusion of the last chapter, which dealt with goalies' Elos, is that the rating is defined by the actual performance of a goaltender versus the expected performance of the team he is facing. That is the approach we are going to inherit for evaluating skaters.

     To start, we compute the league's average stats for each season. We do that for most of the stats that are measured, from goals and assists to faceoffs taken, up to the time on ice for goaltenders. This is a trivial calculation; thus we obtain the season stat averages S_avg. Now we can begin to work with the skaters. We assign each a rating of 2000 in each stat.

     The first and most difficult step is to coerce the actual performance of a skater in each stat into a chess-like result on the scale from 0 to 1. This is a real problem, since the distribution of results across players looks something like a chi-square distribution. Therefore we need to rebalance it somehow while preserving the following rules:

     - The results should be more or less distributive, i.e. scoring one goal in each of three consecutive games should produce approximately the same performance as scoring a hat trick in one game and going scoreless in the other two.
     - The distribution should keep the same shape as the original one.
     - The average rating of the league in each stat should remain 2000 at the end of the season.

     So first, we do not apply rating changes after a single game. We take a committing period, for example five games, and average the player's performance in every rated stat over that period. Second, we apply the following transformation to the performance:

     P'_player = (P_player - S_avg) / S_avg

     where S_avg is the season average of that stat. It would be more precise to compute against the averages against of the teams played (see the first paragraph), but we decided to take the simpler route at this stage.
     Then we scale the performance by the Adjustment Factor A:

     P'_player_adj = P'_player / A

     The adjustment factor sets the result between -0.5 and 0.5, more or less. There are still outliers, but they are very rarely beyond 0.5. The factor A depends on the rarity of scoring in the stat and varies from 6 (shot on goal) to 90 (shorthanded goal). The adjustment for goals, for example, is 9; the adjustment for faceoffs won is 20. The latter might look a bit surprising, but remember that many players, e.g. defensemen, hardly ever take faceoffs. Naturally, only skater stats are computed for skaters and only goalie stats for goaltenders. The final result R_player is then:

     R_player = P'_player_adj + 0.5

     So for rare events we have a lot of results in the 0.48-0.5 area and a few going toward 1. For frequent events (shots, blocks, hits) the distribution is more even. Now that we have the player's "result" R, we can compute the Elo change through the familiar formula:

     ΔElo = K * (R - 1 / (1 + 10^((2000 - Elo_player) / 400)))

     where K is the volatility coefficient, which we define as:

     K = 16 * √A * √(4 / (C + 1))

     A is the aforementioned Adjustment Factor and C is the career year: 1 for rookies, 2 for sophomores, and 3 for all other players.

     'What is 2000?', an attentive reader would ask. 2000 is the average rating of the league in each stat. We use it because the "result" of the player was "against" the league average. If we used team averages, we would put the average "Elo against" of the teams faced instead. Once we have ΔElo, the new Elo' of a player in a specific stat becomes:

     Elo' = Elo + ΔElo

     And from that we can derive the expected average performance of a player in each stat, per game:

     R_exp = 1 / (1 + 10^((2000 - Elo') / 400))
     P_exp = (R_exp - 0.5) * A * S_avg + S_avg

     which is an "unwinding" of the calculations that brought us from the actual performance to the new rating. The calculation differs for the three following stats:

     SVP - processed as described in Part V.
     Win/Loss - processed as a chess game against a 2000-rated opponent, where the result over the committing period is R_w = P_w / (P_w + P_l), R_l = P_l / (P_w + P_l). The only subtlety here is that sometimes a hockey game may produce a goalie win without a goalie loss.

     Plus-Minus - R_+/- = 0.5 + (P_+/- - S_avg_+/-) / 10 (10 skaters on the ice on average). Then, via the regular route, we get the Elo' and the expected "result" R_exp, and the expected performance is:

     P_exp_+/- = (R_exp_+/- - 0.5) * 10 + S_avg_+/-

     Please note that we do not compute "derived" stats, i.e. the number of points (or SHP, or PPP), or the GAA given the GA and TOI, or the GA given SA and SV.

     An example of the computed expected performances, listing the expectations of the top 30 centers in assists (Adjustment Factor 9):

     #  Player             Pos Team Games A  a/g   Avg.g Avg.a E a/g E a/fs
     1  CONNOR MCDAVID     C   EDM  43    34 0.791 44.00 33.00 0.706 61.54
     2  JOE THORNTON       C   SJS  41    24 0.585 74.11 52.00 0.665 51.27
     3  NICKLAS BACKSTROM  C   WSH  40    24 0.600 69.20 50.10 0.663 51.85
     4  EVGENI MALKIN      C   PIT  39    27 0.692 62.09 44.73 0.659 55.33
     5  SIDNEY CROSBY      C   PIT  33    18 0.545 61.67 51.50 0.655 46.15
     6  RYAN GETZLAF       C   ANA  36    25 0.694 68.58 45.42 0.648 50.26
     7  EVGENY KUZNETSOV   C   WSH  40    22 0.550 54.75 27.75 0.605 47.43
     8  ANZE KOPITAR       C   LAK  36    16 0.444 72.73 41.55 0.594 40.33
     9  ALEXANDER WENNBERG C   CBJ  40    28 0.700 59.00 25.67 0.583 52.50
     10 CLAUDE GIROUX      C   PHI  43    25 0.581 61.70 37.60 0.579 47.56
     11 TYLER SEGUIN       C   DAL  42    26 0.619 66.86 31.14 0.566 48.65
     12 RYAN O'REILLY      C   BUF  30    16 0.533 66.00 26.38 0.553 39.23
     13 DAVID KREJCI       C   BOS  44    18 0.409 60.64 32.36 0.528 38.05
     14 RYAN JOHANSEN      C   NSH  41    22 0.537 65.33 27.00 0.523 43.43
     15 JOE PAVELSKI       C   SJS  41    23 0.561 69.64 29.09 0.517 44.21
     16 HENRIK SEDIN       C   VAN  43    17 0.395 75.56 47.81 0.517 37.17
     17 DEREK STEPAN       C   NYR  42    22 0.524 68.00 30.86 0.508 42.31
     18 VICTOR RASK        C   CAR  41    19 0.463 67.00 22.67 0.497 39.37
     19 MARK SCHEIFELE     C   WPG  40    20 0.500 44.50 17.83 0.493 39.23
     20 JASON SPEZZA       C   DAL  35    18 0.514 62.71 37.79 0.490 37.60
     21 JOHN TAVARES       C   NYI  38    16 0.421 68.50 35.00 0.488 37.46
     22 MITCHELL MARNER    C   TOR  39    21 0.538 39.00 21.00 0.484 41.82
     23 STEVEN STAMKOS     C   TBL  17    11 0.647 65.11 29.00 0.474 29.97
     24 ALEKSANDER BARKOV  C   FLA  36    18 0.500 56.75 21.00 0.463 36.51
     25 MIKAEL GRANLUND    C   MIN  39    21 0.538 55.80 24.40 0.460 40.80
     26 PAUL STASTNY       C   STL  40    13 0.325 65.09 34.55 0.457 31.74
     27 JEFF CARTER        C   LAK  41    15 0.366 69.67 24.33 0.448 33.35
     28 MIKE RIBEIRO       C   NSH  41    18 0.439 62.88 33.06 0.447 36.32
     29 MIKKO KOIVU        C   MIN  39    16 0.410 66.83 34.25 0.445 35.14
     30 ERIC STAAL         C   MIN  39    22 0.564 74.46 36.77 0.442 40.99

     You can see more of these expectation evaluations on our website: http://morehockeystats.com/fantasy/evaluation. Now we ask ourselves: how can we use these stat evaluations to produce an overall evaluation of a player? To be concluded...
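The skater update described above fits into one short function (a sketch under my own naming; the committing-period averaging is assumed to have produced avg_perf already):

```python
import math

def skater_elo_update(elo, avg_perf, season_avg, adjustment, career_year=3):
    # P' = (P - S_avg) / S_avg, then scaled by the Adjustment Factor A.
    p_adj = ((avg_perf - season_avg) / season_avg) / adjustment
    result = p_adj + 0.5                         # chess-style result R
    # Expected result against the league-average rating of 2000.
    expected = 1.0 / (1.0 + 10 ** ((2000 - elo) / 400.0))
    # K = 16 * sqrt(A) * sqrt(4 / (C + 1)); C is 1 for rookies,
    # 2 for sophomores, 3 for everyone else.
    c = min(career_year, 3)
    k = 16 * math.sqrt(adjustment) * math.sqrt(4.0 / (c + 1))
    return elo + k * (result - expected)
```

A league-average performer rated 2000 stays at exactly 2000, which is the third rule above (the league average must remain 2000).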
  6. Original post. "The goalkeeper is half of the whole team." - Soviet proverb from Lev Yashin's times.

     After a foray into the calmer lands of team evaluation using the Elo rating, it's time to turn our attention to the really juicy stuff - the evaluation of a single player. And we'll start with the most important one - the goaltender. DISCLAIMER: this evaluation concept is still a work in progress and one of several possible implementations of the idea.

     By coincidence, the goaltender is also the simplest evaluation to make. While many stats describe the performance of a skater (goals, assists, shots, hits, blocks, faceoff wins, etc. - and even one that is usually reserved for goaltenders), only one stat truly describes the goalie's performance: the save percentage. Usually four stats are used to compare goalies: wins (W), save percentage (SVP), goals-against average (GAA) and shutouts (SHO), but we will first show why three of them are mostly unnecessary. (The name "save percentage" is a bit of a misnomer, since SVP values are usually not multiplied by 100 to look like real percentages but are shown between 0 and 1, and therefore would be more properly named 'save ratio' or 'save share'.)

     Wins are truly the result of a team effort. I always cringe when I read that a goaltender "outdueled" his opponent, when the two barely got to see each other. The GAA is much more an indication of how well the defense operates in front of the goalie. Shutouts are, first and foremost, a very rare thing, and secondly, a 15-save shutout should not count the same as a 40-save shutout, although each of the four stats listed above records them identically. Therefore we feel we are on firm ground evaluating a goalie's performance through SVP only (with a slight input from shutouts, as described below) - and the Elo function, of course. To start, each goaltender is assigned an Elo rating of 2000 at his first career appearance.
     We discard performances in which a goalie faced fewer than four shots, because these are usually late relief appearances in garbage time, not really evidence of goaltending in a true hockey game. We account for them only to display the real SVP accrued in the season so far, and we are considering dropping these appearances completely. After the game we get the raw SVP from the real-time stats and adjust it in two ways:

     - If, in the very rare case, the performance is below 0.7, we set it to 0.7.
     - If there was a shutout (not necessarily a shutout as defined by the NHL, but a performance where the goaltender was on the ice for at least 3420 seconds and did not allow a single goal during that time), we add a shutout bonus to the performance: Bonus = (Saves - 10) / 200. If there were fewer than fifteen saves in the shutout, the bonus is assigned the minimum value of 0.025. We consider this bonus necessary because the opposing team usually gives an extra effort to avoid being shut out, even in garbage time.

     Then, given the actual performance, we can calculate the "Elo performance rating":

     R_perf = 2000 + (SVP - SVP_vs_opp) * 5000

     where SVP_vs_opp is the SVP against the opponent the goalie is facing - effectively the shooting percentage of that team, minus the shots resulting in empty-net goals; a sort of "expected SVP against that opponent". That means that for every thousandth of SVP above the expectation, the performance is five points above 2000 (the absolute average).

     Wait, there seems to be an inconsistency. Don't we need the opponents' ratings to calculate Elo changes? Actually, no. Given an Elo performance of a player, we can calculate the rating change as a "draw" against a virtual opponent with that Elo performance, i.e.

     ΔR = K * (0.5 - 1 / (1 + 10^((R_perf - R_g) / 400)))

     where K is the volatility factor mentioned in the earlier posts.
     Right now we are using a volatility factor of 32, but that may change - including introducing a dependency of this factor on the goaltender's experience. And the new rating is, naturally:

     R_g' = R_g + ΔR

     Now we can calculate the expected remaining SVP:

     SVP_rem = SVP_avg + (R_g' - 2000) / 5000

     where SVP_avg is the league-average SVP. It would be more correct to substitute the weighted average of the remaining teams to face (in accordance with the matches remaining), and we'll be switching to that index soon. We can also calculate the SVP expected from the goalie at the start of the season:

     SVP_exp = SVP_avg0 + (R_g0 - 2000) / 5000

     where SVP_avg0 is the league's average SVP during the previous season and R_g0 is the goalie's rating at the conclusion of the previous season (including playoffs), or the initial rating of 2000.

     We post a weekly update of our Elo ratings for goaltenders, with their actual and expected SVPs, on our Twitter feed. You can also access our daily stats on our website. It looks like we're ready to take on the skaters' performances. But I'm not sure it will fit into one posting. To be continued...
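A compact sketch of the goalie update (function names are mine; the shutout bonus is assumed to have been added to svp before the call):

```python
def goalie_elo_update(rating, svp, svp_vs_opp, k=32.0):
    svp = max(svp, 0.7)                  # clamp the very rare disasters
    # Five rating points per thousandth of SVP above the expectation:
    r_perf = 2000 + (svp - svp_vs_opp) * 5000
    # Rating change as a "draw" against a virtual opponent rated r_perf:
    delta = k * (0.5 - 1.0 / (1.0 + 10 ** ((r_perf - rating) / 400.0)))
    return rating + delta

def expected_remaining_svp(rating, league_avg_svp):
    # SVP_rem = SVP_avg + (R_g' - 2000) / 5000
    return league_avg_svp + (rating - 2000) / 5000.0
```

A 2000-rated goalie who performs exactly at the expectation against his opponent stays at 2000; outperforming it raises the rating.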
  7. Original post. Catching up... We left our reader at the point where we demonstrated how to produce Elo ratings for hockey teams over a season (and over the postseason too, if anyone wondered) and how to apply them to the upcoming games of the rated teams. However, in its main domain, chess, Elo is rarely used to produce single-match outcome projections. It is much more popular for long-term projections, such as a whole tournament, which in chess usually lasts between five and thirteen rounds. Therefore the question arises: shouldn't we use our newborn Elo ratings for long-term projections? And the answer is an unambiguous 'Yes!' We can and should create projections for a team over longer spans, such as seven days ahead, thirty days, or even through the end of the season!

     How do we do it? Since we have computed the Elo ratings of all teams, and we know every team's schedule ahead, we can run the Elo expectation on all matchups during the requested span and sum them. And since we assume that each team performs according to expectation, their Elo ratings do not change during the evaluation span:

     E_team = Σ(E_match1, E_match2, ..., E_matchn)

     All good? No. There is one more finesse to add. The produced expectations are all calculated on a 2-0 scale per game, assuming only 2 points are at stake in each matchup. Due to the loser's point, that's not so. On average, 2 + N_OT/SO / N_total points are handed out in every match (where N_OT/SO is the number of games decided in OT or SO). So we need to compute that value, divide it by two (because there are two teams in each match), and multiply each team's expectation by this factor. By doing so we obtain a reliable Elo expectation, such as the one in the table below, as of Jan 2nd, 2017. Spans of 7 days, 30 days, and through the end of the season are displayed (games, expected points, and projected total).
     Elo ratings for season 2016:

     #  Team                  Div Elo     Pts Gin7 Pin7 Tin7 Gin30 Pin30 Tin30 GinS PinS TinS
     1  Columbus Blue Jackets MET 2265.22 56  4    6    62   14    23    79    47   79   135
     2  Pittsburgh Penguins   MET 2186.57 55  1    2    57   11    16    71    44   65   120
     3  Minnesota Wild        CEN 2180.88 50  3    4    54   14    21    71    46   68   118
     4  San Jose Sharks       PAC 2137.87 47  3    4    51   14    20    67    45   62   109
     5  Washington Capitals   MET 2135.54 49  4    4    53   15    18    67    46   59   108
     6  Montreal Canadiens    ATL 2117.99 50  4    5    55   14    18    68    45   58   108
     7  New York Rangers      MET 2135.43 53  3    4    57   11    14    67    43   54   107
     8  Chicago Blackhawks    CEN 2103.27 51  3    4    55   12    15    66    42   52   103
     9  Anaheim Ducks         PAC 2105.41 46  3    4    50   13    18    64    43   55   101
     10 Edmonton Oilers       PAC 2092.89 45  4    4    49   14    16    61    44   53   98
     11 Ottawa Senators       ATL 2088.34 44  2    2    46   11    11    55    45   52   96
     12 Toronto Maple Leafs   ATL 2097.27 41  3    4    45   12    14    55    46   54   95
     13 St. Louis Blues       CEN 2066.58 43  2    2    45   12    12    55    44   51   94
     14 Boston Bruins         ATL 2079.41 44  4    5    49   15    17    61    43   49   93
     15 Carolina Hurricanes   MET 2093.06 39  4    5    44   13    13    52    46   53   92
     16 Los Angeles Kings     PAC 2066.68 40  4    4    44   14    16    56    45   52   92
     17 Philadelphia Flyers   MET 2079.35 45  3    3    48   12    13    58    43   46   91
     18 Calgary Flames        PAC 2076.79 42  4    5    47   14    16    58    43   49   91
     19 Tampa Bay Lightning   ATL 2068.90 42  4    4    46   13    14    56    44   48   90
     20 New York Islanders    MET 2070.87 36  2    3    39   12    14    50    46   51   87
     21 Florida Panthers      ATL 2059.66 40  4    5    45   13    14    54    44   46   86
     22 Nashville Predators   CEN 2055.15 38  4    4    42   14    14    52    46   48   86
     23 Dallas Stars          CEN 2052.77 39  3    3    42   13    13    52    44   46   85
     24 Vancouver Canucks     PAC 2049.05 37  4    5    42   12    15    52    44   46   83
     25 Detroit Red Wings     ATL 2033.62 37  3    3    40   13    12    49    45   43   80
     26 Winnipeg Jets         CEN 2017.50 37  4    4    41   14    14    51    43   40   77
     27 Buffalo Sabres        ATL 2009.45 34  3    3    37   13    12    46    46   41   75
     28 New Jersey Devils     MET 1994.66 35  5    4    39   14    12    47    45   37   72
     29 Arizona Coyotes       PAC 1921.41 27  3    2    29   12    8     35    45   30   57
     30 Colorado Avalanche    CEN 1910.42 25  3    2    27   12    7     32    46   29   54

     The NOT/SO factor right now is about 1.124 (i.e. about a quarter of all games are decided past regulation).

     "So you know what's good for the people? But the people consists of men..." - Iconic Soviet movie

     The team projection leaves us wanting more. After all, don't we want to be able to evaluate individual players and factor that into the projection somehow, to reflect injuries and the other reasons that force top players out of the lineups? Stay tuned. To be continued...
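The span projection reduces to summing match expectations at frozen ratings, then rescaling for the loser's point. A minimal sketch (names are mine; 1.124 is the per-team factor quoted in the post):

```python
def match_expectation(elo_team, elo_opp):
    return 1.0 / (1.0 + 10 ** ((elo_opp - elo_team) / 400.0))

def projected_points(team, elos, remaining_opponents, factor=1.124):
    # Ratings are held constant over the span. Each expectation sits on a
    # 0..1 scale worth 2 points, then the sum is scaled by the loser's-point
    # factor (2 + N_otso / N_total) / 2.
    exp_sum = sum(match_expectation(elos[team], elos[opp])
                  for opp in remaining_opponents[team])
    return 2.0 * exp_sum * factor
```

For example, four games against equal-rated opponents project to 2 * 0.5 * 4 * 1.124 ≈ 4.5 points, i.e. slightly more than the naive 4.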
  8. Part I. Part II.

     Sherlock Holmes and Dr. Watson are camping in the countryside. In the middle of the night Holmes wakes up Watson: 'Watson, what do you think these stars are telling us?' 'Geez, Holmes, I don't know, maybe that it's going to be nice weather tomorrow?' 'Elementary, Watson! They are telling us our tent has been stolen!' - Iconic Soviet joke.

     Estimating a hockey player via Elo ratings is a highly complex task. Therefore we shall wield the dialectic approach of moving from the simpler to the more complicated, and tackle a seemingly simplistic task first: let's work out the Elo ratings for the NHL teams as a whole. After all, it's the teams that compete against each other, and the outcome of that competition is a straightforward result.

     So, let's examine a match between Team A and Team B. They have ratings Ra and Rb. These ratings, or more precisely their difference Ra - Rb, define the expected results Ea and Eb on the scale from 0 to 1. The teams play; one wins (S=1), the other loses (S=0). To adapt this to the Elo scale, let's count a win as 1 point and a loss as 0 points. The new ratings Ra' and Rb' will be (K is the volatility coefficient):

     Outcome     Sa Sb  Sa-Ea Sb-Eb  dRa     dRb     Ra'        Rb'
     Team A wins 1  0   1-Ea  -Eb    K-K*Ea  -K*Eb   Ra+K-K*Ea  Rb-K*Eb
     Team B wins 0  1   -Ea   1-Eb   -K*Ea   K-K*Eb  Ra-K*Ea    Rb+K-K*Eb

     and the teams are ready for their next meetings with the new ratings Ra' and Rb', respectively.

     'Wait!', the attentive reader will ask, 'Not all possible outcomes are listed above! What about OT/SO wins, where both teams get points?' And he will be correct. In these cases we must admit that the losing team scores 0.5 points, so unlike a chess game, where the sum of the results is always 1, in NHL hockey the total sum of results varies and can be either 1 or 1.5. Note that were the scoring system 3-2-1-0, we could scale the scores by 3 rather than by 2 and get the range 1-⅔-⅓-0, where every result sums to 1.
     Alas, with the existing system we must swallow the ugly fact that the total result may exceed 1, and as a result the ratings get inflated. Which is a bad thing, surely. Or is it? Remember, the Elo expectation function only cares about the differences between ratings, not their absolute values. And all teams' ratings get inflated, so all absolute values shift up from where they would have been without the loser's point. Whom would it really hurt? The new teams.

     Naturally, we must assign an initial rating to every team at the starting point. One way could be assigning a new team the average rating of the previous season. But we prefer a different and much more comprehensive solution. We claim that the teams at the start of a new season are different enough beasts from those that ended the previous one that the Elo ratings should not carry over from season to season at all! Therefore all teams start each season with a clean slate and an identical Elo rating R0.

     Once again the attentive reader might argue: 'What about mid-season trades and other movements?' Well, dear reader, now you have a tool to evaluate the impact of such moves on a team. If there is a visible change in tendency, you can quite safely associate it with the move. Overall, the 82-game span is long enough to smooth out any bends and curves in the progression of the Elo ratings along the season.

     Speaking of game spans, we must note one more refinement to the ratings. In the chess world, the ratings of the participants are not updated throughout the length of an event, which is usually 3-11 games. The ratings are deemed constant for the calculation of rating changes, which accumulate, and the accumulated sum is the rating change of each participant. We apply a similar technique to the teams' Elo calculation: we accumulate the rating changes over 5 games for each team and "commit" them after the five-game span.
     The remainder of the games is committed regardless of its length, from 1 to 5. Why 5? We tried all kinds of spans, and 5 gave the smoothest look and the best projections.

     Now, as a demonstration, let's calculate the possible rating changes in the much anticipated game where the Minnesota Wild host the Columbus Blue Jackets on December 31st, 2016: Rcbj = 2250, Rmin = 2196, Ecbj = 0.577, Emin = 0.423, K = 32 (standard USCF).

     Outcome   Scbj Smin S-Ecbj S-Emin dRcbj  dRmin  Rcbj'   Rmin'
     CBJ W Reg 1    0    0.423  -0.423 +13.53 -13.53 2263.53 2182.47
     CBJ W OT  1    0.5  0.423  0.077  +13.53 +2.47  2263.53 2198.47
     MIN W OT  0.5  1    -0.077 0.577  -2.47  +18.47 2247.53 2214.47
     MIN W Reg 0    1    -0.577 0.577  -18.47 +18.47 2231.53 2214.47

     Note: MIN gains rating when it earns a loser's point. Here is the dynamic of the Elo changes (without the five-game accumulation) for the Metropolitan Division, as an example. See more detailed tables on our website: http://morehockeystats.com/teams/elo

     OK, we have the ratings and the expected results - can we get something more out of them? To be continued...
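The CBJ-MIN numbers above can be reproduced directly from the expectation and update formulas (a minimal sketch with my own function name):

```python
def elo_game(r_a, r_b, s_a, s_b, k=32.0):
    # s is 1 for a win, 0 for a regulation loss, 0.5 for an OT/SO loss
    # (the loser's point). Returns the per-game rating deltas.
    e_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    e_b = 1.0 / (1.0 + 10 ** ((r_a - r_b) / 400.0))
    return k * (s_a - e_a), k * (s_b - e_b)

# CBJ (2250) at MIN (2196), Dec 31, 2016: CBJ wins in regulation.
d_cbj, d_min = elo_game(2250, 2196, 1, 0)
print(round(d_cbj, 2), round(d_min, 2))  # 13.53 -13.53, matching the table
```

Note that an OT loss (s = 0.5) gives the underdog MIN a positive delta, exactly as the table shows.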
  9. Happy New Year everyone! Original post.

     The Elo rating system is a system for evaluating and comparing competitors. Up until today it has mostly been applied in the domain of board games - most famously chess, but also disciplines such as draughts or go. The Elo system, named after its inventor, Prof. Arpad Elo, who first published it in the 1950s in the US, is capable of producing a reliable score expectation for an encounter between two competitors. For those who are not familiar with chess or draughts, let's take a look at how Elo ratings work:

     1) In an encounter between two competitors, A and B, assume they have ratings Ra and Rb.

     2) There is a function that maps the expected result for each player given the opponent:

     Ea = F(Ra, Rb)
     Eb = F(Rb, Ra)

     where F is a monotonic non-decreasing function bounded between the minimum and maximum possible scores, such as 0 and 1 in chess. An example of such a function would be arctan(x)/π + 0.5. Ea + Eb should equal the maximum possible score. In practice, a non-analytical, table-defined function is used that depends only on the difference between Ra and Rb, not on their actual values. The function can be reliably approximated by the following expression:

     E = 1 / (1 + 10^((Rb - Ra) / 400))

     which works well with ratings in the low 4-digit numbers and rating changes per game in the 0-20 range.

     3) After the encounter, when the real scores Sa and Sb have been registered, the ratings are adjusted:

     Ra' = Ra + K * (Sa - Ea)
     Rb' = Rb + K * (Sb - Eb)

     where K is a volatility coefficient, which is usually higher for participants with shorter histories, but ideally should be equal for both participants. The new ratings are used to produce the new expected results, and so on.

     The Elo rating has several highly important properties:

     1) It gravitates to the center.
     As the rating R of a participant climbs higher, so does the expected result E, which becomes difficult to maintain, and a failure to maintain it usually results in a bigger drop of the rating.

     2) It's approximately distributive. If we gather N performances, average the opponents as Rav, take the expected average performance Eav = F(Ra, Rav) and the actual average performance Sav, then the new rating RaN' = Ra + N*K*(Sav - Eav) will be relatively close to the RaN obtained by updating Ra directly after each of the N games.

     3) It reflects tendencies, but overall performance still trumps them. Given three players with ten encounters each against opponents of the same rating, with performances (W - win, L - loss):

     Player 1: L,L,L,L,L,W,W,W,W,W
     Player 2: L,W,L,W,L,W,L,W,L,W
     Player 3: W,W,W,W,W,L,L,L,L,L

     player 1 will end up with the highest rating of the three, player 2 will be in the middle, and player 3 will have the lowest - but not by a very big margin. Only when the streaks become really long may the Elo of a lower overall performance overcome the Elo of a higher one.

     And how does Elo stack up against the four Brits?

     * Goodhart's Law: pass. It measures the same thing it indicates.
     * Granger's Causality: pass. It is a consequence of performance by definition, and a prediction of future performance, by definition.
     * Occam's Razor: pass. The ratings revolve around the same parameter they measure.
     * Popper's Falsifiability: partial pass. Elo's predictions sometimes fail, because they are probabilistic. However, the test of time and its wide acceptance indicate that the confidence level holds.
     Elo has even been used for "paleostatistics": ratings were calculated backwards to the mid-19th century, and the resulting calculations are well received in the chess historians' community. The only well-known drawback of Elo is top chess players' avoidance of competition against much weaker opposition, especially when the weaker player has White, as such a game can be drawn relatively easily by the opponent, and the Elo rating of the top player could take a significant hit, resulting in a drop of several places in the rating list.

     Now, to the question of the chicken and the egg: where do the initial Elo ratings come from? Well, they can be set to an arbitrary value in the low 4-digit numbers. Currently a FIDE beginner starts with a rating of 1300. If a newcomer is recognized as more skilled than a beginner, a higher rating is assigned based on rating grades for each skill level - sort of a historical average of the newcomer's peers.

     And... what does all this have to do with hockey? To be continued...
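The two core formulas of the system, in executable form (a minimal sketch; K = 32 as in the team examples above):

```python
def expected(r_a, r_b):
    # E = 1 / (1 + 10^((Rb - Ra) / 400))
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, s_a, s_b, k=32.0):
    # Ra' = Ra + K*(Sa - Ea); Rb' = Rb + K*(Sb - Eb)
    return (r_a + k * (s_a - expected(r_a, r_b)),
            r_b + k * (s_b - expected(r_b, r_a)))
```

Equal ratings give an expectation of exactly 0.5, and a 400-point favorite is expected to score 10/11 ≈ 0.91.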
