More Hockey Stats
Everything posted by More Hockey Stats

  1. Why does the cat lick his balls? Because it can. Recently I saw a request for a statistic of goal posts / crossbars hit per game. While I do have that statistic per player, I didn't have one per game - so, since I can, why shouldn't I produce one? About half an hour of Perl-ing created the following summary (columns appear to be P = posts, C = crossbars, T = total):

     Irons altogether, top:
     AWAY HOME                 P C T
     OTT vs BUF on 2011/12/31: 8 0 8
     VAN vs FLA on 2010/02/11: 7 0 7
     WPG vs FLA on 2009/12/05: 6 1 7
     TOR vs BUF on 2007/10/15: 6 1 7
     TBL vs FLA on 2006/04/01: 6 1 7
     PHI vs PIT on 2006/03/12: 7 0 7
     COL vs NYI on 2005/12/17: 7 0 7
     NSH vs DAL on 2016/03/29: 4 2 6
     PIT vs NSH on 2014/03/04: 5 1 6
     NYI vs TBL on 2014/01/16: 3 3 6
     DAL vs VAN on 2013/02/15: 5 1 6
     STL vs CAR on 2012/03/15: 5 1 6
     WPG vs MTL on 2011/01/02: 6 0 6
     OTT vs VAN on 2011/02/07: 6 0 6
     MTL vs CAR on 2011/11/23: 6 0 6
     LAK vs DAL on 2010/03/12: 4 2 6
     NJD vs TBL on 2009/10/08: 6 0 6
     LAK vs DAL on 2009/10/19: 5 1 6
     DAL vs CBJ on 2009/01/31: 5 1 6
     COL vs CHI on 2009/11/11: 6 0 6
     PIT vs WPG on 2008/01/30: 5 1 6
     NYR vs NJD on 2008/04/09: 4 2 6
     STL vs ARI on 2007/01/15: 5 1 6
     followed by 109 games with 5 irons hit.

     Crossbars, top:
     AWAY HOME                 P C T
     CGY vs CBJ on 2008/11/08: 1 4 5
     NYR vs FLA on 2007/11/23: 0 4 4
     PHI vs FLA on 2006/12/27: 1 4 5
     BUF vs DAL on 2017/01/26: 1 3 4
     EDM vs DAL on 2016/01/21: 2 3 5
     TOR vs STL on 2015/01/17: 1 3 4
     CHI vs ANA on 2015/05/19: 1 3 4
     BOS vs VAN on 2015/02/13: 1 3 4
     NYI vs TBL on 2014/01/16: 3 3 6
     CHI vs ANA on 2008/01/04: 2 3 5
     CAR vs FLA on 2007/11/12: 1 3 4
     followed by 50 games with 2 crossbars hit.

     The data is extracted from the PBP files of NHL.com, from the year 2005 on. However, I consider this a one-time effort and will not add it to the website itself.
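Not the author's original Perl, but a minimal Python sketch of the same per-game tally, assuming the play-by-play events were already parsed into (game, team, what-was-hit) tuples; the sample event data below is made up for illustration:

```python
from collections import Counter

# Hypothetical parsed PBP events: (game identifier, team, what was hit).
events = [
    ("OTT vs BUF on 2011/12/31", "OTT", "post"),
    ("OTT vs BUF on 2011/12/31", "BUF", "crossbar"),
    ("VAN vs FLA on 2010/02/11", "VAN", "post"),
]

# "Irons" = posts + crossbars combined, counted per game.
irons_per_game = Counter(game for game, _team, _what in events)
crossbars_per_game = Counter(
    game for game, _team, what in events if what == "crossbar"
)

# Top game by total irons hit.
print(irons_per_game.most_common(1))
```

From there, sorting `most_common()` output gives exactly the kind of leaderboard shown above.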
  2. I added a new statistic to my website: quick goals allowed right after scoring a lead-changing goal, and vice versa. http://morehockeystats.com/teams/relaxation Enjoy.
  3. Original post. So after remembering Botvinnik's quote, and after publishing the stats on how the teams actually play after breaks of different lengths, a new idea came to me - to check whether teams on streaks are affected positively or negatively by breaks. For the sake of the analysis, I assumed the following: a break is a period of at least three days between games; a streak is a sequence of at least three wins in a row, or at least seven points in four games. So we check, for the last thirty years (as far back as NHL.com would let us in), whether the streaking team was able to keep the streak alive after the break, or whether the streak was broken:

     SEASON     ALIVE  BROKEN
     1987/1988    5      11
     1988/1989   12       7
     1989/1990    8      14
     1990/1991   13      11
     1991/1992   17      13
     1992/1993   20      16
     1993/1994   19      20
     1994/1995    2       7
     1995/1996   15      11
     1996/1997   15      11
     1997/1998   12      20
     1998/1999   12       9
     1999/2000   18      12
     2000/2001   21      11
     2001/2002   17       6
     2002/2003   13      10
     2003/2004   12      14
     2005/2006   31      15
     2006/2007   16      16
     2007/2008   23      24
     2008/2009   15      20
     2009/2010   14      17
     2010/2011   19      11
     2011/2012   22      11
     2012/2013    6       3
     2013/2014   15      15
     2014/2015   16      16
     2015/2016   16      14
     2016/2017    8      11
     TOTAL      432     376

     Actually, it looks like the streaks weren't affected by the break either way: 53.4% of the time the streak continued, and 46.6% of the time it went dead. There is a very large discrepancy between some of the seasons, although I'd attribute that to lesser overall parity between the teams in those years.
     For the last 5 years, the probability of the streak staying alive has been 50.8% (61 cases of extended streaks out of 120). Now, what would change if we define a break as a single day longer, i.e. at least four days?

     SEASON     ALIVE  BROKEN
     1987/1988    2       2
     1988/1989    4       1
     1989/1990    3       1
     1990/1991    4       4
     1991/1992    7       8
     1992/1993    8       2
     1993/1994    7       7
     1994/1995    1       1
     1995/1996    6       7
     1996/1997    6       2
     1997/1998    5       5
     1998/1999    6       2
     1999/2000    9       3
     2000/2001    7       4
     2001/2002    6       4
     2002/2003    5       1
     2003/2004    3       6
     2005/2006   16       4
     2006/2007    8       5
     2007/2008   10       6
     2008/2009    8       8
     2009/2010    6       6
     2010/2011    9       3
     2011/2012    8       4
     2012/2013    2       1
     2013/2014    3       9
     2014/2015    5       9
     2015/2016    7       6
     2016/2017    3       8
     TOTAL      174     129

     The changes are rather interesting. Now, overall, the chances of the streak continuing are up to 57.4%, and only in 42.6% of the cases did it come to a stop. But in the last five years - since the last lockout, and with the schedule changed so that there are at least two games between every pair of teams (increasing travel) - the ratio drops from 50.8% to a humble 37.7% (20 out of 53!). Extending the breaks to five days provides too little data to draw any conclusions. So I am inclined to agree with Dr. Botvinnik that extended breaks of more than three days throw teams off their pace and should be reduced to a minimum. Three days are borderline alright.
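The percentages above follow directly from the ALIVE/BROKEN totals. A one-function Python sketch, with the totals copied from the tables in this post:

```python
def survival_rate(alive: int, broken: int) -> float:
    """Share of streaks that survived the break."""
    return alive / (alive + broken)

# Totals from the post's tables.
three_day_breaks = survival_rate(432, 376)    # breaks of at least three days
four_day_breaks = survival_rate(174, 129)     # breaks of at least four days
last_five_years_4day = survival_rate(20, 33)  # 20 extended streaks out of 53
```

Counterintuitively, the longer break looks better overall, but worse in the recent seasons, which is exactly the tension the post discusses.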
  4. Whoops, got the title wrong. Original post. A rule change suggestion. "There are no irreplaceable people." - I.V. Stalin. Rushing this one out, because this idea came to my mind before, but I forgot about it - age is taking its toll. Anyways. Everyone is talking these days about rule changes. I've already expressed a few thoughts on the scoring systems, but I am not original there. Now, however, I want to make a suggestion I haven't seen mentioned yet: allow soccer-style (and baseball-style) substitutions in hockey. Allow the coaches to replace players in the original lineup at the start of the game with one of the "healthy scratches" submitted on the roster sheet - like the one Peter DeBoer recently messed up in the game against Edmonton. The substitution goes ONE-WAY: the player who was substituted out cannot return to the game. The substitutions may occur:
     - During the intermissions
     - During the commercial breaks
     - During a time-out
     First and foremost, this would allow teams to handle early injuries much better. Your D-man got injured at the 7:04 mark of the 1st period? Around 10:00 there will be a commercial break, and you can substitute one of the scratches for him! Second, it may allow coaches to send stronger messages to players they deem to be slacking. Rather than shortening the roster by benching that guy, you can send an eager healthy scratch in. Of course, the "slacking" player is then benched for the whole remainder of the game. Third (oh, I did military service, so I have a natural obsession with providing three reasons for each thing), it may give the coaches some extra flexibility if a designated roster player gets slightly injured in the warm-ups. Then a scratch takes his place as usual, but if the original player is fixed by the 1st intermission, he can substitute back in for the starting scratch. The substitutes will have to come from the "scratch" list, with the exception of the emergency goaltending contracts.
Oh, and I am sure the NHL website will make a mess out of it in their game reports.
  5. Original post. Just so as not to let the month of January slip away without another post, I got sentimental and decided to tell a small story about how my website came to life. There was a void: a lot of the time, people on hockey boards would wonder whether specific statistics on players and teams were available, and they weren't, although the raw data seemed to be there. Then there was the fantasy hockey world, with its pizzazz, asking for a predictive tool - and again, the raw data seemed to be there. Now, I am a sysadmin by trade, with occasional forays into software development, and since I've been doing Perl for all of my career, I got a few exposures to the Web development process and to databases. I've got a college degree in Engineering, so that gave me some idea about statistics. So I took a look at the publicly available NHL reports, but was unsure of how to use them. I tried a standard database approach, but it wasn't working. The turning point came when I attended a lecture on MongoDB. That one turned out to be a perfect fit for the loosely structured NHL stats documents: just spill them into the Mongo database, then extract data from them, summarize it into tables, and store the tables in an SQL database for quick serving on the website. And along came more luck - a lecture on the Mojolicious Perl Web framework, which equipped me with an easy solution for running a website. Thus, I was able to actually implement what I had in mind. First came the spider part, to crawl and collect the data available on NHL.com. Fortunately, I was able to scrape everything before the website's design changed drastically and the box scores prior to 2002 stopped being available. I got everything from the 1987/88 season on. Then I started writing the parsers... and had to take a step back. There were quite a lot of inconsistent and missing reports.
     Therefore I had to a) add thorough testing of every report I scraped to ensure it was consistent, and b) look for complementary sources for whatever data was missing. So before I was done with the parsers, I had a large testing framework, and I had also visited all corners of the hockey-related web to resolve the missing or conflicting data, even the online archives of newspapers such as USA Today. Some of the downloaded reports had to be edited manually. Then NHL.com landed another blow, dropping draft pick information from their player info pages. Luckily, the German version of the website still had it, so I began to scrape the German NHL website too. I was able to produce the unusual statistics tables relatively quickly and easily. However, I decided that the website would not open without the prediction models I had in mind. Being a retired chess arbiter and a big chess enthusiast, I decided to try to apply the chess Elo rating model to the performances of hockey teams and players. Whether it really works or not, I don't know yet; I guess by the end of the season I can make a judgement on that. In October 2016 I opened my website using a free design I found somewhere online. Unfortunately, I quickly realized it was not a good fit for the contents the site was serving, so I sighed, took a deep breath, opened w3schools.com in my browser, and created my own design - and a CMS too. At least I am happy with the way the site looks now, and even happier that when someone asks a question - on Twitter, Reddit or hockey forums - about whether it's possible to measure a specific metric, I am able to answer, 'Already done! Welcome to my website!' In the end I'm a software developer, a web designer, a DBA, a sysadmin, a statistician and an SEO amateur. Oh, and a journalist too, since I'm writing a blog.
  6. Wanted to share with you some updates that I've made to my website lately:
     - Frequent Dancers
     - Penalty Killers
     - Team Performances After Various Breaks
     Corrections, advice and suggestions are very much welcome!
  7. Thanks. That means my translation wasn't so bad. Sosonko is a terrific writer; his books are full of remarkable quotes and great anecdotes. Here's a book. It's not about chess itself - you don't have to play the game to enjoy it.
  8. Original post. One "intangible" being tossed around is the "motivation" of the players. Which brings back memories of an episode I witnessed. In 2003/04, in the Israeli Top Tier Chess League (which is indeed no slouch), our club managed to assemble an outstanding team, featuring, among others, a former Champion of Russia and a former Champion of Europe. I was part of the management team, and orchestrated bringing in the first of the two, who also happened to be my childhood friend back in Leningrad, Soviet Union. And so, in round III we were to face our main rival for the title, and the club's GM (also a pedestrian chess player) gathered the team and delivered an emphatic motivational speech about how we had to beat the team we were facing, and so on, and so on. We lost 1½-4½ without winning a single game, and with that lost any chance we had at the championship.
  9. Original post. Often the general managers, the coaches and the players talk about "intangible values". Sometimes it's about the "locker room contributions". Sometimes it's about "passion". In my opinion, these two are actually negligible and in certain cases even harmful. I remember such references, especially the latter one, made about Israeli soccer players, and that usually meant that the player didn't have a lot of talent to go along with the passion he contributed to the game. While passionate play can indeed ignite the game and carry the team along, more often it indicated dumb, physical, low-talent execution that actually harmed the team. However, there is one intangible before which I take my hat off. It's the one that I always admired, and that I myself did not have enough of in my chess career: the ability to go for the throat of the opposition at even a momentary display of weakness, or, as Terry Pratchett put it in one of his books, 'Carpe Jugulum'¹. So what is it, in my understanding? It is the situation when your opponent puts itself into an inferior position in a volatile situation (for example, a close score) - by taking a penalty, by icing the puck at the end of a long shift, or by allowing an odd-man rush - and you are able to capitalize on it, yanking the remains of the carpet of security from under the feet of the opposition. And then you continue to hammer blows on the opposition until it collapses completely. Some also call it the 'killer instinct'. This blog (and this article too) sins with an abundance of examples from chess, so let me plant one from tennis... Before the match between Lleyton Hewitt and Taylor Dent at the New York Open, 2005, the latter complained: 'He displays poor sportsmanship: taking joy in double faults on the opponent's serve as well as in unforced errors.' 'I don't care what Dent thinks about it,' parried Hewitt. 'I always go for a win, and on the way to it many things are allowed.'
     Machiavelli advised the rulers and the politicians: 'Don't be kind.' Winston Churchill also knew something about achieving goals when he recommended: 'If you want to get to your goal, don't be delicate or kind. Be rough. Hit the target immediately. Come back and hit again. Then hit again with the strongest swing you can...' All the chess champions had it, with the extremes being Alexander Alekhine, Robert J. Fischer and Garry Kasparov. Many wonderful players who never got the title complained that they couldn't commit themselves to going for the throat of the opponent time after time. These qualities were elevated to perfection by the two best teams of the first half of the 2010s, the Los Angeles Kings and the Chicago Blackhawks, which split five Cups out of six between themselves from 2010 to 2015. Even when both teams seemed to be struggling and wobbling, they were able to instill some kind of uncertainty into their opponents - and certainty into the spectators that these teams were going to gather themselves into a fist and hammer their opponents the moment the latter displayed any, even minimal, weakness. That capability was championed by their leaders: Anze Kopitar, Drew Doughty and Jeff Carter for the Kings, and Jonathan Toews, Patrick Kane and Duncan Keith for the Hawks. When a playoff series between the Blackhawks and their opponents was tied 3-3, Chicago was always the favorite to win Game 7 because of their Carpe Jugulum reputation. The Kings gained even more notoriety, first by burying their sword to the hilt into each and every opponent in 2012 en route from the #8 seed to their first Stanley Cup, and then with the reverse sweep they managed against the Sharks that started their 2014 Cup run - which included two more comebacks, from 2-3 and 1-3.
     And even in 2016, with the Kings down 1-3 to the Sharks in the first round of the playoffs, fans around the league were somehow not ready to commit to the Sharks as the favorites to win the series, because the Kings had been a hair away from the Sharks' throat in game 4, coming from 0-3 to 2-3 in the 3rd period, and then in game 5 they indeed were able to erase an 0-3 deficit into a 3-3 tie. Well, that tie didn't hold: the Sharks broke the stranglehold and got a boost that carried them all the way to their own first ever Stanley Cup Finals, and that outcome damaged the Kings' Carpe Jugulum reputation to a degree. So did the Blackhawks', who lost their Game 7 to a team that - along with the Sharks and, for instance, the Washington Capitals - had a reputation of a somewhat tame one: the St. Louis Blues. It would be entertaining to see whether the Carpe Jugulum landscape changes this year in the league, and whether the teams that were able to overcome their "benign" reputation will be able to go all the way to the Cup Finals - through their opponents' throats. Chess Grandmaster Gennady Sosonko wrote: 'A real professional, having thought about the situation on the board, acts most decisively. He knows that during the game there should be no place either for doubt or for compassion, because a thought which is not converted into action isn't worth much, and an action that does not come from a thought isn't worth anything at all.' And it's important to remember: Carpe Jugulum is a necessary key to success in a competitive environment only. Albert Einstein used to say that chess 'is foreign to me due to its suppression of intellect and the spirit of rivalry.'

     ¹Carpe Jugulum (Lat.) - seize the throat
  10. Original post. Now that we have obtained a way to estimate players' performances for a season, we can move on to estimating their performances for a specific game. For the season of interest, we compute the averages against each team, just like we computed the season averages. I.e., we calculate how many goals, shots, hits, blocks and saves are made on average against each team. Thus we obtain the team against averages Tavg. The averages are then further divided by the number of skaters and goalies (for the respective stats) the team has faced. After that we can calculate the "result" Rt of each season average stat in a chess sense, i.e. the actual performance on a scale from 0 to 1:

     For Goalie Wins/Losses: Rt_wins = Tavg_wins / (Tavg_wins + Tavg_losses)
     For Plus-Minus: Rt_+/- = 0.5 + (Tavg_+/- - Savg_+/-) / 10 (10 skaters on ice on average)
     For the rest: Rt_stat = 0.5 + (Tavg_stat - Savg_stat) / K

     where K is a special adjustment coefficient that is explained in Part VI (and, as a reminder, describes the rarity of each event). From the result Rt we can produce the teams' Elo against in each stat, just like we computed the players' Elos. Then, the expected result Rp of a player against a specific team in a given stat is given by:

     Rp = 1 / (1 + 10^((Et - Ep)/400))

     where Et is the team's Elo against and Ep is the player's Elo in that stat. From the expected result Rp, we can compute the expected performance Pexp just like in the previous article:

     Pexp = (Rp - 0.5) * A * Savg + Savg

     (see there the exceptions to that formula). Please note that we do not compute "derived" stats, i.e. the number of points (or SHP, or PPP), or the GAA given the GA and TOI, or the GA given SA and SV.
     Thus, if we want to project the expected result of a game between two teams - the expected number of goals on each side - we compute the sum of the expected goals over each lineup (12 forwards and 6 defensemen):

     S_home = SUM_F1..12(MAX(Pexp_G)) + SUM_D1..6(MAX(Pexp_G)) for the home team
     S_away = SUM_F1..12(MAX(Pexp_G)) + SUM_D1..6(MAX(Pexp_G)) for the away team

     i.e. we take the 12 forwards and 6 defensemen with the highest expected goals, while filtering out the players that are marked as unavailable or on injured reserve. Please note that we assume the top goal-scoring cadre is expected to play; if we knew the lineups precisely, we would substitute the exact lineup for the expected one. You can see the projections on our Daily Summary page. So far we have predicted the outcome of 408 out of 661 games correctly, i.e. about 61.7%. Yes, we still have a long way to go. Now to a different side of the question. Given that a player's expectation overall is a vector [E1, E2, ... En] over all the stats, what is the overall value of that player? The answer is, first and foremost: it depends on who's asking. If it's a statistician, or a fantasy player, then the value V is simply:

     V = SUM_1..n(Wn * En)

     where Wn are the weights of the stats in the model that you are using to compare players. Fantasy points games (such as daily fantasy) even give you the weights of the stats - this is how we compute our daily fantasy projections. Now, if it's a coach or a GM asking, then the answer is more complicated. Well, not really, mathematically speaking, because it's still something of the form:

     V = SUM_1..n(fn(En))

     where fn is an "importance function" (which for a fantasy player is a simple weight coefficient). But what are these "importance functions"? Well, these are the styles of the coaches, their visions of how the team should play, highlighting the stats of the game that are more important to them. These functions can be approximated sufficiently well by surveying the coaches and finding which components are of a bigger priority to them, for example by paired-comparison analysis.
Unfortunately, there are two obstacles that we may run into: the "intangibles", and the "perception gap". But that's a completely different story.
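The expectation and projection formulas above can be sketched in a few lines of Python. This is a minimal sketch under my own naming, with made-up ratings and averages, not the site's actual code (and it assumes the standard Elo divisor of 400, as used elsewhere in this series):

```python
def expected_result(elo_player: float, elo_team_against: float) -> float:
    """Rp = 1 / (1 + 10^((Et - Ep)/400)): player's chess-style expected result."""
    return 1.0 / (1.0 + 10 ** ((elo_team_against - elo_player) / 400.0))

def expected_performance(rp: float, a: float, savg: float) -> float:
    """Pexp = (Rp - 0.5) * A * Savg + Savg: unwind the result into a stat."""
    return (rp - 0.5) * a * savg + savg

def project_goals(per_player_expected_goals: list[float], slots: int) -> float:
    """Sum the `slots` highest expected-goal values (the MAX(Pexp_G) selection)."""
    return sum(sorted(per_player_expected_goals, reverse=True)[:slots])

def fantasy_value(expectations: list[float], weights: list[float]) -> float:
    """V = SUM Wn * En for a weight-based (e.g. fantasy) valuation model."""
    return sum(w * e for w, e in zip(weights, expectations))

# An average player (2000) facing an average team (2000) is expected to
# produce exactly the league-average stat:
assert expected_performance(expected_result(2000, 2000), 9, 0.25) == 0.25
```

A full game projection would call `project_goals` twice per team, once for the forwards (12 slots) and once for the defensemen (6 slots).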
  11. Original Post. The most important conclusion of the last chapter, which dealt with goalies' Elos, is that the rating is defined by the actual performance of a goaltender versus the expected performance against the team he is facing. That is the approach we are going to inherit for evaluating skaters. To start, we compute the average stats of the league for each season. We do that for most of the stats that are measured, from goals and assists to faceoffs taken, up to the time on ice for the goaltenders. This is a trivial calculation. Thus we obtain the season stat averages Savg. Now we can begin to work with the skaters. We assign them a rating of 2000 in each stat. The first and most difficult step is to coerce the actual performance of a skater in each stat into a chess-like result on the scale from 0 to 1. This is a real problem, since the result distribution over the number of players looks something like a chi-square distribution. Therefore we need to rebalance it somehow while preserving the following rules:

     - The results should be more or less distributive, i.e. scoring 1 goal in each of three games in a row should produce approximately the same performance as scoring a hat trick in one game and going scoreless in the other two.
     - They should still have the same shape as the original distribution.
     - The average rating of the league in each stat should remain 2000 at the end of the season.

     So first, we do not apply rating changes after a single game. We take a committing period, for example five games, and average the player's performance in every rated stat over that period. Second, we apply the following transformation to the performance:

     P'_player = (P_player - Savg) / Savg

     where Savg is the season average of that stat. It would be more precise to compute against the averages against of the teams played (see the first paragraph), but we decided to go the simpler route at this stage.
     Then we scale the performance by the Adjustment Factor A:

     P'_player_adj = P'_player / A

     The adjustment factor sets the result between -0.5 and 0.5 - more or less. There still are outliers, but they are very infrequently beyond 0.5. The A factor depends on the rarity of scoring in the stat and varies from 6 (shots on goal) to 90 (shorthanded goals). The adjustment for goals, for example, is 9. The adjustment for faceoffs won is 20. The latter might look a bit surprising, but remember that many players, e.g. defensemen, hardly ever take faceoffs. Naturally, only skater stats are computed for skaters, and only goalie stats for goaltenders. The final result R_player is then:

     R_player = P'_player_adj + 0.5

     So for the rare events we have a lot of results in the 0.48-0.5 area and a few going toward 1. For the frequent events (shots, blocks, hits), the distribution is more even. Now that we have the player's "result" R, we can compute the Elo change through the familiar formula:

     ΔElo = K * (R - 1/(1 + 10^((2000 - Elo_player)/400)))

     where K is the volatility coefficient, which we define as:

     K = 16 * √A * √(4 / (C + 1))

     A is the aforementioned Adjustment Factor and C is the career year for rookies (1) and sophomores (2), and 3 for all other players. 'What is 2000?', an attentive reader would ask. 2000 is the average rating of the league in each stat. We use it because the "result" of the player was achieved "against" the league average. If we used team averages, we would put the average "Elo against" of the teams faced instead. Once we have ΔElo, the new Elo' of a player in a specific stat becomes:

     Elo' = Elo + ΔElo

     And from that we can derive the expected average performance of a player in each stat, per game:

     Rexp = 1/(1 + 10^((2000 - Elo')/400))
     Pexp = (Rexp - 0.5) * A * Savg + Savg

     which is an "unwinding" of the calculations that brought us from the actual performance to the new rating. The calculation differs for the three following stats: SVP - processed as described in Part V.
     Win/Loss - processed as a chess game against a 2000-rated opponent, where the result over the committing period is:

     Rw = Pw/(Pw + Pl), Rl = Pl/(Pw + Pl)

     The only subtlety here is that sometimes a hockey game may result in a goalie win without a goalie loss.

     Plus-Minus:

     R_+/- = 0.5 + (P_+/- - Savg_+/-) / 10 (10 skaters on ice on average)

     Then, via the regular route, we get the Elo' and the expected "result" Rexp, and the expected performance is:

     Pexp_+/- = (Rexp_+/- - 0.5) * 10 + Savg_+/-

     Please note that we do not compute "derived" stats, i.e. the number of points (or SHP, or PPP), or the GAA given the GA and TOI, or the GA given SA and SV. An example of the computed expected performances, listing the expectations of the top 30 centers in assists (Adjustment Factor 9), can be seen below:

      # Player              Pos Team Games  A   a/g   Avg.g  Avg.a  E a/g  E a/fs
      1 CONNOR MCDAVID      C   EDM   43   34  0.791  44.00  33.00  0.706  61.54
      2 JOE THORNTON        C   SJS   41   24  0.585  74.11  52.00  0.665  51.27
      3 NICKLAS BACKSTROM   C   WSH   40   24  0.600  69.20  50.10  0.663  51.85
      4 EVGENI MALKIN       C   PIT   39   27  0.692  62.09  44.73  0.659  55.33
      5 SIDNEY CROSBY       C   PIT   33   18  0.545  61.67  51.50  0.655  46.15
      6 RYAN GETZLAF        C   ANA   36   25  0.694  68.58  45.42  0.648  50.26
      7 EVGENY KUZNETSOV    C   WSH   40   22  0.550  54.75  27.75  0.605  47.43
      8 ANZE KOPITAR        C   LAK   36   16  0.444  72.73  41.55  0.594  40.33
      9 ALEXANDER WENNBERG  C   CBJ   40   28  0.700  59.00  25.67  0.583  52.50
     10 CLAUDE GIROUX       C   PHI   43   25  0.581  61.70  37.60  0.579  47.56
     11 TYLER SEGUIN        C   DAL   42   26  0.619  66.86  31.14  0.566  48.65
     12 RYAN O'REILLY       C   BUF   30   16  0.533  66.00  26.38  0.553  39.23
     13 DAVID KREJCI        C   BOS   44   18  0.409  60.64  32.36  0.528  38.05
     14 RYAN JOHANSEN       C   NSH   41   22  0.537  65.33  27.00  0.523  43.43
     15 JOE PAVELSKI        C   SJS   41   23  0.561  69.64  29.09  0.517  44.21
     16 HENRIK SEDIN        C   VAN   43   17  0.395  75.56  47.81  0.517  37.17
     17 DEREK STEPAN        C   NYR   42   22  0.524  68.00  30.86  0.508  42.31
     18 VICTOR RASK         C   CAR   41   19  0.463  67.00  22.67  0.497  39.37
     19 MARK SCHEIFELE      C   WPG   40   20  0.500  44.50  17.83  0.493  39.23
     20 JASON SPEZZA        C   DAL   35   18  0.514  62.71  37.79  0.490  37.60
     21 JOHN TAVARES        C   NYI   38   16  0.421  68.50  35.00  0.488  37.46
     22 MITCHELL MARNER     C   TOR   39   21  0.538  39.00  21.00  0.484  41.82
     23 STEVEN STAMKOS      C   TBL   17   11  0.647  65.11  29.00  0.474  29.97
     24 ALEKSANDER BARKOV   C   FLA   36   18  0.500  56.75  21.00  0.463  36.51
     25 MIKAEL GRANLUND     C   MIN   39   21  0.538  55.80  24.40  0.460  40.80
     26 PAUL STASTNY        C   STL   40   13  0.325  65.09  34.55  0.457  31.74
     27 JEFF CARTER         C   LAK   41   15  0.366  69.67  24.33  0.448  33.35
     28 MIKE RIBEIRO        C   NSH   41   18  0.439  62.88  33.06  0.447  36.32
     29 MIKKO KOIVU         C   MIN   39   16  0.410  66.83  34.25  0.445  35.14
     30 ERIC STAAL          C   MIN   39   22  0.564  74.46  36.77  0.442  40.99

     You can see more of these expectation evaluations on our website, http://morehockeystats.com/fantasy/evaluation . Now we ask ourselves: how can we use these per-stat evaluations to produce an overall evaluation of a player? To be concluded...
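The skater pipeline described above can be sketched in Python. The constants (2000, the K formula, the 400 divisor) come from the post; the function names and sample numbers are my own illustration, not the site's actual code:

```python
import math

LEAGUE_ELO = 2000.0  # league-average rating in every stat

def chess_result(perf: float, savg: float, a: float) -> float:
    """Performance averaged over the committing period -> result in ~[0, 1]."""
    p_rel = (perf - savg) / savg   # P' = (P - Savg) / Savg
    return p_rel / a + 0.5         # scale by the Adjustment Factor A, shift to 0.5

def volatility_k(a: float, career_year: int) -> float:
    """K = 16 * sqrt(A) * sqrt(4 / (C + 1)), with C capped at 3."""
    c = min(career_year, 3)
    return 16.0 * math.sqrt(a) * math.sqrt(4.0 / (c + 1))

def update_elo(elo: float, result: float, a: float, career_year: int) -> float:
    """Elo' = Elo + K * (R - expected score vs the 2000 league average)."""
    expected = 1.0 / (1.0 + 10 ** ((LEAGUE_ELO - elo) / 400.0))
    return elo + volatility_k(a, career_year) * (result - expected)

# A veteran posting exactly the league-average rate keeps a 2000 rating:
assert update_elo(2000.0, chess_result(0.25, 0.25, 9), 9, 3) == 2000.0
```

Note how the K formula makes ratings in rare-event stats (large A) and for rookies (small C) move faster, which matches the intent described in the text.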
  12. @hf101, I must once again state that there is a confusion in terms. For me, what the NHL calls "skill" is just a part of what I call "skill" - the part that involves handling the puck. I am pretty much ambivalent about the shootouts. They are a different game, just like the penalty kicks at the end of a soccer match are a different game from the rest of the encounter. I am much more concerned about the inconsistency in point scoring, since it provides the wrong incentive. Now, regarding measurement of speed and accuracy: I don't think we can measure them ourselves; however, if the yearly pre-draft scouting combine reports are made public, they should be the ones giving an idea, both qualitatively and quantitatively. Other than that, since the science of nutrition, physiology and pharmacology moves ahead all the time, the physical limits of the players grow as well, and the sportsmen are simply more physically capable than before. Also, since they constantly build upon the knowledge accumulated before them, the players and the teams should be mentally better than their predecessors too. In chess, practically nobody argues that even without computer-assisted preparation today's top players understand and play chess better than the leading grandmasters did 50 or 100 years ago (with the exception of a couple of geniuses). Talent is an inborn quality and should remain steady across generations in a skill as removed from the mundane as hockey is; however - IMHO - the culture of effort and the level of commitment have increased tremendously for all age groups of hockey players, up from the preschoolers coming in for their first skating lesson.
  13. Original post. "The goalkeeper is half of the whole team." - Soviet proverb from Lev Yashin's times. After a foray into the calmer lands of team evaluation using the Elo rating, it's time to turn our attention to the really juicy stuff - the evaluation of a single player. And we'll start with the most important one - the goaltender. DISCLAIMER: this evaluation concept is still a work in progress and one of several possible implementations of the idea. By coincidence, it's also the simplest evaluation to make. While many stats describe the performance of a skater (goals, assists, shots, hits, blocks, faceoff wins, etc. - and even one that is usually accounted for goaltenders), only one stat truly describes the goalie's performance: the saves percentage. Usually four whole stats are used to compare goalies - wins (W), saves percentage (SVP), goals against average (GAA) and shutouts (SHO) - but I will show you first why three of them are mostly unnecessary. Also, the name "saves percentage" is a bit of a misnomer, since the values of SVP are usually not multiplied by 100 to look like real percentages, but are shown as a number between 0 and 1, and therefore would be more properly named 'saves ratio' or 'saves share'. Wins are truly the results of team efforts; I always cringe when I read that a goaltender "outdueled" his opponent, when the two barely got to see each other. The GAA is much more an indication of how well the defense operates in front of the goalie. Shutouts are, first and foremost, a very rare thing, and secondly, a 15-save shutout should not count the same as a 40-save shutout, although for any of the four stats listed above they create two identical entries. Therefore we feel we are on firm ground evaluating a goalie's performance through SVP only (with a slight input from shutouts, as described below) - and the Elo function, of course. To start, each goaltender is assigned an Elo rating of 2000 for his first career appearance.
     We discard performances in which goalies faced fewer than four shots, because these usually are late relief appearances in garbage time, not really evidence of goaltending in a true hockey game. We only account for them to display the real SVP accrued in the season so far, and we are considering dropping these appearances completely. After the game we get the pure SVP from the real-time stats. We adjust it in two ways:

     - If, in the very rare case, the performance is below 0.7, we set it to 0.7.
     - If there was a shutout (not the shutout as defined by the NHL, but a performance where a goaltender was on the ice for at least 3420 seconds and did not let a single goal in during that time), we add a shutout bonus to the performance: Bonus = (Saves - 10) / 200. If there were fewer than fifteen saves in the shutout, the bonus is assigned the minimum value of 0.025. We consider this bonus necessary because the opposing team usually gives an extra effort to avoid being shut out, even during garbage time.

     Then, given the actual performance, we can calculate the "Elo performance rating":

     Rperf = 2000 + (SVP - SVPvsopp) * 5000

     where SVPvsopp is the SVP against the opponent the goalie is facing - effectively the shooting percentage of that team, minus the shots resulting in empty-net goals; a sort of "expected SVP against that opponent". That means that for every thousandth of SVP above the expectation, the performance is five points above 2000 (the absolute average). Wait, there seems to be an inconsistency: don't we need the ratings of opponents to calculate the Elo changes? Actually, no. Given an Elo performance of a player, we can calculate the rating change as a "draw" against a virtual opponent with that Elo performance, i.e.

     ΔR = K * (0.5 - 1/(1 + 10^((Rperf - Rg)/400)))

     where K is the volatility factor mentioned in the earlier posts.
Right now we are using a volatility factor of 32, but that may change - including introducing a dependency of this factor on the goaltender's experience. And the new rating is, naturally,

Rg' = Rg + ΔR

Now we can calculate the expected remaining SVP:

SVPrem = SVPavg + (Rg' - 2000) / 5000

where SVPavg is the league-average SVP. It would be more correct to substitute that value with the weighted average of the remaining teams to face (in accordance with the matches remaining), and we'll be switching to this index soon. We can also calculate the SVP expected from the goalie at the start of the season:

SVPexp = SVPavg0 + (Rg0 - 2000) / 5000

where SVPavg0 is the average SVP of the league during the previous season and Rg0 is the rating of the goalie at the conclusion of the previous season (including playoffs), or the initial rating of 2000.

We post a weekly update on our Elo ratings for goaltenders, and their actual and expected SVPs, on our Twitter feed. You can also access our daily stats on our website page.

It looks like we're ready to try to take on the skaters' performances. But I'm not sure it's going to fit into one posting. To be continued...
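The adjustment and update steps described above can be sketched in a few lines of Python. This is an illustrative sketch, not the site's actual code; the function names are invented here, while the constants (the 0.7 floor, the 3420-second shutout threshold, the bonus formula, K = 32) come from the post:

```python
# Sketch of the goalie Elo update described in this post. Helper names
# are illustrative; constants are taken from the text.
SHUTOUT_SECONDS = 3420  # minimum time on ice for the shutout bonus
K = 32                  # volatility factor

def adjusted_svp(saves, shots, toi_seconds, goals_against):
    """Clamp a raw SVP at 0.7 and add the shutout bonus where it applies."""
    svp = max(saves / shots, 0.7)
    if toi_seconds >= SHUTOUT_SECONDS and goals_against == 0:
        svp += max((saves - 10) / 200, 0.025)  # bonus, floored at 0.025
    return svp

def update_rating(r_g, svp, svp_vs_opp):
    """One game's rating change: a 'draw' against a virtual opponent at Rperf."""
    r_perf = 2000 + (svp - svp_vs_opp) * 5000
    expected = 1 / (1 + 10 ** ((r_perf - r_g) / 400))
    return r_g + K * (0.5 - expected)
```

For example, a 2000-rated goalie stopping 30 of 31 shots against a team whose expected SVP is 0.915 posts Rperf ≈ 2264 and gains about 10 rating points.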
  14. Original post. One of the greatest chess methodologists, if not the greatest one, the sixth World Champion, Mikhail Botvinnik, wrote in one of his books (about the 1948 World Chess Championship Tournament):

"A tournament must go on a uniform schedule, so that the participants get used to a certain pace of competition. ... The Dutch organizers neglected that. They didn't take into account that plenty of free days (because of the holidays, and because the number of participants was odd) may break that rhythm and take a participant out of equilibrium. When I found out that one of the participants was going to "rest" for six days before the last gameday of the second round, I suggested to my colleagues Mr. Keres and Mr. Smyslov that we submit a protest together. Alas, they didn't support me! Angrily, I told them: "You'll see, one of us is going to rest six days in a row at the Hague, and on the seventh day he'll lose without putting up any resistance..." And here the first part of my prophecy came true: after the six-day rest, Keres, pale as a sheet, sat at the chess table across from me, worrying, probably, that the second part of it would also come true..."

Keres lost a rather short and lopsided game.
  15. Hi guys, Are any of you in touch with the person who runs and maintains the FlyersHistory/HockeySummaryProject website? Thanks!
  19. Original post. Catching up... We left our reader at the point where we demonstrated how to produce Elo ratings for hockey teams over a season (and over the postseason too, if anyone wondered) and how to apply them to the upcoming games of the rated teams. However, in its main domain, chess, Elo is rarely used to produce single-match outcome projections. It is much more popular for long-term projections, such as a whole tournament, which in chess usually lasts between five and thirteen rounds. Therefore the question arises: shouldn't we try to use our newborn Elo ratings for long-term projections? And the answer is an unambiguous 'Yes!' We can and should create projections for a team over longer spans, such as seven days ahead, thirty days, or even through the end of the season!

How do we do it? Since we have computed the Elo ratings for all teams, and we know every team's schedule ahead, we can run the Elo expectation on all matchups during the requested span and sum the results. And since we assume that each team performs according to its expectation, the Elo ratings do not change during the evaluation span.

Eteam = Ematch1 + Ematch2 + ... + Ematchn

All good? No. There is one more finesse to add. The produced expectations are all calculated on a 0-2 scale per game, assuming only 2 points are in play in each matchup. However, due to the loser's point it's not so. On average, 2 + NOT/SO / Ntotal points are handed out in every match, where NOT/SO is the number of games decided in OT or SO and Ntotal is the total number of games. So we need to compute this average, divide it by two (because there are two teams in each match) and multiply the expectation of each team by the resulting factor. By doing so we get a reliable Elo expectation, such as the one in the table below, as of Jan 2nd, 2017. Spans of 7 days, 30 days and through the end of the season are displayed (games, expected points and total).
Elo ratings for season 2016

#  Team                   Div  Elo      Pts  Gin7 Pin7 Tin7  Gin30 Pin30 Tin30  GinS PinS TinS
1  Columbus Blue Jackets  MET  2265.22  56   4    6    62    14    23    79     47   79   135
2  Pittsburgh Penguins    MET  2186.57  55   1    2    57    11    16    71     44   65   120
3  Minnesota Wild         CEN  2180.88  50   3    4    54    14    21    71     46   68   118
4  San Jose Sharks        PAC  2137.87  47   3    4    51    14    20    67     45   62   109
5  Washington Capitals    MET  2135.54  49   4    4    53    15    18    67     46   59   108
6  Montreal Canadiens     ATL  2117.99  50   4    5    55    14    18    68     45   58   108
7  New York Rangers       MET  2135.43  53   3    4    57    11    14    67     43   54   107
8  Chicago Blackhawks     CEN  2103.27  51   3    4    55    12    15    66     42   52   103
9  Anaheim Ducks          PAC  2105.41  46   3    4    50    13    18    64     43   55   101
10 Edmonton Oilers        PAC  2092.89  45   4    4    49    14    16    61     44   53   98
11 Ottawa Senators        ATL  2088.34  44   2    2    46    11    11    55     45   52   96
12 Toronto Maple Leafs    ATL  2097.27  41   3    4    45    12    14    55     46   54   95
13 St. Louis Blues        CEN  2066.58  43   2    2    45    12    12    55     44   51   94
14 Boston Bruins          ATL  2079.41  44   4    5    49    15    17    61     43   49   93
15 Carolina Hurricanes    MET  2093.06  39   4    5    44    13    13    52     46   53   92
16 Los Angeles Kings      PAC  2066.68  40   4    4    44    14    16    56     45   52   92
17 Philadelphia Flyers    MET  2079.35  45   3    3    48    12    13    58     43   46   91
18 Calgary Flames         PAC  2076.79  42   4    5    47    14    16    58     43   49   91
19 Tampa Bay Lightning    ATL  2068.90  42   4    4    46    13    14    56     44   48   90
20 New York Islanders     MET  2070.87  36   2    3    39    12    14    50     46   51   87
21 Florida Panthers       ATL  2059.66  40   4    5    45    13    14    54     44   46   86
22 Nashville Predators    CEN  2055.15  38   4    4    42    14    14    52     46   48   86
23 Dallas Stars           CEN  2052.77  39   3    3    42    13    13    52     44   46   85
24 Vancouver Canucks      PAC  2049.05  37   4    5    42    12    15    52     44   46   83
25 Detroit Red Wings      ATL  2033.62  37   3    3    40    13    12    49     45   43   80
26 Winnipeg Jets          CEN  2017.50  37   4    4    41    14    14    51     43   40   77
27 Buffalo Sabres         ATL  2009.45  34   3    3    37    13    12    46     46   41   75
28 New Jersey Devils      MET  1994.66  35   5    4    39    14    12    47     45   37   72
29 Arizona Coyotes        PAC  1921.41  27   3    2    29    12    8     35     45   30   57
30 Colorado Avalanche     CEN  1910.42  25   3    2    27    12    7     32     46   29   54

The multiplier right now is about 1.124 (i.e. about a quarter of all games are decided past regulation).

"So you know what's good for the people? But the people consists of men..." - Iconic Soviet movie

The team projection leaves us wanting more. After all, don't we want to be able to evaluate individual players and factor that somehow into the projection, to reflect the injuries and other reasons that force top players out of the lineups? Stay tuned. To be continued...
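The span projection boils down to summing the per-game expectations and scaling by the loser-point multiplier. Here is an illustrative Python sketch (not the site's code; the ratings fed in are examples):

```python
# Project a team's expected points over a span of scheduled games,
# as described above. A sketch; inputs are illustrative.
def expectation(r_team, r_opp):
    """Standard Elo expectation on the 0-1 scale."""
    return 1 / (1 + 10 ** ((r_opp - r_team) / 400))

def projected_points(r_team, opponent_ratings, multiplier=1.124):
    """Sum the per-game 2-point expectations, scaled for the loser's point."""
    return multiplier * sum(2 * expectation(r_team, r) for r in opponent_ratings)
```

With the multiplier at 1.124, a single game against an equally rated opponent projects to about 1.12 expected points rather than 1.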
  21. Part I. Part II.

Sherlock Holmes and Dr. Watson are camping in the countryside. In the middle of the night Holmes wakes up Watson: 'Watson, what do you think these stars are telling us?' 'Geez, Holmes, I don't know. Maybe that it's going to be nice weather tomorrow?' 'Elementary, Watson! They are telling us our tent has been stolen!' Iconic Soviet joke.

Estimating a hockey player via Elo ratings is a highly complex task. Therefore, we shall wield the dialectic approach of moving from the simpler to the more complicated, and tackle a seemingly simplistic task first. Let's work out the Elo ratings for the NHL teams as a whole. After all, it's the teams that compete against each other, and the outcome of this competition is a straightforward result. So, let's examine a match between Team A and Team B. They have ratings Ra and Rb. These ratings, or, more precisely, their difference Ra - Rb, define the expected results Ea and Eb on a scale from 0 to 1. The teams play; one wins (S=1), the other loses (S=0). To adapt this to the Elo scale, let's count a win as 1 point and a loss as 0 points. The new ratings Ra' and Rb' will be (K is the volatility coefficient):

Outcome      Sa  Sb  Sa-Ea  Sb-Eb  dRa     dRb     Ra'        Rb'
Team A Wins  1   0   1-Ea   -Eb    K-K*Ea  -K*Eb   Ra+K-K*Ea  Rb-K*Eb
Team B Wins  0   1   -Ea    1-Eb   -K*Ea   K-K*Eb  Ra-K*Ea    Rb+K-K*Eb

and the teams enter their next meetings with their new ratings Ra' and Rb', respectively. 'Wait!', the attentive reader will ask, 'Not all possible outcomes are listed above! What about the OT/SO wins, where both teams get some points?' And he will be correct. In these cases we must admit that the losing team scores 0.5 points, so unlike a chess game, where the sum of the results is always 1, in NHL hockey the total sum of results varies and can be either 1 or 1.5. Note that were the scoring system 3-2-1-0, we could scale the scores by 3 rather than by 2 and get the range 1-⅔-⅓-0, where every result sums to 1.
Alas, with the existing system we must swallow the ugly fact that the total result may exceed 1, and as a result the ratings get inflated. Which is a bad thing, sure. Or is it? Remember, the Elo expectation function only cares about the differences between ratings, not their absolute values. And all teams' ratings get inflated, so all absolute values shift up from where they would have been without the loser's point. Whom would it really hurt? The new teams. Naturally, we must assign an initial rating to every team at the starting point. One way could be to assign a new team the average rating of the previous season. But we prefer a different and much more comprehensive solution. We claim that the teams at the start of a new season are different enough beasts from those that ended the previous one that the Elo ratings should not carry over from season to season at all! Therefore all teams start each season with a clean slate and an identical Elo rating R0.

Once again, the attentive reader might argue: 'What about mid-season trades and other movements?' Well, dear reader, now you have a tool to evaluate the impact of such moves on a team. If there is a visible change in tendency, you can quite safely associate it with that move. Overall, the 82-game span is long enough to smooth out any bends and curves in the progression of the Elo ratings over the season.

Speaking of game spans, we must note one more refinement made to the ratings. In the chess world, the ratings of the participants are not updated throughout the length of an event, which is usually 3-11 games. The ratings are deemed constant for the calculation of rating changes, which accumulate, and the accumulated sum is the rating change of each participant. We apply a similar technique to the teams' Elo calculations: we accumulate the rating changes for 5 games for each team and "commit" them after the five-game span.
The remainder of the games at the end of the season is committed regardless of its length, from 1 to 5. Why 5? We tried all kinds of spans, and 5 gave the smoothest look and the best projections. Now, as a demonstration, let's calculate the possible rating changes in the much anticipated game where the Minnesota Wild host the Columbus Blue Jackets on December 31st, 2016: Rcbj = 2250, Rmin = 2196, Ecbj = 0.577, Emin = 0.423, K = 32 (standard USCF).

Outcome    Scbj  Smin  S-Ecbj  S-Emin  dRcbj   dRmin   Rcbj'    Rmin'
CBJ W Reg  1     0     0.423   -0.423  +13.53  -13.53  2263.53  2182.47
CBJ W OT   1     0.5   0.423   0.077   +13.53  +2.47   2263.53  2198.47
MIN W OT   0.5   1     -0.077  0.577   -2.47   +18.47  2247.53  2214.47
MIN W Reg  0     1     -0.577  0.577   -18.47  +18.47  2231.53  2214.47

Note: MIN gains rating when it gets a loser's point. Here is the dynamic of Elo changes (without the five-game accumulation) for the Metropolitan Division, as an example. See more detailed tables on our website: http://morehockeystats.com/teams/elo

Ok, we got the ratings, we got the expected results - can we get something more out of it? To be continued...
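The CBJ-MIN numbers quoted above are easy to reproduce with the standard Elo update. A quick illustrative check in Python (K = 32 and the ratings are the ones from the example):

```python
# Reproduce the CBJ @ MIN example with the standard Elo update.
def elo_update(r_a, r_b, s_a, s_b, k=32):
    """Return both new ratings given scores (win=1, OT/SO loss=0.5, loss=0)."""
    e_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    e_b = 1 - e_a  # the two expectations sum to 1
    return r_a + k * (s_a - e_a), r_b + k * (s_b - e_b)

print(elo_update(2250, 2196, 1, 0))    # CBJ wins in regulation: ~(2263.53, 2182.47)
print(elo_update(2250, 2196, 0.5, 1))  # MIN wins in OT: ~(2247.53, 2214.47)
```

Note how the OT case hands out more than the regulation case in total: the extra half-point of "score" is exactly the loser's point that inflates the ratings.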
  22. Happy New Year everyone! Original post. The Elo rating system is a system for the evaluation and comparison of competitors. Up until today it has mostly been applied in the domain of board games, most famously chess, but also in disciplines such as draughts or go. The Elo system, named after its inventor, Prof. Arpad Elo, who first published it in the 1950s in the US, is capable of producing a reliable score expectation for an encounter between two competitors.

For those who are not familiar with chess or draughts, let's take a look at how the Elo ratings work:

1) In an encounter between two competitors, A and B, assume they have ratings Ra and Rb.

2) There is a function that maps the expected result for each player given the opponent:

Ea = F(Ra, Rb)
Eb = F(Rb, Ra)

where F is a monotonic non-decreasing function bounded between the minimum and maximum possible scores, such as 0 and 1 in chess. An example of such a function would be arctan(x)/π + 0.5. Ea + Eb should be equal to the maximum possible score. In practice a non-analytical, table-defined function is used that depends only on the difference between Ra and Rb, not their actual values. The function can be reliably approximated by the following expression:

E = 1 / [ 1 + 10 ** ((Rb - Ra) / 400) ]

which works well with ratings in the low 4-digit numbers and rating changes per game in the 0-20 range.

3) After the encounter, when the real scores Sa and Sb have been registered, the ratings are adjusted:

Ra1 = Ra + K*(Sa-Ea)
Rb1 = Rb + K*(Sb-Eb)

where K is a volatility coefficient, which is usually higher for participants with a shorter history, but ideally should be equal for both participants. The new ratings are used to produce the new expected results, and so on.

The Elo rating has several highly important properties:

1) It gravitates to the center.
As the rating R of a participant climbs higher, so does the expected result E, which becomes difficult to maintain, and a failure to maintain it usually results in a bigger drop in the rating.

2) It's approximately distributive. If we gather N performances, average the opponents as Rav, the expected average performance as Eav = F(Ra, Rav), and the actual performance as Sav, then the new rating RaN' = Ra + N*K*(Sav-Eav) will be relatively close to the RaN obtained via direct updates after each of the N games.

3) It reflects tendencies, but overall performance still trumps them. Given three players, each with ten encounters against opponents of the same rating, with performances (W - win, L - loss):

For player 1: L,L,L,L,L,W,W,W,W,W
For player 2: L,W,L,W,L,W,L,W,L,W
For player 3: W,W,W,W,W,L,L,L,L,L

player 1 will end up with the highest rating of the three, player 2 will be in the middle, and player 3 will have the lowest one - but not by a very big margin. Only when the streaks become really long may the Elo of a lower performance overcome the Elo of a higher one.

And how does Elo stack up against the four Brits?

* Goodhart's Law: pass. It measures the same thing it indicates.
* Granger's Causality: pass. It is a consequence of past performance by definition, and a prediction of future performance, by definition.
* Occam's Razor: pass. The ratings revolve around the same parameter they measure.
* Popper's Falsifiability: partial pass. The predictions of Elo sometimes fail, because they are probabilistic. However, the test of time and the wide acceptance indicate that the confidence level holds.
Elo has even been used for "paleostatistics": ratings were calculated backwards to the middle of the XIX century, and the resulting calculations are well received by the chess historians' community. The only well-known drawback of Elo is the avoidance by top chess players of competition against much weaker opposition, especially when the weaker player has White, as such a game can be drawn relatively easily by the opponent, and the Elo rating of the top player could take a significant hit, resulting in a drop of several places in the rating list.

Now, to the question of the chicken and the egg - where do the initial Elo ratings come from? Well, they can be set to an arbitrary value in the low 4-digit numbers. Currently a FIDE beginner starts with a rating of 1300. If a newcomer is recognized as more skilled than a beginner, a higher rating is assigned based on rating grades for each skill level - a sort of historical average of the newcomer's peers.

And... What does all this have to do with hockey? To be continued...
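Property 3 above is easy to check numerically. Here is a small illustrative simulation (a sketch, not anyone's official implementation) of the three ten-game sequences against opponents held at a constant equal rating, with an assumed start of 1500 and K = 32:

```python
# Numerical check of property 3: three ten-game W/L sequences against
# equally rated opponents. Start rating and K are illustrative assumptions.
def play_sequence(results, start=1500.0, opp=1500.0, k=32):
    """Run a W/L string through per-game Elo updates; return the final rating."""
    r = start
    for res in results:
        expected = 1 / (1 + 10 ** ((opp - r) / 400))
        r += k * ((1.0 if res == 'W' else 0.0) - expected)
    return r

r1 = play_sequence('LLLLLWWWWW')  # cold start, hot finish
r2 = play_sequence('LWLWLWLWLW')  # alternating
r3 = play_sequence('WWWWWLLLLL')  # hot start, cold finish
# The finals come out r1 > r2 > r3, all within a few dozen points of 1500,
# matching the claim in the text.
```

The reason is the feedback in the expectation: early losses lower E, so the later wins pay out more than 16 points each; early wins raise E, so the later losses cost more than 16 each.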
  23. Original post. In the previous post we mentioned Goodhart's Law and how it threatens any evaluation of an object. We said that it traps the Corsi/Fenwick approach, because that approach substitutes a remarkably simple stat - shots - for the complex function of evaluating a hockey player. Goodhart's law is not alone. In any research it is preceded by two pillars: Popper's law of falsifiability and Occam's razor. A theory aspiring to any scientific value must comply with both, i.e. produce hypotheses that can be overthrown by experiment or observation (and then relegated to the trashcan), and avoid introducing new parameters beyond the already existing ones. Add Granger causality into the mix, and we see that the four Brits have presented the hockey analytics society with pretty tough questions that the society - at least the public part of it - seems to be trying to avoid. The avoidance will not help. No evaluation system will be able to claim credibility unless it complies with the four postulates above, and, within that compliance, issues measurable projections. To be continued...
  24. Original post. Goodhart's law is the bane, the safeguard and the watchdog of everyone who tries to draw conclusions from sample data. The "Schroedinger's Cat of Social Sciences" practically says: if you want people to do X, but you reward them for doing Y, they will be doing Y rather than X. We are starting to see that in the "possession analytics" based on shots taken: players begin to shoot from everywhere to get their possession ratings up. But we digress - the topic is the scoring system; we'll save that note for another blog entry.

We want NHL hockey to be spectacular. That's the main objective (besides being fair and competitive; otherwise look to the Harlem Globetrotters). In the past the spectacle included fighting as much as scoring and winning, but the public's taste changed, and the fighting went away. It was not directly related to scoring and winning; it was just an extra free show. Now we're left with scoring and winning. These two are closely tied, and not necessarily as a positive feedback, since not allowing your opponent to score also helps winning. A 2-1 win is practically just as valuable as a 7-2 one. So in the mid-2000s the winning objective - the points objective - took over the scoring objective. And from the previous post we see that the existing 2-2-1-0 points system encourages a low-intensity game, preferably slipping into the OT. On the other hand, we also noted that a 3-2-1-0 points system would encourage teams to clamp down and protect their minuscule leads. Looks like a circle to break...

Well, here comes Goodhart's law. You want teams to score, or at least try to score, but you reward them for achieving points. So what they do is concentrate on getting points. Therefore, if the NHL wants to see score-oriented hockey, the NHL needs to reward scoring, not points. Still, points have been used to determine playoff spots, so something has to give.
First, let's take a wild ride and suggest that we rank teams by the number of goals scored. That would lead to a pretty drastic change and the end of hockey as we know it: it would create situations where a team in a playoff race might play a whole period without a goaltender. In general, the goaltending position would deteriorate - and don't we love spectacular saves just as we love slick goals? Probably, that would be too much. Thus, we can mitigate this by making the goal differential the ranking criterion. At the end of the season, the teams with the higher goal differentials are ranked at the top, and the wins-ties-losses, well, they get relegated to tie-breaks. The incentive to score, rather than to hold off the opponent, increases, because while the competitors cannot score more than two points in a game, they can still score a bigger goal differential! All the lazy skating to finish the game after it's 4-0 or 5-1? Gone.

This idea is actually not novel. It has been used for a long while in team chess tournaments. Such tournaments consist of matches, where each player of one team plays against an opponent from the opposing team at the same time. Each player's individual score (win, draw or loss) is accumulated into the total score. So a match of 8-player teams, where one team had 5 wins, 1 draw and 2 losses, ends with a 5.5-2.5 score - essentially "the goal differential". At the end of the tournament the scores accumulate, and the teams are ranked according to them. You can see the crosstables of historical chess tournaments at the wonderful Olimpbase website. And if you feel that the fact of winning or losing the game should have more weight than just a tie-break (by the way, there will be fewer tie-breaks on goal differential), that is easy to factor in: just add a bonus "goal" to the winner, like it is done in the shootout now. Or, add two bonus "goals" for winning in regulation and one bonus "goal" for winning in the OT, and abolish shootouts.
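As a minimal sketch of the proposed ranking, here is illustrative Python implementing the differential-plus-bonus idea: goal differential plus 2 bonus "goals" per regulation win and 1 per OT win. The team records below are invented for the example:

```python
# Sketch of the proposed standings: adjusted goal differential with
# bonus "goals" for wins. Team records are invented for illustration.
def adjusted_differential(gf, ga, reg_wins, ot_wins):
    """Goal differential plus 2 per regulation win and 1 per OT win."""
    return (gf - ga) + 2 * reg_wins + ot_wins

teams = {
    'Team A': dict(gf=210, ga=180, reg_wins=30, ot_wins=8),
    'Team B': dict(gf=240, ga=220, reg_wins=28, ot_wins=6),
}
standings = sorted(teams, key=lambda t: adjusted_differential(**teams[t]),
                   reverse=True)
```

Here Team B scores more goals, but Team A has the better differential and more wins (98 vs 82 adjusted), so it ranks first - holding the opponent off still matters, but running up the score now pays too.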