I think it's normal that the win rates for Rumble jungle are lower in higher Elo because the pick is newer than Rumble mid, and junglers have not grown accustomed to how he should be played; therefore he gets abused in higher Elos.
It's actually accounted for, though:
If Rumble jungle is just 5% worse than Rumble mid because players are bad at him, you'd see a flat win rate difference, but you don't. The difference grows with player skill, and it's bad for Rumble jungle.
See, players in higher MMR play more games. So higher-MMR players have more practice on a given pick. That means you would expect an uptick in new-champ performance at higher ratings. But you don't see one.
Yes, it's imperfect. We're using MMR as a proxy for practice games. I'm aware that it's flawed. But you'd still intuitively expect that relationship to hold. Instead, we get the opposite.
For whatever reason, as players play more games and play higher-quality League of Legends, Rumble jungle becomes an even worse pick and Rumble mid becomes an even better one.
I completely disagree with your point about the context of statistics. If you provide stats and use them in an argument, the onus is on YOU to provide the CORRECT context and interpretation of the stats, and to explain when they can and cannot be applied. The onus is NOT on other people to rectify your misuse of the statistics.
Example: Hecarim had a 48-49% win rate in plat+ soloq forever, but it turns out people were building wrong/using the wrong rune. Before people figured out Phase Rush made him broken, Phreak's ENTIRE argument could have been used to make it seem like Hecarim is not a good pick for pro play in the jungle. This was categorically false, and saying "the soloq win rate for Hecarim doesn't matter because people are playing/building/runing him incorrectly" would have been an absolutely sound argument. Yes, that person then needs to say "this alternative playstyle, build path, or rune choice is wholly superior."
Saying that soloq data is useful because of its large sample size is absolutely garbage for the above reason: if everyone is playing a champ incorrectly, then the soloq win rates will reflect that incorrect playstyle/build/rune, NOT the true strength of a champion when played/built/runed correctly.
In the same way, saying soloq and pro win rates for champs don’t matter is an absolutely correct statement because they include SO many more variables than just the champion’s power within a pro game using a specific playstyle/build/rune.
The analogy would be if an economist or epidemiologist made an argument like “people of a certain race are inherently more unhealthy because they have higher rates of heart disease in the US”, which is complete horseshit. A competent statistician would look into the hundreds of factors that could influence this and figure out that “people of this race are X times more likely to live in urban areas and on average have much lower access to affordable healthcare and healthy foods, explaining why the incidence of heart disease amongst them is higher than other races”.
Every competent researcher would discredit the person who made the blanket statement using one or two population-wide overarching stats as incompetent and as having an agenda and not using sound research methodologies. (Source: I TA’d probability theory and stats for economics and financial engineering majors at Princeton).
You can use soloq data for gauging strength in pro play IF and ONLY IF: 1. The vast majority of people are using the optimal core build and runes. 2. The pick itself doesn't depend on coordination to be played (i.e. fasting Senna souls) or on chaos (i.e. most low-elo-stomping assassins/fighters). 3. The strength of the champion DOES NOT come from coordinated team strategies (i.e. combining multiple ults to blow enemy summoners 50s before an objective when your ults will come online faster than the enemy's ults/summoner spells).
Phreak has a degree in economics but his use of stats makes it hard to believe. (I don’t actually think his degree is make-believe, btw. I’m just shocked that an economics major has presented this as public-facing analysis he believes is correct).
He incorrectly computes R squared using a categorical variable (tier) that has no meaningful numeric spacing (as opposed to numerical data such as MMR), but somehow thinks he can claim that pro play exists "to the right" of these categories?? If he used average MMR by division (Gold 4, Plat 2, Masters, or by 200 LP increments), we would probably see that Lillia's win rate as MMR increases is parabolic, not linear. But for some reason, he still uses a linear regression R squared value to claim Lillia is consistently weak??
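To illustrate what I mean (the numbers below are completely made up, and the per-tier MMR values are purely hypothetical placeholders), here is the kind of comparison I'm talking about: fit the same win rates against evenly spaced tier indices versus an unevenly spaced MMR axis, linear versus quadratic, and see how the fit quality changes:

```python
# Made-up numbers only: a sketch of fitting win rate against evenly spaced
# tier indices vs. a (hypothetical, unevenly spaced) average-MMR axis,
# and a linear vs. quadratic model.
import numpy as np

tiers = ["Bronze", "Silver", "Gold", "Plat", "Diamond", "Masters+"]
tier_index = np.arange(len(tiers))                            # treats every gap as equal
approx_mmr = np.array([900, 1150, 1400, 1700, 2100, 2900])    # hypothetical spacing
win_rate = np.array([47.0, 46.5, 46.0, 45.5, 45.0, 46.5])     # fake Lillia-like data

def r_squared(x, y, degree):
    """R^2 of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return 1 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

print("linear fit on tier index:", r_squared(tier_index, win_rate, 1))
print("linear fit on approx MMR:", r_squared(approx_mmr, win_rate, 1))
print("quadratic fit on approx MMR:", r_squared(approx_mmr, win_rate, 2))
```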
The same thing is probably true for the Rumble jungle data Phreak shows at the end of his video. But because he uses data that is aggregated and categorized poorly, and thus uses linear regression incorrectly, Phreak incorrectly believes that the data supports his assertion that Rumble jungle is bad. Both Rumble and Lillia see a huge upswing between Diamond and Masters. That just means that people below Masters probably are not good enough to pilot those champions in the jungle. How on earth do you think that the skill difference between Plat and Diamond is the same as Diamond to Masters+? How on earth are you grouping Masters+ as one category when the difference between Masters and high Challenger is the same as the difference between Silver and Diamond (using LP as the numerical basis)??? This all makes no sense.
If a student presented this video as work in intro stats at Princeton, I'm not sure it would even get a passing mark. Riot internally has a wealth of data that we cannot access via the developer APIs (it would be sick to run regressions of jungle matchups across MMRs to compare item builds, for example), yet this video is somehow making money as the forefront of analysis in League of Legends while using, at best, high-school statistics about which the teacher would say "the methods are technically not used correctly, but good effort because your calculations themselves are correct."
I have no idea if Rumble is actually a good jungler or not. Phreak could be correct about him being better in a solo lane than in the jungle. But Phreak's whole video can be boiled down to 1) there are strong soloq strategies that could be picked in competitive but are not yet because pro players don't know everything, and 2) people bad at Rumble shouldn't play Rumble. These are statements any League player could tell you.
The only point in this video that has merit is that there are many different builds and strategies to be found in soloq, and it is the responsibility of pro players, analysts, and coaches to research and test these builds and strategies (referring to his example of Rylai's Seraphine). This is definitely true, as many OP and meta-defining builds in the past have come not just from soloq, but from low-elo soloq (Blue Ezreal, AP Tryn, AP Yi, Shaco support, etc.).
Sorry for typos and poor formatting; I’m on mobile. But this video had me fuming.
Example: Hecarim had a 48-49% win rate in plat+ soloq forever, but it turns out people were building wrong/using the wrong rune. Before people figured out Phase Rush made him broken, Phreak's ENTIRE argument could have been used to make it seem like Hecarim is not a good pick for pro play in the jungle. This was categorically false, and saying "the soloq win rate for Hecarim doesn't matter because people are playing/building/runing him incorrectly" would have been an absolutely sound argument. Yes, that person then needs to say "this alternative playstyle, build path, or rune choice is wholly superior."
This one is actually fairly easy to combat for the same reason that it's reasonable to use MMR trending toward pro: High MMR players have their ear to the ground. They pick up the new OP builds. For example, here is the purchase rate of Turbo Chemtank on Hecarim in patch 11.5, when his pro presence spiked to 97%:
Once literally every pro team had figured it out, half of players above diamond had finally followed on. Considering that Chemtank is in fact his best build, the exact same analysis done in the video would bear the same fruit when analyzing Hecarim.
Now to be fair, this still requires something to be figured out. But if we're using solo queue data in a reactionary way and saying, "Hey, this champion has overtaken pro," then we will observe the effects that caused the pickup, skewed by MMR. It's why every single graph used MMR as the independent variable to explain champion performance: virtually every single skill relevant to pro play trends in the same direction.
Sorry for wall of text.
It's clear you still don't understand Phreak's argument at all.
If we assign a coordination rating to each soloQ tier, I think the point becomes clearer. (I'm making these numbers up to illustrate Phreak's point, but I hope we can all agree that the coordination level goes up as you go up in rank.)
Iron: 0/10 Coordination
Bronze: 1/10 Coordination
Silver: 2/10 Coordination
Gold: 3/10 Coordination
Platinum: 4/10 Coordination
Diamond: 6/10 Coordination
Master/Gm: 7/10 Coordination
Challenger: 8/10 Coordination
Now, you are right that Pro play is another level on top of this; for now, we can just assume that Pro play is the highest level of coordination, AKA a 10/10.
So, in the video Phreak demonstrates that Rumble's SoloQ jungle relative win-rate goes down with increasing rank. This simply correlates with the idea that as players get better, coordination gets better, but Rumble jungle becomes a worse pick.
So, why would we expect this phenomenon to suddenly not continue to pro play where coordination is at its highest level? LS argues that there's essentially some unseen level of coordination needed to show that Rumble is a great pick, whereas Phreak is arguing that there is no Pro play standard in which the trend should not continue.
FWIW, I'd argue it's more like 0 / 0.3 / 0.6 / 1 / 2 / 3 / 4 / 10.
Just, things like Taliyah ganks only require a 3-4 to be good whereas Ryze ultimates need a 6 or something.
Very much an oversimplification still, but that's the general idea.
But isn't there something to be said for Rumble not really being analogous to other League of Legends champs in the same way that other picks are? I.e., experience on other champions does not translate well to Rumble. At least mid and top laners have probably laned against Rumble and are more familiar with his mechanics and playstyle than junglers necessarily would be.
I agree that learning Rumble is more unique than, say, learning Cassiopeia. We are in total agreement here.
So, wouldn't learning Rumble be a byproduct of playing more games? If so, then why does Rumble jungle's performance drop as he's measured in higher skill brackets?
I've never had issues with Phreak being wrong; the issue is him being unable to admit it. Wasn't long ago he had that whole Zeal-item Caitlyn argument about how building a Zeal item on Caitlyn was objectively bad because it's not as gold efficient as full AD crit items, and when multiple pros added that Zeal items give movement speed and let you stack Headshots faster (which can't be measured in gold efficiency), he literally said "I don't care, it's a bad build" while casting.
Rumble could literally go 100% wr 100% pb for the rest of MSI and Phreak would still argue he's not a good jungler and that he's right
Stacking Headshot is easily measured in gold efficiency: Headshots scale linearly with both AD and AS. It's a non-argument. Simply scaling DPS scales Headshots. To argue that AS is somehow a better Headshot scaler should make you trust them EVEN LESS.
"I need movement speed" is the exact same argument as "I need life steal first on Kog'Maw." I'm glad you feel that way. You're still wrong.
Meanwhile, I can not only prove out the DPS-per-gold, the exact same metric I used to predict new Phantom Dancer as overpowered (spoiler: it is), but also look at solo queue win rate data to see that in real-world applications, a second Zeal is still bad.
So, every single non-subjective form of proof is on my side. And on the other is... Opinion
IMO Phreak shouldn't answer them so directly and so quickly like this.
For a Riot employee to argue out in the open directly with two other big League personalities feels very unprofessional to me.
I'm not interested in arguing specifically against Dom or LS or anyone. The first draft of this video, two days ago, was unscripted, and it was so rambly I left it unpublished and spent a couple hours yesterday writing a script. It so happened that they put out a new video in between those points in time.
I mean, Lillia has been a 44-46% win rate jungler in D2+ as far back as u.gg allows (which is 11.6) and yet is still highly prioritized as a competitive jungler.
Trying to conflate solo queue win rate with competitive viability is not good analysis. If the argument was/is that Rumble isn't good in solo queue, and you could make arguments for why, sure, that would be more reasonable, as Lillia requires a good amount of coordination with her ultimate. Rumble is probably similar with his Equalizer and with fighting in spots where his ult is most impactful.
But Phreak wants to use this data to say it's a bad competitive pick, when he also tweeted about Rumble having, I think, a 3-7 record after taking out RNG's "free" wins (it's actually 4-7), without taking into account some of the free losses from teams like DWG beating Infinity on the last day. If we take the RNG wins into account, Rumble is 7-7 in jungle; if we take out Group A games, his record is 4-6 in jungle.
I'm curious to see how Rumble's win rate does, though, when the Rumble Stage games are slightly more even. But if someone like DWG beats Pentanet even though they have Rumble jungle, using that to justify Rumble being bad is not good analysis.
Except she's not "still highly prioritized as a competitive jungler."
Her p/b at MSI is 14%. Her two picks are from players eliminated from the tournament. She lost both games.
Literally no one good at League of Legends is playing Lillia today.
I may be stupid here, but it just feels like the huge drop-off in win rates from Diamond to Master is because people are better at abusing inexperienced players and the margin for error gets significantly lower, which I feel will improve as time goes on.
Edit: I still feel like the total games played have to be significantly lower than Rumble mid's, as the pick only recently became popular in the jungle.
I have no doubt that Rumble junglers are relatively inexperienced. But unless I see an Azir curve, I'm not buying it.
It also means, regardless, that teams are still hard trolling their drafts. Even if you are in the camp of, "He's actually OP when played well, but teams aren't there yet" then why did virtually every team int all of their drafts by picking or banning him? How bad are you at understanding how your scrims went?
The end result is still the same: Draft better.
You didn't use MMR as your independent variable. You used tier (Bronze - Masters+). The analysis SIGNIFICANTLY changes if you actually use MMR (more precisely MMR range) as the independent variable. I quickly made a visual where each cell = a standard unit of MMR range (100 LP). (Not sure if this is actually the case as obviously MMR and LP are not 1-to-1, but this is the best I can do as a non-Rioter).
https://i.imgur.com/g0VwYdL.jpg
You CANNOT group Masters+ when the range for Masters+ is LARGER than all of Iron - Diamond combined. Sure, you might have sample size issues at the highest MMR ranges, but for popular champions the analysis would at least be somewhat correct if done this way. I promise you that with enough games, the Lillia and Rumble plots you made would look parabolic and their win rates would get higher and higher as MMR increases from Masters to GM and beyond. In that case, wouldn't it actually be that Rumble and Lillia jungle are so hard to play relative to opponents of the same skill level that you need to be at least Masters or GM MMR to be good enough to pilot those champs in the jungle, and maybe by the time player MMR is high Challenger those two champs are giga busted in jungle? This could be true, I don't know. But you don't know either, because of the poor grouping of player MMR in creating your independent variable categories.
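As a rough sketch of the regrouping I mean (the column names, MMR values, and win flags below are all made up, not real data), this is the kind of binning you'd do instead of one giant Masters+ bucket:

```python
# All values below are made up; this just shows 100-LP-wide bins instead of
# one giant "Masters+" bucket, assuming a per-game table with an MMR/LP column.
import pandas as pd

games = pd.DataFrame({
    "mmr": [2850, 2920, 3050, 3140, 3380, 3510, 3620, 3900],  # hypothetical
    "win": [0, 1, 1, 0, 1, 1, 0, 1],
})

games["mmr_bin"] = (games["mmr"] // 100) * 100                # 100-LP-wide slices
win_rate_by_bin = games.groupby("mmr_bin")["win"].agg(["mean", "count"])
print(win_rate_by_bin)  # win rate and sample size per slice
```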
I have NO idea where on the right pro play is, but it likely doesn’t belong on the chart at all, as pro play is a different independent variable entirely versus soloq MMR.
I agree that higher MMR players pick up new builds faster, but many OP builds come from lower MMR. Blue Ezreal, AP Yi, AP Tryndamere, and many other broken builds in the past legit came from Bronze/Silver soloq strategies. Soloq data is a starting point from which to begin analysis. It is NOT a source from which conclusions can be independently drawn.
So, valid point on word choice.
However, keep in mind, pro play is only far to the right on things like coordination. Canyon may be rank 1 in EUW solo queue right now, but where should we put some of the other players? Cody from INF is 50-48 on his EUW account and is firmly in Master. If we're graphing how well champions play based on their hands and game knowledge... should he be to the right of challenger? If so, why? He's not even to the right of Master in solo queue. It's the same player. Yeah the coordination is different and I agree that one goes off the graph. But not hands. Not laning. Not positioning. Not target selection.
How about his teammates? His opponents? Should we give Rumble jungle the same lofty possibility that we give Wei to every team at MSI, or should teams maybe draft better?
The issue with the underlying point is that your entire argument only holds up based on the global data, which is skewed towards EUW and NA. If we look just at the Korean server, your argument falls flat, since Rumble jungle's win rate is actually high and continues to go up with rank. Considering the initial point was that western teams and western players in general are behind on the pick, it actually supports the argument of LS and IWD.
Another thing: even though mid Rumble in Korea has slightly higher win rates, the amount of games is significantly lower, i.e. on u.gg there are ~560 mid Rumble games and ~2900 jungle Rumble games tracked in Dia2+ (cumulative 11.9 and 11.10). So it is kinda expected that a lower pick rate would inflate win rate; it's a rather common occurrence. However, if you still want to use it to say that Rumble is not a jungler because his win rate is somewhat lower than in mid, you'd still be wrong - if a champion has a 52%+ win rate in both roles, I'd say it's good in both roles.
For the sources, I've checked the w/r on u.gg and leagueofgraphs, and the data in both was pretty similar. Btw, it'd be nice if you listed your sources on the graphs you use in the video.
edit: what I am trying to say is that you did a rather good job at explaining the different factors that might influence your data, but you've missed a large correlation between win rate and server. It's not a diss, you did a good job otherwise, but it is very important in the context of the discussion.
Ooh, that's a good point, but also unsupported by data.
Now I'll grant you that if you slice the data down far enough, you can eventually find a subset that starts to close the gap, but then you get a host of new issues. For example, there were under 500 games of Rumble mid in master+ Korean solo queue over the last two weeks. If we're going to debate 1-2% win rate shifts as indicative of champion performance, that's not enough samples to support those claims. You also have metagame issues that aren't relevant to pro play.
See, Korean solo queue is rife with early surrenders. Over the last two weeks in master+ Rumble jungle games, Korean solo queue had 23.25% of games end in 15-20 minute surrenders and another 33.23% end pre-25. Worldwide for the same skill bracket and time, it was 15.02% pre-20 and another 28.95% pre-25. There are also 33% more open mids that end pre-15 in Korea than worldwide.
Rumble is an early game champion, especially in jungle. He falls off much less steeply in mid lane. Korean solo queue DOES NOT represent pro play. The player skill might be higher but it has its own cultural issues. You introduce a host of new biases by locking yourself to one region and none of those tenets hold true. So only when you forcibly introduce a ton of pre-20 and pre-25 surrenders that don't exist elsewhere in the world nor exist in pro play do you close the gap on jungle-mid. That doesn't give me a lot of confidence.
Wouldn't you expect that as jungle Rumble is played in higher and higher skill brackets, the people playing against him would better know how to punish an off-pick like that?
And pros don't know how to do that? Why would this relationship suddenly flip?
Hey man! Think your content is great and the fact that you're interacting with people on reddit is awesome.
Some other people have mentioned it, but I think your grouping of Masters+ as one tier is misleading, because the skill difference between 0 LP Master and 1k LP Challenger is very significant.
Looking on u.gg, looks like Rumble jungle's Masters+ winrate is 47.71%. When you break it down between Masters, GM, and Chall, his winrate goes from 45.74% in Masters to 46.58% in GM to 51.45% in Chall. So his winrate isn't even dropping in higher skill brackets, the way you group the data just makes it seem like it is.
If you're looking at just 11.10 data, there are currently 49 recorded challenger Rumble jungle games. If you do 14 days, you see a pretty flat ~-3% win rate compared to the average jungler.
I'm interested in the sample being large enough that I'm confident there won't be huge swings in win rate because of one result (keep in mind a 55% win rate over 49 games is only a couple of wins more than a 50% one, and that's just unacceptable for this). Second, once I'm confident in low variance, I care about the trend. If a trend doesn't exist when travelling from Bronze to Diamond, why do you think one suddenly shows up when travelling from Masters to Challenger? It's possible for one to exist, but it shouldn't be a leading theory.
Ah my bad then
It really felt like an answer to their previous stream where they talked a lot about your tweet and your usage of soloQ data since it came right after
Yeah I feel that. It's a popular topic, so it makes sense. It's just a reply to the topic in general as opposed to any one piece of content.
Yeah I have no idea how to account for any of that. There’s no proper mathematical way of doing so. That’s why pro play shouldn’t be on the chart at all.
BUT that doesn’t mean that the chart itself is useless. If done correctly, I actually think your analysis has the beginnings of actually statistically correct analysis that can provide meaningful insights for the pro scene as well as soloq in general.
The best way of approaching this IMO would be to keep soloq analysis as soloq, and then draw conclusions from soloq as to new possible strategies and in what contexts they could be good for pro.
From the conclusions you draw from soloq, you can then form hypotheses as to what new strategy (champ/build/rune) might be OP. You then can test these hypotheses in customs in-house and in scrims.
If a strategy passes all these tests and then doesn't perform well on stage, it doesn't necessarily mean you were wrong (the on-stage sample size is low). But it could mean that there were other conditions/counter-playstyles that you didn't account for while theorizing, practicing, and testing this new strategy. So what you need to do then is go back and watch the replay as a team with your coaches and see if something happened that you didn't account for in practice.
I hope what I’m saying through all my comments makes sense and if not please let me know/ask.
I’m only approaching this through the lens of statistics and the scientific method. I am not approaching your video/this thread as an authority on League of Legends - I’m not and don’t pretend to be one.
It makes perfect sense and I agree with virtually everything you wrote.
At the end of the day, it'd be great if teams could test everything. Certainly, it's wise for them to at least test promising candidates. But ultimately they just can't. There just isn't enough time to hypothesis test everything in League of Legends. There are 5.9 * 10^21 possible team compositions. Good luck getting enough data on each one.
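(For the curious: that figure lines up with assuming a pool of roughly 155 champions and counting role-ordered teams of five with no champion shared between the two teams. A quick sanity check under those assumptions:)

```python
# Sanity check of the 5.9 * 10^21 figure, assuming a pool of ~155 champions
# and counting role-ordered teams of five with no champion repeated across teams.
from math import comb, factorial

champs = 155  # approximate champion count at the time; an assumption
compositions = comb(champs, 5) * comb(champs - 5, 5) * factorial(5) ** 2
print(f"{compositions:.2e}")  # ~5.95e21
```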
So by necessity you have to cut corners. You have to go more general. There are billions of possible drafts after the first six bans and a first pick Rumble. No team in the world has accounted for all the possible mid/jungle matchups by the time they made that pick. So I think to some degree a rigorous testing regimen is just not possible anyway, so don't try to hold anything to that standard.
This isn't to say you shouldn't practice anything. Of course you should. But you realistically can't VoD review every game of every champion you play. Get some confidence that it's a pick or build worth pursuing, put the time in on scrims, and if it felt like it went pretty well, that should be enough.
I'm seeing a 51.45% win rate over 344 Challenger games on 11.10 from u.gg, but maybe the site is wrong? 344 games is probably a big enough sample size to read into the variation, but again I'm not sure. Also not sure how that 51.45% compares to the avg jungler.
I think the intuition has been presented elsewhere, and makes sense to me: people in Bronze-Master are not learning the champ well/fast enough, so the trend is they get punished harder as they get closer to master; then from Master-Chall the players are learning the champ quickly and getting rewarded for it in terms of winrate.
I think this intuition makes a lot of sense because even in pro play junglers have been getting criticized for not playing the champion optimally (heat management during clear, equalizer placement, etc.) so it makes sense that soloq players below Challenger are also having a rough time.
That's a pretty reasonable count of games. I was using lolalytics.
Regardless, I'd want to see more games when we're trying to nail down pretty small deviations. For reference, across all of 11.9, Rumble mid had a 6.7% higher win rate than Rumble jungle in Challenger (54.7 vs. 48.0) according to u.gg. Right now, Rumble mid is sitting at 45.5 on that site, which is clearly not accurate.
That said, I'd expect some growth, but the changes seem too big. For reference, across all of 11.9 according to u.gg, Challenger and GM Rumble jungle were within 0.7%. GM to Master was under 0.1%. Master to Diamond, 1%. That low-difference trend continues all the way down to Silver. In 11.10, that trend is the same except that the Challenger-GM divide is 5%. Nothing makes me believe that Challenger players suddenly uniquely excel at Rumble after May 12. I believe the problem is more with sample size.
The difference grows with player skill, and it's bad for Rumble jungle.
That is true when we are comparing Rumble Mid against Rumble jungle, but imo for the most part they should be considered on their own, because:
Sure, maybe Morgana (like you suggested in the video) is the best jungler; maybe mid Rumble is the absolute best champion in the game. I think the part that people most take offense to/disagree with is that Rumble jungle is just bad.
And Rumble mid has the "Azir curve", whereas Rumble jungle looks to either have a bell curve (if we want to read into the -0.5 between Plat and Master and include Bronze and Silver) or is pretty much completely flat (past Gold).
Rumble jungle is underperforming very similarly across multiple brackets of play. And that flat underperformance can again be attributed to players having less experience on the champion than would be ideal.
One final note, somewhat unrelated: I always think the numbers can be used to ask questions (like why not Guinsoo before Bork), but they generally aren't the answer. Guinsoo versus Bork would just have to be tested if you are actually that reliant on the lifesteal. For Rumble jungle I don't think there is a straightforward way to test it, but I think we should at least be able to describe why he is underperforming. You had a nice two-liner explaining why Tristana into Syndra underperforms, and I think it is fairly easy to make a similar one-liner about Lillia (frail early clear, frail in general, power very conditional on her ult), but I personally can't make a nice succinct explanation of what Rumble is missing. When I play him, I feel like the clear is fast and safe, his dueling is strong, his objective control between passive and ult is superb, and his ganking with ult is excellent (and Ultimate Hunter means you have a fairly low cooldown on the ultimate - even without stacks it helps considerably).
In a similar way to how you say "you can trust your own judgement over what pros are picking when you look at data" I also feel like I can trust my own judgement over the data if I don't see why the data is the way it is.
Can you tell me what Rumble is lacking that numerous other junglers are offering? That makes him a subpar pick?
I'd propose it's due to his actual crowd control profile and lack of a real role on the team. He's there providing damage when teams already have champions who do that. FWIW I looked up Rumble jungle's highest winrate mid laners after hearing Pabu's interview about AD mids. The top of the list are all melee AD mids (Nocturne, Renekton, Sett, etc). In some cases, those champions just have very high win rates, so beware the biases, but it also feels accurate: Teams don't need substantial magic damage out of the jungle if they already have it in mid. They don't need a CC-less jungler because how exactly are you going to gank someone who already builds Mercury Treads for the lane matchup? It's the exact same reason people like Taliyah-Renekton and Elise-Renekton.
Absolutely, you’re 100% correct on this. It’s impossible to play test every situation.
But (and I might be wrong on this) it shouldn't be hard to play test versus META champs and META styles that are common in pro. You just need to beat what's common and what your opponents are trying to do - you aren't trying to beat a 99.9999% optimally drafting and playing AI.
For example, you can just play test Rumble jungle versus Lucian or Tristana mid, or Leona or Alistar support. Of course different champs have important differences and nuances, but both have similar flavors/playstyles.
Similarly, you can play test the Rumble-Morgana and Rumble-Udyr jungle matchups in customs in isolation from laners, and then in in-houses with laners. Again, I'm not a League expert, so I'll leave it to the pro players and coaches who have experience in the various ways champs need to be tested as to the best way of doing this.
Most importantly, I genuinely think your analysis can be useful for identifying potentially new OP strats for pro from soloq. There just needs to be a bit more rigor and more ancillary analyses to support the initial claims.
But again, the best results from such analyses are hypotheses as to what might be sleeper OP. You can’t actually draw meaningful conclusions from this type of analysis on its own.
Generally agree, yeah.
I’ll play monkey in the middle here
The problem with his initial take is that he selectively removed data from his statistics by removing all of RNG's Rumble games because "they were free wins". Sorry? But they were still wins, and he didn't remove lesser teams that picked Rumble and also stood no chance of winning. That means any conclusions you draw from the sample will be flawed. The bottom line is that in group stages, the close games that can be used as champion data are few and far between. If he really wanted to get his point across, he could have simply used C9 vs DK as his case study and suggested a better jungler Canyon could have played in Rumble's place.
LS also likes to say how drafts should be assessed on champions being executed perfectly. While that sentiment can be appreciated and even applied in trading card games and even which draft is technically better ON PAPER, humans are inherently not perfect. League is much more open ended than any TCG and at one single point in time, everyone on the map could be blundering. This was a roundabout way of agreeing with Phreak in his latest video in that yes, solo queue data can and should be used to assess what the best champions are. However, you can’t use data from below GM IMO because in a hilarious take from my platinum ass, the players just aren’t good enough.
The tweet wasn't meant to be a well-formed argument. Maybe I should have known better, but it's not like I expected the topic to blow up.
Except what statistics you choose to look at, and how you gather them, is itself a source of bias. My sister's PhD is literally about how Big Data skews people's perspective because they perceive statistics as objective truth. I feel like he's doing something similar here where he makes stats seem like an objective truth.
That's a really valid point. There are a ton of champions I looked up and prepped graphics for that didn't fit the script, purely for length more than anything else. Malphite and Sion both have Garen curves and very low pro win rates. That said, I don't put a lot of faith in +/-5% pro win rates, and even if they're true, there are good explanations.
I'll say that Azir and Garen were the first champions I looked up and didn't know their data for certain before collection. I also didn't know what I expected from Lillia when I gathered it.
I have no doubt that Rumble junglers are relatively inexperienced. But unless I see an Azir curve, I'm not buying it.
Hey Phreak, on somewhat the same topic:
In your Lee Sin example, you show the jungle/mid win rate differential favoring jungle the higher the Elo. Could one not make the argument that this is because Lee does have an "Azir curve" in both mid & jungle, but it's significantly more prominent in jungle because junglers (especially high-level ones) have years of experience on this champ, whereas all the solo laners are only just picking it up?
My hypothesis is somewhat contrary to that. I just looked up Lee Jungle and Lee mid as stand-alone picks and jungle has an Azir curve while mid has a very slight Garen curve. (it's pretty flat, all within 1%, but still trending down)
To be direct about the hypothesis, though: I expect Azir curves to be pronounced on any new champion (or new in role). Higher MMR players play more games, thus are on average more practiced, and thus are on average less hampered by being mechanically bad at the champion. Admittedly, this is somewhat of a stretch since we're measuring several different things (hands, brain, coordination, etc.) through the same thing. But the end result is still ultimately the same from what I can see so far.
Sorry, but how is 49 games a large enough sample size? For all we know, half the losses in those games could have been due to autofilled players.
That's exactly my point.
My point wasn't to select the data that suits me, but rather to point out that there are obvious regional differences, that are core to the conversation at hand.
Another thing: the data we are looking at is very different. For example, if we look at the data filtered by the Korean server from u.gg (my filters: vs all champions, soloq, 11.9, KR):
Patch 11.9:
P+: Jungle w/r 48.86% (47k games). Midlane w/r 51.55% (15.5k games). W/r diff 2.69
D+: Jungle w/r 49.99% (11.2k games). Midlane w/r 51.80% (3.2k games). W/r diff 1.81
D2+: Jungle w/r 51.05% (5.6k games). Midlane w/r 52.58% (1.3k games). W/r diff 1.53
M+: Jungle w/r 51.07% (2.5k games). Midlane w/r 52.03% (0.5k games). W/r diff 0.96 (though this is an anomaly; the number of games is too low)
I didn't use 11.10 in the reply, even though the numbers there strongly support my point, because the amount of games is insignificant since the patch is too fresh.
Korean Rumble's Mid minus Jungle win rate in Master+: +3.12
Sadly, I fail to see it, because I am observing the opposite trend - the advantage of mid Rumble starts to go away the higher we go on KR. That's why I am asking for the sources, because it is important, and that's why I am listing the tool I use with all its settings. If we are looking at different things, we won't arrive at an agreement no matter how hard we try.
Also you really had a very good way of describing how to use data, but you've missed server bias and game volume bias. KR has twice as many games of Rumble jungle compared to NA if we account for ranked population difference, and winrates on NA are rather poor. On the other hand, the amount of games of Rumble in the Jungle on EUW is pretty similar to KR, but winrates are absurdly poor compared to KR and even NA. There is clearly something going on with these servers, making any data comparisons that mix all the data together meaningless. Another thing is that we don't have any data for the Chinese server, so here is another offender, but we sadly have to ignore it.
You introduce a host of new biases by locking yourself to one region and none of those tenets hold true.
I am not locking myself only to Korea, but we should clearly see that the data is wildly different based on the server choice, hence this factor has to be considered when talking about data. Me talking about Korea is an illustration of it. Thing is, League is way too complex a game for us to draw conclusions from any data without accounting for all the different underlying factors. Your point now is based on an inherently flawed dataset, and you've failed to account for the difference in data when using different factors for the set selection. I absolutely agree that the Korean server doesn't represent pro play, just like any other server's soloq, but we can use this data to validate some hypotheses.
Your hypothesis is "Rumble is better mid than jungle". Does the data support it? Yes (with the caveat that it's not conclusively true for KR). The hypothesis you argue against is "western regions are behind on the jungle Rumble pick and it's good". Does the data support it? Definitively. As a result, I can say that the argument you've made is actually correct, but it is not a counter to the position you've been arguing against. The explanation in the video of how and why you use soloq data to relate it to pro play is very solid. However, if used as an argument in the discussion that sparked the video, it's not a valid point.
I hope I wasn't too convoluted with the way I've argued, because I am rather tired due to the timezone I am in.
P.S. Let me conclude with a good ol' russian joke: "The clerk eats meat, while I'm eating cabbage, but on average we are both eating cabbage rolls"
I appreciate the nuance added. I was using lolalytics.com for most of my info since it has a very convenient 30-day filter to grab lots of information any time you're looking at a more unpopular pick or more constrained brackets.
That point was just an aside/additional note as to why the data might not be relevant. Rumble as a champion is disproportionately strong early (especially midlane) because of his ability to fight anyone pre-6 and tilt 2v2s in his favor. Because Korean soloq has a tendency to FF, early game champions are more advantaged there. You can frequently see this difference in stats sites; op.gg almost always rates champs very differently than how lolalytics/u.gg tier them with their global statistics.
He's not excluding Korea's winrates from the calculation; his initial argument was based around global winrates which obviously include Korea. He's talking about why using Korean soloq as the statistical model alone could be problematic.
At least from my understanding.
That's more or less accurate. Though unless I mistyped above, Rumble jungle is actually more early-game skewed. In other words, Rumble jungle falls off faster and harder than Rumble mid. So any server where players FF early is biased toward making Rumble look good.
In general, yes, there are lots of factors that keep solo queue from looking just like pro play. Some of them are server behavior. But we can softly cover many of them by grabbing data globally and comparing the skill levels of players and seeing if that gives us any results. Azir is our poster-child here.
/u/PhreakRiot Good video, and good to see you explain yourself in a better format than twitter. However you missed the mark on Lillia: Lillia did in fact get a nerf - in patch 11.6 her ult CD was increased by 20 seconds at all levels, and this nerf lines up with her decline in pro play.
That's a good point and one I overlooked.
Slightly off topic, which website do you use for your data Phreak?
lolalytics. Lets me do a lot more item and rune delving, plus has more back patches.
So, every single non-subjective form of proof is on my side. And on the other is... The opinion of someone who didn't make playoffs.
Really?
If anyone can be right/wrong (according to your video), regardless of their level of play, why are you discrediting someone's opinion because they didn't make playoffs?
Incredibly hypocritical.
E: props for the edit
Yeah, you're right, it's a cheap shot.
[deleted]
Headshot stacks do not fall off outside of combat. Caitlyn is thus constantly somewhere between 0-5 stacks of Headshot in any given teamfight. Thus, any teamfight with only two auto attacks automatically favors +AD.
The math is equivalent. Neither AS nor AD favors Headshot damage output.
This would change if Headshots fell off outside of combat (e.g. Master Yi passive, Kraken Slayer) but they don't.
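To make the "AD and AS scale Headshot equally" point concrete, here's a toy calculation. The numbers are invented, and it assumes Headshot procs every 6th attack and deals a bonus proportional to AD, which is the simplified model being argued from here:

```python
# Toy model: Headshot procs every 6th attack and its bonus is proportional to AD.
def dps(ad, attack_speed, headshot_ratio=1.0, proc_every=6):
    """Sustained damage per second from basic attacks plus periodic Headshots."""
    auto_dps = ad * attack_speed
    headshot_dps = (ad * headshot_ratio) * (attack_speed / proc_every)
    return auto_dps + headshot_dps

base = dps(ad=200, attack_speed=1.0)
more_ad = dps(ad=200 * 1.2, attack_speed=1.0)  # +20% AD
more_as = dps(ad=200, attack_speed=1.2)        # +20% AS
print(base, more_ad, more_as)                  # both buffed cases land on the same number
```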
Is there a reason you focused on mid Lee Sin instead of top Lee Sin?
So far at MSI, by my calculations, there have been 12 Lee Sin picks, with most (7/12) going top lane. Now I'm sold that jungle Lee can be better than mid Lee. However, why aren't you focusing on comparing top Lee Sin to jungle Lee Sin?
I touched on this in the video, I'm pretty sure, but Lee Sin top has the exact same issue as mid.
Quick question though: other junglers (in particular Rek'Sai in this case) provide a better win rate in Masters+ and have better Azir curves. Shouldn't they be played in the jungle and Lee Sin not be picked?
I think Rek'Sai is likely an underpicked jungler.
Can confirm, I'm not good at League of Legends.
Hahahaha <3
344 is actually not really enough for effects of this size. A 95% confidence interval for the win rate (assuming the “true value” is reasonably close to 50%) is about 2/sqrt(n) wide (using some basic repeated Bernoulli trials as the underlying model), over 10% for this sample size, which in isolation is terrible. People in general tend to underestimate variance and overestimate how good a sample size is, so it’s important to at least come up with some sort of statistical basis behind claims that a sample seems “large enough.”
You can definitely argue that in this case we have other (much more statistically significant!) supporting data, such as extrapolation from win rates and trends in other Elos, which will strongly affect our priors on the topic in question (so perhaps a 95% interval is overkill, and we’d be satisfied with a much weaker level). But taken by itself, a sample size of 344 is not nearly enough to measure effects whose size is <5%.
Thank you for this. It's been so long since I did any real hypothesis testing that I've forgotten all rules of thumb for confidence intervals and such.
Intuitively, I really tried to only use samples with at least 1,000 games. That's less than ~6% wr change 95% of the time, which is somewhat reasonable as long as you're not trying to take small movements as telling.
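For reference, a quick version of the arithmetic from this exchange - the 2/sqrt(n) rule of thumb applied to the 49-game, 344-game, and 1,000-game sample sizes mentioned above:

```python
# Width of a 95% confidence interval for a win rate near 50%, modeled as
# repeated Bernoulli trials (normal approximation, ~2/sqrt(n) rule of thumb).
from math import sqrt

def ci_width_95(n, p=0.5):
    return 2 * 1.96 * sqrt(p * (1 - p) / n)

for n in (49, 344, 1000):
    print(n, f"{ci_width_95(n):.1%}")  # 49 -> 28.0%, 344 -> 10.6%, 1000 -> 6.2%
```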
This logic only works in practice mode looking at the damage numbers on the dummy, and disregards any "real world" factors like the movement speed or the life steal you mention.
What you're comparing here is two cars on a straight line; of course the one with more raw power will come out ahead (ignoring downforce/acceleration/grip/whatever for simplicity), while on an actual track with bends and shit the result will be something completely different.
Except in this case I'm also quoting practice times, and the Ferrari is still outperforming.
But don't worry, keep choosing the weaker car because its handling is slightly better. Sorry you can't catch the other racers in the straights.
Maybe not the ideal time or place, but just wanted to say been silently watching your videos for a long time and really appreciate all of the content you put out. Is some of the highest quality content within the scene that actually talks about the game itself. Long form analytical content in which one explains their thoughts/opinions does not seem to perform well compared to other forms of content, but me and my friend group are incredibly thankful someone puts the time into making the sort of content you do so thanks!
<3!
No, you straight up don't. You argue pure DPS, and your "practice times" are average people following their GPS.
And if the handling is slightly better you're contradicting yourself.
I'm glad you feel that way. You're still wrong.
Let me spell out the metaphor for you:
When racing, you're optimizing for fastest lap time. A mix of top speed, acceleration, and handling contributes to that lap time. The flawed argument here is "My acceleration is better, so I'll be faster." The counterpoint is "except look at these lap times of the car with higher acceleration and top speed, it's a full two seconds faster than your car."
"No, I like my better handling."
Up to speed?
The flawed argument here is "My acceleration is better, so I'll be faster."
Isn't that your entire argument? Just with power instead of acceleration?
I honestly think you're trolling me at this point, so one last try:
Many factors are relevant. Tunneling onto only a single factor and saying, "No, only this matters" is flawed.