
2019 NBA Draft

Who is the top guy all-time from 2011-2018, out of curiosity?

I'd imagine it is Zion. He is so far outside the bounds of the NBA data analyzed that I can't imagine anyone could be close. We used NBA data for the projection because there is far more of it available at the game level. I'll double check, but Jordan's best NBA season was, I think, a 24 adjusted average game score....which is what the projection is for prospects. LeBron's was a hair under 24. Zion is more than 10% above that pace......which I would imagine is not sustainable, but we'll see.

Given how manual data entry is at this point, it is unlikely I would pursue a complete prospect list with this. I'm happy to add any past guys to the spot check though, if people are curious.

My main focus will be filling out the lottery this year and, since the Cavs pick in the 20s, possibly extending it down that far....and then seeing how it correlates with how players are chosen and then, later, how they do.
 

That is why I said through 2018. Outside of Zion, who is it?
 

It’s probably going to be JA this year.

He has a historically high confidence score and a projected outcome of borderline MVP. The confidence level, in relation to the projection level, is one of the highest in the NBA or college dataset.

Based on efficiency stats, the two guys with a chance to (possibly) be close to JA are Grant Williams and Brandon Clarke. But that is just taking a guess purely using PER ranks, which this does not consider. I can add both later this weekend to see.

And it’s worth pointing out there aren’t age adjustments right now and Clarke will be 24 at the draft. So I’d be less confident his projection would be accurate and more confident that his basement projection is likely more in line with his NBA future.
 
Here are the first results of a mini side project I have been working on.

This is a variation on game score, which is a holistic box score stat. The below takes into account the difference between a prospect's full season and his full season minus a cross section of his most efficient games...... and then makes an NBA projection with an accompanying confidence score.

The confidence levels were determined based on a sampling of the top 200 qualifying PER NBA seasons. Based on NBA data, only truly great players produced a confidence score above .800.....the lower the score, the larger the range of career outcomes.

The concept here is pretty simple.....the reason great players are great is that they are, generally, insanely consistent. And the true measure of a good player is how he manages to impact a game when he doesn't have his "A" stuff.
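
If anyone wants to poke at the idea, here's a rough Python sketch of the concept. The game score formula is the standard Hollinger one used by Basketball-Reference; the 20% cut of the most efficient games and the simple ratio standing in for the confidence score are placeholder choices of mine, not the spreadsheet's actual math.

def game_score(pts, fg, fga, ft, fta, orb, drb, stl, ast, blk, pf, tov):
    # Hollinger game score for a single game (Basketball-Reference formula)
    return (pts + 0.4 * fg - 0.7 * fga - 0.4 * (fta - ft)
            + 0.7 * orb + 0.3 * drb + stl + 0.7 * ast + 0.7 * blk
            - 0.4 * pf - tov)

def adjusted_average(game_scores, cut=0.2):
    # Compare the full-season average to the average with the most efficient
    # games removed; the closer the two are, the less the season leaned on a
    # handful of outlier games. The 0.2 cut and the ratio are placeholders.
    ranked = sorted(game_scores, reverse=True)
    n_cut = max(1, int(len(ranked) * cut))
    rest = ranked[n_cut:]
    if not rest:
        raise ValueError("need more games than the cut removes")
    full_avg = sum(ranked) / len(ranked)
    trimmed_avg = sum(rest) / len(rest)
    confidence = trimmed_avg / full_avg
    return full_avg, trimmed_avg, confidence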

This is very manually intensive, as we are relying on static CSV imports from CBBREF, but I did get the aggregate top 10 done. I left off Garland because he doesn't have enough data.

The sanity check was just spot-checking players from 2011 on.....unfortunately college game log data only goes back that far on CBBREF. But I wanted to see what this spit out for a mix of guys that had varying outcomes, taken at different points.....and see where this would have rated them. Kawhi and Bennett are the outliers....but Bennett at least had a really low confidence score. Kawhi had a lower projection (second option) but extremely high confidence he was hitting it.

For now, I haven't made any adjustments for age or SOS but just wanted to share.....I was pretty surprised by three things.....JA 1. being so high and 2. his confidence being through the roof. Additionally, I was very surprised Rui is considered a surefire All-Star, with relatively high confidence.... projected in the same range as guys like Davis and Simmons who were #1 picks.....and just how lukewarm it is on Barrett, relative to his mock draft projections and prospect ranking. Not surprising but kind of surprising....it pegs Zion's floor at likely All-Star.....a projection a few of the #1 picks had as their likely best outcome.

Keep in mind these ratings should pull down some as each player gets over 30/35 games played......but just thought I would share this. At this point, it's a work in progress but interesting. This would only be one data point you would consider, in addition to things like NBA frame, Athleticism, etc.....

Can anyone think of any other recent small school players drafted high? Not just successful ones. With JA and Dame, there may be some SOS bias that needs a small correction.

[Attached image: gs-projec-v1-0.jpg]

So to be clear, this is just looking at college game score? Is there anything stopping high-achieving seniors (e.g. John Konchar who I mentioned above) from getting super-high projections?
 

Yes, it is only considering college game level stats.

Eventually you'd have to make (probably) three adjustments to sort through the noise.....1. Some age adjustment. JA's score is high, but Davis at near 20, as a freshman, is more impressive and that isn't reflected. 2. Some rough estimates of measurables. 3. Figure out how meaningful total games played is....or where a reliable cutoff is.

The main thing it is trying to measure is consistency, in relation to overall output, so it's also less useful with fewer games played. It's again why Davis (next to Zion) is probably the most impressive data point. He played in a major conference, logged 40 games that season and had insane NBA measurables. For him to sustain that output, over that many games, as a true freshman is incredible.

Presently, this first pass only tries to score the consistency of their output, project it on a per-minute basis and then assess the outcome variance.
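
In rough terms, something like the sketch below (Python, per-40 instead of per-minute just to keep the numbers readable; the band width and the field names are illustrative, not the actual calculation):

import statistics

def per40_rates(game_scores, minutes):
    # Normalize each game score to a per-40-minute rate
    return [gs / m * 40 for gs, m in zip(game_scores, minutes) if m > 0]

def projection_with_band(game_scores, minutes):
    rates = per40_rates(game_scores, minutes)
    mean = statistics.mean(rates)
    spread = statistics.pstdev(rates)  # crude outcome-variance proxy
    return {"projection": mean,
            "floor": mean - spread,
            "ceiling": mean + spread,
            "consistency": 1 - spread / mean if mean else 0}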
 
@Nathan S the other thing to note is that the projection attempts to be a best case......which clearly is an educated guess.....but it's trying to make its educated guess based on how historically great NBA players produce their stats.

For a really dead simple example of this....

Player A Game Scores - 30, 18 = 24.0 /avg
Player B Game Scores - 25, 23 = 24.0 /avg

Historical data says truly great, all-time players produce like Player B does.......so you want a player that produces at a high level (on average) and has very little variance in how he produces those stats.

LeBron is a Player B, so is Jordan......Steph is more of a Player A.......getting to that threshold is impressive, but the consistency from game to game is then what separates players in those upper tiers.
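
To put numbers on that two-player example (the 0.5 penalty weight here is arbitrary, just to show the mechanic, not the real adjustment):

import statistics

player_a = [30, 18]   # boom-or-bust
player_b = [25, 23]   # steady

def consistency_adjusted(scores, penalty=0.5):
    # Same raw average, but game-to-game spread gets penalized
    return statistics.mean(scores) - penalty * statistics.pstdev(scores)

print(statistics.mean(player_a), consistency_adjusted(player_a))  # 24.0 21.0
print(statistics.mean(player_b), consistency_adjusted(player_b))  # 24.0 23.5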

And then as for the "big board" data this has produced to this point, these numbers will shift dramatically. Most of these guys have only played 70-75% of the games they will likely play this season. And that last 25-30% will be the toughest cross section of games....as the NCAA bubble / conference tourneys take shape....and then many of these guys play several high pressure NCAA tourney games. So there will be some funky stuff now, since roughly half the games so far are out of conference, which are generally also-rans where guys pad stats.
 

Is there evidence to support that claim about how the all-time greats produce their stats? Or is that just a hypothesis?
 

It's a hypothesis that, thus far, has shown enough promise to take a swing at college scouting, just to see what the results are.

Data was run on multiple seasons of guys like LeBron, Jordan, Curry....all-time greats....and then certain players who could loosely be categorized in tiers downward....i.e. Shaq, David Robinson, Wade, Lillard, etc......in an effort to see how consistently players produce at all-time, MVP, All-Star, etc. levels and how they produced those numbers.

On the players above, this model spit out this adjusted game score list, ranking them top to bottom:

Jordan
LeBron
Shaq
Steph
Robinson
Wade
Lillard

You would have to run more data, but that list seems to intimate there is possibly something to this. I'll reiterate this is just a random side project, so it's a little rough.
 

Sure, but how does that compare to raw game score ranking?
 

Raw game score ranking from sampled seasons:

Jordan
LeBron
Robinson
Curry
Shaq
Wade
Lillard

Wade is also bunched much closer to Shaq / Curry in raw game score, and Jordan pulls away from LeBron a bit more. I'd have to lay it out in a doc to show the variation. The static list changes, but the tiers look more defined in the adjusted version.

I'm not even sure how much data you would need to run to confidently prove this out, though.
 

Is it really clear, though, that adjusting to reward consistency produces a more accurate ranking? Not meaning to be a hardass about this; I just want to hear more about your reasoning. Qualitatively, I've heard more people argue for the opposite approach (rewarding guys who produce the very best individual games over guys who are consistently very good but rarely great), not that I agree with that.
 

No worries, I think I’m doing a bad job describing my premise here.

This rating factors in the highs (very best individual games), but what it really cares about is, in relation to those highs, how far the bottom band of data is from the top.

How close those two datasets are is possibly an indicator of how likely it is they reach their projection. It's not a guarantee that they do, but there's some initial data that guys with a wider gap in that band don't seem to be as likely to consistently pan out as prospects.

So a guy producing at a high game score level, with a really tight gap between that top/bottom dataset, possibly pans out more often...the hypothesis being that over longer periods of time (82-game seasons), the more consistent the output, the more likely they are to sustain it at a high level across more games.
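
Roughly, it looks like the sketch below (the quartile split and the way the gap gets squeezed into a 0-1 confidence-style number are my placeholder choices, not the spreadsheet's exact math):

def band_gap_confidence(game_scores, band=0.25):
    # Compare the top slice of a player's game scores to the bottom slice;
    # a tighter gap relative to the top band reads as higher confidence.
    ranked = sorted(game_scores, reverse=True)
    k = max(1, int(len(ranked) * band))
    top_band = sum(ranked[:k]) / k
    bottom_band = sum(ranked[-k:]) / k
    if top_band <= 0:
        return 0.0
    return max(0.0, 1.0 - (top_band - bottom_band) / top_band)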

I’ll try to refine the language in the spreadsheet and do a better job of highlighting this statistical band I am talking about. I’m still not sure if I am doing a good job of describing this :chuckle:
 

I see what you're trying to do, and I fully support efforts to distinguish higher-uncertainty from lower-uncertainty prospects. I'm just pointing out that most people agree it's better to roll the dice on high-uncertainty guys who have superstar upside rather than play it safe with guys who're very likely to be positive-impact roleplayers but nothing more. Of course, there are the rare guys who perform at a very high level *and* are consistent (e.g. Zion), but those guys are going to come out #1 no matter what your evaluation methodology looks like.

On a totally unrelated note, I know it's low hanging fruit but I'm going to take this opportunity to bash Okpala, if only because I wasted an hour of my life watching the second half of that game tonight. Going by the play-by-play you'd think he was on the bench for the last 10+ minutes, and he might as well have been. I damn near fell asleep waiting for him to do something.
 
Jackson Hoy dropped a new top-100 today. Big surprise is Talen Horton-Tucker up at #4, headlining an enormous 3rd tier stretching all the way down to Bol Bol at 17. Don't agree with 100% of his board, obviously, but appreciate that he's a smart guy who does his own scouting and isn't a slave to the mainstream big board echo chamber.

 
This is pretty interesting for sure.

After thinking on what I was initially trying to do, I finalized a game score calculation that is far easier to calculate on a per possession basis. Unfortunately it has the same problem, in that the per-100 data only goes back to 2011, but it sets some benchmarks. If a player is missing, he's injured and won't accumulate enough games to matter.

GS/100/HAAS - This calculation considers age, height / athleticism relative to position, and SOS...that is what HAAS stands for (Height/Athleticism/Age/SOS). 1.000 is, generally, an average height, average age, average athleticism prospect. The further above that number, the better the mix of size, athleticism and age.......the further below, the less ideal a prospect's height, age or athleticism. "Height" includes length, I just chose to use that word.....meaning someone like Horton-Tucker is not tall but he is freakishly long, so he scores far better on the height metric than a typical undersized positional player.

SOS is a calculation that is shaved off the final HAAS number, and it is intended to force a prospect to really counteract competition level with extreme production. Typically, the only players that survive an SOS evaluation, for the purposes of NBA prospects, are ones who are at least above the positional median, edging towards the top 1/3rd.....Lillard, Siakam, Morant, McCollum, etc., guys with NBA measurables and production who, even with a pretty severe penalty, still rate highly in game score /100 production. For example, Lillard was the #1 PG in this metric prior to the SOS ding and remained #1 after the adjustment; he also posted the second best raw GS/100 since 2011 and the #5 GS/HAAS/100 (factoring in his schedule).
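
For the spreadsheet-curious, the HAAS piece works in the spirit of the sketch below.....the component factors, how they're combined and the size of the SOS penalty are all illustrative, not the actual weights:

def haas_multiplier(height_factor, athleticism_factor, age_factor, sos_penalty):
    # Factors hover around 1.0 for an average prospect; the SOS penalty is
    # shaved off the final number. All values here are hypothetical.
    base = (height_factor + athleticism_factor + age_factor) / 3.0
    return base - sos_penalty

# e.g. a long, explosive, young prospect from a weak conference:
print(haas_multiplier(1.10, 1.15, 1.05, 0.08))  # ~1.02 after the SOS ding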

GS/100/ADJ - what effect a player has minus points, i.e. his contribution without scoring. There is also an adjustment for shooting projection based on 3PT attempts per 100, as that has positive projection indicators for prospects. There seems to be a very clear bar prospects need to meet on this metric specifically, where only 2 NBA prospects who scored at the NBA cutoff or below turned into above average to good NBA players (Middleton, Tobias Harris)....both SF's. I'll add more data to this, but it included prospects drafted since 2011 that ranked 150 or better in game score /100 (top 1/3rd of the NBA) in either of the last 2 years. They aren't all in there yet, but this initial data set was a minimum of 10 per position. Generally speaking.....if it isn't blue or purple, it doesn't seem to limit someone......if it is better than average (white), it is a positive indicator. The stat for current players (in college) is a lot noisier until they get to a typical number of games.

Barrett actually falls in this NBA cutoff area but has a much better NBA projection than either of those two (Middleton, Tobias Harris). Barrett is especially dragged down in this metric by his pedestrian STL and FTM rates. If he was more average at both, he'd move off that cutoff. He has shot much better from the FT line in conference play specifically, so it is likely he moves off that cut line if he continues to boost that number (like he has in ACC play).

GS/NET - simply GS/100/HAAS + GS/100/ADJ.....it's taking contribution with scoring and contribution without scoring and adding them together.

POS DIFF - performance above or below the NBA prospect positional median at the position they are projected to play.

This list is sorted by POS DIFF, but it isn't necessarily how you would take these players. What it is trying to assess is how they produce vs. the median NBA prospect at their position. So someone like Ponds is #9 based on positional difference, but you wouldn't necessarily take him (a PG) over Barrett or Culver, two wings that are rated as "good" NBA wings by prospect standards. Someone can argue with me, if they choose, that a very good PG prospect is more valuable than a good wing.
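
The arithmetic for the last two columns is as simple as it sounds (field names below are mine, the numbers in the usage line are made up):

def gs_net(gs100_haas, gs100_adj):
    # Contribution with scoring plus contribution without scoring
    return gs100_haas + gs100_adj

def pos_diff(net, positional_median):
    # How far above or below the median NBA prospect at that position
    return net - positional_median

net = gs_net(1.02, 0.45)            # hypothetical per-100 components
print(net, pos_diff(net, 1.10))     # 1.47, +0.37 vs. the positional median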

In comparison to historical possession data (Hot to cold):

Red - All Time
Orange - Elite
Yellow - Very Good
Green - Good
Gray - Average
White - Below Average
Blue - NBA cutoff
Purple - Very unlikely

[Attached image: net-game-score-v1-0.png]


I'd imagine the polarizing guys on this list are:

Bruno Fernando
Matisse Thybulle
Chuma Okeke
Brandon Clarke

The only thing I will point out with them specifically (especially Thybulle and Okeke) is that their GS/100/ADJ scores.....contribution without scoring.....are both just incredibly high. If you think their scoring prospects at the NBA level are average to maybe even below average, they are potentially sleepers in this draft. Past data intimates that this metric is what most reliably determines ceiling in this calculation. Since 2011: #1 Anthony Davis, #2 Oladipo, #3 KAT, #4 Simmons, #5 Beal, #6 Mitchell, #7 Draymond, #8 Butler, #9 D'Angelo Russell, #10 Drummond. The only potential star players not in the top 50% of GS/100/ADJ since 2011 are Tatum and Fox. So guys can still do it, but it is less likely.

Fernando.....I don't really have an opinion on. The data is the data.

With Clarke, he's breaking this projection. He's older than Draymond was as a prospect, has one of the lowest Height/Athleticism/Age/SOS scores (mainly due to his age), and he is rating out at 1.5x Draymond's college per-100 net game score rating....and he's just head and shoulders above both Williams and Washington, even with his severe adjustment penalty.......I'm not sure what the hell to make of that. He's easily the single strangest outlier in all of this data.

Hopefully someone finds this interesting. :chuckle:

/novel

EDIT: I don't think I'd wade into projections with this, but more so consider the quality of the draft. Red through green means, at minimum, this suggests there are roughly 12 good or better prospects. It doesn't mean 12 will pan out, the number is much smaller based on what we know.....but it is a bit better than a typical draft since 2011 AND it has potentially 1 all-time talent (Zion) and 1 elite one (Morant). And it tends to really like a collection of wings that are in this draft but maybe less exciting to the average person posting mock drafts.
 
