It is 2019. If you are plugged into the online dynasty fantasy football community at all, you probably know that:

- Receivers with strong draft pedigree are more likely to be successful
- Receivers with strong production in the passing game in college are more likely to be successful, especially when considering context
- Receivers with strong athleticism are more likely to be successful. (Don’t believe me? Look at these two lists.)
- Receivers playing in strong passing offenses are more likely to be successful

As a result, there are a number of cool people creating thoughtful analyses of film, stats, combine numbers, and landing spots. To this point, however, most of that work has not integrated those separate indicators into a single, statistically intuitive measure. We know many of the things that are important, but we don’t know very well how much each should be weighted relative to the others. Here, I’ll begin to take a crack at that challenge.

I took a class in generalized linear models this past spring, and because I am (a) a huge nerd and (b) bad at using my education for productive purposes, I saw a need to apply some of those models to the issue in question. The cool thing about GLMs is that, as the first sentence of the Wikipedia entry states, they let you model non-normally-distributed data.

Today, I’ll be using the simplest of those tools, logistic regression (like ordinary linear regression, but for predicting 0/1 outcomes), to combine each of those four categories into a prediction of receiver breakouts. Later, I plan to move on to some lesser-known models in order to build on and enhance this analysis.

My methodology has been pretty simple: Over time, I’ve accumulated data from Pro Football Reference and collegiate production king Peter Howard to put prospect profile data and NFL production into a single data set. Then, I simply tried many, many different combinations to find what seemed like the best set of variables for predicting **whether or not a player breaks out in the NFL** (I will have different outcome predictions in the future). Machine learning would’ve been quite helpful with this process, but uh, that’s beyond me right now.

Some notes before I get started:

- A breakout season is defined as a season in which the receiver ranks in the top 24 of receivers in total Half-PPR points
- This analysis considers all players who have had their rookie seasons between 2000 and 2015. I consider those who entered the league in 2016 and after to be too early to call.
- Undrafted receivers were treated as having been drafted in the 8th round
- Non-FBS players did not have college production data available, and were filtered out
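To make the breakout definition above concrete, here’s a minimal sketch of how a season could be flagged under it, assuming standard half-PPR scoring (0.5 points per reception, 0.1 per receiving yard, 6 per receiving touchdown). The function names and weights are my own illustration, not the actual data pipeline:

```python
# Sketch: flag breakout seasons (top-24 among receivers in half-PPR points).
# Scoring weights are standard half-PPR assumptions, not taken from the article's data.

def half_ppr_points(receptions, yards, tds):
    return 0.5 * receptions + 0.1 * yards + 6 * tds

def breakout_receivers(seasons, top_n=24):
    """seasons: list of (name, receptions, yards, tds) for one NFL season.
    Returns the set of names ranking in the top_n by half-PPR points."""
    scored = sorted(seasons, key=lambda s: half_ppr_points(*s[1:]), reverse=True)
    return {name for name, *_ in scored[:top_n]}
```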

It is a little weird to present the final model in the middle of the article, but oh well. After comparing the many distinct models primarily using analysis of deviance and training/testing results, the most powerful and parsimonious model is:

breakout ~ College games played + Maximum college yardage market share + Number of breakout seasons in college + Returning offensive coordinator (Y/N) in rookie year * log(Round drafted)

For the math people, here’s the accompanying R printout for the coefficients:

*(This table’s been updated to reflect new numbers after a data quality improvement.)*

And for others, the asterisk between Returning OC and log(Round) indicates an interaction term: In addition to including individual variables for both measures, the model also includes a term that multiplies the two together. We include that term because the round in which a receiver is drafted has a different effect on his likelihood of breaking out *when the team’s OC is returning to the offense* than *when the team has a new OC*.
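To make the interaction mechanics concrete, here’s a minimal sketch of the linear predictor and logistic link. Every coefficient value below is a made-up placeholder purely for illustration; the actual fitted values live in the R printout above:

```python
import math

# Sketch of a logistic model with an interaction term.
# ALL coefficient values below are invented placeholders, NOT the article's fit.
COEFS = {
    "intercept": -3.0,
    "games": 0.02,
    "max_ms": 4.0,
    "college_breakouts": 0.5,
    "returning_oc": 0.8,
    "log_round": -1.2,
    "returning_oc_x_log_round": 0.6,  # the interaction term
}

def breakout_probability(games, max_ms, college_breakouts, returning_oc, draft_round):
    log_round = math.log(draft_round)
    eta = (COEFS["intercept"]
           + COEFS["games"] * games
           + COEFS["max_ms"] * max_ms
           + COEFS["college_breakouts"] * college_breakouts
           + COEFS["returning_oc"] * returning_oc
           + COEFS["log_round"] * log_round
           # interaction: the slope on log(round) shifts when the OC returns
           + COEFS["returning_oc_x_log_round"] * returning_oc * log_round)
    return 1 / (1 + math.exp(-eta))  # logistic link maps eta to a probability
```

With a returning OC, the effective slope on log(round) is the base slope plus the interaction coefficient, which is exactly what the asterisk in the model formula encodes.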

That seems counter-intuitive — why would the effect be different between the two? In short, I don’t know. My best guess is that a returning OC generally has an established, successfully functioning offense, and would know the best way to put a new receiver to use. It doesn’t seem that meaningful, but it has (coincidentally or not) been a huge indicator the past 15 years: 29% of receivers drafted to a returning OC had breakout seasons, while just 21% of those without a returning OC had breakout seasons; *25 of 30* receivers taken in the first round to returning OCs broke out eventually, while just *9 of 23* receivers taken in the first round to new OCs broke out. Perhaps that association weakens or even disappears over time, but in a sample of 321 receivers, it seems significant for now.

Moving on, I found it interesting that, ultimately, the age at which a player entered his rookie season (and by proxy, the age at which he was drafted) had no significant effect on the player’s likelihood to break out. We’d rather bet on a 21-year-old than a 24-year-old, but it turns out that college games played captures the significance of age by proxy, while also helping provide context for the two production measures.

A player’s number of breakout seasons in college depends on how many games he plays, both in terms of how many seasons he plays and how many games he plays per season. The college games played variable controls for that influence, giving us better context to understand the player’s production. It also helps contextualize the maximum yardage market share variable, of course. (For those unaware, yardage market share is one receiver’s receiving yards divided by his entire team’s receiving yards. A college “breakout” season, by community standard, is one in which a player accounts for at least 20% of his team’s touchdowns and yards.)
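Here’s a short sketch of both college measures as just defined. The data shapes and names are illustrative, not the article’s actual pipeline:

```python
# Sketch: yardage market share, and the community-standard college "breakout"
# (>= 20% of the team's receiving yards AND touchdowns in a season).

def yardage_market_share(player_yards, team_yards):
    return player_yards / team_yards

def count_college_breakouts(seasons, threshold=0.20):
    """seasons: list of dicts with player/team receiving yards and TDs.
    Counts seasons clearing the threshold in both yard share and TD share."""
    count = 0
    for s in seasons:
        yard_share = s["player_yards"] / s["team_yards"]
        td_share = s["player_tds"] / s["team_tds"]
        if yard_share >= threshold and td_share >= threshold:
            count += 1
    return count
```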

College production might have been the longest section to sift through. There are many, many different measures that various analysts use, often similar but not measuring the exact same thing. I tried many different combinations including Dominator rating, breakout age, average market share, etc., but ultimately, I think I came to a fairly intuitive conclusion: Number of breakouts, controlled by college games played, approximates how quickly a player breaks out and how well they sustain that production in the future, while maximum yardage market share represents their highest level of performance. Beyond that, no other college stats added *significantly* meaningful information to a prospect’s profile.

At this point, you may have noticed a complete absence of player measurables. Over time, I toyed with adding 40 yard dash time, 3 cone drill time, height, weight, and the interactions between some of those variables into the model (in addition to the other various combine drills), but in each case, they didn’t significantly improve the model. (They also decreased the sample size.)

This was a pretty big shock to me, as in a vacuum, 40-times, 3-cone times, and BMI (height-adjusted weight) are very strong indicators. But on second thought, it makes more sense to me, as college production and draft slot are partially determined by athleticism. Thus, that data would also contain the valuable information that combine numbers hold, while adding further valuable information about college production.

Still, the most surprising result of this model might be the lack of situational variables at the NFL team level. I looked at a number of factors, including: Team’s total passing yards in the previous season, percentage of returning passing yards (to account for new or injured QBs), percentage of returning receiving yards/targets, incoming receiving yards/targets, whether the head coach returns, etc. And yet, the only number that proved to add a significant amount of useful information was the simple binary of an OC returning. My brain is still chewing on this one, but at this point, it seems like a strong indicator of a couple of things:

- Previous-season stats don’t measure opportunity well enough to be useful. It’s tempting to simply say that we can’t really predict future opportunity well at all, but I believe that hard-to-quantify knowledge like “Sean McVay and Kyle Shanahan will make an offense much better” could add significance.
- Talent still matters; it’s not all situation-based. (A heck of a lot of this model is based on prospect profile, not team situation) + (the model works pretty well) = conclusion.
- Teams have a decent idea of what they are doing when drafting a receiver. I imagine that a good amount of the “situation” success factor is captured by the log(Round) variable — teams are spending a lot on players they select highly, so they’re invested in their development, and given that they are taking a player that high, they know they will make good use of them. Of course, this is speaking generally, as there’s an abundance of examples of high draft picks not working out, but you get the point.

Finally, there is the least surprising variable, the log of the round in which the player was drafted. While my first instinct would be to use the actual draft selection (x-th overall), I tested draft round, draft pick, and the logs of both, and the log of draft round turned out the best. So there.

So far, I’ve talked about how the model works, but not how *well* it works. Let’s look at that in two ways. First, ROC curves plot a model’s sensitivity (how often it predicts a success for an observation that is a success) against 1-specificity, where specificity is how often it predicts a failure for an observation that fails; 1-specificity is therefore the false-positive rate. Both are computed at various confidence thresholds (for a threshold at .25, a predicted probability of .2 is considered a predicted failure and a predicted probability of .3 a predicted success). We want sensitivity as high as possible and the false-positive rate as low as possible, so we want our ROC curves to be as close as possible to the straight lines from (0,0) to (0,1), then from (0,1) to (1,1).
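The thresholding described above can be sketched directly. This toy function, with invented names, computes one point on the ROC curve (false-positive rate, sensitivity) at a given threshold:

```python
# Sketch: one point on an ROC curve at a given classification threshold.
def roc_point(preds, labels, threshold):
    """preds: predicted breakout probabilities; labels: 1 = broke out, 0 = didn't.
    Returns (1 - specificity, sensitivity)."""
    tp = sum(1 for p, y in zip(preds, labels) if y == 1 and p >= threshold)
    fn = sum(1 for p, y in zip(preds, labels) if y == 1 and p < threshold)
    fp = sum(1 for p, y in zip(preds, labels) if y == 0 and p >= threshold)
    tn = sum(1 for p, y in zip(preds, labels) if y == 0 and p < threshold)
    sensitivity = tp / (tp + fn)
    false_positive_rate = fp / (fp + tn)
    return false_positive_rate, sensitivity
```

Sweeping the threshold from 0 to 1 and collecting these points traces out the full curve.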

Below, the ROC curve for our model is in blue, and the theoretical ROC curve for a useless model is in black:

Looks like a strong improvement! But how do we quantify it? By looking at the area underneath the curve. In this case, the AUC is 0.84. The interpretation: 84% of the time, the model will predict a higher breakout probability for a player who ends up breaking out than for a player who never ends up breaking out. Not bad!
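That pairwise interpretation of AUC can be computed directly: compare every (eventual breakout, non-breakout) pair and count how often the breakout player got the higher predicted probability. A minimal sketch, with toy predictions invented for illustration:

```python
# Sketch: AUC as the probability that a randomly chosen breakout player gets a
# higher predicted probability than a randomly chosen non-breakout player.
def pairwise_auc(preds, labels):
    """preds: predicted breakout probabilities; labels: 1 = broke out, 0 = didn't."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    negatives = [p for p, y in zip(preds, labels) if y == 0]
    # A tie counts as half a "win", matching the usual rank-based definition.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))
```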

Second, let’s just look at the results; here’s who the model likes the best:

Promising and interesting! Just one of the model’s top 20 players failed to break out, and all of them make pretty good sense. Demaryius Thomas was a near-perfect prospect by this model, checking every box and boasting an otherworldly maximum market share. Calvin Johnson finishing in the top 5, despite not benefiting from his insane combine numbers, passes another test. I’m too young to remember Reggie Williams, but shame on him.

Now, here’s who the model didn’t like so much:

Sure enough, just two of the bottom 20 prospects broke out. One of those was Julian Edelman, who played quarterback at Kent State, so his numbers were far lower than we’d expect had he played receiver. Otherwise, it’s Wes Welker and a whole bunch of anonymous names (plus Adam Humphries, who was a bit threatening at one point).

It looks pretty good, and certainly a good basis to work from in the future! Of course, just trying to see if an incoming rookie will or won’t break out is only one question we ask about receiving production. Down the line, I’ll look to answer questions like: How likely is a player to sustain his productivity, after breaking out for the first time? How likely is a player who hasn’t broken out yet, and has played at least one season, to break out in the future? More generally, given what we know about a receiver’s past, how likely are they to have a breakout season next year?

I have varying levels of progress with each of those questions already. But for now, let’s cap it off by looking at some of the guys in their intermediate stages, who’ve played at least one year in the NFL, but no more than three years. (I will likely include the 2019 rookies in an article that goes behind the DLF paywall.)

Other notable probabilities:

- Corey Coleman: 35%
- Christian Kirk: 29%
- Cooper Kupp: 29%
- Anthony Miller: 24%
- Sterling Shepard: 22%
- Laquon Treadwell: 14%

A model is probably pretty good if it red-flags Laquon Treadwell.