Model reflections – part 2, individual matches

Part 2 of my 2016/17 model analysis looks at the performance of individual match modelling. Throughout the season I’ve used my adjusted goals rating assessment to generate probabilities for each match, so that I can test the model’s effectiveness.
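For readers who want a concrete picture of how ratings become match probabilities, here’s a minimal sketch. The exact mechanics of my adjusted goals ratings aren’t reproduced here; the sketch assumes a simple independent-Poisson goal model, and the expected-goals inputs are hypothetical:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k goals when expected goals = lam."""
    return lam ** k * exp(-lam) / factorial(k)

def match_probabilities(home_xg, away_xg, max_goals=10):
    """Home/draw/away probabilities, treating each team's goal count
    as an independent Poisson draw from its expected goals."""
    home = draw = away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, home_xg) * poisson_pmf(a, away_xg)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away

# Example: ratings imply 1.6 expected home goals vs 1.1 away
print(match_probabilities(1.6, 1.1))  # roughly (0.49, 0.25, 0.26)
```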

I’ve analysed probabilities generated purely from the model (i.e. based solely on retrospective shots and goals data), and also probabilities with some subjective adjustment applied – which I’ve posted throughout the season on this site (and used as the basis for “tips” on tipstrr.com).

A word about the subjective adjustment. As previously discussed, a combination of retrospective goal and shot data produces a reasonably good basis for future prediction. But, because it’s a simple and retrospective model, there are many factors (such as injuries, new players, managerial or formation changes) that it doesn’t quickly account for. I’ve made some adjustments to the data-based ratings – mainly where there appear to be good reasons to do so, such as evidence that injured players or new formations are influencing a team’s performance. The size of each adjustment hasn’t been particularly scientific, and is based more on judgement. I realise that this introduces bias to the model – but I’m interested to see whether a subjective element improves prediction.

I’ve compared my modelled probabilities against the implied probabilities from Pinnacle closing odds (with the margin removed), as these probably represent the best indication of the market-driven price. To assess my model’s overall performance against the market I’ve used a Brier score method (explained here in a good article by Pinnacle). The lower the score, the better the model is at predicting.
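For illustration, here’s a minimal sketch of the margin removal step. I’ve assumed the simple proportional normalisation method; other de-margining schemes (such as Shin’s method) distribute the overround differently:

```python
def implied_probabilities(decimal_odds):
    """Strip the bookmaker margin from decimal odds by normalising
    the inverse odds so they sum to 1 (the simple proportional
    method; other schemes spread the overround differently)."""
    raw = [1.0 / o for o in decimal_odds]
    overround = sum(raw)  # > 1 whenever a margin is present
    return [p / overround for p in raw]

# Example 1X2 odds for home/draw/away
odds = [2.10, 3.40, 3.80]
print(implied_probabilities(odds))  # ~[0.461, 0.285, 0.255]
```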

The results below cover match result markets for the 284 matches that I assessed during the season (i.e. those where I considered whether to apply an adjustment).

Probability source                  Brier score
Pure data-based model               0.579
Model with subjective adjustment    0.565
Pinnacle closing odds               0.560
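
For reference, here’s a minimal sketch of the multi-outcome Brier score calculation behind the table (the forecasts and results are illustrative, not real matches):

```python
def brier_score(forecasts, outcomes):
    """Multi-outcome Brier score: mean squared error between each
    forecast (p_home, p_draw, p_away) and the one-hot result vector,
    averaged over matches. Lower is better."""
    total = 0.0
    for probs, result in zip(forecasts, outcomes):
        total += sum((p - r) ** 2 for p, r in zip(probs, result))
    return total / len(forecasts)

# Two-match example: a home win, then a draw
forecasts = [(0.50, 0.28, 0.22), (0.35, 0.30, 0.35)]
results = [(1, 0, 0), (0, 1, 0)]
print(brier_score(forecasts, results))  # ~0.556, same ballpark as above
```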

The good news is that the subjective adjustments did improve the predictive capability of the model (giving a lower Brier score), as they should. But it’s still not as good as the market odds. One likely reason is that the market takes account of more information than my limited analysis does.

Although the market is best at overall prediction, the model may still identify value in certain circumstances. I’ve tested the return the model would have produced from a unit bet on each outcome where the Kelly percentage was 3% or more.
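For clarity, here’s a minimal sketch of that Kelly percentage filter (decimal odds assumed; the 3% threshold matches the rule above):

```python
def kelly_fraction(p, decimal_odds):
    """Kelly criterion stake as a fraction of the bankroll:
    f = (p * odds - 1) / (odds - 1). Negative means no value."""
    return (p * decimal_odds - 1.0) / (decimal_odds - 1.0)

def is_value_bet(model_prob, decimal_odds, threshold=0.03):
    """Apply the 'Kelly percentage of 3% or more' filter."""
    return kelly_fraction(model_prob, decimal_odds) >= threshold

# Example: model says 30% for an outcome priced at 4.0 (25% implied)
print(kelly_fraction(0.30, 4.0))  # ~0.067, i.e. a 6.7% Kelly stake
print(is_value_bet(0.30, 4.0))    # True -> qualifies for a unit bet
```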

Interestingly, using Pinnacle closing odds, both models just about break even. The graph below shows the cumulative pot over the season. The reason both end marginally in profit is that they each identified value in some high-odds outcomes that came off. For example, the big jump in the unadjusted model’s returns mid-season came after Watford’s win at Arsenal (odds of 14.2) – in reality I would never have staked real money on that outcome!

[Figure: cumulative pot over the season for both models]

I also posted “tips” on tipstrr.com based on the subjectively adjusted probabilities. Here the results produced a positive return. The returns were better than with Pinnacle closing odds because the tips are usually based on odds from at least a couple of days before kick-off, and use the best market odds (which aren’t always available in practice). Here’s the analysis for the “tips” published on Tipstrr:

[Figure: returns analysis for the tips published on Tipstrr]

The overall return for match result markets was 14.3%. Interestingly, this was buoyed by particularly good returns for high-odds matches, which may indicate where the model is finding value (or may just be random variation) – one way to check is sketched below.
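A simple way to probe the high-odds effect is to bucket unit-stake returns by odds band. Here’s a sketch – the band boundaries and the sample bets are made up for illustration:

```python
from collections import defaultdict

def roi_by_odds_band(bets, bands=(1.0, 2.0, 3.5, 6.0, float("inf"))):
    """Return on a unit stake per odds band. Each bet is a
    (decimal_odds, won) pair; bands are half-open intervals."""
    staked = defaultdict(float)
    returned = defaultdict(float)
    for odds, won in bets:
        for lo, hi in zip(bands, bands[1:]):
            if lo <= odds < hi:
                key = f"{lo}-{hi}"
                staked[key] += 1.0
                returned[key] += odds if won else 0.0
                break
    return {key: returned[key] / staked[key] - 1.0 for key in staked}

# Hypothetical sample of (odds, won) pairs
bets = [(1.8, True), (2.4, False), (4.5, True), (7.0, False), (5.2, True)]
print(roi_by_odds_band(bets))
```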

I also used the model to identify value in the over/under 2.5 goals markets. This wasn’t as successful, producing a -5% return. Possible reasons are that I’m not taking enough account of the factors that affect total goals, and that margins are smaller for the over/under markets.
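If the match model already produces expected goals for each side, the over/under probability drops out directly. Here’s a minimal sketch under the same independent-Poisson assumption as the earlier sketch (which may itself be part of the problem, since total goals in practice show some dependence between teams):

```python
from math import exp, factorial

def prob_over_2_5(home_xg, away_xg):
    """P(total goals >= 3), assuming total goals follow a Poisson
    with mean home_xg + away_xg (sum of independent Poissons)."""
    lam = home_xg + away_xg
    p_under = sum(lam ** k * exp(-lam) / factorial(k) for k in range(3))
    return 1.0 - p_under

# Example: 1.6 + 1.1 expected goals -> ~51% chance of over 2.5
print(prob_over_2_5(1.6, 1.1))
```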

Overall, my key learnings, having modelled matches for most of the season, are:

  • The “adjusted goals” rating method is a good starting point for predicting match outcomes.
  • Taking subjective account of additional factors does improve the predictive qualities of the model. But this does not necessarily help identify betting value – the market is likely to be making more accurate adjustments.
  • Robustly analysing an individual match is hard work: there are so many factors that can affect the match outcome. Ideally these would all be included objectively in the model, but the data doesn’t always exist to do this effectively.
  • The model may identify value in certain circumstances – but I need more analysis to determine where! 300 matches is still a small sample.
  • More analysis is needed for the total goals markets.
  • Towards the end of the season the model often diverges significantly from the market odds – likely to be due to team motivation having an increasingly important effect (which is accounted for by the market).

I’ll carry on using the model next season – but try to put the individual match adjustments on a stronger footing, and ultimately aim to build them into the model itself.
