31st-3rd Bookings Model
After an encouraging first gameweek, I was looking forward to tracking the results this week from another 60-odd games of European football. Once again, unders were the dominant recommendation, and results were pretty promising – let’s start there.
Results Breakdown
(Full Results as pdf here: Bookings Model Tracking – Week 2)
- 61 bets taken in total, 38 winners (63% win-rate)
- 17/61 bets were overs (28%)
Very similar to last week (we had a 60% win-rate, with 31% of bets being overs).
With a larger sample size to learn from, here are some of the apparent trends popping out so far, and improvements to be made.
Unders are heavily favoured…
...but expanding on that, the market seems to overreact to rivalries. Examples from this week include Arsenal vs Man City and AC Milan vs Inter Milan, where the model gave extremely low card predictions (much lower than the market lines), and we subsequently saw very low card makeups (just 20 booking points shown in each of those games).
Again, not too many examples to go off here, but unders does seem a sensible play in these kinds of fixtures based on that data.
-3.5 and greater
One of the benefits of testing the model manually is that I get to see how the model responds to each particular game – I can be more critical in that sense. One fixture in particular stood out this gameweek: Bayern vs Holstein Kiel. For this game, the market line was set at 37.5, whereas the model prediction was 47 – a pretty significant difference.
In games with a high supremacy (where one team is heavy favourites), there tends to be very few cards, which probably comes as no surprise. For this game in particular, the supremacy was -3.5 in favour of Bayern – ridiculously high.
Now, the model is able to account for supremacy – it’s one of the many variables used to generate a prediction. However, it maxes out at a 3.0 supremacy – when making the model, I hadn’t seen a game with higher. That is an improvement to be made for next week.
To account for supremacy, the model uses a negative gradient – meaning that as the supremacy difference increases, the cards multiplier drops. So, when a team is a really heavy favourite to win, chances are there will not be many cards. Conversely, when a game is expected to be close, there are likely to be more cards.
Circling back to the Bayern example, for a 3.5 supremacy the cards prediction should never be anywhere near 47.5 in any case. That game saw 2 yellow cards in total, one for each team. So not only does the model need to begin accounting for a 3.5 (and maybe even higher) supremacy, it probably needs to be more aggressive too.
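The negative-gradient idea above can be sketched in a few lines. To be clear, the gradient, floor, and extended cap values here are illustrative guesses of mine, not the model's actual parameters – the point is the shape: the multiplier falls as supremacy rises, and the cap is raised beyond the old 3.0 limit so a game like Bayern's still gets scaled down.

```python
# Sketch of a supremacy adjustment with a negative gradient.
# gradient, floor, and cap are illustrative, not the model's real values.

def supremacy_multiplier(supremacy: float, gradient: float = 0.1,
                         floor: float = 0.5, cap: float = 4.0) -> float:
    """Scale a baseline cards prediction down as supremacy grows.

    `supremacy` is the gap in goal expectancy between the teams; values
    beyond `cap` are clamped rather than ignored, which fixes the old
    behaviour of maxing out at 3.0.
    """
    s = min(abs(supremacy), cap)
    return max(floor, 1.0 - gradient * s)

# An even game keeps the full prediction; a lopsided one is reduced.
print(supremacy_multiplier(0.0))   # even match-up, multiplier 1.0
print(supremacy_multiplier(3.5))   # heavy favourite, reduced multiplier
```

Being "more aggressive", in these terms, just means a steeper gradient or a lower floor.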
Looking into each game individually, being meticulous, is important to highlight improvements.
Narrowing down
I mentioned last week that it would be unrealistic to place bets on every single European game each week – there needs to be a method to extract a higher percentage of winners. So let’s look into some more data, specifically, into which games had the biggest difference between the model and market:
- 9/10 of the highest value Unders selections proved successful.
- For the overs, 7/10 of the biggest differences were correct. Interestingly, the number one selection by value on each side was a loss – I've already covered the Bayern one though.
I have highlighted a range where results seem to be a bit more dicey – in week 1, it was between -0.4 to 3.7 (difference between model prediction and market line). In this range, there were 10/25 winners – a 40% win-rate, significantly lower than the 60% average.
In week 2, the dicey range was -2.2 to 5.4, a larger spread. Within this range, the win-rate was 16/33 (48%). Outside the range, the win-rate improves significantly to 79%.
This is good though: it means that whenever there is a large difference between the model and the market, more often than not the model proves closer to the final result.
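As a selection rule, the dicey range translates into a simple filter: only act when the model-market difference falls outside it. The range values below are this week's (-2.2 to 5.4, from the results above); the function name is mine, not part of the model.

```python
# Sketch of the narrowing-down rule: skip bets whose model-market
# difference sits inside the "dicey" range, where the win-rate drops.
# The default bounds are week 2's range from the results above.

def outside_dicey_range(model_pred: float, market_line: float,
                        low: float = -2.2, high: float = 5.4) -> bool:
    """True when the edge is large enough to act on."""
    edge = model_pred - market_line
    return edge < low or edge > high

# Bayern vs Holstein Kiel: model 47 vs market 37.5 -> edge of 9.5, kept
print(outside_dicey_range(47, 37.5))   # True
# A marginal game sits inside the range and gets skipped
print(outside_dicey_range(38, 37.5))   # False
```

Since the range shifted between weeks 1 and 2, the bounds would need re-estimating as more results come in.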
Building on from that, one of the most encouraging stats from last week was that the model's predictions were out by only 1.5% from the actual cards shown. This week it overestimated by 4.2%, so again it's staying pretty close.
The model is already quite aggressive on unders, yet it has still overestimated total cards in successive weeks – another way to highlight the value on offer in the unders markets.
Staking
As promised, here is an example testing the Kelly Staking Plan, a practical means of utilising the model.
- Part of the model output is a suggested probability for each line. These can then be put into the 'Probability of Winning' field of the Kelly Criterion Calculator.
- The sportsbook odds are simply the price available from the market. The betting account balance is of your choosing.
- So let's say you want to bet on the over 4.5 and it's priced at 3.25 – here is what you would type in (website is kellycriterioncalculator.com):
- This staking plan can be extremely aggressive, so I would recommend using a conservative multiplier, set to a maximum of 0.2.
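The steps above can also be done without the calculator. Below is a minimal sketch using the worked numbers from the post – over 4.5 priced at 3.25 decimal odds, with the 0.2 conservative multiplier. The 0.35 win probability is a made-up stand-in for whatever the model actually outputs, and the £1000 balance is just an example.

```python
# Minimal Kelly staking sketch. p_win = 0.35 and balance = 1000 are
# placeholder values, not model output; odds 3.25 and the 0.2
# multiplier match the example in the post.

def kelly_fraction(p_win: float, decimal_odds: float) -> float:
    """Full-Kelly fraction of bankroll: f = (b*p - q) / b,
    where b is net odds and q = 1 - p. Negative edges stake zero."""
    b = decimal_odds - 1.0      # net profit per unit staked
    q = 1.0 - p_win
    return max(0.0, (b * p_win - q) / b)

def kelly_stake(balance: float, p_win: float, decimal_odds: float,
                multiplier: float = 0.2) -> float:
    """Stake in currency units, scaled by the conservative multiplier."""
    return balance * multiplier * kelly_fraction(p_win, decimal_odds)

stake = kelly_stake(1000.0, p_win=0.35, decimal_odds=3.25)
print(round(stake, 2))   # -> 12.22
```

Note how the multiplier tames the full-Kelly stake: without it, the same inputs would suggest risking about 6% of the bankroll on a single line.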
Plenty still to learn from this model, and I'll get cracking on the current improvements!