24th-27th Bookings Model Results
With another gameweek of European football coming to a close on Monday night, it’s time to see how the Bookings Model fared. It was a promising set of fixtures, though it also threw up plenty of issues and challenges to work through going forward.
Let’s start with the results breakdown.
Results
(Again, apologies for the image, but it’s progress from last time at least. Next time it will hopefully be a bit more useful, but if you’re interested, feel free to view the results as a PDF here: Bookings Model Tracking – Sheet2 (1))
- 62 games in total were covered this weekend, involving the Premier League, La Liga, Serie A, Bundesliga, Ligue 1, Championship, Turkish Super Lig and Scottish Premiership.
- Of the 62 card lines covered, the model suggested taking 19 overs (31%) and 43 unders (69%) – the unders seem to be where the value lies in the bookings markets.
Just a quick comment on how the results actually work – winners don’t necessarily correspond to betting wins, as the model is suggestive of value rather than a settled bet. If it suggests taking an under, it means the true line is lower than the market line. If the final result is then lower than the market line (say 40 booking points are shown against a 44.5 line), then the results will track as a W.
I use that specific example because it comes from the Crystal Palace vs Brentford game on Saturday. The model gave a line of 35.9, lower than the market line of 44.5, and the final count was 40. However, bet365 gave out the o3.5 line, which obviously would have lost if taken as a bet.
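For anyone curious, the tracking logic is essentially the sketch below – rough Python, with the function name and inputs purely illustrative rather than taken from the actual sheet:

```python
def track_result(model_line: float, market_line: float, final_points: float) -> str:
    """Grade a model suggestion against the market line.

    The model suggests an under when its line is below the market line,
    and an over when it is above. The suggestion counts as a 'W' if the
    final booking points land on the same side of the MARKET line.
    """
    if model_line < market_line:            # model suggests the under
        return "W" if final_points < market_line else "L"
    return "W" if final_points > market_line else "L"

# Crystal Palace vs Brentford: model 35.9, market 44.5, final 40
print(track_result(35.9, 44.5, 40))  # "W" – under suggested, 40 < 44.5
```

So the sheet tracks this one as a W, even though the line actually available to bet on would have lost.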
The best way around this (that I can currently think of) is to set a probability for each line, as shown below, and input those values into a Kelly Criterion Staking Plan which would suggest a stake for each line.
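To give a rough idea of what that staking step could look like, here’s a minimal Kelly sketch – the win probability and odds below are made-up inputs, not anything the model currently outputs:

```python
def kelly_fraction(p_win: float, decimal_odds: float) -> float:
    """Fraction of bankroll to stake under the Kelly Criterion.

    b = net odds (decimal odds minus the stake), p = estimated win
    probability, q = 1 - p. Anything negative means no bet.
    """
    b = decimal_odds - 1
    q = 1 - p_win
    return max((b * p_win - q) / b, 0.0)

# e.g. the model rates the under a 58% chance at odds of 1.90
stake = kelly_fraction(0.58, 1.90) * 100  # as a % of bankroll
print(f"Suggested stake: {stake:.1f}% of bankroll")
```

In practice a fractional Kelly (staking, say, a quarter of that figure) is probably safer while the probabilities are still this noisy.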
- On Friday, we got 4 / 6 winners
- Saturday, 17 / 27 winners
- Sunday, 13 / 24 winners
- Monday, 3 / 5 winners
In total, the model was correct with 37 / 62 suggestions this weekend – a 60% hit-rate, which is incredibly promising.
The final column in the results table is the correction factor – the ratio of the actual booking points shown to the prediction. So if the model predicts 40 booking points and 80 are shown in the actual game, the correction would be 2.0 – or 200%.
On average, the correction factor across this week’s games was 101.52% – the actual booking points came in at just over the model’s predictions. Being off by less than 2% across 62 games is pretty decent actually. Again, promising results, but it needs a lot more time for testing.
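For anyone following along, the per-game correction factor and the weekly average are simple to compute – the numbers below are purely illustrative:

```python
# correction factor = actual booking points / model prediction
predictions = [40, 35.9, 52.0]   # model predictions (made-up numbers)
actuals     = [80, 40, 45]       # booking points actually shown

corrections = [a / p for a, p in zip(actuals, predictions)]
print([f"{c:.0%}" for c in corrections])                      # per-game factors
print(f"average: {sum(corrections) / len(corrections):.2%}")  # weekly average
```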
Now, it’s unreasonable to place bets on every single European card line every weekend. That’s wholly unnecessary. So the next challenge is to identify a method to lower the volume, and extract a higher percentage of winners.
For example, let’s look at the extreme cases with the biggest difference between the market line and the model’s prediction.
- When the model predicted a line four or more booking points below the market line (in favour of the under), there were 17 / 22 winners (77% win-rate)
- Alternatively, when the model’s line was two or more booking points above the market line (in favour of the over), we had 8 / 11 winners (72% win-rate).
Again, given the very low sample sizes here, it’s difficult to read too much into them. If you have any suggestions on how to filter the data, feel free to drop me a message – I’m extremely open to testing as much as possible at such an early stage in the project!
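If you want to play around with filters of your own, the sort of thing I’m testing looks roughly like the sketch below – the file name and column names are placeholders for whatever the tracking sheet exports, not the actual format:

```python
import csv

def filter_by_edge(rows, min_under_edge=4.0, min_over_edge=2.0):
    """Yield rows where the model disagrees with the market by a set margin.

    Edge is market_line - model_line for unders (positive when the model
    is lower) and model_line - market_line for overs.
    """
    for row in rows:
        model = float(row["model_line"])
        market = float(row["market_line"])
        if model < market and market - model >= min_under_edge:
            yield {**row, "suggestion": "under"}
        elif model > market and model - market >= min_over_edge:
            yield {**row, "suggestion": "over"}

with open("bookings_tracking.csv", newline="") as f:
    selected = list(filter_by_edge(csv.DictReader(f)))
print(f"{len(selected)} qualifying lines this week")
```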