TensorFlow.js Meets Fantasy Basketball

By: James Deal

Using historical fantasy stats, we can train a machine learning model to predict a player's average fantasy points per game from their season averages in the preceding year.
We can leverage this wealth of historical data to provide users with unparalleled insight into their fantasy basketball lineups/rosters, as well as additional tips for transactions, trades, etc.
This enables users to continually optimize their lineups with the latest data and AI models in near real time.
Watch as a basic model is trained using an abbreviated dataset in real time below.
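To give a sense of what that demo is doing under the hood, here is a minimal sketch of defining and training a regression model with TensorFlow.js. The six input features, layer sizes, learning rate, and random placeholder data are all illustrative assumptions for this sketch, not the production model or real player stats.

```ts
import * as tf from '@tensorflow/tfjs';

// A minimal regression model: previous-season per-game averages in,
// predicted fantasy points per game out. Feature count, layer sizes,
// and the random placeholder data are illustrative assumptions.
const numPlayers = 200;                      // stand-in dataset size
const xs = tf.randomNormal([numPlayers, 6]); // e.g. pts, reb, ast, stl, blk, tov
const ys = tf.randomNormal([numPlayers, 1]); // fantasy points per game labels

const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [6], units: 16, activation: 'relu' }));
model.add(tf.layers.dense({ units: 1 }));    // single regression output

model.compile({
  optimizer: tf.train.adam(0.01),
  loss: 'meanSquaredError',
  metrics: ['mse'],
});

await model.fit(xs, ys, { epochs: 50 });
```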

[Live chart: onEpochEnd MSE]

The above chart plots the Mean Squared Error (MSE) at the end of each training Epoch (round).
While MSE provides some insight into how the model fits our dataset more closely over time, it doesn't show the whole picture: it doesn't represent the accuracy of the model when it is used to predict an outcome from data it hasn't previously encountered.
When training a model, we split the dataset into a training set and a validation set. The validation set is held out from training and used as a baseline for comparison, showing how well the model generalizes at the end of each training Epoch.
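Sketched below is one way the training run and the per-epoch charts on this page can be wired together using the tfjs-vis library. The surface name, metric list, and 20% validation split are assumptions for illustration, and `model`, `xs`, and `ys` are reused from the earlier sketch.

```ts
import * as tfvis from '@tensorflow/tfjs-vis';

// Hold out part of the data as a validation set and let tfjs-vis plot
// the chosen metrics at the end of every epoch ('onEpochEnd').
// `model`, `xs`, and `ys` come from the earlier sketch.
await model.fit(xs, ys, {
  epochs: 50,
  validationSplit: 0.2,   // 20% of rows are held out, never used for weight updates
  shuffle: true,
  callbacks: tfvis.show.fitCallbacks(
    { name: 'onEpochEnd', tab: 'Training' },   // chart surface
    ['mse', 'val_mse', 'loss', 'val_loss'],    // metrics to plot each epoch
    { callbacks: ['onEpochEnd'] }
  ),
});
```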

[Live chart: onEpochEnd Val. MSE]

The above chart plots the Mean Squared Error (MSE) at the end of each training Epoch for the Validation Dataset.
In Machine Learning, Loss is a metric that measures how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; the worse the prediction, the greater the loss.
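As a concrete example with squared-error loss (the numbers here are made up for illustration):

```ts
// Squared-error loss on a single (made-up) example.
const actual    = tf.tensor1d([45.5]);  // player's actual fantasy PPG
const predicted = tf.tensor1d([41.0]);  // model's predicted fantasy PPG

// (45.5 - 41.0)^2 = 20.25
const singleExampleLoss = tf.losses.meanSquaredError(actual, predicted);
singleExampleLoss.print();  // Tensor 20.25
```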

[Live chart: onEpochEnd Loss]

The above chart plots the Loss at the end of each training Epoch.
While Loss provides insight into how bad any given prediction of the model is, it isn't representative of the model's accuracy because this loss value is calculated from the same dataset used to train the model.
We can instead plot the loss over time for the validation dataset in order to get a better idea of how bad (or, conversely, how good) the model's predictions are on data it hasn't seen before.
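One way to do that, sketched below, is to read the per-epoch validation loss back from the History object returned by fit() and plot it as its own line chart with tfjs-vis. The surface name and series label are illustrative assumptions.

```ts
// Train with a validation split and plot the validation loss per epoch.
const history = await model.fit(xs, ys, { epochs: 50, validationSplit: 0.2 });

const valLossPoints = (history.history['val_loss'] as number[]).map(
  (y, epoch) => ({ x: epoch, y })
);

await tfvis.render.linechart(
  { name: 'onEpochEnd Val. Loss' },                 // chart surface
  { values: [valLossPoints], series: ['val_loss'] } // one series of {x, y} points
);
```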

[Live chart: onEpochEnd Val. Loss]

The above chart plots the Loss at the end of each training Epoch for the Validation Dataset.
Ultimately, we can improve the model by adjusting the data fed into it, changing the structure of the neural network itself, or tuning the hyperparameters used during training.
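For example, here is a sketch of pulling those levers: a slightly deeper network, a smaller learning rate, and different training parameters. The specific values are illustrative assumptions, not tuned settings.

```ts
// A deeper network with adjusted hyperparameters (illustrative values).
const deeperModel = tf.sequential();
deeperModel.add(tf.layers.dense({ inputShape: [6], units: 32, activation: 'relu' }));
deeperModel.add(tf.layers.dense({ units: 16, activation: 'relu' }));
deeperModel.add(tf.layers.dense({ units: 1 }));

deeperModel.compile({
  optimizer: tf.train.adam(0.001),   // lower learning rate
  loss: 'meanSquaredError',
  metrics: ['mse'],
});

await deeperModel.fit(xs, ys, {
  epochs: 100,            // more training rounds
  batchSize: 32,          // adjusted batch size
  validationSplit: 0.2,
});
```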
N.B. the model running in this demo is a simplified version of the proprietary model/algorithm available to our premium subscribers; it is not intended to produce highly accurate predictions, but to demonstrate the technologies employed.

Copyright © 2022 - All rights reserved by BallerAnalytics