The Data Scientist’s Dilemma: Answering “What If?” Questions Without Experiments


Now that we have the features for our model, we will split our data into three sets:

1. Training dataset: the data on which we train our model.

2. Test dataset: the data used to evaluate the performance of our model.

3. After-modification dataset: the data used to compute the uplift with our model.

import datetime as dt

from sklearn.model_selection import train_test_split

# Date at which the modification we want to measure started.
start_modification_date = dt.datetime(2024, 2, 1)

# Data before the modification is used to train and evaluate the model.
X_before_modification = X[X.index < start_modification_date]
y_before_modification = y[y.index < start_modification_date].kpi

# Data after the modification is used to compute the uplift.
X_after_modification = X[X.index >= start_modification_date]
y_after_modification = y[y.index >= start_modification_date].kpi

# No shuffling: the test set is the most recent quarter of the pre-modification data.
X_train, X_test, y_train, y_test = train_test_split(
    X_before_modification, y_before_modification, test_size=0.25, shuffle=False
)

Note: You can use a fourth subset of the data to perform model selection. We won’t do much model selection here, so it does not matter much. But it will if you start selecting your model among tens of candidates.

Note 2: Cross-validation is also possible, and even recommended.

Note 3: I recommend splitting the data without shuffling (shuffle=False). It will make you aware of any temporal drift in your model.
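If you do cross-validate, a time-series-aware scheme keeps the spirit of Note 3. Here is a minimal sketch using scikit-learn’s TimeSeriesSplit; the number of splits and the scoring metric are illustrative choices, not taken from the original pipeline.

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Expanding-window folds: each fold trains on the past and validates on the
# block that follows, so no future information leaks into training.
tscv = TimeSeriesSplit(n_splits=5)
cv_scores = cross_val_score(
    RandomForestRegressor(min_samples_split=4),
    X_before_modification,
    y_before_modification,
    cv=tscv,
    scoring='neg_mean_absolute_error',
)
print('MAE per fold:', -cv_scores)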

from sklearn.ensemble import RandomForestRegressor

# Train the counterfactual predictor on pre-modification data
# and predict on the held-out test set.
model = RandomForestRegressor(min_samples_split=4)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

And here you train your predictor. We use a random forest regressor for convenience: it handles non-linearity, missing data, and outliers well. Gradient boosting tree algorithms are also very good for this use.
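For reference, here is a minimal sketch of the gradient boosting alternative, using scikit-learn’s HistGradientBoostingRegressor (the hyperparameters are illustrative, not tuned for this dataset):

from sklearn.ensemble import HistGradientBoostingRegressor

# Drop-in alternative to the random forest above; it also handles missing values natively.
gb_model = HistGradientBoostingRegressor(max_depth=6, learning_rate=0.05)
gb_model.fit(X_train, y_train)
y_pred_gb = gb_model.predict(X_test)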

Many papers about Synthetic Control use linear regression here, but we don’t think it is necessary, because we are not really interested in the model’s interpretability. Moreover, interpreting such a regression can be tricky.

Counterfactual Evaluation

Our prediction will be evaluated on the test set. The main hypothesis we make is that the performance of the model will stay the same when we compute the uplift. That is why we tend to use a lot of data in our test set. We consider three key indicators to evaluate the quality of the counterfactual prediction:

1. Bias: Bias measures the systematic gap between your counterfactual and the real data. It is a strong limit on your ability to measure the uplift, because it won’t be reduced by waiting longer after the modification.

# Average error, expressed as a fraction of the average pre-modification KPI.
bias = float((y_pred - y_test).mean() / y_before_modification.mean())
bias
> 0.0030433481322823257

We generally express the bias as a percentage of the average value of the KPI. Here it is smaller than 1%, so we should not expect to reliably measure effects smaller than that. If your bias is too big, you should check for a temporal drift (and add a trend to your prediction). You can also correct your prediction by subtracting the estimated bias from it, provided you control the effect of this correction on fresh data.
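As a sketch of that kind of correction (assuming you simply subtract the average error measured on the test set, and then check the debiased predictions on data the model has not seen):

# Bias estimated on the test set, in KPI units (not as a percentage of the mean).
bias_absolute = float((y_pred - y_test).mean())

# Debiased counterfactual prediction; the effect of this correction should be
# validated on fresh data before it is trusted.
y_pred_corrected = y_pred - bias_absolute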

2. Standard deviation σ: We also want to control how dispersed the predictions are around the true values. We therefore use the standard deviation of the error, again expressed as a percentage of the average value of the KPI.

# Standard deviation of the error, as a fraction of the average pre-modification KPI.
sigma = float((y_pred - y_test).std() / y_before_modification.mean())
sigma
> 0.0780972738325956

The good news is that the uncertainty created by the deviation shrinks as the number of data points increases. We prefer a predictor without bias, so it can be worth accepting a larger deviation if it limits the bias.

It is also interesting to inspect the distribution of the forecasting errors. This helps to check whether our estimates of bias and deviation are valid, or whether they are driven by outliers and extreme values.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Distribution of the relative forecasting error on the test set.
f, ax = plt.subplots(figsize=(8, 6))
sns.histplot(pd.DataFrame((y_pred - y_test) / y_before_modification.mean()),
             x='kpi', bins=35, kde=True, stat='probability')
f.suptitle('Relative Error Distribution')
ax.set_xlabel('Relative Error')
plt.show()

3. Auto-correlation α: In general, errors are auto-correlated. This means that if your prediction is above the true value on a given day, it is more likely to also be above it the next day. It is a problem because most classical statistical tools require independence between observations: what happens on a given day should not affect the next one. We use auto-correlation as a measure of dependence between one day and the next.

df_test = pd.DataFrame(zip(y_pred, y_test), columns=['Prevision', 'Real'], index=y_test.index)

# "ecart" is the forecasting error on the test set.
df_test = df_test.assign(ecart=df_test.Prevision - df_test.Real)

# Lag-1 auto-correlation of the errors.
alpha = df_test.ecart.corr(df_test.ecart.shift(1))
alpha
> 0.24554635095548982

A high auto-correlation is problematic but can be managed. Possible causes include unobserved covariates. If, for instance, the store you want to measure organized a special event, it could increase its sales for several days. This would lead to an unexpected sequence of days above the forecast.

# Plot true vs forecasted values on the evaluation (test) set.
df_test = pd.DataFrame(zip(y_pred, y_test), columns=['Prevision', 'Real'], index=y_test.index)

f, ax = plt.subplots(figsize=(15, 6))
sns.lineplot(data=df_test, x=df_test.index, y='Real', label='True Value')
sns.lineplot(data=df_test, x=df_test.index, y='Prevision', label='Forecasted Value')
ax.axvline(start_modification_date, ls='--', color='black', label='Start of the modification')
ax.legend()
f.suptitle('KPI TX_1')
plt.show()

True value and forecasted value on the evaluation set.

In the figure above, you can see an illustration of the auto-correlation phenomenon. In late April 2023, for several days, forecasted values are above the true value. Errors are not independent of one another.

Impact Calculation

Now we can compute the impact of the modification. We compare the prediction after the modification with the actual value. As always, it is expressed as a percentage of the mean value of the KPI.

# Counterfactual prediction for the period after the modification.
y_pred_after_modification = model.predict(X_after_modification)
uplift = float((y_after_modification - y_pred_after_modification).mean() / y_before_modification.mean())
uplift
> 0.04961773643584396

We get a relative increase of 4.9%. The “true” value (the data used were artificially modified) was 3.0%, so we are not far from it. And indeed, the true value is often above the prediction:

True value and forecasted value after the modification
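The figure above can be reproduced with a plot analogous to the one on the evaluation set; here is a sketch assuming the same libraries and naming conventions as before:

# True vs counterfactual values after the modification.
df_after = pd.DataFrame(
    zip(y_pred_after_modification, y_after_modification),
    columns=['Prevision', 'Real'],
    index=y_after_modification.index,
)

f, ax = plt.subplots(figsize=(15, 6))
sns.lineplot(data=df_after, x=df_after.index, y='Real', label='True Value')
sns.lineplot(data=df_after, x=df_after.index, y='Prevision', label='Forecasted Value')
ax.legend()
f.suptitle('KPI TX_1 - After the modification')
plt.show()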

We can compute a confidence interval for this value. If our predictor has no bias, the size of its confidence interval can be expressed with:

Standard deviation of the estimator: σ_est = σ / ( √N · (1 − α) )

Where σ is the standard deviation of the prediction, α its auto-correlation, and N the number of days after the modification.

from math import sqrt

# Standard deviation of the uplift estimator, corrected for auto-correlation.
N = y_after_modification.shape[0]
ec = sigma / (sqrt(N) * (1 - alpha))

print('68%% IC : [%.2f %% , %.2f %%]' % (100 * (uplift - ec), 100 * (uplift + ec)))
print('95%% IC : [%.2f %% , %.2f %%]' % (100 * (uplift - 2 * ec), 100 * (uplift + 2 * ec)))

68% IC : [3.83 % , 6.09 %]
95% IC : [2.70 % , 7.22 %]

The range of the 95% CI is around 4.5% for 84 days of data. That is reasonable for many applications, because it is often possible to run an experiment or a proof of concept for 3 months.
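To see how the interval narrows as the observation window grows, you can evaluate the same formula for different horizons. A quick sketch, reusing the sigma and alpha computed above (the horizons are arbitrary):

from math import sqrt

# Half-width of the 95% confidence interval, in % of the average KPI,
# for different numbers of days after the modification.
for n_days in [14, 28, 56, 84]:
    half_width_95 = 2 * sigma / (sqrt(n_days) * (1 - alpha))
    print('%3d days: +/- %.2f %%' % (n_days, 100 * half_width_95))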

Note: the confidence interval is very sensitive to the deviation of the initial predictor. That is why it is a good idea to spend some time on model selection (on the training set only) before settling on a final model.
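A minimal sketch of such a model selection, comparing a couple of candidate models with time-series cross-validation on the training data only (the candidates and the metric are illustrative):

from sklearn.ensemble import HistGradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

candidates = {
    'random_forest': RandomForestRegressor(min_samples_split=4),
    'hist_gradient_boosting': HistGradientBoostingRegressor(),
}

# Keep the candidate with the lowest cross-validated error on the training set.
tscv = TimeSeriesSplit(n_splits=5)
for name, candidate in candidates.items():
    mae = -cross_val_score(candidate, X_train, y_train,
                           cv=tscv, scoring='neg_mean_absolute_error').mean()
    print('%s: MAE = %.3f' % (name, mae))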

Mathematical formulation of the model

So far we have tried to avoid maths to keep things easy to follow. In this section, we will present the mathematical model behind the method.


