Quick summary of the problem: we're trying to deploy our regression model, and the clients require an "individual prediction error" for each prediction. Since we're predicting something that is unknown in advance, we can't measure the error the standard way, y_true - y_predicted.
I have already done some research, but the problems with the existing methods are as follows:
Since we're using boosting algorithms (xgboost, catboost), we can't rely on normality assumptions to generate standard confidence intervals.
One solution proposed here is to train multiple models and average their predictions, but that isn't viable at the production level, since training and prediction would become at least 3 times slower.
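To make the ensemble idea concrete, here is a minimal sketch of what we mean (toy data via make_regression, 3 seeds, and illustrative hyperparameters, not our production setup): the spread of the members' predictions is used as the per-row "error", and the extra models are exactly where the 3x cost comes from.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Toy data standing in for our real features/target.
X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

preds = []
for seed in (0, 1, 2):  # one extra model per seed -> roughly 3x the training cost
    model = XGBRegressor(n_estimators=300, subsample=0.8,
                         colsample_bytree=0.8, random_state=seed)
    model.fit(X_train, y_train)
    preds.append(model.predict(X_test))

preds = np.column_stack(preds)
point_prediction = preds.mean(axis=1)     # ensemble average
per_row_uncertainty = preds.std(axis=1)   # member disagreement as an error proxy
```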
Another way is quantile regression, as described here, but this would hurt our accuracy, which we can't sacrifice.
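For reference, this is roughly what the quantile approach looks like with catboost (the alpha values and toy data are only illustrative): two quantile models give a per-row interval width, but the point estimate would have to come from a quantile (median) objective rather than our current loss, which is where the accuracy concern comes from.

```python
from catboost import CatBoostRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One model per quantile; 5% / 95% are illustrative choices.
lower = CatBoostRegressor(loss_function="Quantile:alpha=0.05", verbose=0)
upper = CatBoostRegressor(loss_function="Quantile:alpha=0.95", verbose=0)
lower.fit(X_train, y_train)
upper.fit(X_train, y_train)

# Per-row interval width used as the individual "error bar".
interval_width = upper.predict(X_test) - lower.predict(X_test)
```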
Finally, we have tried training a second model on our validation-set errors and using it to predict the test-set error, but its accuracy is very low.
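In case it helps, here is a stripped-down sketch of that error-model attempt (toy data and hyperparameters are illustrative): the second model is trained on the absolute validation residuals and then asked to predict the per-row error on the test set.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=3000, n_features=20, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Main model: trained as usual on the training set.
main_model = XGBRegressor(n_estimators=300, random_state=0)
main_model.fit(X_train, y_train)

# Error model: learns the absolute residual on the validation set from the same features.
val_abs_error = np.abs(y_val - main_model.predict(X_val))
error_model = XGBRegressor(n_estimators=300, random_state=0)
error_model.fit(X_val, val_abs_error)

predicted_error = error_model.predict(X_test)            # per-row error estimate
actual_error = np.abs(y_test - main_model.predict(X_test))
print(np.corrcoef(predicted_error, actual_error)[0, 1])  # how well it tracks reality
```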
So my question is this: is there a way to estimate an individual prediction error, without knowing its true value in advance, that wouldn't hurt the model's accuracy or speed?