Suppose that in a binary classification task I have three separate classifiers, A, B, and C. If I use A alone, I get high precision but low recall: nearly everything it labels True is truly positive, but it also incorrectly labels many of the actual positives as False. B and C have much lower precision, but when used separately they may (or may not) give better recall.

How can I define an ensemble classifier that gives precedence to classifier A whenever it labels an example as True, and gives more weight to the predictions of the other classifiers when A predicts False?
The idea is that A already outperforms the others whenever it does flag a positive, and I only want to improve the recall without hurting precision.
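
To make the question concrete, here is a rough sketch of the kind of rule I have in mind (the Python/NumPy setup with scikit-learn-style classifiers is just my assumption, and the "B and C must both agree" fallback is only one possible choice):

```python
import numpy as np

def ensemble_predict(clf_a, clf_b, clf_c, X):
    # clf_a, clf_b, clf_c are placeholder fitted binary classifiers with a
    # scikit-learn-style predict() that returns 0/1 labels.
    pred_a = clf_a.predict(X)
    pred_b = clf_b.predict(X)
    pred_c = clf_c.predict(X)

    # If A says True, trust it (keeps A's precision on its positive calls).
    # If A says False, let B and C add extra positives only when they both
    # agree; this could instead be a weighted or probability-based vote.
    fallback = ((pred_b == 1) & (pred_c == 1)).astype(int)
    return np.where(pred_a == 1, 1, fallback)
```

The `np.where` line is the precedence for A; the fallback branch is the part where I want to weight B and C properly rather than hard-code an agreement rule.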