Traditionally, ML algorithms for ranking take features as input and output a "ranking score", which does not have a natural probabilistic interpretation.
For example, suppose we have three laptops, "macbookAir", "macbookPro", and "msSurface", and a set of customer preferences as data. If we pass this to an ML algorithm like XGBoostRanker, it outputs a score for each of the three laptops, and the laptop with the highest score is the one the customer is most likely to buy.
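To make that concrete, here is roughly how I currently get such scores. This is only a minimal sketch assuming xgboost's XGBRanker, with made-up features, labels, and group sizes:

```python
# Minimal sketch (not my real pipeline): train XGBRanker on toy
# customer-preference data and get one raw score per laptop.
import numpy as np
import xgboost as xgb

# 2 customers ("queries"), 3 laptops each -> 6 rows of placeholder features
X = np.random.rand(6, 4)            # 4 hypothetical laptop/customer features
y = np.array([2, 1, 0, 0, 2, 1])    # preference labels within each customer
group = [3, 3]                      # rows 0-2 = customer 1, rows 3-5 = customer 2

ranker = xgb.XGBRanker(objective="rank:pairwise", n_estimators=50)
ranker.fit(X, y, group=group)

scores = ranker.predict(X[:3])      # one unnormalised score per laptop
print(scores)                       # scores order the items but have no probabilistic meaning
```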
I am trying to identify an ML technique that outputs ranking probabilities instead, i.e. it outputs
p_123 = P("macbookAir" < "macbookPro" < "msSurface")
p_132 = P("macbookAir" < "msSurface" < "macbookPro")
p_213 = P("macbookPro" < "macbookAir" < "msSurface")
p_231 = P("macbookPro" < "msSurface" < "macbookAir")
p_312 = P("msSurface" < "macbookAir" < "macbookPro")
p_321 = P("msSurface" < "macbookPro" < "macbookAir")
such that p_123 + p_132 + p_213 + p_231 + p_312 + p_321 = 1.
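Purely to illustrate the output format I am after (the numbers below are invented, not a proposed method): a distribution over all 3! = 6 orderings of the laptops that sums to 1.

```python
# Toy example of the desired output: one probability per full ranking.
from itertools import permutations

items = ["macbookAir", "macbookPro", "msSurface"]

# hypothetical output of the kind of model I am looking for
ranking_probs = {
    ("macbookAir", "macbookPro", "msSurface"): 0.30,   # p_123
    ("macbookAir", "msSurface", "macbookPro"): 0.15,   # p_132
    ("macbookPro", "macbookAir", "msSurface"): 0.20,   # p_213
    ("macbookPro", "msSurface", "macbookAir"): 0.10,   # p_231
    ("msSurface", "macbookAir", "macbookPro"): 0.15,   # p_312
    ("msSurface", "macbookPro", "macbookAir"): 0.10,   # p_321
}

assert set(ranking_probs) == set(permutations(items))          # covers all 3! orderings
assert abs(sum(ranking_probs.values()) - 1.0) < 1e-9           # probabilities sum to 1
```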
I would like to ask: is there any ML algorithm that does this? If not, is there a way to convert the scores from ranking algorithms such as XGBoost ranker into ranking probabilities like the ones above?
Edit: In my original problem there are many different classes. I understand this quickly becomes intractable as the number of classes n increases, since there are n! possible rankings. But I only care about the top few classes, so I would like to ask whether there is at least a "partial ranking probability", where we predict probabilities such as P(A > B > others), to reduce the level of difficulty (a toy sketch of what I mean is below).
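By "partial ranking probability" I mean the marginal of the full-ranking distribution over all permutations that start with A followed by B. A toy sketch with a made-up (uniform) distribution over 4 classes, just to pin down the definition:

```python
# Toy illustration of P(A > B > others) as a marginal of the full-ranking distribution.
from itertools import permutations

items = ["A", "B", "C", "D"]

# hypothetical full distribution over all 4! = 24 rankings (uniform only for illustration)
full_probs = {perm: 1 / 24 for perm in permutations(items)}

def partial_prob(first, second, probs):
    """P(first > second > others): sum over rankings that start with (first, second)."""
    return sum(p for perm, p in probs.items() if perm[:2] == (first, second))

print(partial_prob("A", "B", full_probs))   # 2/24 here, since 2! orderings of the rest remain
```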
Thank you so much in advance.