Questions tagged [confidence]

33 questions
7 votes, 1 answer

Nested-cross validation pipeline and confidence intervals

I'm hoping someone can help me think through this. I've come across a lot of different resources on nested-cv, but I think I'm confused as to how to go about model selection and the appropriate construction of confidence intervals for the training…
2 votes, 2 answers

Confidence rating for regression tasks

In classification tasks, we can interpret the output vector as how "confident" the model is that the input has a certain label. For example, y = [0.01 0.20 0.99 0.10] would mean the model is 99% certain the input has the label with index 2 and 1%…
Mossmyr
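The usual reading described in the question above can be sketched in a few lines (illustrative only; note the example scores sum to more than 1, so they behave like independent per-class sigmoid outputs rather than a softmax distribution):

```python
import numpy as np

y = np.array([0.01, 0.20, 0.99, 0.10])  # per-class scores from the question

predicted = int(np.argmax(y))   # index of the most likely label
confidence = float(np.max(y))   # top score read as the model's "confidence"
```

Whether that top score is a calibrated probability is a separate question from picking the label.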
2 votes, 1 answer

Relation between Cross Validation and Confidence Intervals

I once read (I forget the source) that 'in cross-validation, the model with the best score at a 95% confidence interval is picked'. But according to my stats knowledge, in order for a CI (confidence interval) to work, you need normality…
Wong
2 votes, 2 answers

Predict_proba for Binary classifier in Tensorflow

I'm working on a binary classification problem using Tensorflow's low-level APIs. The last layer is wrapped with a sigmoid function and returns a single value. For my prediction I just set a standard threshold value of 0.5, and hence if it's…
vipin bansal
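A predict_proba-style output can be recovered from a single sigmoid unit by treating the activation as P(y = 1); a minimal NumPy sketch (function names here are hypothetical, not part of Tensorflow's API):

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping raw logits to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))

def predict_proba_binary(logits):
    """Return rows of [P(y=0), P(y=1)], mimicking predict_proba's shape."""
    p1 = sigmoid(logits)
    return np.column_stack([1.0 - p1, p1])

def predict_label(logits, threshold=0.5):
    """Apply a decision threshold to the positive-class probability."""
    return (predict_proba_binary(logits)[:, 1] >= threshold).astype(int)

probs = predict_proba_binary([-2.0, 0.0, 3.0])
labels = predict_label([-2.0, 0.0, 3.0])
```

Lowering or raising `threshold` trades precision against recall without retraining the model.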
2 votes, 1 answer

Is there a model that can predict continuous data while also providing a level of confidence in the prediction?

The problem with Bayesian neural networks seems to be that they primarily work for classification problems. Is it possible to adjust this kind of network, or even use a different model if one exists, to predict a continuous value and also provide…
tds
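One model-agnostic option for the question above is to bootstrap the training set and read a confidence interval for the mean prediction off the spread of the refit models (a plain least-squares sketch on synthetic data; Bayesian and quantile-regression approaches are alternatives):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 1.0, 200)   # noisy linear target

def fit_line(X, y):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    A = np.column_stack([X[:, 0], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

x_new = 5.0
preds = []
for _ in range(500):                      # refit on bootstrap resamples
    idx = rng.integers(0, len(X), len(X))
    a, b = fit_line(X[idx], y[idx])
    preds.append(a * x_new + b)

point = float(np.mean(preds))               # point prediction at x_new
lo, hi = np.percentile(preds, [2.5, 97.5])  # 95% interval for the mean prediction
```

Note this intervals the model's mean prediction, not a single future observation; a prediction interval would also add the residual noise.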
1 vote, 1 answer

Plotting confidence intervals

For the following dataframe, I am trying to plot the means of a sample of 5 random rows, and also plot their respective confidence intervals using error bars. I am unable to figure out how to plot the confidence intervals using error bars. col0 col1 …
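What the question above asks for can be sketched with `Axes.errorbar`, passing the 95% CI half-width as `yerr` (synthetic data here, since the original dataframe is truncated):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, safe in scripts
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(10, 2, size=(50, 4)),
                  columns=["col0", "col1", "col2", "col3"])

sample = df.sample(n=5, random_state=1)   # 5 random rows
means = sample.mean()
# 95% CI half-width via the normal approximation: 1.96 * standard error
half_width = 1.96 * sample.std(ddof=1) / np.sqrt(len(sample))

fig, ax = plt.subplots()
ax.errorbar(range(len(means)), means, yerr=half_width, fmt="o", capsize=4)
ax.set_xticks(range(len(means)))
ax.set_xticklabels(means.index)
ax.set_ylabel("sample mean with 95% CI")
plt.close(fig)
```

With only 5 rows, a t quantile (about 2.78 for 4 degrees of freedom) would be more defensible than 1.96.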
1 vote, 1 answer

Confidence score for all observations is between 0.50 and 0.55

Hello Data Science Stack Exchange Community, this question will appear to be open-ended; however, any answers or thoughts will be much appreciated. I am trying to go through a pre-trained Random Model Classifier with minimum documentation, like what…
1 vote, 0 answers

Confidence in the rewards for a RL task

For an RL task that I am trying to solve, for which I train once per day, I have the rewards stored for each of those days so that I can see the progress on a daily basis. At the beginning of the learning process, the reward for a given state…
1 vote, 1 answer

Interpreting confidence interval results for datasets

I have created a dataset automatically and wanted to clarify my interpretation of the amount of noise using the confidence interval. I selected a random sample and manually annotated the sample and found that 98% of the labels were correct. Based on…
dmnte
1 vote, 1 answer

Understanding how to simulate a statistic

This solution describes how to simulate statistics to find a confidence interval. A journalist called 1000 people in town to ask who they will be voting for out of candidates A and B. The observed value came out to be 511 votes for A and 489 votes…
maindola
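The simulation idea in the question above can be sketched by redrawing the 1000-person poll many times at the observed proportion and reading percentile bounds off the simulated results (a sketch, not necessarily the linked solution's exact method):

```python
import numpy as np

rng = np.random.default_rng(42)
n_calls, votes_a = 1000, 511
p_hat = votes_a / n_calls  # observed proportion for candidate A

# Re-run the poll 10,000 times under the observed proportion
sims = rng.binomial(n=n_calls, p=p_hat, size=10_000) / n_calls

# 95% interval from the 2.5th and 97.5th simulated percentiles
lo, hi = np.percentile(sims, [2.5, 97.5])
```

Since the interval straddles 0.5, the 511/489 split alone does not settle who leads.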
1 vote, 0 answers

How to compare 55 models using AUC bootstrap confidence intervals?

I want to check whether there is a difference in the confidence intervals of 55 models and select just one model. What should I do?
JAE
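A percentile-bootstrap AUC interval for one model can be sketched as below (synthetic data; comparing 55 models would repeat this per model and compare intervals, though a paired test on the same resamples is usually sounder than eyeballing overlap):

```python
import numpy as np

def auc_score(y_true, scores):
    """ROC AUC via the Mann-Whitney pair formulation; ties count half."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC; resamples cases with replacement."""
    rng = np.random.default_rng(seed)
    n, aucs = len(y_true), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample lacks one class; AUC undefined, skip it
        aucs.append(auc_score(y_true[idx], scores[idx]))
    return np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
s = y + rng.normal(0, 0.8, 200)   # informative but noisy scores
lo, hi = bootstrap_auc_ci(y, s)
```

For model selection, using the same bootstrap indices for all 55 models lets you interval the AUC *differences* directly.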
1 vote, 0 answers

Correctness of derivation for binary F1 variance for F1 confidence intervals

I'm developing a Python library for confidence intervals for common accuracy metrics, with both analytic and bootstrap computations. Following this paper, I implemented the analytic confidence intervals for the macro and micro F1 scores. However, the…
Jacob G
1 vote, 1 answer

Precision vs probability

Say I have a model which predicts a class $C_i$ from an input $X$, with a probability of 0.95 i.e $P(C_i| X)=0.95$. That would mean that if we do this over and over, then 95/100 times we would be correct. Having a model with a precision of 0.95 (for…
1 vote, 0 answers

Is there a way to quantify uncertainty in classification?

I'm thinking of a way to build an extension to a binary classifier (actually I will get the output probabilities like in logistic regression, so technically you should call this regression) that outputs a confidence score about how "sure you are…
Tom
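One simple confidence score of the kind the question above asks for is the predictive entropy of the output probability, which is 0 for certain predictions and maximal at p = 0.5 (a sketch; calibration checks or ensembles are common alternatives):

```python
import numpy as np

def binary_entropy(p):
    """Predictive entropy in bits: 0 = fully confident, 1 = maximally uncertain."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def confidence_score(p):
    """Map entropy onto [0, 1]: 1 = sure, 0 = coin flip."""
    return 1.0 - binary_entropy(p)

probs = np.array([0.99, 0.5, 0.05])   # example classifier outputs
conf = confidence_score(probs)
```

Note this measures how decisive the output is, not how well calibrated it is; a confidently wrong model still scores high.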
1 vote, 1 answer

Overall accuracy +/- E (with 90% C.I.)

I am assessing the accuracy of my classification model. I performed a 4-fold cross-validation and obtained the following Overall Accuracy: OA = (0.910, 0.920, 0.880, 0.910). So the average OA is 0.905. My dataset contains 120 samples, therefore…
sermomon
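With the numbers in the question above, a normal-approximation margin of error at 90% confidence is E = z * sqrt(p̂(1 − p̂)/n) with z ≈ 1.645 (a sketch; it treats the 120 samples as independent and ignores the dependence between CV folds):

```python
import math

p_hat = (0.910 + 0.920 + 0.880 + 0.910) / 4  # mean overall accuracy = 0.905
n = 120                                      # total samples in the dataset
z = 1.645                                    # two-sided 90% normal quantile

margin = z * math.sqrt(p_hat * (1 - p_hat) / n)   # the "E" in OA +/- E
interval = (p_hat - margin, p_hat + margin)
```

With p̂ near 0.9 and n = 120, E comes out around 0.044, i.e. roughly OA = 0.905 ± 0.044.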