
Once the model is built, I want to check its performance. I did the following (sketched in code below):

  1. Predicted on the training set.
  2. Computed the confusion matrix and ROC curve on the training set.
  3. Predicted on the test set.
  4. Computed the confusion matrix and ROC curve on the test set.
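
For concreteness, here is a minimal sketch of those four steps with scikit-learn; the generated data set and the choice of LogisticRegression are illustrative placeholders, not part of the question:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

    X, y = make_classification(n_samples=1000, random_state=0)  # placeholder data
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)

    # Steps 1-2: predictions, confusion matrix, and ROC on the training set
    train_pred = model.predict(X_train)
    train_scores = model.predict_proba(X_train)[:, 1]
    print(confusion_matrix(y_train, train_pred))
    fpr, tpr, _ = roc_curve(y_train, train_scores)
    print("train AUC:", roc_auc_score(y_train, train_scores))

    # Steps 3-4: the same on the test set
    test_pred = model.predict(X_test)
    test_scores = model.predict_proba(X_test)[:, 1]
    print(confusion_matrix(y_test, test_pred))
    fpr, tpr, _ = roc_curve(y_test, test_scores)
    print("test AUC:", roc_auc_score(y_test, test_scores))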

Is this the right approach, and what could be done more rigorously?

I am using Decision Trees, Random Forest, SVM (both linear and radial kernels), Naive Bayes, Logistic Regression, and a Neural Network. Do all of them have different performance measures?

1 Answer


You should provide more information about your problem. How large is your data set? Do you have hyper-parameters to tune? Which classifier are you using?

Normally, you should split your entire data set into three parts: training, validation, and testing sets. You should never touch the testing set until the model is finally tuned. If your data set is not large enough, you should use cross-validation to measure the error of your classifier.
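
As a rough sketch of that splitting scheme with scikit-learn (the generated data and the SVC classifier are illustrative assumptions, not prescriptions):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, random_state=0)  # placeholder data

    # Hold out the test set first; it is only touched once the model is tuned.
    X_trainval, X_test, y_trainval, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Split the remainder into training and validation sets for tuning.
    X_train, X_val, y_train, y_val = train_test_split(
        X_trainval, y_trainval, test_size=0.25, random_state=0)

    # If the data set is small, use cross-validation on the non-test portion
    # instead of a fixed validation split:
    scores = cross_val_score(SVC(kernel="rbf"), X_trainval, y_trainval, cv=5)
    print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))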

There are several metrics to assess the performance of your classifier: accuracy, precision and recall, Cohen's kappa, F1-score, etc.
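
Assuming scikit-learn again, each of these metrics is a single call in sklearn.metrics; the labels below are purely illustrative:

    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 cohen_kappa_score, f1_score)

    y_true = [0, 1, 1, 0, 1, 1, 0, 0]   # illustrative ground truth
    y_pred = [0, 1, 0, 0, 1, 1, 1, 0]   # illustrative predictions

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    print("kappa    :", cohen_kappa_score(y_true, y_pred))
    print("F1       :", f1_score(y_true, y_pred))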

If you are working with Python and scikit-learn, I would recommend checking this book. It is nicely written and explains the basics of model assessment.
