
I'm facing a classification problem on a dataset. The target variable is binary (two classes, 0 and 1). I have 8,161 samples in the training dataset, distributed as follows:

  • class 0: 6,008 samples (73.6% of the total)
  • class 1: 2,153 samples (26.4%)

My questions are:

  • In this case, should I consider the dataset I'm using as imbalanced?

  • If so, should I process the data before using RandomForest to make predictions?

  • If it is not an imbalanced dataset, could somebody tell me in which situations (i.e. at what class ratio) a dataset should be considered imbalanced?

3 Answers


Intuitively, a dataset with a ~75/25 ratio of class labels does look imbalanced.

If you want to look at it more formally, you can run a hypothesis test. For a sample size of 8,161, take a 50/50 split as the null hypothesis, compute the p-value as the probability that a number as extreme as 6,008 or more of the samples belong to one class, and reject the null hypothesis if the p-value is low (below 0.05 or 0.01, as you prefer).

This can be done using a binomial distribution.
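For example, a minimal sketch of this test (assuming SciPy 1.7 or later, which provides scipy.stats.binomtest, and using the counts from the question) could look like this:

```python
from scipy.stats import binomtest

n_total = 8161   # total training samples (from the question)
n_class0 = 6008  # samples in the majority class

# H0: each sample belongs to class 0 with probability 0.5 (a 50/50 split).
# The two-sided p-value measures how unlikely a split at least this extreme is under H0.
result = binomtest(k=n_class0, n=n_total, p=0.5, alternative="two-sided")
print(f"p-value: {result.pvalue:.3g}")

if result.pvalue < 0.05:
    print("Reject H0: the observed split is very unlikely under a 50/50 assumption.")
else:
    print("Cannot reject H0 at the 5% level.")
```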

Hithesh Kk

You can try ydata-profiling (https://github.com/ydataai/ydata-profiling). It has a property that measures whether a class is imbalanced based on entropy, which might be helpful:

https://github.com/ydataai/ydata-profiling/blob/master/src/ydata_profiling/model/pandas/imbalance_pandas.py

The idea behind checking for imbalanced classes is straightforward: on a dataset of $n$ instances with $k$ classes of sizes $C_i$, you can compute the Shannon entropy of the class distribution as

$$H = -\sum_{i=1}^{k} \frac{C_i}{n} \log\left(\frac{C_i}{n}\right),$$

which is maximal (equal to $\log k$) when all classes are equally represented.

It is one of the more precise metrics I've found for checking whether a dataset is imbalanced, since Shannon entropy is commonly used to measure the impurity or uncertainty within a set of data.
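For illustration, here is a small sketch of such an entropy-based balance score (not ydata-profiling's exact implementation; the helper name balance_score is just illustrative). A value of 1.0 means perfectly balanced classes, values near 0 mean one class dominates:

```python
import numpy as np

def balance_score(class_counts):
    counts = np.asarray(class_counts, dtype=float)
    p = counts / counts.sum()              # class proportions C_i / n
    entropy = -np.sum(p * np.log(p))       # Shannon entropy H
    return entropy / np.log(len(counts))   # normalize by log(k), the maximum possible H

print(balance_score([6008, 2153]))  # ~0.83 for the 73.6/26.4 split in the question
```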

FabC

I think you can speak of imbalanced targets whenever (in a binary classification problem) the classes are not represented in a 50:50 manner, which is almost always the case.

With a roughly 25/75 split in your case, I would consider this "imbalanced". There are some strategies to deal with this problem, such as (re)sampling so that you end up with a 50:50 balanced sample (essentially you will lose observations from the majority class here). Alternatively, you can use synthetic oversampling (SMOTE) and related techniques.

However, some packages come with built-in options to deal with unbalanced targets, e.g. sklearn's random forest (the class_weight option). Check the docs.
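A minimal sketch of both options, assuming scikit-learn and the imbalanced-learn package are installed; the synthetic X, y below just stand in for your real training data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE

# Synthetic placeholder data with roughly the 73.6/26.4 split from the question.
X, y = make_classification(n_samples=8161, weights=[0.736, 0.264], random_state=0)

# Option 1: built-in reweighting -- classes are weighted inversely to their frequency.
rf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)

# Option 2: oversample the minority class with SMOTE, then fit a plain random forest.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
rf_smote = RandomForestClassifier(random_state=0).fit(X_res, y_res)
```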

Peter