chkoar edited this page Aug 10, 2016 · 10 revisions

Welcome to the imbalanced-learn wiki!

imbalanced-learn

imbalanced-learn is a Python module offering a number of resampling techniques commonly used in datasets exhibiting strong between-class imbalance.

Most classification algorithms will only perform optimally when the number of samples in each class is roughly the same. Highly skewed datasets, where the minority class is heavily outnumbered by one or more classes, have proven to be a challenge and are, at the same time, becoming more and more common.

One way of addressing this issue is by resampling the dataset so as to offset the imbalance, with the hope of arriving at a more robust and fair decision boundary than would otherwise be possible.

Resampling techniques are divided into two categories: 1. Under-sampling the majority class(es). 2. Over-sampling the minority class.

Below is a list of the methods currently implemented in this module.

  • Under-sampling

    1. Random majority under-sampling with replacement
    2. Extraction of majority-minority Tomek links
    3. Under-sampling with Cluster Centroids
    4. NearMiss-(1 & 2 & 3)
    5. Condensed Nearest Neighbour
    6. One-Sided Selection
    7. Neighbourhood Cleaning Rule
    8. Edited Nearest Neighbours
    9. Instance Hardness Threshold
    10. Repeated Edited Nearest Neighbours
  • Over-sampling

    1. Random minority over-sampling with replacement
    2. SMOTE - Synthetic Minority Over-sampling Technique
    3. bSMOTE(1&2) - Borderline SMOTE of types 1 and 2
    4. SVM_SMOTE - Support Vectors SMOTE
    5. ADASYN - Adaptive synthetic sampling approach for imbalanced learning
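To illustrate the core idea behind the over-sampling methods above, here is a minimal sketch of SMOTE's generation step (an illustration only, not the library's implementation): each synthetic sample is placed at a random point on the line segment between a minority-class sample and one of its k nearest minority-class neighbours.

```python
import numpy as np

def smote_sketch(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between minority samples and their k nearest minority neighbours.
    Illustrative sketch only, not the imbalanced-learn implementation."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self as a neighbour
    nn = np.argsort(d, axis=1)[:, :min(k, n - 1)]
    new = np.empty((n_new, X_min.shape[1]))
    for j in range(n_new):
        i = rng.integers(n)                # pick a minority sample
        nb = X_min[rng.choice(nn[i])]      # and one of its neighbours
        gap = rng.random()                 # random point on the segment
        new[j] = X_min[i] + gap * (nb - X_min[i])
    return new
```

Because each synthetic point is a convex combination of two existing minority samples, the new points always lie between minority neighbours rather than simply duplicating them.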

All objects in this module transform a dataset D = (X, y), consisting of a (numpy) array of features and a (numpy) array of labels, into a new, resampled dataset D' = (X', y'). There are three methods to do so.

Methods:

  • fit : Computes the statistics of the target classes, i.e. determines the minority class and the number of samples in each class. Takes (X, y) as parameters.
  • sample : Returns the resampled version of the original dataset (X, y) passed to fit.
  • fit_sample : Performs both fit and sample in one call. Takes (X, y) as parameters.
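To make this contract concrete, here is a minimal sketch of a sampler exposing the fit / sample / fit_sample interface via random majority under-sampling. The class name and internals are illustrative, not the library's own class:

```python
from collections import Counter

import numpy as np

class RandomUnderSamplerSketch:
    """Illustrative sampler following the fit / sample / fit_sample
    contract described above; not the imbalanced-learn class."""

    def fit(self, X, y):
        # Target statistics: per-class counts and the minority class.
        self.stats_ = Counter(y)
        self.minority_, self.n_min_ = min(self.stats_.items(),
                                          key=lambda kv: kv[1])
        return self

    def sample(self, X, y):
        rng = np.random.default_rng(0)
        keep = []
        for cls in self.stats_:
            idx = np.flatnonzero(np.asarray(y) == cls)
            # Down-sample every class to the minority count.
            keep.extend(rng.choice(idx, self.n_min_, replace=False))
        keep = np.sort(keep)
        return np.asarray(X)[keep], np.asarray(y)[keep]

    def fit_sample(self, X, y):
        return self.fit(X, y).sample(X, y)
```

Calling `fit_sample(X, y)` on such a sampler returns a balanced (X', y') in which every class has as many samples as the original minority class.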

Example: SMOTE comparison
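One way to see what distinguishes SMOTE from plain random minority over-sampling is that random over-sampling only repeats existing minority points, while SMOTE-style interpolation generates new ones. A minimal sketch of this contrast, on a toy minority class (all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
X_min = rng.random((5, 2))                 # toy minority-class samples

# Random minority over-sampling: draw existing rows with replacement.
dup = X_min[rng.integers(len(X_min), size=20)]

# SMOTE-style over-sampling: interpolate toward the nearest
# minority-class neighbour.
d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
np.fill_diagonal(d, np.inf)                # exclude self as a neighbour
nn = d.argmin(axis=1)
i = rng.integers(len(X_min), size=20)
gap = rng.random((20, 1))                  # random position on the segment
synth = X_min[i] + gap * (X_min[nn[i]] - X_min[i])
```

Every row of `dup` coincides exactly with one of the original minority points, whereas the rows of `synth` are new points lying between minority neighbours; this is why SMOTE tends to widen the minority region rather than merely reweighting it.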

Dependencies:

  • scipy
  • numpy
  • scikit-learn

This is a work in progress. Any comments, suggestions or corrections are welcome.

References:

  • SMOTE - "SMOTE: Synthetic Minority Over-sampling Technique" by N. V. Chawla et al.
  • Borderline SMOTE - "Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning" by Hui Han, Wen-Yuan Wang and Bing-Huan Mao
  • SVM_SMOTE - "Borderline Over-sampling for Imbalanced Data Classification" by Nguyen, Cooper and Kamei