50 changes: 23 additions & 27 deletions README.md
@@ -1,19 +1,29 @@
UnbalancedDataset
=================

UnbalancedDataset is a Python package offering a number of re-sampling techniques commonly used in datasets showing a strong between-class imbalance.

[![Code Health](https://landscape.io/github/fmfn/UnbalancedDataset/master/landscape.svg?style=flat)](https://landscape.io/github/fmfn/UnbalancedDataset/master)

Installation
============

### Dependencies

* numpy
* scikit-learn

### Installation

UnbalancedDataset is not currently available on PyPI. To install the package, clone the repository and run the setup.py file. Use the following commands to get a copy from GitHub and install all dependencies:

    git clone https://github.com/fmfn/UnbalancedDataset.git
    cd UnbalancedDataset
    python setup.py install
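If the installation succeeds, the package should be importable. A quick sanity check, assuming the top-level module keeps the name `unbalanced_dataset` used in this repository's source tree:

    python -c "import unbalanced_dataset"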

About
=====

Most classification algorithms will only perform optimally when the number of samples in each class is roughly the same. Highly skewed datasets, where the minority class is heavily outnumbered by one or more classes, have proven to be a challenge while at the same time becoming more and more common.

@@ -48,33 +58,19 @@ Below is a list of the methods currently implemented in this module.
1. EasyEnsemble
2. BalanceCascade

Example:
![SMOTE comparison](http://i.imgur.com/s8JHWPp.png)
The different algorithms are presented in the [following notebook](https://github.com/glemaitre/UnbalancedDataset/blob/master/notebook/Notebook_UnbalancedDataset.ipynb).

This is a work in progress. Any comments, suggestions or corrections are welcome.
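As a quick illustration, here is a minimal usage sketch for SMOTE on a skewed toy dataset. It assumes the resamplers expose a scikit-learn-style `fit_transform(x, y)` wrapper around `resample()`, and that the constructor arguments mirror those of `OverSampler` in the diff below; both are assumptions, so check the notebook above for the exact API:

    from sklearn.datasets import make_classification
    from unbalanced_dataset import SMOTE

    # Toy problem: two classes with a 10:1 imbalance
    x, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.9, 0.1], random_state=42)

    # Generate synthetic minority samples until both classes are the same size
    smote = SMOTE(ratio=1., random_state=42, verbose=True)
    smox, smoy = smote.fit_transform(x, y)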

Dependencies:
* numpy
* scikit-learn

References:

* NearMiss - "kNN approach to unbalanced data distributions: A case study involving information extraction" by Zhang et al.

* CNN - "Addressing the Curse of Imbalanced Training Sets: One-Sided Selection" by Kubat et al.

* One-Sided Selection - "Addressing the Curse of Imbalanced Training Sets: One-Sided Selection" by Kubat et al.

* NCL - "Improving identification of difficult small classes by balancing class distribution" by Laurikkala et al.

* SMOTE - "SMOTE: synthetic minority over-sampling technique" by Chawla et al.

* Borderline SMOTE - "Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning" by Han et al.

* SVM_SMOTE - "Borderline Over-sampling for Imbalanced Data Classification" by Nguyen et al.

* SMOTE + Tomek - "Balancing training data for automated annotation of keywords: a case study" by Batista et al.

* SMOTE + ENN - "A study of the behavior of several methods for balancing machine learning training data" by Batista et al.

* EasyEnsemble & BalanceCascade - "Exploratory Understanding for Class-Imbalance Learning" by Liu et al.
1. NearMiss - ["kNN approach to unbalanced data distributions: A case study involving information extraction"](http://web0.site.uottawa.ca:4321/~nat/Workshop2003/jzhang.pdf), by Zhang et al., 2003.
1. CNN - ["Addressing the Curse of Imbalanced Training Sets: One-Sided Selection"](http://sci2s.ugr.es/keel/pdf/algorithm/congreso/kubat97addressing.pdf), by Kubat et al., 1997.
1. One-Sided Selection - ["Addressing the Curse of Imbalanced Training Sets: One-Sided Selection"](http://sci2s.ugr.es/keel/pdf/algorithm/congreso/kubat97addressing.pdf), by Kubat et al., 1997.
1. NCL - ["Improving identification of difficult small classes by balancing class distribution"](http://sci2s.ugr.es/keel/pdf/algorithm/congreso/2001-Laurikkala-LNCS.pdf), by Laurikkala et al., 2001.
1. SMOTE - ["SMOTE: synthetic minority over-sampling technique"](https://www.jair.org/media/953/live-953-2037-jair.pdf), by Chawla et al., 2002.
1. Borderline SMOTE - ["Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning"](http://sci2s.ugr.es/keel/keel-dataset/pdfs/2005-Han-LNCS.pdf), by Han et al., 2005.
1. SVM_SMOTE - ["Borderline Over-sampling for Imbalanced Data Classification"](https://www.google.fr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CDAQFjABahUKEwjH7qqamr_HAhWLthoKHUr0BIo&url=http%3A%2F%2Fousar.lib.okayama-u.ac.jp%2Ffile%2F19617%2FIWCIA2009_A1005.pdf&ei=a7zZVYeNDIvtasrok9AI&usg=AFQjCNHoQ6oC_dH1M1IncBP0ZAaKj8a8Cw&sig2=lh32CHGjs5WBqxa_l0ylbg), by Nguyen et al., 2011.
1. SMOTE + Tomek - ["Balancing training data for automated annotation of keywords: a case study"](http://www.icmc.usp.br/~gbatista/files/wob2003.pdf), by Batista et al., 2003.
1. SMOTE + ENN - ["A study of the behavior of several methods for balancing machine learning training data"](http://www.sigkdd.org/sites/default/files/issues/6-1-2004-06/batista.pdf), by Batista et al., 2004.
1. EasyEnsemble & BalanceCascade - ["Exploratory Understanding for Class-Imbalance Learning"](http://cse.seu.edu.cn/people/xyliu/publication/tsmcb09.pdf), by Liu et al., 2009.
55 changes: 42 additions & 13 deletions unbalanced_dataset/over_sampling.py
@@ -16,7 +16,7 @@ class OverSampler(UnbalancedDataset):
*Supports multiple classes.
"""

def __init__(self, ratio=1., random_state=None, verbose=True):
def __init__(self, ratio=1., method='replacement', random_state=None, verbose=True, **kwargs):
"""
:param ratio:
Number of samples to draw with respect to the number of samples in
@@ -34,6 +34,11 @@ def __init__(self, ratio=1., random_state=None, verbose=True):
random_state=random_state,
verbose=verbose)

        self.method = method
        if self.method == 'gaussian-perturbation':
            # Mean and standard deviation of the Gaussian noise added to
            # the duplicated samples, taken from **kwargs with defaults
            self.mean_gaussian = kwargs.pop('mean_gaussian', 0.0)
            self.std_gaussian = kwargs.pop('std_gaussian', 1.0)

def resample(self):
"""
Over samples the minority class by randomly picking samples with
@@ -60,18 +65,42 @@ def resample(self):
else:
num_samples = int(self.ratio * self.ucd[key])

            # Pick some elements at random
            seed(self.rs)
            indx = randint(low=0, high=self.ucd[key], size=num_samples)

            # Concatenate to the majority class
            overx = concatenate((overx,
                                 self.x[self.y == key],
                                 self.x[self.y == key][indx]), axis=0)

            overy = concatenate((overy,
                                 self.y[self.y == key],
                                 self.y[self.y == key][indx]), axis=0)
            if self.method == 'replacement':
                # Pick some elements at random, with replacement
                seed(self.rs)
                indx = randint(low=0, high=self.ucd[key], size=num_samples)

                # Concatenate to the majority class
                overx = concatenate((overx,
                                     self.x[self.y == key],
                                     self.x[self.y == key][indx]), axis=0)

                overy = concatenate((overy,
                                     self.y[self.y == key],
                                     self.y[self.y == key][indx]), axis=0)

            elif self.method == 'gaussian-perturbation':
                # Pick the indices of the samples which will be perturbed
                seed(self.rs)
                indx = randint(low=0, high=self.ucd[key], size=num_samples)

                # Generate the new samples by adding Gaussian noise to each
                # selected sample; the third argument of normal() is the
                # output shape, not the sample itself
                sam_pert = []
                for i in indx:
                    pert = np.random.normal(self.mean_gaussian,
                                            self.std_gaussian,
                                            self.x[self.y == key][i].shape)
                    sam_pert.append(self.x[self.y == key][i] + pert)

                # Convert the list to a numpy array
                sam_pert = np.array(sam_pert)

                # Concatenate to the majority class
                overx = concatenate((overx,
                                     self.x[self.y == key],
                                     sam_pert), axis=0)

                overy = concatenate((overy,
                                     self.y[self.y == key],
                                     self.y[self.y == key][indx]), axis=0)

if self.verbose:
print("Over-sampling performed: " + str(Counter(overy)))
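For review purposes, the perturbation branch boils down to the following self-contained sketch (a hypothetical helper, not part of the module; `x_min` stands for the minority-class rows):

    import numpy as np

    def gaussian_oversample(x_min, num_samples, mean=0.0, std=1.0,
                            random_state=None):
        """Draw num_samples rows from x_min and add Gaussian noise to each."""
        rng = np.random.RandomState(random_state)

        # Pick the samples to perturb, with replacement
        indx = rng.randint(low=0, high=x_min.shape[0], size=num_samples)

        # One noise vector per selected sample; note that the size argument
        # must be a shape, not the sample array itself
        pert = rng.normal(mean, std, size=(num_samples, x_min.shape[1]))

        return x_min[indx] + pert

Drawing all the noise in one vectorized call avoids the Python-level loop in the patch. With the constructor above, the new path would presumably be driven as `OverSampler(ratio=1., method='gaussian-perturbation', mean_gaussian=0., std_gaussian=0.5)`.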