EmbML is a tool written in Python that automatically converts off-board-trained models into C++ source code files that can be compiled and executed on low-power microcontrollers. Its main goal is to produce classifier source code that runs on resource-constrained hardware systems using bare-metal programming.
This tool takes as input a classification model trained on a desktop or server computer with the WEKA or scikit-learn libraries. EmbML converts the input model into carefully crafted C++ code with support for embedded hardware, such as avoiding unnecessary use of main memory and implementing fixed-point operations for non-integer numbers.
## Input Models
EmbML accepts a trained model through the file that contains its serialized object. For instance, a classification model built with WEKA should be serialized into a file using the _ObjectOutputStream_ and _FileOutputStream_ classes (available in Java). As for scikit-learn models, they should be saved using the _dump_ function from the _pickle_ module.
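For the scikit-learn path, a minimal sketch of this step is shown below; the toy dataset and the file name `model.pkl` are illustrative choices, not part of EmbML.

```python
# Minimal sketch (not from the EmbML docs): train a small scikit-learn classifier
# and serialize it with pickle. The toy data and the file name "model.pkl" are
# illustrative only.
import pickle

from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # toy feature vectors
y = [0, 1, 1, 0]                      # toy class labels

clf = DecisionTreeClassifier()
clf.fit(X, y)

with open('model.pkl', 'wb') as f:
    pickle.dump(clf, f)  # this file is what EmbML receives as its input model
```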
## Supported Classification Models
`embml` supports off-board-trained classifiers from the following classes:
* From WEKA:
    * _MultilayerPerceptron_ for MLP classifiers;
    * _Logistic_ for logistic regression classifiers;
    * _SMO_ for SVM classifiers -- with linear, polynomial, and RBF kernels;
    * _J48_ for decision tree classifiers.
* From scikit-learn:
    * _MLPClassifier_ for MLP classifiers;
    * _LogisticRegression_ for logistic regression classifiers;
    * _LinearSVC_ for SVM classifiers with linear kernel;
    * _SVC_ for SVM classifiers -- with polynomial and RBF kernels;
    * _DecisionTreeClassifier_ for decision tree models.
## Installation
You can install `embml` from [PyPI](https://pypi.org/project/embml/):
```python
pip install embml
```
This tool is supported on Python 2.7 and Python 3.5, and depends on the `javaobj` library.
## How To Use
```python
import embml

# For scikit-learn models
embml.sklearnModel(inputModel, outputFile, opts)

# For WEKA models
embml.wekaModel(inputModel, outputFile, opts)

# opts can include:
# -rules: to generate a decision tree classifier code using if-then-else format.
# -fxp <n> <m>: to generate a classifier code that uses fixed-point format to perform real number operations. In this case, <n> is the number of integer bits and <m> is the number of fractional bits in the Qn.m format. Note that n + m + 1 must be equal to 32, 16, or 8, since one bit is used to represent signed numbers.
# -approx: to generate an MLP classifier code that employs an approximation to substitute the sigmoid as the activation function in the neurons.
# -pwl <x>: to generate an MLP classifier code that employs a piecewise linear (PWL) approximation to substitute the sigmoid as the activation function in the neurons. In this case, <x> must be equal to 2 (to use a 2-point PWL approximation) or 4 (to use a 4-point PWL approximation).

# Examples of generating decision tree classifier codes using if-then-else format.
```
If you decide to generate a classifier code using a fixed-point format, you need to include the `FixedNum.h` library available at [https://github.com/lucastsutsui/EmbML](https://github.com/lucastsutsui/EmbML).
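As a hedged sketch of a complete conversion call (assuming that `opts` is passed as a single string containing the flags listed above; the file names are illustrative), converting the pickled scikit-learn model from the earlier example could look like this:

```python
import embml

# Hypothetical file names: 'model.pkl' is the pickled scikit-learn model from the
# earlier sketch, and 'classifier.cpp' is where the generated C++ code is written.
# The opts string (its exact format is an assumption) requests an if-then-else
# decision tree that uses Q21.10 fixed point:
# 21 integer bits + 10 fractional bits + 1 sign bit = 32 bits.
embml.sklearnModel('model.pkl', 'classifier.cpp', '-rules -fxp 21 10')
```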
## Citation
If you use this tool in a scientific work, we kindly ask you to use the following reference:
```tex
```