README.md (21 additions, 21 deletions)
````diff
@@ -56,7 +56,7 @@ To collect data from the camera or microphone, follow the [getting started guide
 To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. [Here's an end-to-end example](https://github.com/edgeimpulse/example-standalone-inferencing-linux/blob/master/source/collect.cpp) that you can build via:

 ```
-$ APP_COLLECT=1 make -j
+$ APP_COLLECT=1 make -j`nproc`
 ```

 ## Classifying data
````
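To make the collection flow in the hunk above concrete, here is a minimal Python sketch of the same idea: wrap sensor readings in the Data Acquisition format and POST them to the Ingestion service. It follows the publicly documented payload layout; the API key, HMAC key, device fields, and sensor values are placeholders to replace with your own.

```python
# Sketch: wrap 3-axis sensor readings in the Data Acquisition format and
# upload them to the Edge Impulse Ingestion service.
# Placeholders: API_KEY and HMAC_KEY come from your project dashboard;
# device fields, sensor names/units and values are example data.
import hashlib
import hmac
import json
import time

import requests

API_KEY = "ei_..."  # project API key (placeholder)
HMAC_KEY = "..."    # project HMAC key (placeholder)

data = {
    "protected": {"ver": "v1", "alg": "HS256", "iat": int(time.time())},
    "signature": "0" * 64,  # replaced below after signing
    "payload": {
        "device_name": "aa:bb:cc:dd:ee:ff",
        "device_type": "custom-sensor",
        "interval_ms": 10,
        "sensors": [
            {"name": "accX", "units": "m/s2"},
            {"name": "accY", "units": "m/s2"},
            {"name": "accZ", "units": "m/s2"},
        ],
        # 100 readings, one [x, y, z] triple per 10 ms interval
        "values": [[0.1, 9.8, 0.0]] * 100,
    },
}

# Sign the encoded message with the project HMAC key, then swap it in
encoded = json.dumps(data)
data["signature"] = hmac.new(
    HMAC_KEY.encode(), encoded.encode(), hashlib.sha256
).hexdigest()

res = requests.post(
    "https://ingestion.edgeimpulse.com/api/training/data",
    headers={
        "x-api-key": API_KEY,
        "x-file-name": "idle01.json",
        "x-label": "idle",
        "Content-Type": "application/json",
    },
    data=json.dumps(data),
)
print(res.status_code, res.text)
```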
````diff
@@ -74,7 +74,7 @@ To build an application:
 1. Build the application via:

 ```
-$ APP_CUSTOM=1 make -j
+$ APP_CUSTOM=1 make -j`nproc`
 ```

 Replace `APP_CUSTOM=1` with the application you want to build. See 'Hardware acceleration' below for the hardware-specific flags; you will typically want to set these.
````
````diff
@@ -96,7 +96,7 @@ For many targets there is hardware acceleration available. To enable this:
 Build with the following flags:

 ```
-$ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```

 **AARCH64 Linux targets**
````
````diff
@@ -114,31 +114,31 @@ $ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j
 1. Build with the following flags:

 ```
-$ APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j
+$ APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j`nproc`
 ```

 **x86 Linux targets**

 Build with the following flags:

 ```
-$ APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```

 **Intel-based Macs**

 Build with the following flags:

 ```
-$ APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```

 **Apple silicon based Macs**

 Build with the following flags:

 ```
-$ APP_CUSTOM=1 TARGET_MAC_ARM64=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_MAC_ARM64=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```

 ### AARCH64 with AI Acceleration
````
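One caveat on the two Mac targets above: `nproc` is a GNU coreutils tool that is not present on stock macOS, so `` make -j`nproc` `` fails there unless coreutils is installed; the built-in equivalent is `make -j$(sysctl -n hw.ncpu)`.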
````diff
@@ -161,7 +161,7 @@ On the NVIDIA Jetson Orin you can also build with support for TensorRT, this ful
 1. Build your application with:

 ```
-$ APP_CUSTOM=1 TARGET_JETSON_ORIN=1 make -j
+$ APP_CUSTOM=1 TARGET_JETSON_ORIN=1 make -j`nproc`
 ```

````
````diff
@@ -183,7 +183,7 @@ On the NVIDIA Jetson you can also build with support for TensorRT, this fully le
 1. Build your application with:

 ```
-$ APP_CUSTOM=1 TARGET_JETSON=1 make -j
+$ APP_CUSTOM=1 TARGET_JETSON=1 make -j`nproc`
 ```

 Note that TensorRT requires significant ramp-up time. The first time you run a new model it has to be optimized, which might take up to 30 seconds; then on every startup the model has to be loaded, which might take up to 5 seconds. After this the GPU still seems to warm up, so expect full performance about 2 minutes in. For a fair performance comparison, use the custom application (no camera / microphone overhead) and run the classification in a loop.
````
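A sketch of such a loop in Python, going through an `.eim` build of the model (see 'Building .eim files' below) rather than the C++ custom application; the feature-vector length and iteration counts are placeholders:

```python
# Rough benchmark sketch: classify the same frame in a loop so TensorRT's
# one-off optimization and warm-up fall outside the measured window.
# Assumes: pip3 install edge_impulse_linux, and a model built with APP_EIM=1.
import time

from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner("build/model.eim")
try:
    runner.init()
    # Placeholder frame; the length must match the model's input frame size
    features = [0.0] * 9600

    for _ in range(20):   # warm-up runs, not measured
        runner.classify(features)

    n = 100
    start = time.time()
    for _ in range(n):
        runner.classify(features)
    print("avg. classification: %.2f ms" % ((time.time() - start) / n * 1000.0))
finally:
    runner.stop()
```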
````diff
@@ -205,7 +205,7 @@ On the Renesas RZ/V2L you can also build with support for DRP-AI using DRPAI TVM
 1. Build your application with:

 ```
-$ USE_TVM=1 TARGET_RENESAS_RZV2L=1 make -j
+$ USE_TVM=1 TARGET_RENESAS_RZV2L=1 make -j`nproc`
 ```

 #### Renesas RZ/V2L - DRP-AI
````
````diff
@@ -222,7 +222,7 @@ On the Renesas RZ/V2L you can also build with support for DRP-AI, this fully lev
 1. Build your application with:

 ```
-$ TARGET_RENESAS_RZV2L=1 make -j
+$ TARGET_RENESAS_RZV2L=1 make -j`nproc`
 ```

 #### Renesas RZ/G2L
````
````diff
@@ -235,7 +235,7 @@ To build for the Renesas RZ/G2L:
 1. Build your application with:

 ```
-$ TARGET_RENESAS_RZG2L=1 make -j
+$ TARGET_RENESAS_RZG2L=1 make -j`nproc`
 ```

 #### BrainChip AKD1000
````
````diff
@@ -258,7 +258,7 @@ To build the application with support for AKD1000 NSoC, you need a Python develo
 1. Build your application with `USE_AKIDA=1`, for example:

 ```
-$ USE_AKIDA=1 APP_EIM=1 TARGET_LINUX_AARCH64=1 make -j
+$ USE_AKIDA=1 APP_EIM=1 TARGET_LINUX_AARCH64=1 make -j`nproc`
 ```

 In case of any issues at runtime, check the [Troubleshooting](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-ai-accelerators/akd1000#troubleshooting) section in our official documentation for the AKD1000 NSoC.
````
````diff
@@ -278,13 +278,13 @@ You can also build with support for TIDL, this fully leverages the Deep Learning
 3. Build the library and copy the folders into this repository.
 4. Build your (.eim) application:
 ```
-$ APP_EIM=1 TARGET_TDA4VM=1 make -j
+$ APP_EIM=1 TARGET_TDA4VM=1 make -j`nproc`
 ```

 To build for ONNX runtime:

 ```
-$ APP_EIM=1 TARGET_TDA4VM=1 USE_ONNX=1 make -j
+$ APP_EIM=1 TARGET_TDA4VM=1 USE_ONNX=1 make -j`nproc`
 ```

 ##### TI AM62A
````
````diff
@@ -295,13 +295,13 @@ $ APP_EIM=1 TARGET_TDA4VM=1 USE_ONNX=1 make -j
 4. Build your (.eim) application:

 ```
-$ APP_EIM=1 TARGET_AM62A=1 make -j
+$ APP_EIM=1 TARGET_AM62A=1 make -j`nproc`
 ```

 To build for ONNX runtime:

 ```
-$ APP_EIM=1 TARGET_AM62A=1 USE_ONNX=1 make -j
+$ APP_EIM=1 TARGET_AM62A=1 USE_ONNX=1 make -j`nproc`
 ```

 ##### TI AM68A
````
````diff
@@ -312,13 +312,13 @@ $ APP_EIM=1 TARGET_AM62A=1 USE_ONNX=1 make -j
 4. Build your (.eim) application:

 ```
-$ APP_EIM=1 TARGET_AM68A=1 make -j
+$ APP_EIM=1 TARGET_AM68A=1 make -j`nproc`
 ```

 To build for ONNX runtime:

 ```
-$ APP_CUSTOM=1 TARGET_AM68A=1 USE_ONNX=1 make -j
+$ APP_CUSTOM=1 TARGET_AM68A=1 USE_ONNX=1 make -j`nproc`
 ```

 #### Qualcomm SoCs with Hexagon NPU
````
````diff
@@ -350,15 +350,15 @@ For Qualcomm targets that have the Hexagon NPU on board (e.g. Dragonwing QCS6490
 5. Build your application with `USE_QUALCOMM_QNN=1`, for example the EIM:

 ```
-$ APP_EIM=1 TARGET_LINUX_AARCH64=1 USE_QUALCOMM_QNN=1 make -j
+$ APP_EIM=1 TARGET_LINUX_AARCH64=1 USE_QUALCOMM_QNN=1 make -j`nproc`
 ```

 ## Building .eim files

 To build Edge Impulse for Linux models ([eim files](https://docs.edgeimpulse.com/docs/edge-impulse-for-linux#eim-models)) that can be used by the Python, Node.js or Go SDKs, build with `APP_EIM=1`:

 ```
-$ APP_EIM=1 make -j
+$ APP_EIM=1 make -j`nproc`
 ```

 The model will be placed in `build/model.eim` and can be used directly by your application.
````
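As a quick smoke test of the built file from the Python SDK side, here is a minimal sketch (assuming `pip3 install edge_impulse_linux`; the zero-filled feature vector is a placeholder for real sensor data, and the model-info field names are taken from the runner's reported model info):

```python
# Sketch: load the freshly built .eim and run a single classification
# through the Python SDK. The zero-filled features are placeholder data.
from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner("build/model.eim")
try:
    model_info = runner.init()
    print("Loaded", model_info["project"]["name"])

    # Frame size as reported by the runner (field name per its model info)
    count = model_info["model_parameters"]["input_features_count"]
    res = runner.classify([0.0] * count)
    print(res["result"])
finally:
    runner.stop()
```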