
Commit de38c45

Compile using -j nproc
1 parent f3d26c3 commit de38c45

File tree

1 file changed: +21 −21 lines


README.md

Lines changed: 21 additions & 21 deletions
@@ -56,7 +56,7 @@ To collect data from the camera or microphone, follow the [getting started guide
 To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. [Here's an end-to-end example](https://github.com/edgeimpulse/example-standalone-inferencing-linux/blob/master/source/collect.cpp) that you can build via:
 
 ```
-$ APP_COLLECT=1 make -j
+$ APP_COLLECT=1 make -j`nproc`
 ```
 
 ## Classifying data
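
Editor's note, not part of this commit: the hunk above refers to wrapping sensor readings in the Data Acquisition format and uploading them to the Ingestion service. A minimal Python sketch of that flow follows. It assumes the JSON variant of the Data Acquisition format, HMAC-SHA256 signing computed over the message with an all-zeros placeholder signature, the `https://ingestion.edgeimpulse.com/api/training/data` endpoint, and placeholder API/HMAC keys; the linked `collect.cpp` remains the authoritative example.

```python
import hashlib
import hmac
import json
import time

import requests  # third-party: pip install requests

API_KEY = "ei_..."  # placeholder: your project's API key
HMAC_KEY = "..."    # placeholder: your project's HMAC key

# Wrap the sensor readings in the Data Acquisition format (JSON variant).
data = {
    "protected": {"ver": "v1", "alg": "HS256", "iat": int(time.time())},
    "signature": "0" * 64,  # placeholder, replaced with the real HMAC below
    "payload": {
        "device_type": "custom-sensor",
        "interval_ms": 10,  # 100 Hz sampling
        "sensors": [{"name": "accX", "units": "m/s2"}],
        "values": [[-0.1], [0.2], [0.4]],  # one column per sensor
    },
}

# Sign the encoded message (with the zero placeholder), then swap in the signature.
encoded = json.dumps(data)
data["signature"] = hmac.new(
    HMAC_KEY.encode(), encoded.encode(), hashlib.sha256
).hexdigest()

# Upload to the Ingestion service as a training sample.
res = requests.post(
    "https://ingestion.edgeimpulse.com/api/training/data",
    headers={
        "x-api-key": API_KEY,
        "x-file-name": "sample.json",
        "x-label": "idle",
        "Content-Type": "application/json",
    },
    data=json.dumps(data),
)
print(res.status_code, res.text)
```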
@@ -74,7 +74,7 @@ To build an application:
 1. Build the application via:
 
 ```
-$ APP_CUSTOM=1 make -j
+$ APP_CUSTOM=1 make -j`nproc`
 ```
 
 Replace `APP_CUSTOM=1` with the application you want to build. See 'Hardware acceleration' below for the hardware specific flags. You probably want these.
@@ -96,7 +96,7 @@ For many targets there is hardware acceleration available. To enable this:
 Build with the following flags:
 
 ```
-$ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```
 
 **AARCH64 Linux targets**
@@ -114,31 +114,31 @@ $ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j
 1. Build with the following flags:
 
 ```
-$ APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j
+$ APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j`nproc`
 ```
 
 **x86 Linux targets**
 
 Build with the following flags:
 
 ```
-$ APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```
 
 **Intel-based Macs**
 
 Build with the following flags:
 
 ```
-$ APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```
 
 **Apple silicon based Macs**
 
 Build with the following flags:
 
 ```
-$ APP_CUSTOM=1 TARGET_MAC_ARM64=1 USE_FULL_TFLITE=1 make -j
+$ APP_CUSTOM=1 TARGET_MAC_ARM64=1 USE_FULL_TFLITE=1 make -j`nproc`
 ```
 
 ### AARCH64 with AI Acceleration
@@ -161,7 +161,7 @@ On the NVIDIA Jetson Orin you can also build with support for TensorRT, this ful
 1. Build your application with:
 
 ```
-$ APP_CUSTOM=1 TARGET_JETSON_ORIN=1 make -j
+$ APP_CUSTOM=1 TARGET_JETSON_ORIN=1 make -j`nproc`
 ```
 
@@ -183,7 +183,7 @@ On the NVIDIA Jetson you can also build with support for TensorRT, this fully le
 1. Build your application with:
 
 ```
-$ APP_CUSTOM=1 TARGET_JETSON=1 make -j
+$ APP_CUSTOM=1 TARGET_JETSON=1 make -j`nproc`
 ```
 
 Note that there is significant ramp up time required for TensorRT. The first time you run a new model the model needs to be optimized - which might take up to 30 seconds, then on every startup the model needs to be loaded in - which might take up to 5 seconds. After this, the GPU seems to be warming up, so expect full performance about 2 minutes in. To do a fair performance comparison you probably want to use the custom application (no camera / microphone overhead) and run the classification in a loop.
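
Editor's note, not part of this commit: to make the "run the classification in a loop" suggestion above concrete, here is a minimal benchmarking sketch using the Linux Python SDK (`edge_impulse_linux`) against an `.eim` build (see "Building .eim files" below). The `input_features_count` metadata field and the `timing` key in the result are assumptions based on that SDK's examples, and the zero-filled `features` vector is a stand-in for real sensor data.

```python
import time

from edge_impulse_linux.runner import ImpulseRunner  # pip install edge_impulse_linux

runner = ImpulseRunner("build/model.eim")
info = runner.init()  # triggers TensorRT model optimization/loading on first use

# Assumed metadata field; size a dummy feature vector to the model's input.
n_features = info["model_parameters"]["input_features_count"]
features = [0.0] * n_features

try:
    for i in range(200):
        start = time.perf_counter()
        res = runner.classify(features)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        # Expect the first iterations to be slow while the model warms up.
        print(f"run {i:3d}: {elapsed_ms:7.2f} ms, timing={res.get('timing')}")
finally:
    runner.stop()
```

With TensorRT the per-call latency should drop sharply over the first iterations, matching the warm-up behaviour described above.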
@@ -205,7 +205,7 @@ On the Renesas RZ/V2L you can also build with support for DRP-AI using DRPAI TVM
 1. Build your application with:
 
 ```
-$ USE_TVM=1 TARGET_RENESAS_RZV2L=1 make -j
+$ USE_TVM=1 TARGET_RENESAS_RZV2L=1 make -j`nproc`
 ```
 
 #### Renesas RZ/V2L - DRP-AI
@@ -222,7 +222,7 @@ On the Renesas RZ/V2L you can also build with support for DRP-AI, this fully lev
 1. Build your application with:
 
 ```
-$ TARGET_RENESAS_RZV2L=1 make -j
+$ TARGET_RENESAS_RZV2L=1 make -j`nproc`
 ```
 
 #### Renesas RZ/G2L
@@ -235,7 +235,7 @@ To build for the Renesas RZ/G2L is as follows:
 1. Build your application with:
 
 ```
-$ TARGET_RENESAS_RZG2L=1 make -j
+$ TARGET_RENESAS_RZG2L=1 make -j`nproc`
 ```
 
 #### BrainChip AKD1000
@@ -258,7 +258,7 @@ To build the application with support for AKD1000 NSoC, you need a Python develo
 1. Build your application with `USE_AKIDA=1`, for example:
 
 ```
-$ USE_AKIDA=1 APP_EIM=1 TARGET_LINUX_AARCH64=1 make -j
+$ USE_AKIDA=1 APP_EIM=1 TARGET_LINUX_AARCH64=1 make -j`nproc`
 ```
 
 In case of any issues during runtime, check [Troubleshooting](https://docs.edgeimpulse.com/docs/development-platforms/officially-supported-ai-accelerators/akd1000#troubleshooting) section in our official documentation for AKD1000 NSoc.
@@ -278,13 +278,13 @@ You can also build with support for TIDL, this fully leverages the Deep Learning
 3. Build the library and copy the folders into this repository.
 4. Build your (.eim) application:
 ```
-$ APP_EIM=1 TARGET_TDA4VM=1 make -j
+$ APP_EIM=1 TARGET_TDA4VM=1 make -j`nproc`
 ```
 
 To build for ONNX runtime:
 
 ```
-$ APP_EIM=1 TARGET_TDA4VM=1 USE_ONNX=1 make -j
+$ APP_EIM=1 TARGET_TDA4VM=1 USE_ONNX=1 make -j`nproc`
 ```
 
 ##### TI AM62A
@@ -295,13 +295,13 @@ $ APP_EIM=1 TARGET_TDA4VM=1 USE_ONNX=1 make -j
 4. Build your (.eim) application:
 
 ```
-$ APP_EIM=1 TARGET_AM62A=1 make -j
+$ APP_EIM=1 TARGET_AM62A=1 make -j`nproc`
 ```
 
 To build for ONNX runtime:
 
 ```
-$ APP_EIM=1 TARGET_AM62A=1 USE_ONNX=1 make -j
+$ APP_EIM=1 TARGET_AM62A=1 USE_ONNX=1 make -j`nproc`
 ```
 
 ##### TI AM68A
@@ -312,13 +312,13 @@ $ APP_EIM=1 TARGET_AM62A=1 USE_ONNX=1 make -j
 4. Build your (.eim) application:
 
 ```
-$ APP_EIM=1 TARGET_AM68A=1 make -j
+$ APP_EIM=1 TARGET_AM68A=1 make -j`nproc`
 ```
 
 To build for ONNX runtime:
 
 ```
-$ APP_CUSTOM=1 TARGET_AM68A=1 USE_ONNX=1 make -j
+$ APP_CUSTOM=1 TARGET_AM68A=1 USE_ONNX=1 make -j`nproc`
 ```
 
 #### Qualcomm SoCs with Hexagon NPU
@@ -350,15 +350,15 @@ For Qualcomm targets that have the Hexagon NPU on board (e.g. Dragonwing QCS6490
 5. Build your application with `USE_QUALCOMM_QNN=1`, for example the EIM:
 
 ```
-$ APP_EIM=1 TARGET_LINUX_AARCH64=1 USE_QUALCOMM_QNN=1 make -j
+$ APP_EIM=1 TARGET_LINUX_AARCH64=1 USE_QUALCOMM_QNN=1 make -j`nproc`
 ```
 
 ## Building .eim files
 
 To build Edge Impulse for Linux models ([eim files](https://docs.edgeimpulse.com/docs/edge-impulse-for-linux#eim-models)) that can be used by the Python, Node.js or Go SDKs build with `APP_EIM=1`:
 
 ```
-$ APP_EIM=1 make -j
+$ APP_EIM=1 make -j`nproc`
 ```
 
 The model will be placed in `build/model.eim` and can be used directly by your application.
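
Editor's note, not part of this commit: as a closing illustration of that last line, here is a minimal consumer of `build/model.eim` using the Linux Python SDK; the Node.js and Go SDKs follow the same runner pattern. The metadata fields used below (`project.name`, `model_parameters.input_features_count`) are assumptions based on the Python SDK's examples, and the zero-filled `features` list is a placeholder for real sensor data.

```python
from edge_impulse_linux.runner import ImpulseRunner  # pip install edge_impulse_linux

runner = ImpulseRunner("build/model.eim")  # path produced by the build above
try:
    model_info = runner.init()  # loads the model and returns its metadata
    print("Loaded", model_info["project"]["name"])  # assumed metadata layout

    # Placeholder input sized to the model; replace with real sensor readings.
    features = [0.0] * model_info["model_parameters"]["input_features_count"]
    result = runner.classify(features)
    print(result["result"])
finally:
    runner.stop()  # shuts down the .eim subprocess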
