
Commit b1d2903

wanziyu and DefTruth authored
[PaddlePaddle Hackathon4 No.186] Add PaddleDetection Models Deployment Go Examples (#1648)
* [PaddlePaddle Hackathon4 No.186] Add PaddleDetection Models Deployment Go Examples
  Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

* Fix YOLOv8 Deployment Go Example
  Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>

---------

Signed-off-by: wanziyu <ziyuwan@zju.edu.cn>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
1 parent b15df7a commit b1d2903

File tree: 9 files changed, +808 −0 lines changed

Lines changed: 57 additions & 0 deletions
@@ -0,0 +1,57 @@
English | [简体中文](README_CN.md)
# PaddleDetection Golang Deployment Example

This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and quickly deploy PaddleDetection models, including PPYOLOE, on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.

### Use Golang and CGO to deploy the PPYOLOE model

Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```

Copy the FastDeploy C API headers from the precompiled library to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```

Download the PPYOLOE model file and test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```

In `infer.go`, set `cgo CFLAGS: -I` to the FastDeploy C API directory and `cgo LDFLAGS: -L` to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
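
For reference, these two directives appear as cgo comments at the top of the `infer.go` shipped with this example; the paths below match the directory layout used above and should be adjusted to your own environment.

```go
package main

// #cgo CFLAGS: -I./fastdeploy_capi
// #cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
// #include <fastdeploy_capi/vision.h>
import "C"
```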

Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```

Compile the Go file `infer.go`.
```bash
go build infer.go
```

After compiling, use the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 0
# GPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 1
```

The visualized detection result is saved as the local image `vis_result.jpg`.
Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
[English](README.md) | 简体中文
# PaddleDetection Golang Deployment Example

This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and quickly deploy the PaddleDetection PPYOLOE model on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>1.0.4) or the FastDeploy develop version (x.x.x=0.0.0) is required to support this model.

### Use Golang and CGO to deploy the PPYOLOE model

Download the FastDeploy precompiled library into the current directory. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```

Copy the FastDeploy C API files to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```

Download the PPYOLOE model file and test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
tar xvf ppyoloe_crn_l_300e_coco.tgz
```

In `infer.go`, set the `cgo CFLAGS: -I` parameter to the C API directory and the `cgo LDFLAGS: -L` parameter to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```

Add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```

Compile the Go file `infer.go`.
```bash
go build infer.go
```

After compiling, run the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 0
# GPU inference
./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 1
```

The visualized detection result is saved as the local image `vis_result.jpg`.
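
For reference, the `-device` flag in the commands above simply selects the CPU or GPU code path inside the example; the following is condensed from the `main` function of the `infer.go` in this commit.

```go
func main() {
    flag.Parse()

    if modelDir != "" && imageFile != "" {
        if deviceType == 0 {
            // -device 0: run inference on the CPU
            CpuInfer(C.CString(modelDir), C.CString(imageFile))
        } else if deviceType == 1 {
            // -device 1: run inference on the GPU
            GpuInfer(C.CString(modelDir), C.CString(imageFile))
        }
    } else {
        fmt.Printf("Usage: ./infer -model path/to/model_dir -image path/to/image -device run_option \n")
    }
}
```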
Lines changed: 185 additions & 0 deletions
@@ -0,0 +1,185 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

// #cgo CFLAGS: -I./fastdeploy_capi
// #cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
// #include <fastdeploy_capi/vision.h>
// #include <stdio.h>
// #include <stdbool.h>
// #include <stdlib.h>
/*
#include <stdio.h>
#ifdef WIN32
const char sep = '\\';
#else
const char sep = '/';
#endif

// Compose the paths of the model, parameter and config files from the model directory.
char* GetModelFilePath(char* model_dir, char* model_file, int max_size){
    snprintf(model_file, max_size, "%s%c%s", model_dir, sep, "model.pdmodel");
    return model_file;
}

char* GetParametersFilePath(char* model_dir, char* params_file, int max_size){
    snprintf(params_file, max_size, "%s%c%s", model_dir, sep, "model.pdiparams");
    return params_file;
}

char* GetConfigFilePath(char* model_dir, char* config_file, int max_size){
    snprintf(config_file, max_size, "%s%c%s", model_dir, sep, "infer_cfg.yml");
    return config_file;
}
*/
import "C"
import (
    "flag"
    "fmt"
    "unsafe"
)

// FDBooleanToGo converts a FastDeploy C boolean into a Go bool.
func FDBooleanToGo(b C.FD_C_Bool) bool {
    var cFalse C.FD_C_Bool
    if b != cFalse {
        return true
    }
    return false
}

// CpuInfer runs PPYOLOE inference on the CPU and saves the visualized result.
func CpuInfer(modelDir *C.char, imageFile *C.char) {

    var modelFile = (*C.char)(C.malloc(C.size_t(100)))
    var paramsFile = (*C.char)(C.malloc(C.size_t(100)))
    var configFile = (*C.char)(C.malloc(C.size_t(100)))
    var maxSize = 99

    modelFile = C.GetModelFilePath(modelDir, modelFile, C.int(maxSize))
    paramsFile = C.GetParametersFilePath(modelDir, paramsFile, C.int(maxSize))
    configFile = C.GetConfigFilePath(modelDir, configFile, C.int(maxSize))

    var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
    C.FD_C_RuntimeOptionWrapperUseCpu(option)

    var model *C.FD_C_PPYOLOEWrapper = C.FD_C_CreatePPYOLOEWrapper(
        modelFile, paramsFile, configFile, option, C.FD_C_ModelFormat_PADDLE)

    if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperInitialized(model)) {
        fmt.Printf("Failed to initialize.\n")
        C.FD_C_DestroyRuntimeOptionWrapper(option)
        C.FD_C_DestroyPPYOLOEWrapper(model)
        return
    }

    var image C.FD_C_Mat = C.FD_C_Imread(imageFile)

    var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()

    if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperPredict(model, image, result)) {
        fmt.Printf("Failed to predict.\n")
        C.FD_C_DestroyRuntimeOptionWrapper(option)
        C.FD_C_DestroyPPYOLOEWrapper(model)
        C.FD_C_DestroyMat(image)
        C.free(unsafe.Pointer(result))
        return
    }

    // Draw detections with a 0.5 score threshold and save the visualization.
    var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)

    C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
    fmt.Printf("Visualized result saved in ./vis_result.jpg\n")

    C.FD_C_DestroyRuntimeOptionWrapper(option)
    C.FD_C_DestroyPPYOLOEWrapper(model)
    C.FD_C_DestroyDetectionResult(result)
    C.FD_C_DestroyMat(image)
    C.FD_C_DestroyMat(visImage)
}

// GpuInfer runs PPYOLOE inference on GPU 0 and saves the visualized result.
func GpuInfer(modelDir *C.char, imageFile *C.char) {

    var modelFile = (*C.char)(C.malloc(C.size_t(100)))
    var paramsFile = (*C.char)(C.malloc(C.size_t(100)))
    var configFile = (*C.char)(C.malloc(C.size_t(100)))
    var maxSize = 99

    modelFile = C.GetModelFilePath(modelDir, modelFile, C.int(maxSize))
    paramsFile = C.GetParametersFilePath(modelDir, paramsFile, C.int(maxSize))
    configFile = C.GetConfigFilePath(modelDir, configFile, C.int(maxSize))

    var option *C.FD_C_RuntimeOptionWrapper = C.FD_C_CreateRuntimeOptionWrapper()
    C.FD_C_RuntimeOptionWrapperUseGpu(option, 0)

    var model *C.FD_C_PPYOLOEWrapper = C.FD_C_CreatePPYOLOEWrapper(
        modelFile, paramsFile, configFile, option, C.FD_C_ModelFormat_PADDLE)

    if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperInitialized(model)) {
        fmt.Printf("Failed to initialize.\n")
        C.FD_C_DestroyRuntimeOptionWrapper(option)
        C.FD_C_DestroyPPYOLOEWrapper(model)
        return
    }

    var image C.FD_C_Mat = C.FD_C_Imread(imageFile)

    var result *C.FD_C_DetectionResult = C.FD_C_CreateDetectionResult()

    if !FDBooleanToGo(C.FD_C_PPYOLOEWrapperPredict(model, image, result)) {
        fmt.Printf("Failed to predict.\n")
        C.FD_C_DestroyRuntimeOptionWrapper(option)
        C.FD_C_DestroyPPYOLOEWrapper(model)
        C.FD_C_DestroyMat(image)
        C.free(unsafe.Pointer(result))
        return
    }

    // Draw detections with a 0.5 score threshold and save the visualization.
    var visImage C.FD_C_Mat = C.FD_C_VisDetection(image, result, 0.5, 1, 0.5)

    C.FD_C_Imwrite(C.CString("vis_result.jpg"), visImage)
    fmt.Printf("Visualized result saved in ./vis_result.jpg\n")

    C.FD_C_DestroyRuntimeOptionWrapper(option)
    C.FD_C_DestroyPPYOLOEWrapper(model)
    C.FD_C_DestroyDetectionResult(result)
    C.FD_C_DestroyMat(image)
    C.FD_C_DestroyMat(visImage)
}

var (
    modelDir   string
    imageFile  string
    deviceType int
)

func init() {
    flag.StringVar(&modelDir, "model", "", "paddle detection model to use")
    flag.StringVar(&imageFile, "image", "", "image to predict")
    flag.IntVar(&deviceType, "device", 0, "The data type of run_option is int, 0: run with cpu; 1: run with gpu")
}

func main() {
    flag.Parse()

    if modelDir != "" && imageFile != "" {
        if deviceType == 0 {
            CpuInfer(C.CString(modelDir), C.CString(imageFile))
        } else if deviceType == 1 {
            GpuInfer(C.CString(modelDir), C.CString(imageFile))
        }
    } else {
        fmt.Printf("Usage: ./infer -model path/to/model_dir -image path/to/image -device run_option \n")
        fmt.Printf("e.g. ./infer -model ./ppyoloe_crn_l_300e_coco -image 000000014439.jpg -device 0 \n")
    }

}
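
Note that `CpuInfer` and `GpuInfer` allocate the three path buffers with `C.malloc` and never free them, and `main` likewise leaks the `C.CString` copies; for this one-shot CLI that is harmless, since the operating system reclaims everything at exit. If the functions were reused inside a long-running service, a minimal cleanup sketch (not part of the original example, and assuming FastDeploy copies the path strings when the wrapper is created) would be:

```go
// Hypothetical addition inside CpuInfer/GpuInfer, right after the three
// C.malloc calls: release the C-side buffers once the function returns.
defer C.free(unsafe.Pointer(modelFile))
defer C.free(unsafe.Pointer(paramsFile))
defer C.free(unsafe.Pointer(configFile))
```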
Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
English | [简体中文](README_CN.md)
# YOLOv5 Golang Deployment Example

This directory provides `infer.go`, an example that uses CGO to call the FastDeploy C API and deploy the YOLOv5 model on CPU/GPU.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking inference on Linux as an example, the compilation test can be completed by executing the following commands in this directory. FastDeploy version 1.0.4 or above (x.x.x>1.0.4) or the develop version (x.x.x=0.0.0) is required to support this model.

### Use Golang and CGO to deploy the YOLOv5 model

Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above.
```bash
wget https://fastdeploy.bj.bcebos.com/dev/cpp/fastdeploy-linux-x64-0.0.0.tgz
tar xvf fastdeploy-linux-x64-0.0.0.tgz
```

Copy the FastDeploy C API headers from the precompiled library to the current directory.
```bash
cp -r fastdeploy-linux-x64-0.0.0/include/fastdeploy_capi .
```

Download the YOLOv5 ONNX model file and test image.
```bash
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
```

In `infer.go`, set `cgo CFLAGS: -I` to the FastDeploy C API directory and `cgo LDFLAGS: -L` to the FastDeploy dynamic library path. The dynamic library is located in the `lib` directory of the precompiled package.
```bash
cgo CFLAGS: -I./fastdeploy_capi
cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
```
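
As with the PPYOLOE example, these directives are expected to sit in the cgo comment block at the top of this example's `infer.go` (assumed here to follow the same layout as the PPYOLOE `infer.go` in this commit; adjust the paths to your environment).

```go
package main

// #cgo CFLAGS: -I./fastdeploy_capi
// #cgo LDFLAGS: -L./fastdeploy-linux-x64-0.0.0/lib -lfastdeploy
// #include <fastdeploy_capi/vision.h>
import "C"
```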

Use the following command to add the FastDeploy library path to the environment variables.
```bash
source /Path/to/fastdeploy-linux-x64-0.0.0/fastdeploy_init.sh
```

Compile the Go file `infer.go`.
```bash
go build infer.go
```

After compiling, use the following commands to obtain the prediction results.
```bash
# CPU inference
./infer -model yolov5s.onnx -image 000000014439.jpg -device 0
# GPU inference
./infer -model yolov5s.onnx -image 000000014439.jpg -device 1
```

The visualized detection result is saved as the local image `vis_result.jpg`.
