Description
I use a Docker container built from tensorflow/tensorflow:2.5.0-gpu, and installed TensorRT myself inside this container. The build fails when I try to create an engine from my ONNX model, and I can't find any helpful information in the error message. I have uploaded my ONNX file here; could anyone help me build the engine (any environment is OK), or tell me why it won't build? Thanks!
Environment
TensorRT Version: 8.2.1.8
GPU Type: RTX 2080
Nvidia Driver Version: 465.19.01
CUDA Version: 11.2.152
CUDNN Version: 8.1.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): tensorflow/tensorflow:2.5.0-gpu
Relevant Files
```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, using_half=True, dynamic_input=False):
    # 1 is the EXPLICIT_BATCH network-creation flag bit
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        # builder.max_batch_size = 1
        config = builder.create_builder_config()
        config.max_workspace_size = 2 << 30  # 2 GiB
        if using_half:
            config.set_flag(trt.BuilderFlag.FP16)
        with open(onnx_path, 'rb') as model:
            if not parser.parse(model.read()):
                print('error: failed to parse onnx model')
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                return None
        # note: dynamic_input is accepted but never used -- a network with
        # dynamic input shapes also needs an optimization profile on config
        engine = builder.build_engine(network, config)
        return engine

engine = build_engine(r'./model.onnx', False, True)
```
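One thing that stands out in the code above: `dynamic_input=True` is passed in, but no optimization profile is ever registered, and in TensorRT 8.x building an engine for a network with dynamic input shapes fails without one. A minimal sketch of adding a profile follows; the input name and the min/opt/max shapes here are hypothetical placeholders, so substitute the real values from your ONNX model (e.g. via `network.get_input(0)`).

```python
import tensorrt as trt

def add_dynamic_profile(builder, network, config):
    """Register an optimization profile for a dynamic batch dimension.

    Assumes the first network input has a dynamic leading (batch)
    dimension; the 3x224x224 trailing dims are illustrative only.
    """
    profile = builder.create_optimization_profile()
    inp = network.get_input(0)
    profile.set_shape(inp.name,
                      min=(1, 3, 224, 224),    # smallest shape to support
                      opt=(8, 3, 224, 224),    # shape TensorRT tunes for
                      max=(32, 3, 224, 224))   # largest shape to support
    config.add_optimization_profile(profile)
```

If you call something like this before `builder.build_engine(network, config)`, the builder has concrete shape bounds to optimize against instead of failing on the unresolved `-1` dimensions.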
Update
Solved the problem temporarily by using the TensorRT NGC container (21.11-py3), and I have since removed the ONNX model.
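For anyone hitting the same issue: inside the NGC TensorRT container, `trtexec` can build the engine directly and usually prints a more specific error than the Python parser. A sketch, where the input tensor name and shapes are placeholders for whatever your model actually declares:

```shell
# Run inside nvcr.io/nvidia/tensorrt:21.11-py3; "input" and the shapes
# below are illustrative -- use the names/dims from your own ONNX model.
trtexec --onnx=model.onnx \
        --minShapes=input:1x3x224x224 \
        --optShapes=input:8x3x224x224 \
        --maxShapes=input:32x3x224x224 \
        --saveEngine=model.engine
```

The `--minShapes`/`--optShapes`/`--maxShapes` flags play the same role as an optimization profile in the Python API; omit them for models with fully static shapes.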
