buildCudaEngine failed on windows 10 with TensorRT-5.0.4.3

Hi,

Win10
RTX 2080
nvidia driver version: 417.35
CUDA version: 10
CUDNN version: 7.3.1 or 7.4.2
Python version [3.6]
pytorch 1.0

I tried to import an ONNX model into TensorRT using the sample project “sampleONNXMNIST” that ships with the TensorRT-5.0.4.3 SDK. The ONNX model was trained and exported with PyTorch 1.0. Parsing with nvonnxparser succeeds, but buildCudaEngine fails. The error message is:

ERROR: c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\builder\cudnnBuilderUtils.cpp (255) - Cuda Error in nvinfer1::cudnn::findFastestTactic: 4
ERROR: c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\engine\runtime.cpp (30) - Cuda Error in nvinfer1::`anonymous-namespace'::DefaultAllocator::free: 4

The code is like this:

IBuilder* builder = createInferBuilder(gLogger);
nvinfer1::INetworkDefinition* network = builder->createNetwork();
auto parser = nvonnxparser::createParser(*network, gLogger);
if (!parser->parseFromFile(modelFile.c_str(), verbosity))
{
    std::string msg("failed to parse onnx file");
    gLogger.log(nvinfer1::ILogger::Severity::kERROR, msg.c_str());
    exit(EXIT_FAILURE);
}

// Build the engine
builder->setMaxBatchSize(maxBatchSize);
std::size_t x = builder->getMaxWorkspaceSize();
builder->setMaxWorkspaceSize(3600_MB);
printf("%zu\n", x);
samplesCommon::enableDLA(builder, gUseDLACore);
ICudaEngine* engine = builder->buildCudaEngine(*network);
assert(engine);
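In case it helps with triage, here is a minimal sketch (my own addition, assuming the CUDA runtime headers are available, as they are in the samples) that translates the numeric code from the "Cuda Error ...: 4" message into a readable string:

// Minimal sketch: decode the numeric CUDA error reported by the builder.
#include <cuda_runtime_api.h>
#include <cstdio>

int main()
{
    const int errorCode = 4; // code taken from the TensorRT builder log above
    printf("CUDA error %d: %s\n", errorCode,
           cudaGetErrorString(static_cast<cudaError_t>(errorCode)));
    return 0;
}

This is only for diagnosing which CUDA runtime error the builder is hitting; it does not change the build itself.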

I also tried several values for setMaxWorkspaceSize, still no luck, although the error message can differ. I attached the model file I used. Thanks.

I always use maxBatchSize = 1.

We are reviewing and will keep you updated.

Up

Hello,

Engineering has committed the fix for the next version of TensorRT. In the meantime, as a workaround, engineering recommends using cuDNN 7.3.0: cuDNN Archive | NVIDIA Developer
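As a quick sanity check after swapping the libraries (a minimal sketch, assuming the cuDNN headers from the archive package are on your include path), you can confirm which cuDNN version the process actually loads:

// Minimal sketch: print the cuDNN version the application is compiled against
// and the version loaded at runtime (e.g. 7300 for cuDNN 7.3.0).
#include <cudnn.h>
#include <cstdio>

int main()
{
    printf("Compiled against cuDNN %d, running cuDNN %zu\n",
           CUDNN_VERSION, cudnnGetVersion());
    return 0;
}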

I tried cuDNN 7.3.0. It passed buildCudaEngine function. Thank you!

Hi,
I did get some results from TensorRT, however the results look different from, and worse than, the PyTorch inference results. Do you know if there is anything in TensorRT, cuDNN, or ONNX that could cause such a difference? Thanks again for your help.
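For reference, this is roughly how I compare the two runs (a minimal sketch; trtOut and torchOut are hypothetical host-side float buffers holding the flattened outputs, and the setFp16Mode/setInt8Mode calls are only there to rule out reduced precision on my side):

// Minimal sketch: force a pure FP32 build and compare TensorRT output
// against the PyTorch reference element-wise.
// (builder is the nvinfer1::IBuilder* from the sample code above.)
//   builder->setFp16Mode(false);
//   builder->setInt8Mode(false);
#include <algorithm>
#include <cmath>
#include <vector>

float maxAbsDiff(const std::vector<float>& trtOut, const std::vector<float>& torchOut)
{
    float maxDiff = 0.0f;
    for (size_t i = 0; i < trtOut.size() && i < torchOut.size(); ++i)
        maxDiff = std::max(maxDiff, std::fabs(trtOut[i] - torchOut[i]));
    return maxDiff;
}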