[deepstream][Original] The deepstream:5.1-21.02-triton docker cannot use PyTorch
After installing PyTorch in the deepstream:5.1-21.02-triton docker, importing torch fails with
libtorch_cuda_cpp.so: undefined symbol
Following the fix in "Unable to Import PyTorch - #4 by mchi - DeepStream SDK - NVIDIA Developer Forums":
It's caused by PyTorch version incompatibilities.
After installing torch, remove "/opt/tritonserver/lib/pytorch/" from LD_LIBRARY_PATH; torch then works. Otherwise the dynamic linker resolves the libs under /opt/tritonserver/lib/pytorch/ and the import fails due to the incompatibilities. However, after changing LD_LIBRARY_PATH, the nvinferserver plugin no longer works.
According to Release Notes :: NVIDIA Deep Learning Triton Inference Server Documentation, Triton uses a dedicated PyTorch repo, triton-inference-server/pytorch_backend, so the incompatibility may be expected.
May I know why you need torch in DS docker?
# pip3 install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio===0.8.1 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
# export LD_LIBRARY_PATH=/usr/src/tensorrt/lib:/opt/jarvis/lib/:/opt/kenlm/lib/:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
# python3 -c "import torch; print(torch.__version__)"
1.8.1+cpu
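Instead of hard-coding the full replacement LD_LIBRARY_PATH as above, the same workaround can be sketched as filtering just the Triton-bundled PyTorch directory out of the existing variable. This is a minimal sketch, not from the original post; it assumes the offending directory is exactly /opt/tritonserver/lib/pytorch (with or without a trailing slash), as stated in the forum reply.

```shell
# Sketch: drop the Triton-bundled PyTorch lib dir from LD_LIBRARY_PATH so the
# dynamic linker resolves the pip-installed torch libraries instead.
# Split on ':', filter out the offending entry, then rejoin with ':'.
LD_LIBRARY_PATH=$(echo "$LD_LIBRARY_PATH" | tr ':' '\n' \
  | grep -v '^/opt/tritonserver/lib/pytorch/*$' | paste -sd: -)
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```

After this, `python3 -c "import torch; print(torch.__version__)"` should succeed, but, as noted above, nvinferserver will stop working in the same shell because it needs that directory.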
