[docker] Error: ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Today, while running the TensorFlow deep learning framework inside Docker, the following error appeared: libcuda.so.1: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "ACE-Net.py", line 4, in <module>
    from tensorflow.python.util import deprecation
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 72, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/usr/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
After some searching, the cause turned out to be missing GPU support at runtime: the container was not started with access to the NVIDIA driver, so libcuda.so.1 is not visible inside it. Changing the launch command from docker run to nvidia-docker run makes the program run normally.
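If you want to confirm the diagnosis first, a quick check inside the running container is sketched below (this assumes the image ships nvidia-smi, which GPU base images usually do):

ldconfig -p | grep libcuda    # lists libcuda.so.1 only when the container has access to the host driver
nvidia-smi                    # shows the host GPUs; it fails in a container started with plain `docker run`

If both commands fail, the container simply cannot see the GPU driver, which matches the ImportError above.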
Original docker run command:
docker run --rm -ti \
--name local_lenny \
-p 6006:6006 \
-v /home/data_link2/comparison/Heterogeneous_CD/data:/storage/data \
-v /homedata_link2/comparison/Heterogeneous_CD/legacy/Deep_Image_Translation:/storage/src \
llu025/lenny:gpu \
/bin/bash
Modified docker run command:
nvidia-docker run --rm -ti \
--name local_lenny \
-p 6006:6006 \
-v /home/data_link2/comparison/Heterogeneous_CD/data:/storage/data \
-v /homedata_link2/comparison/Heterogeneous_CD/legacy/Deep_Image_Translation:/storage/src \
llu025/lenny:gpu \
/bin/bash
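Note: nvidia-docker is the older wrapper. If your host runs Docker 19.03 or newer with the NVIDIA Container Toolkit installed (an assumption about your environment, not something the error itself tells you), the plain docker CLI with the --gpus flag achieves the same thing, for example:

docker run --rm -ti \
--gpus all \
--name local_lenny \
-p 6006:6006 \
-v /home/data_link2/comparison/Heterogeneous_CD/data:/storage/data \
-v /homedata_link2/comparison/Heterogeneous_CD/legacy/Deep_Image_Translation:/storage/src \
llu025/lenny:gpu \
/bin/bash

Once inside the container, a quick sanity check that TensorFlow can now see the GPU (for the TF 1.x install shown in the traceback above) is:

python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"    # should print True on a GPU-enabled container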