Machine configuration:
Ubuntu 14.04 64-bit
128 GB RAM
GTX Titan X GPU
Software versions:
CUDA 7.0
cuDNN v4 (for CUDA 7.0)


1. Install the required development dependencies

sudo apt-get install build-essential  # basic requirement  
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler #required by caffe  

2. Install CUDA 7.0

There are two ways to install CUDA: the offline .run installer or the online .deb package. (See the official download page.)

Change to the directory containing the downloaded .deb and run the following commands:

sudo dpkg -i cuda-repo-<distro>_<version>_<architecture>.deb  
sudo apt-get update  
sudo apt-get install cuda  

The package I installed was cuda-repo-ubuntu1404-7-0-local_7.0-28_amd64.deb.
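As a quick sanity check (assuming the driver bundled with the cuda package was installed and the machine has been rebooted), nvidia-smi should already list the graphics card:
nvidia-smi  # should show the GTX Titan X and the driver version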

3. Install cuDNN v4 for CUDA 7.0

Download cudnn-7.0-linux-x64-v4.0-prod.tgz from the official download page.
Note: you need to register (a school email works) before you are allowed to download.
After downloading, change to the download directory:

tar -zxvf cudnn-7.0-linux-x64-v4.0-prod.tgz  
cd cuda  
sudo cp lib64/lib* /usr/local/cuda/lib64/  
sudo cp include/cudnn.h /usr/local/cuda/include/
Note: different cuDNN versions extract to directories with different names; adjust the last two commands to match the actual file names.
Update the symbolic links:
cd /usr/local/cuda/lib64/
sudo chmod +r libcudnn.so.4.0.7
sudo ln -sf libcudnn.so.4.0.7 libcudnn.so.4
sudo ln -sf libcudnn.so.4 libcudnn.so
sudo ldconfig
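To confirm the copies and symbolic links are in place (the file names below match the cuDNN v4 archive used here), list the cuDNN files:
ls -l /usr/local/cuda/lib64/libcudnn*  # expect libcudnn.so -> libcudnn.so.4 -> libcudnn.so.4.0.7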

4. Set environment variables

Add the CUDA environment variable to /etc/profile:
sudo gedit /etc/profile
Add the following two lines:
PATH=/usr/local/cuda/bin:$PATH
export PATH
After saving, run the following command so the change takes effect immediately:
source /etc/profile  
You also need to add the library path:
 cd /etc/ld.so.conf.d/
sudo touch cuda.conf
sudo gedit cuda.conf
Add the following line:
/usr/local/cuda/lib64
Apply the change:
sudo ldconfig  
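To check that the environment variables took effect, nvcc should now resolve from the CUDA directory:
which nvcc      # expected: /usr/local/cuda/bin/nvcc
nvcc --version  # should report release 7.0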

5. Build the CUDA samples

Go to /usr/local/cuda/samples and run the following commands to build the samples:

cd /usr/local/cuda/samples
sudo make all -j8 
Note: -j8 speeds up the build; -j4 is noticeably slower, and -j16 tends to fail.
When the build finishes, go to samples/bin/x86_64/linux/release and run deviceQuery:
cd /usr/local/cuda/samples/bin/x86_64/linux/release
./deviceQuery
If the GPU information is printed, the driver and GPU are installed correctly.
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX TITAN X"
  CUDA Driver Version / Runtime Version          7.5 / 7.0
  CUDA Capability Major/Minor version number:    5.2
  Total amount of global memory:                 12287 MBytes (12884180992 bytes)
  (24) Multiprocessors, (128) CUDA Cores/MP:     3072 CUDA Cores
  GPU Max Clock rate:                            1266 MHz (1.27 GHz)
  Memory Clock rate:                             3505 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 3145728 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 7.5, CUDA Runtime Version = 7.0, NumDevs = 1, Device0 = GeForce GTX TITAN X
Result = PASS
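As an optional extra check, the bandwidthTest sample in the same release directory should also finish with Result = PASS:
./bandwidthTest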

6. Install Intel MKL or ATLAS

I installed ATLAS; the install command is:
sudo apt-get install libatlas-base-dev  

7. Install the Python environment required by Caffe

I downloaded Anaconda2-4.0.0-Linux-x86_64.sh. Note: download the Python 2.7 version.
Change to the directory containing the file:
bash Anaconda2-4.0.0-Linux-x86_64.sh 
Note: just accept the defaults during installation; pressing Enter and answering yes at the prompts is enough.
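After opening a new terminal (or sourcing ~/.bashrc so the installer's PATH change takes effect), you can confirm that Anaconda's Python is the one being picked up:
which python      # should point into the anaconda2 directory
python --version  # should report Python 2.7.x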

8. Add the Anaconda library path

sudo gedit /etc/ld.so.conf

Append the following path to the end of /etc/ld.so.conf:
/home/username/anaconda/lib  
Note: change the path to match where Anaconda is actually installed; mine, for example, is .../anaconda2/lib.

sudo gedit ~/.bashrc
Add the following line at the end:
export LD_LIBRARY_PATH="/home/username/anaconda/lib:$LD_LIBRARY_PATH"
Apply the change:
source ~/.bashrc
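Since /etc/ld.so.conf was edited above, it is also worth refreshing the loader cache and checking that it now sees the Anaconda libraries (this extra check is my addition):
sudo ldconfig
ldconfig -p | grep anaconda  # should list libraries under the anaconda2 lib directory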

9. Install OpenCV for Anaconda

conda install -c https://conda.binstar.org/menpo opencv
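A quick way to confirm the conda OpenCV package is usable from Python:
python -c "import cv2; print(cv2.__version__)"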

10. Install the Python dependency libraries

Download the Caffe source package from the Caffe GitHub repository; the latest version is fine.
Go to the python directory under caffe-master and run the following command:
for req in $(cat requirements.txt); do pip install $req; done 
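To spot-check that the main packages from requirements.txt installed into the Anaconda environment (the module names below are a small subset I picked, not an exhaustive list):
python -c "import numpy, scipy, skimage, google.protobuf; print('deps ok')"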

11. Build Caffe
Go to the caffe-master directory and make a copy of Makefile.config.example:
cp Makefile.config.example Makefile.config  
sudo gedit Makefile.config  
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
		# /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
ANACONDA_HOME := $(HOME)/anaconda2
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		$(ANACONDA_HOME)/include/python2.7 \
		$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
#PYTHON_LIB := /usr/lib
PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

After making these changes, build:
make all -j8 
make test  
make runtest  
If an error occurs, you can run make clean, fix the problem, and rebuild.

12. Build the Python wrapper

make pycaffe  
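To verify the wrapper, add the caffe python directory to PYTHONPATH (the path below assumes the source sits at ~/caffe-master; adjust to your location) and try importing it:
export PYTHONPATH=~/caffe-master/python:$PYTHONPATH
python -c "import caffe; print('pycaffe ok')"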

~~~~~~~~~~~~~~~~~~~~~ The above covers installing and configuring Caffe
~~~~~~~~~~~~~~~~~~~~~ The following covers installing and configuring Faster R-CNN

13. Clone the py-faster-rcnn repository from GitHub

git clone --recursive https://github.com/rbgirshick/py-faster-rcnn.git

14. Build

Note: before building, you also need to modify Makefile.config, just as in step 11.
cd $py-faster-rcnn/caffe-fast-rcnn
Copy the Makefile.config from step 11 into caffe-fast-rcnn, and additionally uncomment the following line:
WITH_PYTHON_LAYER := 1
Save and build:
cd py-faster-rcnn/lib/
make -j8
cd $py-faster-rcnn/caffe-fast-rcnn
make -j8 && make pycaffe

Note: $py-faster-rcnn refers to the directory where the repository was cloned.
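If the lib build succeeded, its compiled Cython extensions should be importable from the lib directory (the module names below follow the py-faster-rcnn lib layout as I remember it, so treat them as an assumption):
cd $py-faster-rcnn/lib
python -c "import utils.cython_bbox, nms.gpu_nms; print('lib ok')"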

15. Download the Faster R-CNN detectors

cd $py-faster-rcnn
./data/scripts/fetch_faster_rcnn_models.sh
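Once the script finishes, the pre-trained detector weights should be on disk (the directory name below is what the fetch script uses, to the best of my knowledge):
ls data/faster_rcnn_models/  # should contain the pre-trained *.caffemodel files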

16. Run the demo

cd py-faster-rcnn/
./tools/demo.py

If the detection results appear, the setup is working.
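demo.py also accepts command-line options; for example (the flag names below are from memory, so check ./tools/demo.py --help if they differ) you can select the GPU and the network explicitly:
./tools/demo.py --gpu 0 --net vgg16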


 
  





