Chapter 1: General Overview
Checking the operating system
nvidia@jetson-0423718017159:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic
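On minimal images `lsb_release` is sometimes not installed. As an alternative (not in the original text), the same information can be read directly from /etc/os-release, which ships on any modern Ubuntu image:

```shell
# Read the distribution name and version straight from /etc/os-release.
osinfo=$(grep -E '^(NAME|VERSION)=' /etc/os-release)
echo "$osinfo"
```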
Using VNC
sudo apt install code


Checking the deep learning software development kit (JetPack/L4T) version
deep@deep:~$ head -n 1 /etc/nv_tegra_release
R32 (release), REVISION: 3.1, GCID: 18284527, BOARD: t186ref, EABI: aarch64, DATE: Mon Dec 16 21:38:34 UTC 2019
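The release line above encodes the L4T version (here R32 revision 3.1, i.e. L4T 32.3.1). A small helper (not from the original text) can extract it; the sample line is hard-coded below so the snippet runs anywhere, but on the board you would read /etc/nv_tegra_release instead:

```shell
# Derive the L4T version string from the first line of /etc/nv_tegra_release.
# The sample line is the output shown above.
line='R32 (release), REVISION: 3.1, GCID: 18284527, BOARD: t186ref, EABI: aarch64, DATE: Mon Dec 16 21:38:34 UTC 2019'
l4t=$(echo "$line" | sed -n 's/.*R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/\1.\2/p')
echo "L4T $l4t"   # prints "L4T 32.3.1"
```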
Once the installation completes, update the packages:
$ sudo apt update && sudo apt upgrade -y
If you encounter an error such as GPG error: file:/var/cuda-repo-9-0-local Release: The following signatures couldn't...., run the following command:
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F60F4B3D7FA2AF80
To run the sample program, execute the following commands. Once it starts, a video plays back with vehicles being detected and analyzed in real time.
sudo ./jetson_clocks.sh
cd ~/tegra_multimedia_api/samples/backend
./backend 1 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --trt-deployfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.prototxt --trt-modelfile ../../data/Model/GoogleNet_one_class/GoogleNet_modified_oneClass_halfHD.caffemodel --trt-forcefp32 0 --trt-proc-interval 1 -fps 10
Installing TensorFlow on the NVIDIA Jetson TX2 (JetPack 3.2)
First, install Java 8:
$ sudo apt-get update
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
Install the dependencies with the following command:
$ sudo apt-get install python3-numpy swig python3-dev python3-pip python3-wheel -y
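A quick sanity check (an addition, not part of the original walkthrough): confirm that the Python 3 toolchain installed above is in place by reporting the pip3 and NumPy versions, with fallbacks so the snippet runs even before installation:

```shell
# Report pip3 and NumPy versions; fall back to a message if either is missing.
pipver=$(pip3 --version 2>/dev/null || echo "pip3 not installed")
npver=$(python3 -c 'import numpy; print(numpy.__version__)' 2>/dev/null || echo "numpy not installed")
echo "pip3:  $pipver"
echo "numpy: $npver"
```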
Install Bazel:
$ wget --no-check-certificate https://github.com/bazelbuild/bazel/releases/download/0.10.0/bazel-0.10.0-dist.zip
$ unzip bazel-0.10.0-dist.zip -d bazel-0.10.0-dist
$ cd bazel-0.10.0-dist
$ ./compile.sh
$ sudo cp output/bazel /usr/local/bin
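After copying the binary, a hypothetical post-install check (not in the original text) confirms that bazel is on the PATH before moving on:

```shell
# Report whether the bazel binary is reachable on PATH.
if command -v bazel >/dev/null 2>&1; then
    bazel version | head -n 1
    status="installed"
else
    status="missing"
fi
echo "bazel: $status"
```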
Clone TensorFlow from GitHub:
$ git clone https://github.com/tensorflow/tensorflow
Configure the TensorFlow build as follows:
$ cd tensorflow
$ ./configure
You have bazel 0.10.0- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Found possible Python library paths:
  /usr/local/lib/python3.5/dist-packages
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python3.5/dist-packages]
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]:
jemalloc as malloc support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
No Google Cloud Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
No Hadoop File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
No Amazon S3 File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with Apache Kafka Platform support? [y/N]: n
No Apache Kafka Platform support will be enabled for TensorFlow.
Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]:
No GDR support will be enabled for TensorFlow.
Do you wish to build TensorFlow with VERBS support? [y/N]:
No VERBS support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]:
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-9.0
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]:
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-9.0]:
Do you wish to build TensorFlow with TensorRT support? [y/N]:
No TensorRT support will be enabled for TensorFlow.
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,5.2]
Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl         # Build with MKL support.
    --config=monolithic  # Config for mostly static monolithic build.
    --config=tensorrt    # Build with TensorRT support.
Configuration finished

Note: the Jetson TX2 GPU has CUDA compute capability 6.2, so enter 6.2 at the compute-capability prompt instead of accepting the default 3.5,5.2.
Build TensorFlow:
$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
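If you want to install the wheel you just built (rather than a prebuilt one), TensorFlow 1.x source builds produce the wheel in a second step. The sketch below guards the call so it only does anything inside a built source tree; /tmp/tensorflow_pkg is an arbitrary output directory:

```shell
# Run the packaging script produced by the bazel build, if present.
PKG=bazel-bin/tensorflow/tools/pip_package/build_pip_package
if [ -x "$PKG" ]; then
    "$PKG" /tmp/tensorflow_pkg
    result="wheel written to /tmp/tensorflow_pkg"
else
    result="run from the tensorflow source tree after the bazel build"
fi
echo "$result"
```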
https://devtalk.nvidia.com/default/topic/1031300/tensorflow-1-7-wheel-with-jetpack-3-2-/
Download the TensorFlow 1.8 wheel from the URL above.
Run the following command to install TensorFlow:
$ pip3 install tensorflow-1.8.0-cp35-cp35m-linux_aarch64.whl
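As a final sanity check (an addition to the original steps), import TensorFlow and print its version; the snippet falls back to a message when TensorFlow is not importable, so it can be run safely at any point:

```shell
# Print the installed TensorFlow version, or a fallback message.
msg=$(python3 -c 'import tensorflow as tf; print("TensorFlow", tf.__version__)' 2>/dev/null \
      || echo "TensorFlow is not importable yet")
echo "$msg"
```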