Jetson-inference Setup Guide

Introduction

This guide walks you through setting up and compiling the Jetson Inference project, a collection of tools and libraries for real-time video analytics on NVIDIA Jetson platforms.

https://github.com/dusty-nv/jetson-inference

You have two options for setting up the Jetson Inference project:

  • Option 1: Run the Docker Container

  • Option 2: Build the project from source

Run the Docker Container

First, clone the project repository along with its submodules:

git clone --recursive https://github.com/dusty-nv/jetson-inference

Then change into the newly created jetson-inference directory and run the container:

cd jetson-inference 
docker/run.sh 

The script will automatically pull the container image and start it; this can take a few minutes depending on your network connection. The download only happens during the first setup and will not be repeated.

If you would rather build the container image locally instead of pulling the pre-built one, run:

docker/build.sh 

The container is now ready to use.
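
If you need to share files between the host and the container, docker/run.sh also accepts a --volume flag that maps a host path to a container path. As a minimal sketch, assuming a hypothetical host directory /home/user/my_data:

docker/run.sh --volume /home/user/my_data:/my_data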

Build the project from source

Before starting

Before you begin, make sure the following packages are installed:

  • Git: For cloning the repository.
  • CMake: For building the project.
  • Python3 and Python3-dev: For building the Python bindings.
  • NumPy: Adds array and matrix support for Python; used by the Python bindings.

First, update your package list:

sudo apt-get update

Then, install Git and CMake:

sudo apt-get install git cmake

Install the necessary development packages:

sudo apt-get install libpython3-dev python3-numpy
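
Optionally, you can confirm the tools are available by checking their versions:

python3 --version
git --version
cmake --version
python3 -c "import numpy; print(numpy.__version__)"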

Clone the Repository

Navigate to your chosen directory and clone the project:

git clone https://github.com/dusty-nv/jetson-inference

Then change into the newly created jetson-inference directory and initialize the project's submodules:

cd jetson-inference
git submodule update --init 
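
To confirm the submodules were fetched, you can list their status; each entry should show a commit hash without a leading minus sign (a minus marks an uninitialized submodule):

git submodule status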

Next, while still in the jetson-inference directory, create a folder called build and run cmake to configure the project and download the necessary files:

mkdir build 
cd build 
cmake ../  

The Model Downloader tool will then launch automatically. The project comes with various pre-trained network models, and you can choose which one(s) to download.

You can also re-run the Model Downloader tool later using the following commands:

cd jetson-inference/tools
./download-models.sh 

Next, the PyTorch Installer will appear. PyTorch is used to re-train networks, which we will not need in this project, so you can skip this step.

Finally, compile and install the project by running the following commands from the build directory:

cd jetson-inference/build
make 
sudo make install 
sudo ldconfig
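
To verify the library was registered with the dynamic linker, you can search the linker cache. The library name below is an assumption based on the project name:

ldconfig -p | grep jetson-inference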

The project is now built and installed.

Run the project

After successfully setting up the Jetson Inference project, you can now start using it by executing the command:

docker/run.sh 
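
Once inside the container (or from the jetson-inference/build/aarch64/bin directory if you built from source), you can try one of the bundled sample programs. As a minimal sketch, assuming you downloaded the default models during setup, the following classifies one of the included test images:

./imagenet images/orange_0.jpg images/test/output_0.jpg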

To exit the container, just execute the command:

exit

And that's it! You have successfully set up and compiled the Jetson Inference project. You can now use the provided tools and libraries for real-time video analytics on your NVIDIA Jetson platform.
