Complete commands to install Tensorflow GPU on Ubuntu 24.04
Hey! Before you start the installation process, make sure your GPU driver supports CUDA 12.1. If it doesn't, you can still follow the instructions, but choose your versions carefully. Run the nvidia-smi command to see the highest CUDA version your driver supports.
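As a sketch of what to look for, the CUDA version appears in the header line of nvidia-smi output and can be pulled out with grep. The sample line below is illustrative (your driver and CUDA numbers will differ); on a real machine, pipe nvidia-smi itself into the same grep:

```shell
# Illustrative header line as printed by `nvidia-smi`:
sample='| NVIDIA-SMI 530.30.02    Driver Version: 530.30.02    CUDA Version: 12.1     |'

# Extract the highest CUDA version the driver supports:
echo "$sample" | grep -oP 'CUDA Version: \K[0-9]+\.[0-9]+'
# -> 12.1

# On a real system:
#   nvidia-smi | grep -oP 'CUDA Version: \K[0-9]+\.[0-9]+'
```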

Also, here are a video tutorial and some tech support links:

- YouTube video tutorial
- CUDA & cuDNN support
- TensorRT & TensorFlow support

✅ Step 1 CUDA Installation

🔵 Install CUDA:

Open your terminal and type:

sudo apt update && sudo apt upgrade
sudo apt install build-essential
wget https://developer.download.nvidia.com/compute/cuda/12.1.1/local_installers/cuda_12.1.1_530.30.02_linux.run
sudo sh cuda_12.1.1_530.30.02_linux.run

🔵 Add Paths:

Edit the bashrc file:

nano ~/.bashrc

Add these lines at the end of the file:

export PATH=/usr/local/cuda-12.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Press Ctrl+O, then ENTER, and finally Ctrl+X to save and exit nano.

Then type:

source ~/.bashrc
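As an aside, the `${PATH:+:${PATH}}` expansion used in those export lines only appends a colon plus the old value when the variable is already set, which avoids a stray trailing colon. A quick demonstration with a throwaway variable:

```shell
unset DEMO
# DEMO is unset, so ${DEMO:+:${DEMO}} expands to nothing:
echo "/usr/local/cuda-12.1/bin${DEMO:+:${DEMO}}"
# -> /usr/local/cuda-12.1/bin

DEMO=/usr/bin
# DEMO is set, so a ":" plus its value is appended:
echo "/usr/local/cuda-12.1/bin${DEMO:+:${DEMO}}"
# -> /usr/local/cuda-12.1/bin:/usr/bin
```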

Next, create another file:

sudo nano /etc/ld.so.conf

And paste this line into it:

/usr/local/cuda-12.1/lib64

Press Ctrl+O, ENTER, and Ctrl+X to save and exit.

Test the installation:

sudo ldconfig
echo $PATH
echo $LD_LIBRARY_PATH
sudo ldconfig -p | grep cuda
nvcc --version

✅ Step 2 cuDNN Installation

🔵 Install cuDNN

Go to the NVIDIA cuDNN archive and download cuDNN v8.9.7 (December 5th, 2023) for CUDA 12.x (direct download link).

Extract the archive:

tar -xvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
cd cudnn-linux-x86_64-8.9.7.29_cuda12-archive

Install cuDNN:

sudo cp include/cudnn*.h /usr/local/cuda-12.1/include
sudo cp lib/libcudnn* /usr/local/cuda-12.1/lib64
sudo chmod a+r /usr/local/cuda-12.1/include/cudnn*.h /usr/local/cuda-12.1/lib64/libcudnn*
cd ..
ls -l /usr/local/cuda-12.1/lib64/libcudnn*
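To confirm which cuDNN version the copied headers declare, you can read the version macros out of cudnn_version.h with awk. A self-contained sketch (the heredoc stands in for the real header; on your system, point awk at /usr/local/cuda-12.1/include/cudnn_version.h instead):

```shell
# Stand-in for the real cudnn_version.h (these values match cuDNN 8.9.7):
cat > /tmp/cudnn_version_sample.h <<'EOF'
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 7
EOF

# Assemble major.minor.patch from the macros:
awk '/#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)/ {v[$2]=$3}
     END {print v["CUDNN_MAJOR"] "." v["CUDNN_MINOR"] "." v["CUDNN_PATCHLEVEL"]}' \
    /tmp/cudnn_version_sample.h
# -> 8.9.7
```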

🔵 Test cuDNN:

Create a test file:

nano test_cudnn.c

Add the following code:

#include <cudnn.h>
#include <stdio.h>

int main(void) {
    cudnnHandle_t handle;
    cudnnStatus_t status = cudnnCreate(&handle);
    if (status == CUDNN_STATUS_SUCCESS) {
        printf("cuDNN successfully initialized.\n");
        cudnnDestroy(handle);  /* only destroy a handle that was actually created */
    } else {
        printf("cuDNN initialization failed: %s\n", cudnnGetErrorString(status));
    }
    return 0;
}

Press Ctrl+O, ENTER, and Ctrl+X to save and exit.

Compile and run the test:

gcc -o test_cudnn test_cudnn.c -I/usr/local/cuda-12.1/include -L/usr/local/cuda-12.1/lib64 -lcudnn
./test_cudnn

✅ Step 3 TensorRT Installation

Visit the NVIDIA TensorRT download site and download TensorRT 8.6 (direct link).

Extract TensorRT:

tar -xzvf TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz
sudo mv TensorRT-8.6.1.6 /usr/local/TensorRT-8.6.1

Edit your paths again:

nano ~/.bashrc

Add these two lines at the end of the file:

export PATH=/usr/local/cuda-12.1/bin:/usr/local/TensorRT-8.6.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64:/usr/local/TensorRT-8.6.1/lib:$LD_LIBRARY_PATH

Press Ctrl+O, ENTER, and Ctrl+X to save and exit.

Then type:

source ~/.bashrc

Fix the library symlinks:

sudo ldconfig
sudo rm /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn*.so.8
sudo ln -s /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.x.x /usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8

Note that the rm removes the .so.8 link for every cuDNN library, so repeat the ln -s line for each of them, replacing 8.x.x with your installed cuDNN version.
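Instead of typing one ln -s per library, the relinking can be sketched as a loop that globs the fully versioned files and recreates each .so.8 link. This is a hedged sketch, assuming the directory layout shown above; the -e guard makes it a no-op when the glob matches nothing, and you would prepend sudo to the ln when targeting the system directory:

```shell
# Recreate every libcudnn*.so.8 symlink from its fully versioned file.
LIBDIR=/usr/local/cuda-12.1/targets/x86_64-linux/lib
for lib in "$LIBDIR"/libcudnn*.so.8.*; do
    [ -e "$lib" ] || continue          # skip if the glob matched nothing
    # Strip the trailing ".so.8.<minor>.<patch>" and re-append ".so.8":
    ln -sf "$lib" "${lib%.so.8.*}.so.8"
done
```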

✅ Step 4 Miniconda Installation

Download and install Miniconda:

wget https://repo.anaconda.com/miniconda/Miniconda3-py310_24.4.0-0-Linux-x86_64.sh
bash ./Miniconda3-py310_24.4.0-0-Linux-x86_64.sh

Restart the terminal.

✅ Step 5 Environment Setup

Create the environment:

conda create --name tf_gpu python=3.9
conda activate tf_gpu

Install TensorFlow:

python3 -m pip install 'tensorflow[and-cuda]'

Verify the installation:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

To install PyTorch, visit this link, select CUDA version 12.1, and run the command provided.

✅ Step 6 Making it Work with JupyterLab

Open the terminal and run:

pip install jupyterlab
cd ~
mkdir ml
mkdir tf_gpu
jupyter lab
