How to Install TensorFlow on Docker — Complete Step-by-Step Tutorial


Introduction

Docker provides an easy, clean, and reproducible environment for running TensorFlow. Whether you’re working on deep learning models, testing scripts, or deploying machine learning workloads in production, using Docker ensures that your TensorFlow setup remains consistent across any system.

In this article, you’ll learn how to install TensorFlow on Docker for both CPU-only and GPU-accelerated environments. The tutorial also includes practical examples, customization tips, and troubleshooting guidance.

Why Use Docker for TensorFlow?

There are several reasons to run TensorFlow inside Docker containers:

  • Environment Isolation: Avoid dependency conflicts and OS-level package issues.
  • Reproducibility: Run identical TensorFlow environments across machines and platforms.
  • Simplified GPU Setup: Use pre-built TensorFlow GPU images without manually installing CUDA or cuDNN.
  • Portable Deployment: Easily move from local testing to cloud or production environments.
  • Flexibility: Mount your code and data volumes directly into the container for development.

Docker provides an official TensorFlow image that simplifies installation and version management.
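The official image is published in several tag variants (CPU, GPU, Jupyter, and combinations). As a minimal sketch of picking one, where the version number is only an example (check Docker Hub for the tags currently available):

```shell
# Common tag variants of the official image:
#   latest, latest-gpu, latest-jupyter, latest-gpu-jupyter,
#   or a pinned release such as 2.15.0 / 2.15.0-gpu.
TF_TAG="2.15.0-gpu-jupyter"             # example pinned tag
IMAGE="tensorflow/tensorflow:${TF_TAG}"
echo "$IMAGE"
# docker pull "$IMAGE"                  # uncomment to actually pull the image
```

Pinning a version this way keeps every machine on the same build instead of whatever `latest` happens to resolve to.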

Prerequisites

Before you start, make sure you have:

  • Operating System: 64-bit Linux, macOS, or Windows (WSL2 for Windows recommended).
  • Docker: Installed and running on your system.
  • Hardware: Optional NVIDIA GPU with CUDA-capable driver for GPU workloads.
  • Internet Access: To pull the TensorFlow Docker image.

To verify Docker installation, run:

docker --version

Step 1: Install TensorFlow on Docker (CPU Version)

If you don’t need GPU acceleration, use the CPU-only version of TensorFlow.

1. Pull the TensorFlow image

docker pull tensorflow/tensorflow:latest

2. Run TensorFlow interactively

docker run -it --rm tensorflow/tensorflow \
    python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000,1000])))"

This verifies your TensorFlow installation by running a small computation.

3. Run your own scripts

You can mount your local code into the container:

docker run -it --rm -v "$PWD":/app -w /app tensorflow/tensorflow python script.py

4. Launch Jupyter Notebook

docker run -it -p 8888:8888 tensorflow/tensorflow:latest-jupyter

Jupyter prints a URL containing an access token in the container output; open that URL in your browser (or go to http://localhost:8888 and paste the token) to start working in Jupyter.
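If you prefer Docker Compose, the same Jupyter setup can be sketched in a docker-compose.yml. The service name and host directory below are illustrative, and this assumes the Jupyter-enabled images serve notebooks from /tf:

```yaml
services:
  tensorflow:
    image: tensorflow/tensorflow:latest-jupyter
    ports:
      - "8888:8888"
    volumes:
      - ./notebooks:/tf/notebooks   # persist notebooks on the host
```

Start it with docker compose up and your notebooks survive container restarts.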

Step 2: Install TensorFlow on Docker (GPU Version)

For systems with NVIDIA GPUs, follow these extra steps.

1. Install NVIDIA Container Toolkit

Run the following commands on Ubuntu:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

(The older nvidia-docker2 package and apt-key based setup are deprecated; nvidia-container-toolkit is the currently supported package.)

2. Pull GPU-enabled TensorFlow image

docker pull tensorflow/tensorflow:latest-gpu

3. Run TensorFlow with GPU support

docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \
    python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If your GPU is detected correctly, you’ll see it listed in the output.

Step 3: Build a Custom TensorFlow Docker Image

If you need custom dependencies or a specific TensorFlow version, create a Dockerfile.

Sample Dockerfile

FROM tensorflow/tensorflow:latest-gpu
RUN apt-get update && apt-get install -y python3-dev git && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --upgrade pip && pip install -r requirements.txt
COPY . /workspace
WORKDIR /workspace
CMD ["python", "main.py"]
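The COPY requirements.txt step assumes a requirements.txt file sits next to the Dockerfile. A minimal example might look like the following (package choices and versions are illustrative; TensorFlow itself is already provided by the base image, so it doesn't need to be listed):

```text
# requirements.txt (example)
pandas==2.1.4
matplotlib==3.8.2
scikit-learn==1.3.2
```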

Build and Run

docker build -t my-tf-image:1.0 .
docker run --gpus all -it --rm my-tf-image:1.0

This approach is ideal for teams that need consistent environments across multiple developers or servers.

Step 4: Troubleshooting Common Issues

Problem | Cause | Solution
GPU not detected | Missing --gpus all flag or NVIDIA Container Toolkit | Install the NVIDIA Container Toolkit and pass --gpus all to docker run
"Permission denied" on mounted volume | File ownership mismatch | Use --user $(id -u):$(id -g) when running the container
Out-of-memory errors | Limited shared memory | Add --shm-size=2g to your docker run command
TensorFlow import error | Incompatible image | Use the official tensorflow/tensorflow images
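As a sketch, the volume-permission and shared-memory fixes above can be combined into a single docker run invocation. The UID/GID, host path, and script name below are illustrative, and the command is printed rather than executed:

```shell
# Assemble the docker run flags from the troubleshooting fixes (sketch).
ARGS=(run -it --rm
  --user "1000:1000"                   # in practice: --user "$(id -u):$(id -g)"
  --shm-size=2g                        # enlarge shared memory
  -v "/home/me/project:/app" -w /app   # hypothetical host path
  tensorflow/tensorflow
  python train.py)                     # hypothetical training script
echo "docker ${ARGS[*]}"               # inspect before running for real
```

Building the argument list in an array keeps quoting intact when paths contain spaces.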

Best Practices

  • Use version-specific tags instead of latest (e.g., tensorflow/tensorflow:2.15.0).
  • Regularly update Docker images to get the latest security patches.
  • Keep your custom Dockerfiles minimal and efficient.
  • Verify GPU availability using:
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
  • Mount your project code using volumes for faster iteration.

Conclusion

Installing TensorFlow on Docker simplifies machine learning workflows, ensuring a consistent and portable setup across development and production.
Using Docker containers eliminates dependency issues and enables rapid deployment — whether you’re working locally, on a cloud server, or in a CI/CD pipeline.

For more advanced setups, explore:

  • TensorFlow Serving on Docker
  • Kubernetes Deployment
  • Model Versioning & Scaling
