Part 5 of the Docker Roadmap Series

Let’s talk about one of Docker’s greatest gifts to humanity: the ability to run complex software without spending three days reading installation guides written by people who apparently think “just compile from source” is helpful advice.

You know what I’m talking about. Remember the good old days when setting up a PostgreSQL database meant downloading installers, configuring users, setting up data directories, tweaking config files, and inevitably breaking something that took hours to debug? Well, those days are dead and buried, and Docker killed them.

The Beautiful World of Pre-Built Images

Here’s the thing that blew my mind when I first discovered Docker: someone else has already done the hard work. That PostgreSQL database you need? There’s an image for that. Redis cache? Image. Elasticsearch cluster? Image. That weird legacy Java application that only runs on a specific version of Tomcat with exactly three environment variables set? Believe it or not, probably an image.

Docker Hub alone has millions of images, and most of them are maintained by people who actually know what they’re doing (unlike that script you cobbled together at 2 AM last month). It’s like having a team of system administrators who’ve already figured out all the configuration headaches for you.

But here’s the catch: with great power comes great responsibility to not completely mess it up. And trust me, there are plenty of ways to mess it up.

Databases: Because Your App Needs Somewhere to Put Its Stuff

Let’s start with the most common use case: databases. If you’ve ever tried to install MySQL on three different operating systems and gotten three completely different error messages, you’ll appreciate how beautifully simple this is:

PostgreSQL: The Workhorse

PostgreSQL is like the Swiss Army knife of databases - it does everything well and doesn’t complain much. Here’s how to get one running in about 10 seconds:

# The basic "I just need a database" approach
docker run --name my-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_DB=myapp \
  -p 5432:5432 \
  -d postgres:15
 
# Wait about 5 seconds for it to start up
docker logs my-postgres --follow
# Look for: "database system is ready to accept connections"
# Press Ctrl+C to exit logs

Boom. You now have a fully functional PostgreSQL 15 database running on port 5432. No installers, no package managers fighting with each other, no mysterious permission errors. Just a database that works.
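The image even bundles the psql client, so you can sanity-check the database without installing anything on your host (this uses the image's default postgres superuser):

# Quick sanity check with the psql client baked into the image
docker exec -it my-postgres psql -U postgres -d myapp -c 'SELECT version();'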

But let’s be honest, running a database without persistent storage is like buying a car without wheels - technically possible, but utterly pointless. Remember our volume lessons from the previous article?

# The "I actually want to keep my data" approach
docker volume create postgres-data
 
docker run --name persistent-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_DB=myapp \
  -e POSTGRES_USER=myuser \
  -v postgres-data:/var/lib/postgresql/data \
  -p 5432:5432 \
  -d postgres:15
 
# Connect to it from your host machine
# (assuming you have psql installed locally)
psql -h localhost -U myuser -d myapp
# Enter the password when prompted

Pro tip: Always check the image’s documentation on Docker Hub. The PostgreSQL image has dozens of environment variables you can use to customize the setup. Want to initialize the database with a specific schema? Mount a .sql file to /docker-entrypoint-initdb.d/. Want custom PostgreSQL configuration? Mount your postgresql.conf file. The maintainers have thought of everything.
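Here's a quick sketch of that first trick, assuming you have an init.sql sitting in your current directory (the filename is just an example):

# Scripts in /docker-entrypoint-initdb.d/ run once, on first startup
# with an empty data directory
docker run --name seeded-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_DB=myapp \
  -v $(pwd)/init.sql:/docker-entrypoint-initdb.d/init.sql:ro \
  -p 5432:5432 \
  -d postgres:15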

MySQL: For When You’re Feeling Nostalgic

MySQL is like that reliable old friend who’s been around forever and knows all your secrets. Sometimes you need that familiarity:

# MySQL with all the bells and whistles
docker run --name mysql-server \
  -e MYSQL_ROOT_PASSWORD=rootpassword \
  -e MYSQL_DATABASE=myapp \
  -e MYSQL_USER=appuser \
  -e MYSQL_PASSWORD=apppassword \
  -v mysql-data:/var/lib/mysql \
  -p 3306:3306 \
  -d mysql:8.0
 
# Connect using the MySQL client
# mysql -h 127.0.0.1 -u appuser -p myapp

One thing that drives me nuts about MySQL is its tendency to be picky about authentication methods. If your application is having trouble connecting, you might need to tweak the authentication:

# Sometimes you need the old authentication method for legacy apps.
# Anything after the image name gets passed to mysqld by the image's
# entrypoint script - don't override the entrypoint itself, or the
# MYSQL_* environment variables will be silently ignored.
docker run --name legacy-mysql \
  -e MYSQL_ROOT_PASSWORD=rootpassword \
  -e MYSQL_DATABASE=legacyapp \
  -p 3306:3306 \
  -d mysql:8.0 \
  --default-authentication-plugin=mysql_native_password
 
# Note: this flag (and mysql_native_password itself) is deprecated
# upstream and removed in MySQL 8.4+, so this trick is for 8.0 images

Redis: When You Need Things to Be Fast

Redis is the sports car of the data world - everything is fast, everything is in memory, and if you crash, you better hope you had backups:

# Basic Redis - simple and fast
docker run --name redis-cache \
  -p 6379:6379 \
  -d redis:7-alpine
 
# Redis with persistence (because losing cache data is annoying)
docker run --name persistent-redis \
  -v redis-data:/data \
  -p 6379:6379 \
  -d redis:7-alpine redis-server --appendonly yes
 
# Test it works
docker exec -it redis-cache redis-cli
# Inside the Redis CLI:
# SET mykey "Hello World"
# GET mykey
# Expected output: "Hello World"
# Type 'exit' to leave

The redis:7-alpine image is particularly nice because Alpine Linux is tiny, making your Redis container much smaller and faster to download. Unless you have a specific reason to use the full Debian-based image, Alpine is usually the way to go.
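Don't take my word for it - pull both variants and compare (exact sizes vary by release, but the gap is dramatic):

# Compare the Debian-based and Alpine-based variants side by side
docker pull redis:7
docker pull redis:7-alpine
docker images redis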

MongoDB: Document-Based Chaos

MongoDB is like that friend who insists on doing everything their own way, and somehow it works:

# MongoDB with authentication (recommended)
docker run --name mongo-db \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password123 \
  -e MONGO_INITDB_DATABASE=myapp \
  -v mongo-data:/data/db \
  -p 27017:27017 \
  -d mongo:6
 
# Connect using mongo shell (if installed locally)
# mongosh "mongodb://admin:password123@localhost:27017/myapp"
 
# Or use the mongo shell inside the container
docker exec -it mongo-db mongosh -u admin -p password123

Word of warning: MongoDB’s authentication can be confusing. The MONGO_INITDB_* variables only work when the database is being initialized for the first time. If you’re getting authentication errors, you might need to remove the volume and start fresh.
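The "start fresh" dance looks like this (fair warning: it deletes everything in that volume):

# Remove the container and its data volume, then re-run
docker rm -f mongo-db
docker volume rm mongo-data
# ...now repeat the docker run command above so the MONGO_INITDB_*
# variables take effect on a clean data directory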

Interactive Test Environments: Your Playground Awaits

Sometimes you just need to quickly test something in a specific environment without polluting your local machine with yet another runtime, package manager, or mysterious configuration that’ll break something else later.

Quick Language Environments

Need to test some Python code but don’t want to deal with virtual environments, dependency conflicts, or that weird Python installation you broke last month?

# Python 3.11 playground
docker run -it --rm python:3.11 python
# This drops you into a Python REPL. Type 'exit()' to leave.
 
# Or run a specific Python script
echo "print('Hello from Docker!')" > test.py
docker run -it --rm -v $(pwd):/app -w /app python:3.11 python test.py
 
# Node.js environment
docker run -it --rm node:18 node
# JavaScript REPL. Type '.exit' to leave.
 
# Need to test with a specific Node version? Easy.
docker run -it --rm node:16 node --version
docker run -it --rm node:20 node --version

Full Development Environments

Sometimes you need more than just a language runtime. You need a full development environment with all the tools:

# Ubuntu with development tools
docker run -it --rm \
  -v $(pwd):/workspace \
  -w /workspace \
  ubuntu:22.04 bash
 
# Inside the container, install what you need:
# apt update && apt install -y git vim curl build-essential
 
# Or use a pre-built development image
docker run -it --rm \
  -v $(pwd):/workspace \
  -w /workspace \
  mcr.microsoft.com/vscode/devcontainers/base:ubuntu

Want to test how your application behaves on different Linux distributions? Easy:

# Test on Alpine Linux (lightweight)
docker run -it --rm -v $(pwd):/app -w /app alpine:latest sh
 
# Test on CentOS/RHEL-like system
docker run -it --rm -v $(pwd):/app -w /app rockylinux:9 bash
 
# Test on Debian
docker run -it --rm -v $(pwd):/app -w /app debian:bullseye bash

This is incredibly useful for debugging environment-specific issues or testing deployment scripts across different Linux distributions.
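You can even script it. A rough sketch that runs a (hypothetical) check.sh across three distros:

# check.sh is a placeholder for whatever you want to verify
for img in alpine:latest debian:bullseye rockylinux:9; do
  echo "=== $img ==="
  docker run --rm -v $(pwd):/app -w /app "$img" sh ./check.sh
done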

Specialized Development Environments

Some images are specifically designed for development workflows:

# Jupyter notebook environment
docker run -p 8888:8888 \
  -v $(pwd):/home/jovyan/work \
  jupyter/scipy-notebook
 
# The container will output a URL with a token, something like:
# http://127.0.0.1:8888/lab?token=abc123...
# Open that URL in your browser
 
# Postgres with pgAdmin web interface
docker run -p 5050:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@admin.com \
  -e PGADMIN_DEFAULT_PASSWORD=admin \
  -d dpage/pgadmin4
 
# Open http://localhost:5050 and log in with the credentials above

Command Line Utilities: Because Installing Everything Locally is Madness

Here’s where Docker really shines: running command-line tools without installing them on your system. No more “works on my machine” because the machine is containerized and identical everywhere.

Network Utilities

# Curl with all the features (useful for testing APIs)
docker run --rm curlimages/curl:latest \
  -H "Content-Type: application/json" \
  -d '{"test": "data"}' \
  https://httpbin.org/post
 
# HTTPie for human-readable HTTP requests
docker run --rm --net=host httpie/httpie \
  POST httpbin.org/post test=data
 
# Ping utility (useful for network debugging)
docker run --rm busybox ping -c 3 google.com

File Processing and Conversion

# ImageMagick for image processing (convert image.jpg to PNG)
docker run --rm -v $(pwd):/data \
  dpokidov/imagemagick convert /data/image.jpg /data/image.png
 
# FFmpeg for video processing (transcode input.mp4 to AVI)
docker run --rm -v $(pwd):/data \
  jrottenberg/ffmpeg:4.1-alpine \
  -i /data/input.mp4 /data/output.avi
 
# Pandoc for document conversion (Markdown to PDF)
docker run --rm -v $(pwd):/data \
  pandoc/latex README.md -o README.pdf

Development Tools

# Run linters without installing them
docker run --rm -v $(pwd):/app -w /app \
  golangci/golangci-lint:latest golangci-lint run
 
# Terraform for infrastructure as code
docker run --rm -v $(pwd):/workspace -w /workspace \
  hashicorp/terraform:latest init
 
# Run security scans with Semgrep (the image expects code at /src)
docker run --rm -v $(pwd):/src semgrep/semgrep \
  semgrep scan --config auto

Database Tools

# MySQL dump without installing the MySQL client
# (pass the password via MYSQL_PWD - an interactive -p prompt
# doesn't play well with redirecting stdout to a file)
docker run --rm -e MYSQL_PWD=yourpassword mysql:8.0 \
  mysqldump -h your-mysql-host -u username database_name > backup.sql
 
# PostgreSQL backup (PGPASSWORD serves the same purpose)
docker run --rm -e PGPASSWORD=yourpassword postgres:15 \
  pg_dump -h your-postgres-host -U username database_name > backup.sql
 
# Redis CLI for debugging
docker run --rm -it redis:7-alpine redis-cli -h your-redis-host

The Art of Image Selection: Not All Images Are Created Equal

Here’s where experience saves you from pain: choosing the right image. Docker Hub is like a giant library where anyone can publish anything, and the quality varies from “absolutely brilliant” to “what the hell were they thinking?”

Official Images: Your Safe Harbor

Official images are maintained by the Docker team in collaboration with the upstream project maintainers. They’re marked with a blue “Official” badge on Docker Hub. These are your go-to choice:

# Good choices (official images)
docker pull postgres:15
docker pull redis:7-alpine
docker pull node:18
docker pull python:3.11
docker pull nginx:alpine

Official images follow consistent patterns:

  • They’re regularly updated with security patches
  • They have predictable tag naming
  • The documentation is usually excellent
  • Configuration follows best practices

Verified Publisher Images: Corporate Backing

These are maintained by companies and have been verified by Docker. They’re usually safe bets:

# Examples of verified publisher images
docker pull mcr.microsoft.com/mssql/server:2022-latest
docker pull bitnami/postgresql:15
docker pull elastic/elasticsearch:8.8.0

Community Images: Proceed with Caution

Community images can be fantastic, but you need to do your homework:

# Check these things before using community images:
# 1. How many pulls does it have? (popularity is a rough proxy for scrutiny)
# 2. When was it last updated?
# 3. Does the Dockerfile look reasonable?
# 4. Are there any known security issues?
 
# Example: This image has millions of pulls and is well-maintained
docker pull linuxserver/plex:latest
 
# But this one might be questionable:
# docker pull random-username/super-cool-database:latest  # ⚠️ Be careful

Reading the Docs (Yes, Really)

Every image worth using has documentation on Docker Hub. Read it. I can’t stress this enough. The documentation tells you:

  • What environment variables are available
  • Which ports are exposed
  • Where data is stored inside the container
  • How to customize the configuration
  • Common usage patterns

For example, the PostgreSQL official image documentation explains that:

  • POSTGRES_PASSWORD is required
  • POSTGRES_DB creates a database on startup
  • Scripts in /docker-entrypoint-initdb.d/ run on first startup
  • Data is stored in /var/lib/postgresql/data

This information is gold. Don’t ignore it.
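And when the docs are thin, you can interrogate the image itself:

# Ask the image what it declares (exposed ports and volume mount points)
docker image inspect postgres:15 \
  --format 'ports={{json .Config.ExposedPorts}} volumes={{json .Config.Volumes}}'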

Version Management: The Tag Game

Docker tags are like software versions, but with more chaos. Understanding how to use them properly will save you from unexpected breakage:

Tag Strategies

# Dangerous: Always pulls the latest version
docker pull postgres:latest  # ⚠️ Could break your app tomorrow
 
# Better: Use specific major versions
docker pull postgres:15  # Gets the latest PostgreSQL 15.x
 
# Best: Use specific versions for production
docker pull postgres:15.3  # Exact version, predictable behavior
 
# Also good: pin both the version and the base variant
docker pull postgres:15.3-alpine  # Exact version on the smaller Alpine base

The Latest Tag Trap

The latest tag is a lie. It doesn’t mean “latest stable” or “recommended version.” It just means “whatever the maintainer decided to tag as latest.” For some images, latest might be:

  • The most recent stable release
  • A development version
  • An old version that hasn’t been updated
  • Complete chaos

Never use latest in production. Always pin to specific versions.
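If you want to be even stricter, pin by digest - the one reference that can never silently change underneath you:

# See which digest your local tag currently resolves to
docker inspect --format '{{index .RepoDigests 0}}' postgres:15
# Output looks like: postgres@sha256:<long-hash>
# Pin that in your deploy configs:
# docker pull postgres@sha256:<long-hash>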

Multi-Architecture Considerations

Some images support multiple architectures (AMD64, ARM64, etc.). Docker usually handles this automatically, but sometimes you need to be explicit:

# Let Docker choose the right architecture
docker pull redis:7-alpine
 
# Force a specific architecture (useful for M1 Macs)
docker pull --platform linux/amd64 redis:7-alpine
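Curious which architectures a tag actually publishes? buildx, which ships with modern Docker, can tell you:

# List every platform published under this tag
docker buildx imagetools inspect redis:7-alpine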

Docker Compose: Making It All Work Together

Running individual containers is fine for testing, but real applications usually need multiple services. Here’s where Docker Compose transforms chaos into order:

version: '3.8'
 
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    depends_on:
      - api
 
  api:
    image: node:18-alpine
    working_dir: /app
    volumes:
      - ./api:/app
    command: npm start
    environment:
      - DATABASE_URL=postgresql://user:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
 
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
 
  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
 
volumes:
  postgres_data:
  redis_data:

This single file defines an entire application stack: web server, API, database, and cache. Run it with:

# Start everything
docker compose up -d
 
# View logs
docker compose logs -f
 
# Stop everything
docker compose down
 
# Stop and remove volumes (nuclear option)
docker compose down -v
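One caveat about that file: depends_on only waits for containers to start, not for the services inside them to be ready. If your API races the database on startup, add a healthcheck - a sketch for the db and api services (the condition syntax needs a reasonably recent Compose):

services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
 
  api:
    # ...same as before, but gate startup on the healthcheck
    depends_on:
      db:
        condition: service_healthy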

Security Considerations: Trust, But Verify

Using third-party images means running someone else’s code on your machine. Here are some basic security practices:

Image Scanning

# Scan an image for vulnerabilities (requires Docker Scout or similar)
docker scout cves nginx:latest
 
# Use multi-stage builds to reduce attack surface
# (We'll cover this in the next article about building images)
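If you don't have Docker Scout, Trivy is a popular open-source scanner that - fittingly - runs as a container itself:

# Scan an image with Trivy (pulls the target from the registry if needed)
docker run --rm aquasec/trivy:latest image nginx:latest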

Running as Non-Root

Many images run as root by default, which is a security risk. Look for images that support non-root execution:

# Some images support the --user flag
docker run --user 1000:1000 node:18 node --version
 
# Others ship with a non-root user baked in
docker run -e POSTGRESQL_PASSWORD=secret bitnami/postgresql:15  # Runs as a non-root UID by default
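Not sure what a given image does? Just ask it:

# Check which user an image runs as by default
docker run --rm node:18 whoami              # root
docker run --rm --user node node:18 whoami  # node (a built-in user in this image)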

Keep Images Updated

# Regular maintenance: re-pull your pinned tags to pick up patch updates
docker pull postgres:15
docker pull redis:7-alpine
docker pull nginx:alpine
 
# Remove old, unused images
docker image prune -a

Troubleshooting Common Issues

“Container Exits Immediately”

This usually means the main process inside the container crashed or finished. Check the logs:

docker logs container-name
 
# Common causes:
# 1. Missing required environment variables
# 2. Permission issues with volumes
# 3. Port conflicts
# 4. Invalid configuration

“Can’t Connect to Database”

Database containers take time to start up. Always check if the database is ready before connecting:

# Wait for PostgreSQL to be ready (--link is deprecated; joining the
# database container's network namespace works on modern Docker)
docker run --rm --network container:postgres-container \
  postgres:15 sh -c 'until pg_isready -h localhost; do sleep 1; done'
 
# Or check the logs for "ready to accept connections"
docker logs postgres-container --follow

“Permission Denied” with Bind Mounts

This is the classic bind mount user mismatch issue:

# Fix: Run container with your user ID
docker run -u $(id -u):$(id -g) -v $(pwd):/app node:18 npm install
 
# Or fix permissions on the host
sudo chown -R $(id -u):$(id -g) ./project-directory

The Bottom Line

Third-party Docker images are like having a team of experts who’ve already solved your problems. Use them wisely:

  • 🏆 Prefer official images - they’re maintained by people who know what they’re doing
  • 📚 Read the documentation - it’s usually excellent and will save you hours
  • 🏷️ Pin specific versions - latest is not your friend in production
  • 🔒 Consider security - you’re running someone else’s code
  • 🛠️ Use Docker Compose - managing multiple containers manually is masochism
  • 📊 Monitor and update - images need maintenance like any other dependency

The beauty of Docker isn’t just containerization - it’s the ecosystem. Millions of pre-built, tested, documented images are waiting for you. Stop reinventing wheels and start shipping software.

Next up, we’ll dive into building your own images, because sometimes you do need to reinvent a wheel (or at least customize one to your exact specifications).


Questions about third-party images? Hit me up! And if you’re still manually installing databases on bare metal… well, I’m not judging, but Docker is definitely judging you.