
How to Host Docker on VPS in 2025

Published : October 3, 2025
Last Updated : October 6, 2025
Published In : Technical Guide

Docker solves a problem every developer faces: “It works on my machine, but not in production.”


It packages your application with everything it needs into a lightweight container. Think of it as a shipping container for your code, running the same way whether it is on your laptop, a staging server, or a production VPS.


Containers share the host’s operating system kernel instead of spinning up entire virtual machines. As a result, you can run five to ten applications on a 4 GB VPS where two traditional VMs would struggle.


Quick example to show you what I mean:

Step-by-Step to Run an Nginx Container

  1. Open your terminal and run this command:

				
docker run -d -p 80:80 nginx

  2. What happens next:

    • Docker pulls the official nginx image from Docker Hub.

    • It creates a container in detached mode (-d).
    • It maps port 80 of the container to port 80 of your VPS so you can visit the site in a browser.

This single command downloads Nginx, creates a container, and serves your website. No complex installation, no configuration conflicts, no headaches.

Prerequisites

Skip the bare minimums that barely work and start with specifications that keep your environment stable.

| Component | Minimum | Recommended | Why This Matters |
| --- | --- | --- | --- |
| RAM | 2 GB | 4 GB+ | Docker itself uses about 300 MB; each container needs 50–500 MB |
| Storage | 20 GB SSD | 40 GB+ NVMe | Images pile up fast, logs grow, and you need room to breathe |
| CPU | 2 vCPU | 4+ vCPU | Container startup and build processes are CPU-intensive |
| OS | Ubuntu 22.04+ | Ubuntu 24.04 LTS | Latest Docker features and best security patches |

Critical requirements:

  • KVM or VMware virtualization (avoid OpenVZ)

  • Outbound internet on ports 80/443 for image pulls

  • Root or sudo access for installation

That 1 GB VPS might seem tempting, but you will hit memory limits the moment you try to build anything substantial. Start with 4 GB; your future self will thank you.

Step 1: Connect and Verify Your VPS

Alright, let’s get you connected to your server. I’ll assume your VPS provider gave you SSH credentials.

If you are still using password authentication in 2025, we can fix that later, but let’s start with the basics.

First, connect to your VPS by running:

				
ssh username@your_server_ip

Replace username and your_server_ip with the actual credentials your provider gave you. If you are on Windows and somehow do not have SSH built in, you can use a tool like PuTTY.


Now make sure your system is ready for Docker. To check that your kernel version is at least 4.0 (5.15 or newer is ideal), run:

				
uname -r

To verify your kernel version is 4.0 or higher:

				
if [ "$(uname -r | cut -d. -f1)" -ge 4 ]; then echo "Kernel OK"; else echo "Kernel too old"; fi

To verify your operating system version is Ubuntu 22.04 or newer, run:

				
grep VERSION /etc/os-release

To confirm you have at least 2 GB of available memory, run:

				
free -h

To be sure you have at least 20 GB of free disk space on the root partition, run:

				
df -h /

Finally, to check that your VPS uses KVM or VMware virtualization (not OpenVZ), run:

				
sudo dmidecode -s system-product-name
systemd-detect-virt   # should print kvm or vmware, not openvz

You should see:

  • Kernel version 4.0 or higher (5.15+ is best)

  • Ubuntu 22.04 or newer

  • At least 2 GB of RAM available

  • 20 GB or more of free disk space

  • Virtualization type showing KVM or VMware

If any of these checks fail, stop here and contact your VPS provider before moving on.
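The whole checklist can be run as one small script. This is a minimal sketch based on the thresholds above; the helper names (`kernel_ok`, `mem_ok`) are illustrative, not standard tools:

```shell
#!/usr/bin/env bash
# Pre-flight checks for the Docker requirements listed above.

# kernel_ok VERSION_STRING -> OK if the major version is 4 or higher
kernel_ok() {
  major=$(printf '%s' "$1" | cut -d. -f1)
  if [ "$major" -ge 4 ]; then echo OK; else echo TOO_OLD; fi
}

# mem_ok TOTAL_KB -> OK if total memory is at least 2 GB
mem_ok() {
  if [ "$1" -ge $((2 * 1024 * 1024)) ]; then echo OK; else echo TOO_SMALL; fi
}

echo "Kernel: $(kernel_ok "$(uname -r)")"
echo "Memory: $(mem_ok "$(awk '/MemTotal/ {print $2}' /proc/meminfo)")"
df -h /   # eyeball at least 20 GB free on the root partition
```

Run it before installing anything; if either line prints something other than OK, talk to your provider first.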

Step 2: Clean Installation (Skip the Headaches)

Here is something we wish someone had told us years ago: always clean out any old Docker remnants before installing. Leftover packages can create strange conflicts later.


First, remove any existing Docker-related packages by running:

				
sudo apt-get remove docker docker-engine docker.io containerd runc -y
sudo apt autoremove -y

Do not worry if you see messages like “package not found.” That is actually what you want—it means there is nothing old to interfere.


Next, update your system so you have the latest security patches and kernel updates before installing Docker:

				
sudo apt update && sudo apt upgrade -y

This step might take a few minutes, but it is essential to avoid problems later.

Step 3: Install Docker (The Right Way)

For a production server in 2025, it is best to follow Docker’s official installation method instead of using the quick convenience script that is meant only for development.


First, set up Docker’s official APT repository. To add Docker’s GPG key, run:

				
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Next, add the repository to your APT sources:

				
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Now install Docker and all required components:

				
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Start and enable the Docker service so it runs automatically:

				
sudo systemctl start docker
sudo systemctl enable docker
sudo systemctl status docker

You should see active (running) in green. If not, check the logs:

				
sudo journalctl -u docker --no-pager

To run Docker commands without sudo, add your user to the docker group:

				
sudo usermod -aG docker $USER

Log out and log back in for group changes to take effect, or run:

				
newgrp docker

Finally, verify that everything is working:

				
docker --version
docker run hello-world

If you see Hello from Docker! you are all set. If not, we will troubleshoot in the next section.

Secure Your Firewall Rules

Make your Docker firewall rules persistent so they survive reboots:

				
# Make iptables rules persistent
sudo apt install iptables-persistent
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null

Step 4: Essential Docker Commands (Your Daily Toolkit)

These are the commands we use every single day. Master them and you will feel at home with Docker.

Container Management

To run a container:

				
docker run -d --name myapp nginx

To list running containers:

				
docker ps

To list all containers, including stopped ones:

				
docker ps -a

To stop a container:

				
docker stop myapp

To start it again:

				
docker start myapp

To remove a container:

				
docker rm myapp

Image Management

To pull an image:

				
docker pull nginx:1.25

To list images:

				
docker images

To remove an image:

				
docker rmi nginx:1.25

To build an image from a Dockerfile:

				
docker build -t myapp .

Debugging and Monitoring

To view container logs:

				
docker logs myapp

To follow logs in real time:

				
docker logs -f myapp

To execute commands inside a container:

				
docker exec -it myapp /bin/bash

To monitor resource usage:

				
docker stats

Cleanup (Use Carefully)

To remove stopped containers:

				
docker container prune

To remove unused images:

				
docker image prune

For a full cleanup of everything unused:

				
docker system prune -a

Pro tip: always give your containers descriptive names, such as --name web-frontend, instead of leaving them with random IDs. Your future self, especially at 2 AM, will thank you.

Step 5: Docker Compose (Where the Magic Happens)

This is where Docker becomes truly powerful. Instead of managing individual containers, you can define your entire application stack in a single file.


Docker Compose is already installed from the previous step. To verify it works, run:

				
docker compose version

Now create your first docker-compose.yml file with the following content (the top-level version: key is obsolete in Compose v2, so it is omitted here):

services:
  web:
    image: nginx:1.25
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: unless-stopped

  database:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: your_secure_password
      MYSQL_DATABASE: myapp
    volumes:
      - mysql_data:/var/lib/mysql
    restart: unless-stopped

volumes:
  mysql_data:

To launch your stack, run:

				
docker compose up -d

Check the status of all services:

				
docker compose ps

View logs in real time:

				
docker compose logs -f

Stop everything when you are done:

				
docker compose down

Verify volumes were created:

				
docker volume ls

With a single file describing your entire infrastructure, you can deploy real applications quickly and reproduce the setup anywhere.
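To see the web service above return a real page, drop a file into the ./html directory it mounts. A minimal sketch; the page content is just a placeholder:

```shell
# Create the directory that docker-compose.yml mounts into the nginx container
mkdir -p html

# A placeholder page; replace it with your real site
cat > html/index.html <<'EOF'
<!doctype html>
<h1>Hello from Docker Compose</h1>
EOF
```

After `docker compose up -d`, `curl http://localhost` should return this page.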

Step 6: Security Hardening (Do Not Skip This)

I have seen too many compromised servers because people skipped security. Let’s lock yours down.

A critical warning before we start: when you expose container ports using Docker, those ports bypass your firewall rules completely. Many people miss this, and it is a serious security risk.

Begin by configuring a UFW firewall with Docker in mind:

				
sudo ufw --force enable
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS

Only open the ports you actually need.
Configure Docker to work with UFW. Be aware that "iptables": false stops Docker from managing any iptables rules, including the NAT rules containers use for outbound traffic, so test your containers' connectivity afterward and be prepared to add masquerading rules yourself:

				
sudo systemctl stop docker
echo '{
  "iptables": false
}' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker

Next, create a non-root user inside your containers by adding this to your Dockerfile:

				
RUN adduser --disabled-password --gecos '' appuser
USER appuser
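Put together, a complete Dockerfile for a small Node.js app might look like the sketch below. The base image, exposed port, and server.js entry point are hypothetical placeholders for your own application:

```dockerfile
FROM node:22-slim
WORKDIR /app

# Install production dependencies first for better layer caching
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Create and switch to a non-root user, as described above
RUN adduser --disabled-password --gecos '' appuser
USER appuser

EXPOSE 3000
CMD ["node", "server.js"]
```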

Keep Docker updated regularly. Check for updates monthly:

				
sudo apt update && sudo apt list --upgradable | grep docker
sudo apt upgrade docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Scan images for vulnerabilities with Docker Scout:

				
docker scout quickview nginx:latest

Remember: containers inherit the host’s security. If your VPS is compromised, your containers are too, so do not skip these steps.

Step 7: Production Ready Practices

These are the steps that separate hobby projects from true production deployments.

Always set resource limits in your docker-compose.yml to prevent a single container from consuming all resources:

				
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
    restart: unless-stopped

Implement health checks so Docker can automatically restart unhealthy containers:

				
services:
  web:
    image: nginx
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped

Configure proper logging and log rotation to avoid disk bloat. Edit the Docker daemon configuration (note that this command rewrites the whole file, so keep the "iptables" setting from Step 6 in the JSON if you applied it):

sudo mkdir -p /etc/docker
echo '{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
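To confirm rotation is keeping logs in check, you can total what a log directory uses. A minimal sketch; the `log_usage_kb` helper is illustrative, and Docker's default log path is assumed:

```shell
# log_usage_kb DIR -> total size in KB of everything under DIR
log_usage_kb() {
  du -sk "$1" 2>/dev/null | awk '{print $1}'
}

# On a real host, check Docker's default per-container log location (needs root):
# sudo du -sh /var/lib/docker/containers/*/
```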

Set up monitoring to keep track of performance and issues. For example, you can use Prometheus:

				
services:
  monitoring:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    restart: unless-stopped

These steps are not optional – they are essential for any application you intend to keep online and stable.

Troubleshooting Common Docker Issues

Here are the problems seen most often and how to fix them.

  • If you get “Cannot connect to Docker daemon”: Check if Docker is running:

				
sudo systemctl status docker

If it is not running, start it:

				
sudo systemctl start docker

Still having problems? Make sure your user is in the Docker group:

				
groups | grep docker

  • If you see “docker: command not found”: This usually means the installation failed. Reinstall Docker properly:

				
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

  • If you get a port already in use error:

    Find what is using the port:

				
sudo ss -tulpn | grep :80   # ss replaces netstat, which is not installed by default on modern Ubuntu

Then kill the process or change the Docker port:

				
docker run -p 8080:80 nginx

  • If you run out of disk space:

Check Docker disk usage:

				
docker system df

Clean up unused data (careful: --volumes also removes any volume not attached to a running container, along with its data):

docker system prune -a --volumes

  • If you have image pull failures:

    Check internet connectivity:

				
ping docker.io

Try pulling from a different registry:

				
docker pull quay.io/nginx/nginx

We keep these commands handy because even after years of using Docker, unexpected issues can still pop up.

Monitoring and Maintenance

Here is a weekly maintenance routine that keeps Docker environments stable and efficient.

Check resource usage:

				
# Overall system
htop

# Docker specific
docker stats --no-stream

# Disk usage
docker system df

Update containers:

				
# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d

Clean up weekly:

				
# Remove old containers and images
docker system prune

# Check logs are not filling disk
du -sh /var/lib/docker/containers/*/

Monitor key metrics to catch issues early:

  • Memory usage should remain below 80%

  • Keep at least 5 GB of free disk space

  • Ensure no containers are constantly restarting

  • Confirm logs are not growing uncontrollably

Setting simple alerts for these metrics helps prevent outages before they start.
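A small cron-friendly check for the disk threshold above might look like this sketch; the 5 GB threshold and the echo placeholder (swap in mail, Slack, or any alerting hook) are assumptions:

```shell
THRESHOLD_GB=5

# disk_free_gb MOUNTPOINT -> free space in whole gigabytes on that filesystem
disk_free_gb() {
  df -BG --output=avail "$1" | tail -1 | tr -dc '0-9'
}

free_gb=$(disk_free_gb /)
if [ "$free_gb" -lt "$THRESHOLD_GB" ]; then
  echo "WARNING: only ${free_gb}G free on /"   # replace with your alerting command
fi
```

Drop it into /etc/cron.daily (or a systemd timer) and you will hear about disk pressure before Docker does.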

Real-World Deployment Example

Here is a production-ready setup that can be adapted to most web applications.

Directory structure:

				
myapp/
├── docker-compose.yml
├── nginx/
│   └── nginx.conf
├── app/
│   └── Dockerfile
└── .env

Security Tip: Never commit .env files to version control. They often contain passwords and API keys.

docker-compose.yml

				
services:
  reverse-proxy:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./ssl:/etc/ssl/certs
    depends_on:
      - app
    restart: unless-stopped

  app:
    build: ./app
    environment:
      - DATABASE_URL=mysql://user:${DB_PASSWORD}@database:3306/myapp
    depends_on:
      - database
      - redis
    restart: unless-stopped

  database:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: myapp
      MYSQL_USER: user
      MYSQL_PASSWORD: ${DB_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    restart: unless-stopped

volumes:
  mysql_data:
  redis_data:

Deploy it:

				
# Create .env file with your passwords
echo "DB_PASSWORD=your_secure_db_password" > .env
echo "DB_ROOT_PASSWORD=your_root_password" >> .env

# Launch everything
docker compose up -d

# Verify operation
docker compose ps
curl http://your_server_ip

This structure works for most web applications and can be customized for any tech stack.

Conclusion

You now have everything required to run Docker effectively on a VPS. Starting with at least 4 GB of RAM ensures containers have the resources they need for real workloads.

Use Docker Compose for any project that involves multiple containers. It maintains consistent deployments and simplifies scaling or migration.

Implement monitoring, resource limits, and strong security practices from the beginning. Pay particular attention to firewall settings, as exposed ports can bypass standard rules.

Running Docker on a VPS provides an ideal balance of control, performance, and cost. It avoids vendor lock-in and keeps your environment fully portable.

Begin with a small project, such as containerizing an existing application or deploying a new service. Once you experience the simplicity of containerized deployments, you will see why Docker is a core tool for modern infrastructure.


About the Author Peter French is the Managing Director at Virtarix, with over 17 years in the tech industry. He has co-founded a cloud storage business, led strategy at a global cloud computing leader, and driven market growth in cybersecurity and data protection.
