
The Ultimate Guide to Docker Installation and Deployment using Cloud-Init

Published: January 20, 2026
Last Updated: January 20, 2026
Published In: Technical Guide

Deploying production servers manually wastes hours on repetitive configuration tasks. You install packages, configure security, deploy applications, and repeat this process for every new server. This guide shows you how to automate the entire setup. You’ll learn to install Docker and deploy your containerized application in one go using Cloud-Init on your virtual private server.

What is Docker?

Docker is a container platform that packages your application with everything it needs to run. It isolates your app from the underlying infrastructure, creating a consistent environment across all systems. Docker solves the classic “works on my machine” problem by ensuring your app runs identically everywhere – from development to production.

Why Use Docker?

Docker ensures every developer works in identical environments, eliminating issues from version mismatches. Build once, test thoroughly, then deploy that exact container to production – no configuration changes, no surprises. Containers run anywhere Docker is installed, making deployment portable across any infrastructure.

What is Cloud-Init?

Cloud-Init automates server setup from first boot, handling user creation, package installation, and command execution. Write your configuration once and reuse it across multiple servers. This eliminates human error and speeds up infrastructure deployment.

Docker Plus Cloud-Init Benefits

Combine these tools and you get one-click server deployment. Your Cloud-Init config installs Docker, pulls your container image, and starts your application automatically. We recommend the official Docker repository installation method. It provides the latest stable releases and regular security updates. The convenience script exists for testing but should never run in production environments.

This approach creates production-ready servers in minutes instead of hours. Your infrastructure becomes code that you can version control, review, and reuse across your entire deployment pipeline.

Build Your Demo Application

Let’s create a simple Node.js application to demonstrate the complete workflow. We use Express, a minimal web framework that powers millions of production applications.

First, initialize your project and install Express.

npm init -y
npm install express

Create an index.js file with this code.

const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello from Docker!');
});

// Error handling middleware MUST be defined AFTER all routes
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).send('Something broke!');
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

Update your package.json to include a start script with the current stable Express version.

{
  "name": "docker-demo",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.21.0"
  }
}

This creates a basic web server that responds with a greeting message. The error handling middleware catches any errors from your routes and prevents your application from crashing. We use Express 4.21.0 for maximum compatibility. Express 5.x became the npm default in March 2025 and is fully stable with Node.js 24, but it includes breaking changes in routing syntax, error handling, and deprecated methods. For tutorials and gradual migration paths, Express 4.x remains a solid choice with full security support continuing through 2026. Both versions are production-ready, so choose based on your migration strategy.

Create Your Dockerfile

The Dockerfile tells Docker how to build your application image. We use Node.js 24 (the current Active LTS release as of December 2025) with Alpine Linux for a smaller image size.

Create a file named Dockerfile in your project root.

FROM node:24-alpine

WORKDIR /app

COPY package*.json ./

RUN npm ci --omit=dev

COPY . .

RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
RUN chown -R nodejs:nodejs /app
USER nodejs

ENV PORT=80
EXPOSE 80

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:80 || exit 1

CMD ["npm", "start"]

Here’s what each instruction does:

  • FROM sets the base image to Node.js 24 (LTS, supported through April 2028) on Alpine Linux
  • WORKDIR creates the app directory inside the container
  • COPY and RUN install production dependencies, leveraging Docker’s layer caching
  • RUN addgroup and adduser create a non-root nodejs user for security
  • ENV and EXPOSE document the container port
  • HEALTHCHECK monitors application health using wget
  • CMD starts the application

Create a .dockerignore file to exclude unnecessary files from your image.

node_modules
npm-debug.log
.git
.env

This keeps your image small by excluding development files and dependencies that npm installs anyway.

Build and Test Your Image

Build your Docker image with this command. Replace myapp with your chosen image name.

docker build -t myapp .

The build process reads your Dockerfile, executes each instruction, and creates a new image layer for each step. The final image contains your complete application.
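Once the build finishes, a quick listing confirms the image exists and shows its size (a minimal check; the exact size depends on your dependencies).

# List the freshly built image with its tag, ID, and size
docker images myapp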

Test the image locally before deploying to production. Run a container from your image.

docker run -p 5000:80 -d myapp

This maps port 5000 on your machine to port 80 in the container. The -d flag runs the container in detached mode, returning control to your terminal.

Open your browser and visit localhost:5000. You should see your greeting message. This verifies proper functionality in a containerized environment.
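If you prefer the terminal, a quick curl request verifies the same thing.

# Should print: Hello from Docker!
curl http://localhost:5000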

Stop the test container when you’re done.

docker ps
docker stop [CONTAINER_ID]

Push Your Image to Docker Hub

Your container works locally. Now make it available for deployment by pushing it to Docker Hub, a public registry for Docker images.

Note: Create a free Docker Hub account at hub.docker.com before proceeding if you haven’t already.

First, log in to Docker Hub from your terminal.

docker login

Tag your image with your Docker Hub username. Replace username with your actual Docker Hub username.

docker tag myapp username/myapp:v1.0.0

Push the tagged image to Docker Hub.

docker push username/myapp:v1.0.0

The push uploads your image layers to Docker Hub. Other people and servers can now pull your image and run containers from it. We use version tags like v1.0.0 instead of latest for production deployments to maintain control over which version runs.

Deploy with the Convenience Script

The convenience script offers the fastest installation method. Docker maintains this script for development and testing environments.

Create a file named cloud-config.yaml with this content.

#cloud-config
runcmd:
  - curl -fsSL https://get.docker.com | sh
  - docker pull username/myapp:v1.0.0
  - docker run -d -p 80:80 --restart unless-stopped username/myapp:v1.0.0

This configuration downloads and runs the Docker installation script. Then it pulls your application image and starts a container that listens on port 80.

The restart policy ensures your container starts automatically if the server reboots or if the container crashes.
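To confirm the policy took effect, inspect the running container (replace [CONTAINER_ID] with the ID from docker ps).

# Should print: unless-stopped
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' [CONTAINER_ID]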

However, remember that Docker explicitly does not recommend this script for production. It works great for quick tests but lacks the security and stability guarantees needed for production systems.

Understanding Docker and Firewall Security

Before we deploy to production, you need to understand a critical security issue. Docker bypasses firewall rules by default. This surprises many people who assume their firewall protects all ports.

Here’s what happens. When you configure UFW (Uncomplicated Firewall) to block ports, you expect those ports to be blocked. But Docker modifies iptables directly, inserting its own rules that process before UFW sees the traffic. Your published container ports become publicly accessible even when UFW denies them.

This happens because Docker inserts its own chains at the top of the iptables FORWARD chain, so its ACCEPT rules for published ports run before UFW’s chains ever see the traffic. Docker does provide an empty chain called DOCKER-USER, evaluated before its own rules, precisely so administrators can add filtering of their own. Without rules in that chain, your firewall is essentially useless for container ports.

We fix this by adding rules to Docker’s DOCKER-USER chain that hand forwarded traffic to UFW. Our production cloud-config includes these rules, so your firewall actually protects your containers. With them in place, container ports are only reachable when you explicitly open them with a ufw route rule.

Without this fix, anyone on the internet can access your container ports regardless of your firewall settings. This creates a massive security hole in your infrastructure.
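You can demonstrate the problem on a throwaway test server. This sketch assumes a scratch nginx container published on port 8080 and a second machine to probe from; YOUR_SERVER_IP is a placeholder for your server’s public address.

# On the server: deny the port in UFW, then publish a container on it
sudo ufw deny 8080/tcp
docker run -d -p 8080:80 nginx:alpine

# From another machine: without the DOCKER-USER fix, this still connects
curl -I http://YOUR_SERVER_IP:8080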

Deploy with the Official Repository

Production environments need a proper installation from Docker’s official repository. This method provides stable releases, security updates, and full support. Our configuration includes the critical UFW-Docker integration that protects your containers.

Whether you’re setting up a new VPS or configuring an existing server, this cloud-config handles everything automatically.

Create your production cloud-config.yaml file.

#cloud-config
package_update: true
package_upgrade: true

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
  - ufw

write_files:
  - path: /etc/docker/daemon.json
    content: |
      {
        "log-driver": "json-file",
        "log-opts": {
          "max-size": "10m",
          "max-file": "3"
        }
      }

runcmd:
  # Remove conflicting packages first
  - apt-get remove -y docker.io docker-compose-v2 docker-doc podman-docker containerd runc || true
  - install -m 0755 -d /etc/apt/keyrings
  - curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
  - chmod a+r /etc/apt/keyrings/docker.asc
  # Use the modern .sources file format (unquoted EOF so the codename expands)
  - |
    tee /etc/apt/sources.list.d/docker.sources <<EOF > /dev/null
    Types: deb
    URIs: https://download.docker.com/linux/debian
    Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
    Components: stable
    Signed-By: /etc/apt/keyrings/docker.asc
    EOF
  - apt-get update
  - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  - systemctl start docker
  - systemctl enable docker
  - docker run hello-world
  # Add UFW-Docker integration rules after Docker starts
  - |
    cat >> /etc/ufw/after.rules <<'EOF'
    # BEGIN UFW AND DOCKER
    *filter
    :ufw-user-forward - [0:0]
    :DOCKER-USER - [0:0]
    -A DOCKER-USER -j ufw-user-forward
    -A DOCKER-USER -j RETURN -s 10.0.0.0/8
    -A DOCKER-USER -j RETURN -s 172.16.0.0/12
    -A DOCKER-USER -j RETURN -s 192.168.0.0/16
    -A DOCKER-USER -j DROP
    COMMIT
    # END UFW AND DOCKER
    EOF
  # CRITICAL: Configure SSH before enabling the firewall to avoid lockout
  - ufw allow 22/tcp
  - ufw allow 80/tcp
  # With the rules above, forwarded container traffic needs an explicit route rule
  - ufw route allow proto tcp from any to any port 80
  - ufw --force enable
  - ufw reload
  - docker network create app-network
  - docker pull username/myapp:v1.0.0
  # Adjust memory (512m) and CPU (1.0) limits based on your application needs
  - docker run -d --network app-network -p 80:80 --memory="512m" --cpus="1.0" --restart unless-stopped username/myapp:v1.0.0

This configuration:

  1. Updates system packages
  2. Installs Docker prerequisites
  3. Configures log rotation in daemon.json
  4. Removes conflicting Docker packages
  5. Installs Docker from the official repository using the modern .sources format
  6. Verifies installation with hello-world test
  7. Adds UFW-Docker integration for firewall security
  8. Configures the firewall (SSH port 22, HTTP port 80, plus a route rule for forwarded container traffic)
  9. Creates a custom Docker network
  10. Deploys your application with resource limits

Replace username/myapp:v1.0.0 with your Docker Hub image. For Ubuntu, change debian to ubuntu in both the GPG key URL and the URIs line. For other distributions like CentOS or Fedora, consult Docker’s official installation documentation for the appropriate repository configuration.

Once you have this production-ready configuration, you need to provide it to your cloud provider during server creation.

Using Your Cloud-Config File

Paste your cloud-config content into your provider’s user data field:

  • AWS EC2: User Data (Advanced Details)
  • DigitalOcean: User Data section
  • Azure: Custom Data (Advanced tab)
  • Google Cloud: user-data metadata key (on images that ship cloud-init, such as Ubuntu)

Setup completes in 5 to 10 minutes after the first boot.
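To know exactly when setup finishes rather than guessing, SSH in and ask Cloud-Init directly.

# Blocks until Cloud-Init completes, then prints the result
cloud-init status --wait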


Verify Your Deployment

After your server boots with Cloud-Init, give it a few minutes to complete the setup process. The time depends on your internet speed and server resources.

Connect to your server via SSH and check that Docker is installed correctly.

docker --version

This should display Docker Engine version 27.0 or newer (29.1.x is current as of December 2025). If you see a version number, Docker has been installed successfully.

Check your running containers.

docker ps

You should see your application container running. The output shows the container ID, image name, command, creation time, status, and port mappings.

Test your application by visiting your server’s IP address in a browser. You should see your application’s response.

Check the container health status.

docker inspect --format='{{.State.Health.Status}}' [CONTAINER_ID]

This should return healthy if your application runs correctly.

Verify that UFW properly protects your container ports. Check the firewall status.

sudo ufw status

This shows your active firewall rules. Ports 22 and 80 should be allowed (along with the route rule for forwarded container traffic) while other ports stay blocked. More importantly, verify that the UFW-Docker integration works.

sudo iptables -L DOCKER-USER -n

This displays the DOCKER-USER chain rules. You should see rules that forward traffic to ufw-user-forward, ensuring your firewall actually filters container traffic.

If something went wrong, check the Cloud-Init logs.

sudo cat /var/log/cloud-init-output.log

This log file contains all output from Cloud-Init execution, including any errors that occurred during setup.
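Two quick commands narrow down most failures; the grep pattern below is just a starting point.

# Summarize the overall Cloud-Init result
sudo cloud-init status --long

# Surface errors buried in the full output log
sudo grep -iE 'error|fail' /var/log/cloud-init-output.log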

Production Security Best Practices

Always use specific image tags in production instead of latest. This prevents unexpected updates from breaking your deployment. We already use v1.0.0 in our examples following this practice.

Never expose sensitive data through environment variables in plain text. Create a secure environment file and reference it.

sudo mkdir -p /secure
echo "API_KEY=your-secret-key" | sudo tee /secure/app.env
sudo chmod 600 /secure/app.env

Then update your Docker run command in cloud-config.

- docker run -d --network app-network -p 80:80 --env-file /secure/app.env --memory="512m" --cpus="1.0" --restart unless-stopped username/myapp:v1.0.0

Keep your Docker images updated regularly. Security vulnerabilities get discovered and patched frequently. Rebuild your images monthly at a minimum and after any security announcements.
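A typical refresh cycle looks like this sketch, which assumes a hypothetical v1.0.1 release tag; bump the version to whatever fits your scheme.

# Rebuild against a freshly pulled base image, then publish
docker build --pull -t username/myapp:v1.0.1 .
docker push username/myapp:v1.0.1

# On the server: pull the new tag, stop the old container, start the new one
docker pull username/myapp:v1.0.1
docker stop [CONTAINER_ID]
docker run -d --network app-network -p 80:80 --restart unless-stopped username/myapp:v1.0.1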

Monitor your containers and set up alerts for failures. The health checks we included help with automatic recovery, but you still need monitoring to catch persistent issues.

Consider implementing SSL/TLS for production deployments. Your application currently runs on HTTP, but production traffic should use HTTPS. Set up a reverse proxy like Nginx to handle SSL certificates.
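One common approach on Debian or Ubuntu is Nginx plus Certbot. The rough outline below assumes a DNS record for app.example.com (a placeholder) already points at your server and that the container is remapped to a loopback-only port so Nginx can front it; you still need to point the site’s server block at the container with a proxy_pass directive.

# Install Nginx and the Certbot integration
sudo apt-get install -y nginx certbot python3-certbot-nginx

# Run the app on a loopback-only port; Nginx proxies HTTPS traffic to it
docker run -d -p 127.0.0.1:3000:80 --restart unless-stopped username/myapp:v1.0.0

# Obtain a certificate and enable HTTPS in the Nginx config
sudo certbot --nginx -d app.example.com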

Regular security updates keep your system protected. Add automatic security updates to your cloud-config.

packages:
  - unattended-upgrades

This package automatically installs security updates without manual intervention.
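You can confirm it is active with a dry run; the final lines of the verbose output list the packages that would be upgraded.

# Simulate an unattended upgrade without installing anything
sudo unattended-upgrade --dry-run --debug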

The UFW-Docker integration we configured is absolutely essential. Without it, your firewall provides false security. Container ports remain exposed even when you think they’re blocked. Always include those after.rules in your production deployments.

Common Issues and Solutions

Connection refused errors usually mean the container isn’t running or isn’t listening on the expected port. Verify the container status and check the logs.

docker logs [CONTAINER_ID]

Permission errors when running Docker commands indicate you need to add your user to the Docker group or use sudo. For automated deployments, the root user runs Cloud-Init commands, so this shouldn’t be an issue.
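For interactive work on the server, adding your user to the docker group removes the need for sudo; log out and back in for the change to take effect.

# Grant the current user access to the Docker daemon socket
sudo usermod -aG docker $USER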

If your container starts but immediately stops, check the application logs inside the container. The issue typically lies in your application code or missing environment variables.

Port conflicts occur when another service already uses port 80. Either stop that service or change your container’s port mapping.

- docker run -d --network app-network -p 8080:80 --restart unless-stopped username/myapp:v1.0.0

Out of disk space errors happen when Docker images and containers fill your drive. Clean up unused images and containers regularly.

docker system prune -a

Health check failures indicate your application isn’t responding correctly. Check the application logs and ensure your health check endpoint works properly.

Firewall issues can manifest as containers being inaccessible or unexpectedly accessible. If you can reach container ports that should be blocked, verify the UFW-Docker integration rules in after.rules. Check the DOCKER-USER chain with iptables to confirm the rules loaded correctly.

Next Steps

You now have a complete automated deployment pipeline using Docker and Cloud-Init. Your servers deploy with a single configuration file, eliminating manual setup and ensuring consistency across your infrastructure.

From here, explore Docker Compose for multi-container applications, implement SSL/TLS with reverse proxies, and add centralized logging for better observability. The foundation you built turns infrastructure into version-controlled code that deploys reliably every time.


About the Author

Peter French is the Managing Director at Virtarix, with over 17 years in the tech industry. He has co-founded a cloud storage business, led strategy at a global cloud computing leader, and driven market growth in cybersecurity and data protection.
