How to Host n8n with Docker: A Production Deployment Guide
Learn how to easily host n8n, the powerful workflow automation tool, using Docker. This guide covers setup, configuration, and best practices for a smooth deployment.
Updated February 2026. This article has been reviewed and updated to reflect the latest information.
Docker is the recommended way to self-host n8n. It gives you a clean, reproducible environment that you can spin up in minutes and tear down without leaving traces on your host system. We have deployed n8n in Docker containers for a range of Australian businesses, and it remains the method we reach for first.
This guide walks you through everything from a basic docker run command to a production-ready setup with PostgreSQL, a reverse proxy, automated SSL, and proper backup strategies. The approach is the same whether you are running n8n on a $10/month VPS or a dedicated server in an Australian data centre.
If you are still weighing up whether to self-host at all, have a read through our guide on the benefits of self-hosting n8n first. If you have already made that decision and just need to get it running, you are in the right place.
Why Docker Is the Preferred Deployment Method for n8n
You can install n8n directly on your server using npm, but we stopped recommending that approach to clients years ago. Here is why Docker wins:
- Isolation. n8n and its dependencies live inside a container. No conflicts with other software on your server, no Node.js version headaches.
- Reproducibility. Your entire setup is defined in a docker-compose.yml file. You can recreate the exact same environment on a new server in under five minutes.
- Easy updates. Pull the new image, restart the container. That is the entire update process.
- Rollbacks. Pin your image to a specific version tag. If an update breaks something, roll back to the previous tag in seconds.
- Portability. Move your entire n8n instance between servers or cloud providers with minimal changes.
For any organisation running n8n as part of their business infrastructure rather than just experimenting, Docker is the clear choice.
Prerequisites
Before you start, you will need:
- A Linux server. Ubuntu 22.04 or 24.04 LTS is what we typically use. Debian works fine too. You can use macOS or Windows for local development, but production deployments should be on Linux.
- Docker Engine installed. Follow the official Docker installation guide for your distribution.
- Docker Compose, which ships with Docker Engine on modern installations (as docker compose rather than the older docker-compose).
- A domain name pointed at your server’s IP address, if you want SSL (and you should).
- Basic command line familiarity. You do not need to be a Linux expert, but you should be comfortable with cd, ls, nano/vim, and ssh.
If any of that sounds unfamiliar, our n8n consulting team can handle the infrastructure setup for you so you can focus on building workflows.
Quick Start: Running n8n with Docker
The fastest way to get n8n running is a single command:
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n
This pulls the latest n8n image, maps port 5678 to your host, and creates a Docker volume called n8n_data to persist your data. Open http://localhost:5678 in your browser and you will see the n8n setup screen.
This is fine for a quick test. It is not fine for production. Let us build something proper.
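If you want to confirm the container is actually answering before moving on, a quick check from a second terminal does it (200 means the editor is being served):

```shell
# Print the HTTP status code returned by the local n8n instance.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5678
```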
Production Setup with Docker Compose
A production n8n deployment needs several things the quick start command does not provide: a proper database, environment configuration, automatic restarts, and a reverse proxy with SSL. Docker Compose ties all of this together in a single file.
Create a project directory and the compose file:
mkdir -p /opt/n8n && cd /opt/n8n
The docker-compose.yml File
services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - GENERIC_TIMEZONE=Australia/Brisbane
      - TZ=Australia/Brisbane
      - NODE_ENV=production
      - N8N_HOST=n8n.yourdomain.com.au
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com.au/
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    volumes:
      - n8n_data:/home/node/.n8n
      - n8n_files:/files
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    container_name: n8n-postgres
    restart: unless-stopped
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  n8n_data:
  n8n_files:
  postgres_data:
The .env File
Create a .env file in the same directory to store sensitive values:
POSTGRES_PASSWORD=your-strong-password-here
N8N_ENCRYPTION_KEY=your-random-encryption-key-here
Generate a proper encryption key with:
openssl rand -hex 32
Critical: The N8N_ENCRYPTION_KEY is used to encrypt your stored credentials. If you lose this key, you lose access to every credential saved in n8n. Back it up separately and store it somewhere secure like a password manager or secrets vault.
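If you prefer to script this step, the sketch below generates both secrets and writes the .env file in one go. It is a convenience sketch, not the only way to do it: note that it overwrites any existing .env, so run it once, before the first docker compose up.

```shell
# Create the project directory if it does not exist yet, then write .env.
# umask 077 keeps the file readable only by its owner.
mkdir -p /opt/n8n
umask 077
cat > /opt/n8n/.env <<EOF
POSTGRES_PASSWORD=$(openssl rand -base64 24)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
EOF
```

Remember to copy the generated N8N_ENCRYPTION_KEY into your password manager before you forget it exists.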
Start the Stack
docker compose up -d
Check that everything is running:
docker compose ps
docker compose logs -f n8n
Persistent Data and Volumes
The volume configuration in the compose file above handles three things:
- n8n_data maps to /home/node/.n8n inside the container. This stores the encryption key, custom certificates, and other n8n configuration files.
- n8n_files maps to /files, giving you a shared directory for file-based operations in your workflows.
- postgres_data persists your PostgreSQL database, which holds all your workflows, credentials (encrypted), execution history, and settings.
Without these volumes, every container restart wipes your data. We have seen businesses lose entire workflow libraries because someone spun up n8n without a volume mount. Do not skip this.
You can use bind mounts instead of named volumes if you prefer to control exactly where data lives on the host filesystem:
volumes:
  - /opt/n8n/data:/home/node/.n8n
  - /opt/n8n/files:/files
Bind mounts make backups slightly more straightforward since you know exactly where the files are on disk.
Environment Variables and Configuration
n8n is configured almost entirely through environment variables. Here are the ones we set on most client deployments beyond what is in the compose file above:
environment:
  # Execution settings
  - EXECUTIONS_DATA_PRUNE=true
  - EXECUTIONS_DATA_MAX_AGE=168   # hours; prune after 7 days

  # Security
  - N8N_SECURE_COOKIE=true

  # Logging
  - N8N_LOG_LEVEL=info
  - N8N_LOG_OUTPUT=console

  # External modules (if needed)
  - NODE_FUNCTION_ALLOW_EXTERNAL=*

  # Queue mode for scaling (advanced)
  # - EXECUTIONS_MODE=queue
  # - QUEUE_BULL_REDIS_HOST=redis
The EXECUTIONS_DATA_PRUNE setting is one that catches people out. Without it, your database grows indefinitely with execution history. On a busy instance, this can balloon to tens of gigabytes within months. Set a sensible retention period and let n8n clean up after itself.
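To see whether pruning is keeping up, you can ask PostgreSQL how large the n8n database currently is. This assumes the n8n-postgres container name and credentials from the compose file above:

```shell
# Report the on-disk size of the n8n database.
docker exec n8n-postgres psql -U n8n -d n8n \
  -c "SELECT pg_size_pretty(pg_database_size('n8n'));"
```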
For the full list of available environment variables, refer to the n8n environment variables documentation.
Reverse Proxy Setup
Running n8n directly on port 5678 without a reverse proxy is fine for local development but not for production. You need a reverse proxy to handle SSL termination, serve n8n on standard HTTPS port 443, and add a layer of security.
We recommend Caddy for most deployments because it handles SSL certificates automatically with zero configuration. If your organisation already runs Nginx and your team is comfortable with it, that works too.
Option A: Caddy (Recommended)
Add Caddy to your docker-compose.yml:
caddy:
  image: caddy:2-alpine
  container_name: n8n-caddy
  restart: unless-stopped
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile:ro
    - caddy_data:/data
    - caddy_config:/config
  depends_on:
    - n8n
Add the Caddy volumes to the volumes section:
volumes:
  n8n_data:
  n8n_files:
  postgres_data:
  caddy_data:
  caddy_config:
Create a Caddyfile in your project directory:
n8n.yourdomain.com.au {
    reverse_proxy n8n:5678
}
That is the entire Caddy configuration. Three lines. Caddy automatically obtains and renews Let’s Encrypt certificates for your domain. Clients who have been wrestling with Certbot and Nginx configs are usually surprised by how little there is to it.
When using Caddy, remove the ports mapping from the n8n service (or change it to "127.0.0.1:5678:5678") so n8n is only accessible through the reverse proxy.
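For example, the ports entry in the n8n service would become:

```yaml
# n8n service: bind to loopback so only the reverse proxy
# (and tools on the host itself) can reach port 5678 directly.
ports:
  - "127.0.0.1:5678:5678"
```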
Option B: Nginx with Let’s Encrypt
If you prefer Nginx, here is a configuration that works:
server {
    listen 80;
    server_name n8n.yourdomain.com.au;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name n8n.yourdomain.com.au;

    ssl_certificate /etc/letsencrypt/live/n8n.yourdomain.com.au/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com.au/privkey.pem;

    location / {
        proxy_pass http://localhost:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support - required for the n8n editor
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Increase timeouts for long-running workflows
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
Use Certbot to obtain your SSL certificate:
sudo apt install certbot
sudo certbot certonly --standalone -d n8n.yourdomain.com.au
The WebSocket headers are important. Without them, the n8n editor will not function properly. You will see connection errors and the UI will feel broken.
Updating n8n in Docker
Updates are one of Docker’s real strengths here:
cd /opt/n8n
docker compose pull n8n
docker compose up -d n8n
That pulls the latest image and recreates the container. Your data is safe in the volumes.
Our recommendation: Do not use the latest tag in production. Pin to a specific version:
image: n8nio/n8n:1.73.1
This gives you control over when updates happen. Test the new version in a staging environment first, then update the version number in your compose file and redeploy. We have seen minor n8n updates occasionally change node behaviour in ways that break existing workflows. Pinning versions prevents surprises.
Check the n8n release notes before updating. Pay particular attention to any breaking changes, especially if you are crossing a major version boundary like the move from 1.x to 2.x.
Backup Strategies
A backup you have never tested restoring is not a backup. Here is what we set up for every client deployment.
Database Backups
For PostgreSQL, use pg_dump to create regular backups:
docker exec n8n-postgres pg_dump -U n8n n8n | gzip > /opt/n8n/backups/n8n-db-$(date +%Y%m%d-%H%M%S).sql.gz
Schedule this with a cron job. Daily at minimum, hourly if the instance is critical:
0 * * * * docker exec n8n-postgres pg_dump -U n8n n8n | gzip > /opt/n8n/backups/n8n-db-$(date +\%Y\%m\%d-\%H\%M\%S).sql.gz
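And because a backup you have never restored is not a backup, periodically load the newest dump into a scratch database and confirm it imports cleanly. This sketch assumes the container name and backup path used above; the n8n_restore_test database name is just for the drill:

```shell
# Restore the most recent dump into a throwaway database, then drop it.
latest=$(ls -t /opt/n8n/backups/n8n-db-*.sql.gz | head -n 1)
docker exec n8n-postgres createdb -U n8n n8n_restore_test
gunzip -c "$latest" | docker exec -i n8n-postgres psql -U n8n -d n8n_restore_test
docker exec n8n-postgres dropdb -U n8n n8n_restore_test
```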
Volume Backups
Back up the n8n data volume, which contains the encryption key and any local files:
docker run --rm \
  -v n8n_data:/source:ro \
  -v /opt/n8n/backups:/backup \
  alpine tar czf /backup/n8n-data-$(date +%Y%m%d-%H%M%S).tar.gz -C /source .
Encryption Key
Back up your N8N_ENCRYPTION_KEY separately and store it in a password manager or secrets vault. Without this key, your credential data is unrecoverable even if you have a perfect database backup.
Off-site Storage
Sync your backups to an off-site location. For Australian businesses with data sovereignty requirements, services like AWS S3 in the Sydney region (ap-southeast-2) or Azure Australia East keep your data within the country:
aws s3 sync /opt/n8n/backups/ s3://your-bucket/n8n-backups/ --storage-class STANDARD_IA
Rotate old backups to keep storage costs manageable. We typically keep daily backups for 30 days and weekly backups for 6 months.
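A local rotation pass can be as simple as a find sweep over the backup directory. This is a minimal sketch enforcing the 30-day local window; the longer weekly retention tier is usually easier to handle with your off-site storage's lifecycle rules:

```shell
#!/bin/sh
# Delete local backup files older than 30 days.
# BACKUP_DIR matches the path used earlier in this guide.
BACKUP_DIR="${BACKUP_DIR:-/opt/n8n/backups}"
mkdir -p "$BACKUP_DIR"

find "$BACKUP_DIR" -name 'n8n-db-*.sql.gz' -mtime +30 -delete
find "$BACKUP_DIR" -name 'n8n-data-*.tar.gz' -mtime +30 -delete
```

Run it from the same cron schedule as the backups themselves, after the dump completes.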
Troubleshooting Common Issues
These are the problems we see most often when clients come to us with broken n8n Docker deployments.
Container keeps restarting: Check docker compose logs n8n. Nine times out of ten, it is a database connection issue. Either PostgreSQL has not finished starting (use the depends_on with service_healthy as shown above) or the credentials in the environment variables do not match what PostgreSQL was initialised with.
Webhooks not working: Make sure WEBHOOK_URL is set to your external URL including the protocol (https://). If you are behind a reverse proxy, the proxy needs to forward the Host header correctly. Also confirm that ports 80 and 443 are open in your firewall.
“Bad encryption key” errors: This happens when the N8N_ENCRYPTION_KEY changes between container restarts, or when you restore a database backup without using the matching encryption key. The key and the database must stay in sync.
Permission denied on volumes: The n8n container runs as user node (UID 1000). If you are using bind mounts, make sure the host directories are owned by UID 1000:
sudo chown -R 1000:1000 /opt/n8n/data /opt/n8n/files
Out of memory: n8n can be memory-hungry, especially when processing large datasets. Set memory limits in your compose file and monitor usage:
deploy:
  resources:
    limits:
      memory: 2G
Production Considerations
Once n8n is running and handling real workloads, keep an eye on a few things.
Resource allocation. For most small-to-medium workloads (fewer than 50 active workflows), 2 CPU cores and 2 GB of RAM are sufficient. If you are processing large files, making many API calls in parallel, or running AI-based nodes, budget 4 GB or more. Monitor actual usage with docker stats and adjust.
Execution data pruning. As mentioned earlier, enable EXECUTIONS_DATA_PRUNE and set a reasonable EXECUTIONS_DATA_MAX_AGE. Unpruned execution history is the single most common reason we see n8n instances slow down over time.
Monitoring. At minimum, set up uptime monitoring that hits your n8n instance’s health endpoint. For more thorough monitoring, export metrics to Prometheus or a similar tool. We configure alerts for high memory usage, disk space on backup volumes, and SSL certificate expiry.
Security. Keep your Docker images updated, restrict SSH access to your server, and use a firewall (UFW on Ubuntu makes this simple). If n8n does not need to be accessible from the public internet, put it behind a VPN or restrict access by IP in your reverse proxy configuration.
Scaling. If a single n8n instance is not enough, n8n supports queue mode using Redis, which lets you run multiple worker containers. This is an advanced configuration that most businesses do not need, but it is there when you do.
For complex production deployments, or if you would rather not manage infrastructure yourself, get in touch with our team. We handle n8n hosting and management for organisations across Australia.
Getting Started
If you have followed this guide, you should have a production-ready n8n instance running in Docker with PostgreSQL, a reverse proxy, automated SSL, and proper backups. That covers the foundation for any serious n8n deployment.
The next step is building the workflows that make it worthwhile. If you need a hand with the infrastructure, the workflow design, or both, our n8n consulting team works with Australian businesses every week to deploy and optimise n8n. Book a call and we can talk through your setup.
For more on self-hosting n8n, see our guide on how to self-host n8n which covers the broader decision-making around hosting options beyond Docker.
Frequently Asked Questions
Can I run n8n with Docker on a Raspberry Pi or ARM device?
Yes. The official n8n Docker image supports ARM64 architecture, so it runs on Raspberry Pi 4/5 and other ARM-based devices. Performance will be limited compared to a proper server (expect slower execution times and lower concurrency) but it is perfectly viable for lightweight personal automations or development environments.
How much disk space does an n8n Docker deployment need?
The n8n image itself is roughly 400-500 MB. PostgreSQL adds another 200 MB or so. Beyond that, disk usage depends on your execution history, stored files, and backup retention. For a moderately busy instance with execution pruning enabled, plan for 10-20 GB. Without pruning, the database can grow to 50 GB or more within a year on a busy instance.
Should I use SQLite or PostgreSQL with Docker?
PostgreSQL, every time, for anything beyond quick experiments. SQLite is the default and works for a single-user test instance, but it does not handle concurrent access well and becomes a performance bottleneck as your workflow count and execution volume grow. PostgreSQL also gives you proper backup tools like pg_dump and the ability to move to a managed database service later if needed.
Is it safe to use the latest Docker tag for n8n in production?
We do not recommend it. The latest tag means you get whatever version was most recently published, which could include breaking changes. Pin your image to a specific version number (for example, n8nio/n8n:1.73.1), test updates in a staging environment, and then update the version in your compose file deliberately. This is especially important given n8n’s rapid release cadence, with new versions coming out most weeks.
How do I migrate n8n from a direct npm install to Docker?
Export your workflows from the existing n8n instance using the CLI (n8n export:workflow --all) or the API, then import them into your new Docker-based instance. For credentials, you will need the same N8N_ENCRYPTION_KEY that your original instance used. Copy the .n8n directory from your old installation into the Docker volume, set the matching encryption key in your environment variables, and your credentials should transfer across intact.
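As a rough sketch of that sequence (file names here are illustrative, and the second pair of commands assumes the container name n8n from this guide):

```shell
# On the old npm-based install: export all workflows to a JSON file.
n8n export:workflow --all --output=workflows.json

# Copy the file into the running container and import it.
docker cp workflows.json n8n:/tmp/workflows.json
docker exec n8n n8n import:workflow --input=/tmp/workflows.json
```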
Can I run multiple n8n instances on the same server with Docker?
Yes. Use separate Docker Compose projects with different container names, ports, volumes, and databases. Each instance is fully isolated. This is useful for separating development and production environments, or for running distinct n8n instances for different departments or clients. Just make sure each instance has its own unique N8N_ENCRYPTION_KEY and database.
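A second instance's compose file might differ only in its names and published port. A hypothetical staging instance, run from its own directory with docker compose -p n8n-staging up -d, could look like:

```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n-staging
    restart: unless-stopped
    ports:
      - "127.0.0.1:5679:5678"   # different host port to the production instance
    volumes:
      - n8n_staging_data:/home/node/.n8n

volumes:
  n8n_staging_data:
```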