Guide to n8n Configuration Settings: Environment Variables, Database, Queues, and Production Tuning
Detailed overview of environment variables for n8n with best practices for deployment, database, security, and more.
Updated February 2026.
n8n works out of the box with zero configuration. That sounds great until you realise the defaults are designed for local development: SQLite database, no authentication, basic execution mode, logging turned down low. Run it like that for a real business and you will hit walls within weeks.
We have configured n8n for a range of Australian businesses at this point. The configuration layer is where most of the difference between a stable production instance and a fragile demo lives. This guide covers the settings that matter, why they matter, and how we set them in practice.
If you have not yet set up your n8n instance, start with our guides on how to self-host n8n and how to host n8n with Docker. This post assumes you have a running instance and want to configure it properly.
How n8n Configuration Works
n8n reads configuration from environment variables. You can set these in your Docker Compose file, a .env file, your system environment, or your hosting platform’s settings panel. There is no GUI for most of these settings. You set them before n8n starts, and they apply at startup.
The naming convention is straightforward. Settings are namespaced with prefixes such as N8N_, DB_, EXECUTIONS_, and QUEUE_, with underscores separating the parts. Boolean values are the strings true or false. Some settings accept comma-separated lists.
Here is a minimal production .env file that we use as a starting point for most client deployments:
# Core
N8N_HOST=0.0.0.0
N8N_PORT=5678
N8N_PROTOCOL=https
WEBHOOK_URL=https://n8n.yourdomain.com.au/
N8N_EDITOR_BASE_URL=https://n8n.yourdomain.com.au/
GENERIC_TIMEZONE=Australia/Brisbane
# Database
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your-secure-password
# Execution
EXECUTIONS_MODE=regular
EXECUTIONS_TIMEOUT=300
EXECUTIONS_TIMEOUT_MAX=600
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
# Security
N8N_BASIC_AUTH_ACTIVE=false
N8N_USER_MANAGEMENT_DISABLED=false
N8N_ENCRYPTION_KEY=generate-a-random-64-char-string
We will break each section down in detail below.
Database Configuration: SQLite vs PostgreSQL
This is the single most important configuration decision you will make, and we have a firm position on it: use PostgreSQL from day one.
The default SQLite database works fine for testing and light personal use. But SQLite uses file-level locking, which means only one write operation can happen at a time. When you have multiple workflows executing concurrently, and in production you always will, SQLite becomes a bottleneck. We have seen instances grind to a halt once they pass roughly 30-40 active workflows with moderate execution frequency.
Migrating from SQLite to PostgreSQL on a live instance is possible but painful. You need to export your workflows, credentials, and execution data, then import them into the new database. Credentials are encrypted with your N8N_ENCRYPTION_KEY, so you need to preserve that exactly. We have done this migration for clients more times than we would like, and every time we think the same thing: this would have cost nothing to avoid.
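When we do have to migrate, the n8n CLI handles most of the heavy lifting. A rough sketch of the process (the paths are illustrative, and note that the CLI exports workflows and credentials, not execution history):

```shell
# On the old SQLite-backed instance: export everything.
n8n export:workflow --all --output=/backup/workflows.json
n8n export:credentials --all --output=/backup/credentials.json

# Repoint n8n at PostgreSQL (DB_TYPE=postgresdb and friends), keep the
# exact same N8N_ENCRYPTION_KEY, and start it once to create the schema.

# Then import into the new instance.
n8n import:workflow --input=/backup/workflows.json
n8n import:credentials --input=/backup/credentials.json
```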
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=localhost
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your-secure-password
DB_POSTGRESDB_SCHEMA=public
DB_POSTGRESDB_SSL_ENABLED=false
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
If your PostgreSQL instance is on a separate server or a managed service like AWS RDS or Supabase, enable SSL and set the appropriate certificate authority settings. For most Australian hosting setups where the database sits on the same server or within a private network, SSL between n8n and the database is not strictly necessary, and the example above leaves it off for a same-host database. For anything that crosses a network boundary, we enable it as a matter of habit.
Connection pooling is worth considering if you are running queue mode (covered below). n8n does not natively support external connection pooling, but placing PgBouncer in front of PostgreSQL helps manage connections under heavy load.
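A minimal sketch of that setup (pgbouncer.ini; the pool sizes and file paths are illustrative, and n8n's DB_POSTGRESDB_PORT would then point at 6432 instead of 5432). We use session pooling here because transaction-level pooling can break prepared statements:

```ini
[databases]
n8n = host=localhost port=5432 dbname=n8n

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
default_pool_size = 20
max_client_conn = 200
```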
Execution Settings
Execution settings control how n8n runs your workflows, how long they can run, and what happens with the data afterwards.
EXECUTIONS_MODE=regular
EXECUTIONS_TIMEOUT=300
EXECUTIONS_TIMEOUT_MAX=600
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
Timeout
EXECUTIONS_TIMEOUT sets how many seconds a workflow can run before n8n kills it. The default is -1, meaning no timeout. Never run production with no timeout. A single workflow stuck in an infinite loop or waiting on a dead API endpoint will consume resources indefinitely. We set 300 seconds (five minutes) as a baseline and increase it only for specific workflows that genuinely need longer, such as large data processing jobs.
EXECUTIONS_TIMEOUT_MAX caps the maximum timeout that individual workflows can set via their own settings. This prevents someone from accidentally setting a workflow timeout to 24 hours.
Execution Data
n8n stores the input and output data of every node for every execution. That is great for debugging but will bloat your database fast if left unchecked.
EXECUTIONS_DATA_PRUNE=true enables automatic cleanup of old execution data. EXECUTIONS_DATA_MAX_AGE controls how many hours of execution history to retain. We typically set 168 hours (seven days) for production instances. That gives enough history to debug issues without letting the database grow unbounded.
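If you want to confirm pruning is actually keeping the database in check, you can measure the execution tables directly in psql. The table names here come from n8n's PostgreSQL schema (recent versions split node payloads into a separate execution_data table), so adjust if your version differs:

```sql
-- Approximate disk usage of execution history, including indexes
SELECT pg_size_pretty(pg_total_relation_size('execution_entity')) AS executions,
       pg_size_pretty(pg_total_relation_size('execution_data')) AS payloads;
```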
For workflows that process sensitive data such as customer records or financial information, consider setting EXECUTIONS_DATA_SAVE_ON_SUCCESS=none at the workflow level. This means successful executions do not store their data, reducing both storage and privacy exposure.
Queue Mode with Redis and BullMQ
If you have more than a handful of concurrent workflows in production, you should be running queue mode. In the default regular mode, n8n runs everything in a single process. If that process crashes or runs out of memory, every active workflow dies with it.
Queue mode separates the work. The main n8n process handles the editor, webhook reception, and scheduling. Actual workflow execution gets offloaded to separate worker processes via a Redis-backed queue (BullMQ under the hood).
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=localhost
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_PASSWORD=your-redis-password
QUEUE_BULL_REDIS_DB=0
QUEUE_WORKER_TIMEOUT=60
QUEUE_RECOVERY_INTERVAL=60
You then start separate worker processes alongside your main n8n instance:
# Main process (editor + webhooks + scheduling)
n8n start
# Worker process (execution)
n8n worker --concurrency=10
The --concurrency flag controls how many workflows a single worker will execute in parallel. We typically start at 10 and adjust based on the resource profile of the workflows. CPU-heavy workflows like image processing or large data transformations need lower concurrency. Lightweight API-call workflows can handle higher concurrency.
You can run multiple workers across different servers for horizontal scaling. This is how we set up n8n for clients with high execution volumes.
Important: Queue mode requires Redis. If Redis goes down, no new executions will start. We always configure Redis with persistence enabled (appendonly yes) and monitor it as a critical dependency.
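Putting the pieces together, a queue-mode deployment in Docker Compose might look like the sketch below. The service names, shared .env file, and Redis password are our assumptions; adapt them to your stack:

```yaml
services:
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --requirepass your-redis-password

  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    env_file: .env   # EXECUTIONS_MODE=queue and QUEUE_BULL_REDIS_* live here
    depends_on:
      - redis

  n8n-worker:
    image: docker.n8n.io/n8nio/n8n
    command: worker --concurrency=10
    env_file: .env   # workers need the same DB, Redis, and encryption key settings
    depends_on:
      - redis
```

Scaling out is then a matter of adding more worker services, or running `docker compose up --scale n8n-worker=3` since the workers bind no ports.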
Authentication and User Management
n8n has built-in user management that supports multiple users with role-based access. This is enabled by default in recent versions.
N8N_USER_MANAGEMENT_DISABLED=false
N8N_EMAIL_MODE=smtp
N8N_SMTP_HOST=smtp.example.com
N8N_SMTP_PORT=587
N8N_SMTP_USER=your-smtp-user
N8N_SMTP_PASS=your-smtp-password
[email protected]
# Port 587 uses STARTTLS, so SSL stays off; set N8N_SMTP_SSL=true with port 465
N8N_SMTP_SSL=false
The SMTP settings are needed for user invitations and password resets. Without SMTP configured, you can still create users, but they will not receive invitation emails and you will need to share setup links manually.
On older n8n versions, where only a single user or a small team needed access, basic authentication provided a simpler alternative (note that the N8N_BASIC_AUTH_* variables were removed in n8n 1.0, which made user management standard):
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=a-strong-password
One more thing: N8N_ENCRYPTION_KEY encrypts stored credentials (API keys, OAuth tokens, database passwords). Lose this key and you lose access to every credential in n8n. Generate a strong random string and store it somewhere safe outside of n8n. We keep ours in a password manager and in a sealed envelope in a physical safe. That might sound excessive until you have lost one.
N8N_ENCRYPTION_KEY=a-random-64-character-string-store-this-securely
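One way to generate a suitable key (any cryptographically random 64-character string works; this uses OpenSSL, which is available on most servers):

```shell
# 32 random bytes, hex-encoded = a 64-character key
openssl rand -hex 32
```

Set the output as N8N_ENCRYPTION_KEY before first start. Changing it later leaves every stored credential undecryptable.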
Webhook Configuration
Webhooks are how external services trigger n8n workflows. Getting the URL configuration wrong is one of the most common issues we see.
WEBHOOK_URL=https://n8n.yourdomain.com.au/
N8N_EDITOR_BASE_URL=https://n8n.yourdomain.com.au/
WEBHOOK_URL tells n8n what base URL to use when generating webhook URLs for your workflows. If this is not set correctly, webhook trigger nodes will display incorrect URLs, and incoming webhook calls will fail.
If n8n sits behind a reverse proxy (which it should in production), make sure your proxy is configured to pass the original headers through. For Nginx:
location / {
proxy_pass http://localhost:5678;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
chunked_transfer_encoding off;
proxy_buffering off;
}
The chunked_transfer_encoding off and proxy_buffering off directives are important for server-sent events, which n8n uses for real-time execution updates in the editor.
Timezone and Locale Settings
n8n defaults to the America/New_York timezone. If you are running workflows for an Australian business and do not change this, every scheduled trigger and every date function will be wrong.
GENERIC_TIMEZONE=Australia/Brisbane
Use the appropriate IANA timezone for your location: Australia/Sydney, Australia/Melbourne, Australia/Brisbane, Australia/Perth, Australia/Adelaide, or Australia/Hobart. Note that Brisbane does not observe daylight saving time, which makes it a simpler choice for businesses that operate across multiple Australian states.
This setting affects cron-based triggers, the Schedule Trigger node, and date/time functions within expressions. Timestamps are still stored as UTC internally, but the timezone controls how times are displayed and interpreted.
Logging and Diagnostics
The default logging level is info, which is fine for production. Bump it to debug only when actively troubleshooting, as debug logging is extremely verbose and will fill your disk.
N8N_LOG_LEVEL=info
N8N_LOG_OUTPUT=console,file
N8N_LOG_FILE_LOCATION=/home/node/.n8n/logs/n8n.log
N8N_LOG_FILE_SIZE_MAX=16
N8N_LOG_FILE_COUNT_MAX=100
N8N_DIAGNOSTICS_ENABLED=false
N8N_DIAGNOSTICS_ENABLED controls whether n8n sends anonymous telemetry data. We disable it for client deployments as a matter of policy. It has no functional impact, but our clients prefer to know exactly what data leaves their servers.
External Storage for Binary Data
By default, n8n stores binary data (files, images, documents processed by workflows) in the database. If your workflows handle a lot of files, this bloats the database and slows things down.
Configure external storage to offload binary data to the filesystem or an S3-compatible bucket:
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
N8N_BINARY_DATA_STORAGE_PATH=/data/n8n-binary
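If n8n runs in Docker, that path lives inside the container, so mount it as a volume or the binary data vanishes when the container is recreated. A Compose sketch (the host path is illustrative):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    volumes:
      - /srv/n8n-binary:/data/n8n-binary   # must match N8N_BINARY_DATA_STORAGE_PATH
```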
For S3-compatible storage (works with AWS S3, MinIO, Backblaze B2, and other providers):
N8N_DEFAULT_BINARY_DATA_MODE=s3
N8N_EXTERNAL_STORAGE_S3_HOST=s3.ap-southeast-2.amazonaws.com
N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME=your-n8n-binary-data
N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION=ap-southeast-2
N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY=your-access-key
N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET=your-secret-key
Using ap-southeast-2 (Sydney) as your S3 region keeps binary data within Australia, which matters for clients with data sovereignty requirements.
Security Settings
You can restrict which nodes are available and lock down external code execution:
NODES_EXCLUDE=["n8n-nodes-base.executeCommand","n8n-nodes-base.readWriteFile"]
N8N_BLOCK_ENV_ACCESS_IN_NODE=true
NODES_EXCLUDE removes specific nodes from the editor entirely. We typically exclude the Execute Command node and direct file system access nodes in multi-user environments where not every user should have shell access to the server.
N8N_BLOCK_ENV_ACCESS_IN_NODE prevents workflows from reading server environment variables via expressions. Set this if your environment variables contain secrets that not all n8n users should see.
For workflows that need access to custom configuration values, use workflow-level static data or a dedicated credentials entry rather than environment variables.
Performance Tuning for Production
A few more settings that affect performance under load:
Node.js memory: n8n runs on Node.js, which defaults to a relatively low memory limit. For production instances processing large datasets, increase it:
NODE_OPTIONS=--max-old-space-size=4096
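In Docker, NODE_OPTIONS goes into the container environment, and the heap size should sit comfortably below the container's own memory limit so the kernel does not kill the process before Node can manage its heap. A Compose sketch (values illustrative):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096   # 4 GB Node heap
    mem_limit: 5g   # leave headroom above the heap
```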
Execution data pruning: Worth repeating from earlier. Without pruning, your database will grow without limit. Enable it and set a sensible retention window.
Webhook timeout: If your webhook-triggered workflows do significant processing before responding, increase the webhook timeout to avoid the caller receiving a timeout error:
N8N_DEFAULT_WEBHOOK_TIMEOUT=120
Execution process model: In regular (non-queue) mode, EXECUTIONS_PROCESS controls where executions run:
EXECUTIONS_PROCESS=main
Running executions in the main process (main, rather than own, which forks a separate process per execution) uses less memory, but a crash in one workflow can affect others. For true isolation and proper concurrency control, use queue mode.
Common Configuration Mistakes and Fixes
We see the same problems come up again and again when we take on new clients. Here are the ones that cause the most grief:
Missing or incorrect WEBHOOK_URL. Symptoms: webhook URLs in the editor show localhost:5678 instead of your actual domain. Webhooks work in testing but fail when external services try to call them. Fix: set WEBHOOK_URL to your full external URL including the protocol.
Lost N8N_ENCRYPTION_KEY. Symptoms: after a redeployment, all credentials show as invalid. Fix: there is no fix. You need to re-enter every credential. Prevention: store the key securely and include it in every deployment configuration.
Running SQLite in production. Symptoms: slow editor, timeouts during workflow execution, database lock errors in logs. Fix: migrate to PostgreSQL. Our guide on hosting n8n with Docker covers this setup.
No execution timeout. Symptoms: stuck workflows consuming memory and CPU indefinitely, eventual out-of-memory crashes. Fix: set EXECUTIONS_TIMEOUT to a reasonable value.
Wrong timezone. Symptoms: scheduled workflows fire at the wrong time, date calculations are off by hours. Fix: set GENERIC_TIMEZONE to your Australian timezone.
No execution data pruning. Symptoms: database grows to tens of gigabytes over months, queries slow down, backups take forever. Fix: enable pruning with EXECUTIONS_DATA_PRUNE=true and set EXECUTIONS_DATA_MAX_AGE.
Need Help Configuring n8n for Production?
Getting these settings right early saves a lot of pain later. If you would rather skip the trial-and-error phase, book a call with our team and we will configure your instance properly from the start.
We have been doing this across Australia long enough to have seen most of the ways it can go wrong. New instance or broken production deployment, we can help.
Frequently Asked Questions
What is the most important n8n configuration setting to change from default?
The database type. Switch from SQLite to PostgreSQL before you deploy to production. SQLite cannot handle concurrent writes, and in a production n8n instance with multiple active workflows, concurrent writes are constant. Every other configuration issue is recoverable. Running SQLite in production until your database locks up and corrupts is not always recoverable.
Can I change n8n configuration settings without restarting the instance?
No. n8n reads environment variables at startup. Any configuration change requires a restart. In queue mode, you also need to restart the worker processes. We recommend having a brief maintenance window or using a zero-downtime deployment strategy with Docker and a reverse proxy to minimise disruption.
How do I set up n8n queue mode with Docker Compose?
You need three services in your Docker Compose file: the main n8n process, at least one n8n worker, and a Redis instance. All three services share the same environment variables, with the main process started using n8n start and the workers started using n8n worker. Our Docker hosting guide walks through the full Docker Compose configuration including queue mode.
What timezone should I use for n8n in Australia?
Set GENERIC_TIMEZONE to the IANA timezone for your primary business location. If your business operates across multiple states with different daylight saving rules, Australia/Brisbane is often the simplest choice because Queensland does not observe daylight saving, so your scheduled workflows will not shift by an hour twice a year. If consistency with your local clock matters more, use the timezone for your state.
How do I secure n8n credentials if multiple people use the instance?
Set a strong N8N_ENCRYPTION_KEY to encrypt credentials at rest. Enable user management so each person has their own account. Set N8N_BLOCK_ENV_ACCESS_IN_NODE=true to prevent workflows from reading server environment variables, and use NODES_EXCLUDE to remove dangerous nodes like Execute Command from users who do not need them. For sensitive deployments, also restrict the n8n editor behind a VPN or IP allowlist at the reverse proxy level.
Stuck on something? Get in touch and we will sort it out.