Deploying EmailEngine

This guide helps you choose the right deployment method and provides best practices for production deployments.

Deployment Options

EmailEngine can be deployed in various ways depending on your infrastructure and requirements:

Method                Complexity   Best For                   Scaling
Docker                Low          Quick start, containers    Vertical
Docker Compose        Low          Development, small teams   Limited
Kubernetes            High         Enterprise, cloud-native   Vertical
SystemD Service       Medium       Bare metal, VPS            Vertical
Render.com            Low          Managed hosting            Vertical
Nginx Reverse Proxy   Medium       Production with SSL        N/A

Quick Comparison

Docker

Pros:

  • Quick to set up
  • Isolated environment
  • Easy updates
  • Portable

Cons:

  • Requires Docker knowledge
  • Single container limitations

When to use: Quick start, development, simple production

Docker deployment guide →


Docker Compose

Pros:

  • Multi-container orchestration
  • Easy configuration
  • Development-friendly
  • Includes Redis setup

Cons:

  • Not production-grade at scale
  • Limited high-availability options

When to use: Development, staging, small production

Docker Compose setup →


Kubernetes

Pros:

  • Production-ready
  • High availability
  • Auto-scaling
  • Self-healing

Cons:

  • Complex setup
  • Requires K8s knowledge
  • Higher resource overhead

When to use: Enterprise, large scale, cloud deployments

Kubernetes deployment →


SystemD Service

Pros:

  • Native Linux integration
  • No containerization overhead
  • Full system access
  • Easy log management

Cons:

  • Manual dependency management
  • OS-specific
  • Manual updates

When to use: VPS, bare metal, traditional Linux servers

SystemD service guide →


Render.com

Pros:

  • Fully managed
  • Zero DevOps
  • Auto SSL
  • Built-in monitoring

Cons:

  • Vendor lock-in
  • Cost at scale
  • Limited customization

When to use: Quick deployment, prototyping, small teams

Render deployment →


Nginx Reverse Proxy

Pros:

  • SSL/TLS termination
  • Load balancing
  • Security hardening
  • Rate limiting

Cons:

  • Additional component
  • Configuration complexity

When to use: Production deployments requiring HTTPS

Nginx proxy setup →
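
As a rough illustration, a minimal Nginx configuration that terminates TLS and proxies to EmailEngine on port 3000 might look like the sketch below. The hostname emailengine.example.com and the certificate paths are placeholders; adjust them for your environment.

server {
    listen 80;
    server_name emailengine.example.com;
    # Redirect plain HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name emailengine.example.com;

    # Placeholder certificate paths
    ssl_certificate     /etc/letsencrypt/live/emailengine.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/emailengine.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}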

Choosing the Right Method

For Development

Recommended: Docker Compose

services:
  redis:
    image: redis:7-alpine
  emailengine:
    image: postalsys/emailengine:v2
    ports:
      - "3000:3000"
    environment:
      - EENGINE_REDIS=redis://redis:6379/2

Why: Quick setup, easy to tear down, includes all dependencies.
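
To bring the stack up and follow the logs, the usual Compose workflow applies; tear everything down again when you are done experimenting:

# Start Redis and EmailEngine in the background
docker-compose up -d

# Follow EmailEngine logs
docker-compose logs -f emailengine

# Tear down when finished
docker-compose down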

For Production (Small Scale)

Recommended: Docker + Render.com OR SystemD + Nginx

Docker on Render:

  • Managed hosting
  • Auto SSL
  • Simple deployment

SystemD + Nginx:

  • Full control
  • Cost-effective
  • VPS-friendly
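
A minimal unit file sketch for this setup, assuming the emailengine binary is installed at /usr/local/bin/emailengine (as in the update steps later in this guide); the /etc/emailengine.env file and the emailengine system user are hypothetical and should be replaced with your own choices:

[Unit]
Description=EmailEngine
After=network.target redis.service

[Service]
# Hypothetical environment file holding EENGINE_* variables
EnvironmentFile=/etc/emailengine.env
ExecStart=/usr/local/bin/emailengine
Restart=always
RestartSec=5
# Hypothetical unprivileged user for running the service
User=emailengine

[Install]
WantedBy=multi-user.target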

For Production (Large Scale)

Recommended: Kubernetes

Features needed:

  • High availability
  • Auto-scaling
  • Load balancing
  • Health checks
  • Rolling updates
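
As a sketch only, a Deployment that keeps EmailEngine at a single replica while still getting health checks and automatic restarts from Kubernetes could look roughly like this; the in-cluster Redis address and probe timings are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: emailengine
spec:
  replicas: 1              # EmailEngine must run as a single instance
  strategy:
    type: Recreate         # avoid two instances overlapping during updates
  selector:
    matchLabels:
      app: emailengine
  template:
    metadata:
      labels:
        app: emailengine
    spec:
      containers:
        - name: emailengine
          image: postalsys/emailengine:v2
          ports:
            - containerPort: 3000
          env:
            - name: EENGINE_REDIS
              value: redis://redis:6379/2   # assumed in-cluster Redis service
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10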

Production Checklist

Before deploying to production, ensure you have:

Infrastructure

  • Redis 6.0+ deployed with persistence enabled (a redis.conf excerpt follows this list)
  • Sufficient memory (1-2 MB per mailbox)
  • Fast network connection to Redis (< 5ms latency)
  • HTTPS/TLS configured
  • Firewall rules configured
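
For the persistence item above, a common redis.conf baseline is to enable AOF alongside the default RDB snapshots; tune these values to your durability and performance needs:

# redis.conf excerpt
appendonly yes
appendfsync everysec
save 900 1
save 300 10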

Configuration

  • Strong EENGINE_SECRET (32+ characters) configured for field encryption (a generation example follows this list)
  • OAuth2 credentials configured
  • Webhook endpoints configured
  • Base URL set correctly
  • License key activated
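
One way to generate a sufficiently strong value for EENGINE_SECRET:

# Produces a random 64-character hex string (32 bytes of entropy)
openssl rand -hex 32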

Monitoring

  • Prometheus metrics enabled
  • Log aggregation configured
  • Health check endpoints monitored
  • Alerts configured for errors
  • Backup strategy for Redis

Security

  • Secrets stored securely (not in code)
  • Network access restricted
  • API tokens rotated regularly
  • Redis password protected
  • Regular security updates

Complete security checklist →

Scaling Strategies

Vertical Scaling (Only Supported Method)

No Horizontal Scaling

EmailEngine does NOT support running multiple instances against the same Redis. Each instance would independently sync all accounts, causing conflicts and resource waste.

Increase resources on single instance:

  • More CPU cores (increase EENGINE_WORKERS)
  • More RAM (more concurrent accounts)
  • Faster network (reduce latency)

Configuration:

EENGINE_WORKERS=16           # Match CPU cores
EENGINE_WORKERS_WEBHOOKS=8
EENGINE_WORKERS_SUBMIT=4

Good for: Several thousand accounts per instance

Manual Sharding (Advanced): For very large deployments, you can run completely separate EmailEngine instances with separate Redis databases and manually distribute accounts across them. This requires your application to route requests appropriately.
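
A rough sketch of such a sharded setup with Docker Compose, assuming two independent shards that use separate Redis databases; the service names, ports, and database numbers are illustrative, and routing accounts to the right shard remains your application's responsibility:

services:
  redis:
    image: redis:7-alpine
  emailengine-shard-1:
    image: postalsys/emailengine:v2
    ports:
      - "3001:3000"
    environment:
      - EENGINE_REDIS=redis://redis:6379/1   # shard 1 uses Redis DB 1
  emailengine-shard-2:
    image: postalsys/emailengine:v2
    ports:
      - "3002:3000"
    environment:
      - EENGINE_REDIS=redis://redis:6379/2   # shard 2 uses Redis DB 2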

Scaling guide →

High Availability

Since EmailEngine doesn't support multiple instances, focus on Redis high availability:

Requirements:

  1. Single EmailEngine instance (primary)
  2. Standby EmailEngine instance (cold standby, not running)
  3. Redis Sentinel or Cluster (auto-failover)
  4. Persistent storage for Redis
  5. Health monitoring to detect failures

Architecture Example

Failover Process:

  1. Health monitor detects primary failure
  2. Manually start the standby instance (or use an orchestration tool; a watchdog sketch follows this list)
  3. Standby connects to Redis Sentinel (gets current master)
  4. Service resumes with minimal downtime
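
The first two steps above could be automated with a small watchdog running on the standby host. This is only a sketch: the primary's address is a placeholder and it assumes the standby is managed by a systemd unit named emailengine.

#!/usr/bin/env bash
# Naive failover watchdog: if the primary's health endpoint stops responding,
# start the local (standby) EmailEngine service.
PRIMARY_URL="http://primary.internal:3000/health"   # placeholder primary address

while true; do
  if ! curl -fsS --max-time 5 "$PRIMARY_URL" > /dev/null; then
    echo "Primary unhealthy, starting standby EmailEngine"
    systemctl start emailengine
    break
  fi
  sleep 15
done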

Health Check Endpoint

curl http://localhost:3000/health

Response:

{
  "success": true
}

The health endpoint verifies that all IMAP workers are running and Redis is accessible. It returns a 500 error if any checks fail.
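
If you run EmailEngine in Docker, the same endpoint can back a container health check. A Compose sketch, assuming curl is available inside the image (swap in wget or another HTTP client if it is not):

services:
  emailengine:
    image: postalsys/emailengine:v2
    healthcheck:
      # Assumes curl exists in the image; adjust the command if needed
      test: ["CMD-SHELL", "curl -fsS http://127.0.0.1:3000/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3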

Environment-Specific Configuration

Development

# .env.development
NODE_ENV=development
EENGINE_LOG_LEVEL=trace
EENGINE_PORT=3001
EENGINE_REDIS=redis://localhost:6379/8

Staging

# .env.staging
NODE_ENV=production
EENGINE_LOG_LEVEL=debug
EENGINE_SETTINGS='{"serviceUrl":"https://staging-email.example.com"}'
EENGINE_REDIS=redis://staging-redis:6379/2

Production

# .env.production
NODE_ENV=production
EENGINE_LOG_LEVEL=info
EENGINE_SETTINGS='{"serviceUrl":"https://emailengine.example.com"}'
EENGINE_REDIS=redis://prod-redis:6379/2
EENGINE_SECRET=${ENCRYPTION_KEY}

Common Deployment Patterns

Pattern 1: Single Server

Use case: Small teams, < 100 accounts

Setup guide →


Pattern 2: Managed Platform

Use case: Quick deployment, minimal DevOps

Render deployment →


Pattern 3: Kubernetes Cluster

Use case: Enterprise, high availability

Note: EmailEngine runs as a single instance only. Kubernetes is used for container orchestration, health monitoring, and automatic restarts rather than horizontal scaling.

Kubernetes guide →

Migration & Updates

Version Updates

Docker:

docker pull postalsys/emailengine:v2
docker stop emailengine
docker rm emailengine
docker run ... postalsys/emailengine:v2

SystemD:

# Download new version
wget https://go.emailengine.app/emailengine.tar.gz
tar xzf emailengine.tar.gz
sudo mv emailengine /usr/local/bin/
sudo chmod +x /usr/local/bin/emailengine

# Restart service
systemctl restart emailengine

Kubernetes:

kubectl set image deployment/emailengine \
emailengine=postalsys/emailengine:v2
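
To watch the rollout complete and confirm the replacement pod becomes ready:

kubectl rollout status deployment/emailengine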

Updates with Brief Downtime

Since EmailEngine doesn't support multiple instances, updates will have brief downtime:

Kubernetes recreate strategy:

spec:
  replicas: 1
  strategy:
    type: Recreate

Docker Compose:

docker-compose up -d --no-deps --build emailengine

Backup Before Updates

# Backup Redis data
redis-cli --rdb /backup/dump.rdb

# Or use BGSAVE
redis-cli BGSAVE
cp /var/lib/redis/dump.rdb /backup/

Monitoring & Observability

Metrics

The Prometheus metrics endpoint is available at /metrics on the main API server (same port as the web interface and API). It requires authentication with a token that has the metrics scope.

Access metrics:

curl http://localhost:3000/metrics \
-H "Authorization: Bearer YOUR_METRICS_TOKEN"

Logging

Docker logs:

docker logs -f emailengine

SystemD logs:

journalctl -u emailengine -f

Log aggregation:

  • ELK Stack (Elasticsearch, Logstash, Kibana)
  • Grafana Loki
  • Datadog
  • CloudWatch
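
Whichever aggregation tool you choose, it is worth capping local Docker log growth as well; a Compose logging sketch using the default json-file driver:

services:
  emailengine:
    logging:
      driver: json-file
      options:
        max-size: "50m"   # rotate after 50 MB
        max-file: "5"     # keep at most 5 rotated files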

Monitoring setup →