
Zero-Cost Docker Deployment: Running Aegis2FA in Production

Complete guide to deploying a Node.js app with Docker, PostgreSQL, and Redis using free tier services. Includes Docker Compose, health checks, and monitoring.

#docker #deployment #devops #backend

The Zero-Cost Challenge

I wanted to deploy Aegis2FA without spending a dime. The goal:

  • PostgreSQL database
  • Redis for caching and rate limiting
  • Node.js backend
  • HTTPS with free SSL certificates
  • Monitoring and logging
  • $0/month cost

Spoiler: I achieved it. Here's how.

The Stack

  • Docker - Containerization
  • Docker Compose - Multi-container orchestration
  • Fly.io - Free tier (3 VMs, 3GB storage)
  • Supabase - Free PostgreSQL (500MB)
  • Upstash - Free Redis (10K commands/day)
  • Let's Encrypt - Free SSL certificates

Docker Configuration

Backend Dockerfile

# Multi-stage build for smaller images
FROM node:20-alpine AS builder
 
WORKDIR /app
 
# Copy package files
COPY package*.json ./
COPY prisma ./prisma/
 
# Install dependencies
RUN npm ci
 
# Copy source code
COPY . .
 
# Generate Prisma client
RUN npx prisma generate
 
# Build TypeScript
RUN npm run build
 
# Production stage
FROM node:20-alpine
 
WORKDIR /app
 
# Install dumb-init for proper signal handling
RUN apk add --no-cache dumb-init
 
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
 
# Copy built app and dependencies
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/prisma ./prisma
COPY --chown=nodejs:nodejs package*.json ./
 
USER nodejs
 
EXPOSE 3000
 
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => { process.exit(r.statusCode === 200 ? 0 : 1); })"
 
# Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]
 
CMD ["node", "dist/main.js"]
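
dumb-init forwards SIGTERM to the Node process, but the app still has to act on it and drain connections before Docker's stop timeout kills the container. A minimal sketch of that handler (assuming the app exposes its `http.Server`; `registerShutdown` is a hypothetical helper name, not part of the Aegis2FA codebase):

```typescript
import http from 'node:http';

// Stop accepting new connections on SIGTERM/SIGINT, then exit cleanly.
// dumb-init delivers the signal; this code decides what to do with it.
export function registerShutdown(server: http.Server): void {
  const shutdown = (signal: string) => {
    console.log(`Received ${signal}, closing server...`);
    // close() stops new connections and waits for in-flight requests
    server.close(() => process.exit(0));
    // Force-exit if connections refuse to drain before Docker's stop timeout
    setTimeout(() => process.exit(1), 10_000).unref();
  };
  process.on('SIGTERM', () => shutdown('SIGTERM'));
  process.on('SIGINT', () => shutdown('SIGINT'));
}
```

Call it once at startup, right after `server.listen(...)`.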

Optimizations

# .dockerignore
node_modules
npm-debug.log
.env
.env.local
dist
coverage
.git
.github
*.md
tests
.vscode
.idea

This reduced the build context from 2GB to 150MB, making every build and deploy noticeably faster.

Docker Compose for Local Development

version: '3.8'
 
services:
  # PostgreSQL database
  postgres:
    image: postgres:15-alpine
    container_name: aegis2fa-postgres
    environment:
      POSTGRES_DB: aegis2fa
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 10s
      timeout: 5s
      retries: 5
 
  # Redis cache
  redis:
    image: redis:7-alpine
    container_name: aegis2fa-redis
    command: redis-server --appendonly yes
    ports:
      - '6379:6379'
    volumes:
      - redis-data:/data
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 10s
      timeout: 3s
      retries: 5
 
  # API backend
  api:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: aegis2fa-api
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://postgres:postgres@postgres:5432/aegis2fa
      REDIS_URL: redis://redis:6379
      JWT_ACCESS_SECRET: dev-secret-access
      JWT_REFRESH_SECRET: dev-secret-refresh
      PORT: 3000
    ports:
      - '3000:3000'
    volumes:
      - ./src:/app/src
      - ./prisma:/app/prisma
    command: npm run dev
    restart: unless-stopped
    healthcheck:
      test: ['CMD', 'node', '-e', "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"]
      interval: 30s
      timeout: 3s
      retries: 3
 
  # Nginx reverse proxy (optional for local dev)
  nginx:
    image: nginx:alpine
    container_name: aegis2fa-nginx
    depends_on:
      - api
    ports:
      - '80:80'
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    restart: unless-stopped
 
volumes:
  postgres-data:
  redis-data:

Running Locally

# Start all services
docker-compose up -d
 
# View logs
docker-compose logs -f api
 
# Run migrations
docker-compose exec api npx prisma migrate deploy
 
# Stop services
docker-compose down
 
# Clean up (including volumes)
docker-compose down -v

Production Deployment on Fly.io

fly.toml Configuration

app = "aegis2fa"
primary_region = "iad"
 
[build]
  dockerfile = "Dockerfile"
 
[env]
  NODE_ENV = "production"
  PORT = "8080"
 
[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = false
  auto_start_machines = true
  min_machines_running = 1
 
  [[http_service.checks]]
    grace_period = "10s"
    interval = "30s"
    method = "GET"
    timeout = "5s"
    path = "/health"
 
[[vm]]
  cpu_kind = "shared"
  cpus = 1
  memory_mb = 256
 
[metrics]
  port = 9091
  path = "/metrics"
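
Fly.io can also run migrations automatically on every deploy via a release command, which executes in a temporary machine after the image builds and before new machines start. A sketch, assuming the Prisma CLI and schema ship in the image:

```toml
[deploy]
  # Runs once per deploy, before new machines receive traffic
  release_command = "npx prisma migrate deploy"
```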

Deploy Script

#!/bin/bash
set -e
 
echo "🚀 Deploying Aegis2FA to Fly.io..."
 
# Set secrets (one-time)
fly secrets set \
  DATABASE_URL="postgresql://user:pass@host:5432/db" \
  REDIS_URL="rediss://default:pass@host:6379" \
  JWT_ACCESS_SECRET="$(openssl rand -base64 32)" \
  JWT_REFRESH_SECRET="$(openssl rand -base64 32)" \
  ENCRYPTION_KEY="$(openssl rand -hex 32)"
 
# Deploy app
fly deploy --ha=false

# Run database migrations (the machine must be running before you can SSH in)
fly ssh console -C "npx prisma migrate deploy"
 
# Verify deployment
fly status
 
echo "✅ Deployment complete!"
echo "🌐 App available at: https://aegis2fa.fly.dev"

Setting Up Free Database (Supabase)

# 1. Sign up at supabase.com (free)
# 2. Create new project
# 3. Get connection string from Settings > Database
 
# Example connection string:
DATABASE_URL="postgresql://postgres:[PASSWORD]@db.[PROJECT].supabase.co:5432/postgres"

Setting Up Free Redis (Upstash)

# 1. Sign up at upstash.com (free)
# 2. Create new database
# 3. Get connection string
 
# Example connection string:
REDIS_URL="rediss://default:[PASSWORD]@[ENDPOINT].upstash.io:6379"
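
Note the scheme: Upstash hands out `rediss://` URLs, so the client must connect over TLS. A small sketch of turning the URL into client options (`parseRedisUrl` is a hypothetical helper; pass the result to whichever Redis client you use):

```typescript
import { URL } from 'node:url';

interface RedisConfig {
  host: string;
  port: number;
  password?: string;
  tls: boolean;
}

// Parse a redis:// or rediss:// URL into plain connection options.
// The rediss:// scheme (used by Upstash) signals a TLS connection.
export function parseRedisUrl(raw: string): RedisConfig {
  const u = new URL(raw);
  return {
    host: u.hostname,
    port: Number(u.port || 6379),
    password: u.password || undefined,
    tls: u.protocol === 'rediss:',
  };
}
```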

Health Checks

Health Check Endpoint

// src/routes/health.ts
import { Router } from 'express';
import { db } from '../db';
import { redis } from '../redis';
 
const router = Router();
 
router.get('/health', async (req, res) => {
  const health = {
    status: 'ok',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    checks: {
      database: 'unknown',
      redis: 'unknown',
      memory: process.memoryUsage(),
    },
  };
 
  try {
    // Check database
    await db.$queryRaw`SELECT 1`;
    health.checks.database = 'ok';
  } catch (error) {
    health.checks.database = 'error';
    health.status = 'degraded';
  }
 
  try {
    // Check Redis
    await redis.ping();
    health.checks.redis = 'ok';
  } catch (error) {
    health.checks.redis = 'error';
    health.status = 'degraded';
  }
 
  const statusCode = health.status === 'ok' ? 200 : 503;
  res.status(statusCode).json(health);
});
 
router.get('/ready', async (req, res) => {
  try {
    await db.$queryRaw`SELECT 1`;
    res.status(200).json({ ready: true });
  } catch (error) {
    const message = error instanceof Error ? error.message : 'unknown error';
    res.status(503).json({ ready: false, error: message });
  }
});
 
export default router;

Kubernetes Readiness/Liveness Probes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aegis2fa
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: api
          image: aegis2fa:latest
          ports:
            - containerPort: 3000
 
          # Liveness probe - restart if failing
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 5
            failureThreshold: 3
 
          # Readiness probe - remove from service if failing
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 2
 
          # Resources
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
 
          # Environment from secrets
          envFrom:
            - secretRef:
                name: aegis2fa-secrets

Logging and Monitoring

Structured Logging with Pino

import pino from 'pino';
 
export const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  transport:
    process.env.NODE_ENV === 'development'
      ? {
          target: 'pino-pretty',
          options: {
            colorize: true,
            ignore: 'pid,hostname',
            translateTime: 'SYS:standard',
          },
        }
      : undefined,
  serializers: {
    req: pino.stdSerializers.req,
    res: pino.stdSerializers.res,
    err: pino.stdSerializers.err,
  },
});
 
// Usage
logger.info('Server started');
logger.error({ err }, 'Request failed');
logger.warn({ userId }, 'Rate limit exceeded');

Prometheus Metrics

import { register, Counter, Histogram, Gauge } from 'prom-client';
 
// HTTP requests
const httpRequestDuration = new Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status'],
  buckets: [0.1, 0.5, 1, 2, 5],
});
 
// Active connections
const activeConnections = new Gauge({
  name: 'active_connections',
  help: 'Number of active connections',
});
 
// Failed authentications
const failedAuth = new Counter({
  name: 'failed_auth_total',
  help: 'Total number of failed authentication attempts',
  labelNames: ['reason'],
});
 
// Metrics endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});
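
One gotcha with the `route` label above: recording raw URLs (e.g. `/users/123`) creates a new label value per ID and blows up series cardinality. A sketch of normalizing paths before using them as labels (`normalizeRoute` and the `:id` placeholder are my assumptions, not prom-client API):

```typescript
// Collapse dynamic path segments (numeric IDs, UUIDs) into a placeholder
// so the Prometheus `route` label stays low-cardinality.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

export function normalizeRoute(path: string): string {
  return path
    .split('/')
    .map((seg) => (/^\d+$/.test(seg) || UUID_RE.test(seg) ? ':id' : seg))
    .join('/');
}
```

Feed the normalized value into `httpRequestDuration.labels(...)` instead of `req.url`.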

Backup Strategy

Automated Database Backups

#!/bin/bash
# scripts/backup.sh
 
set -e
 
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups"
BACKUP_FILE="$BACKUP_DIR/aegis2fa_$TIMESTAMP.sql.gz"
 
# Create backup
pg_dump $DATABASE_URL | gzip > $BACKUP_FILE
 
# Upload to S3 (or any object storage)
aws s3 cp $BACKUP_FILE s3://aegis2fa-backups/
 
# Keep only last 7 days
find $BACKUP_DIR -name "*.sql.gz" -mtime +7 -delete
 
echo "✅ Backup complete: $BACKUP_FILE"

Automated Backup with Cron

# Run daily at 2 AM
0 2 * * * /app/scripts/backup.sh >> /var/log/backup.log 2>&1

Performance Optimization

Docker Multi-Stage Builds

Reduced image size from 1.2GB to 150MB:

# Development dependencies stay in the builder stage
FROM node:20-alpine AS builder
RUN npm ci
RUN npm run build
 
# Reinstall with --omit=dev so only production
# dependencies end up in the final image
FROM node:20-alpine
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist

Layer Caching

# Copy package.json first for better caching
COPY package*.json ./
RUN npm ci
 
# Copy source code after
COPY . .
RUN npm run build

Security Best Practices

Run as Non-Root User

RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
 
USER nodejs

Scan for Vulnerabilities

# Scan image (`docker scan` was retired in favor of Docker Scout)
docker scout cves aegis2fa:latest
 
# Alternative: Trivy
trivy image aegis2fa:latest

Secrets Management

# Never hardcode secrets!
# Use environment variables or secrets management
 
# Docker secrets (Docker Swarm)
docker secret create db_password ./db_password.txt
 
# Kubernetes secrets
kubectl create secret generic aegis2fa-secrets \
  --from-literal=database-url=$DATABASE_URL \
  --from-literal=jwt-secret=$JWT_SECRET
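
Whichever store injects the secrets, the app should verify them at boot rather than crash mid-request hours later. A fail-fast sketch (`assertEnv` is a hypothetical helper; the variable names mirror the secrets set in the deploy script):

```typescript
// Required secrets, matching those set via `fly secrets set`
const REQUIRED = [
  'DATABASE_URL',
  'REDIS_URL',
  'JWT_ACCESS_SECRET',
  'JWT_REFRESH_SECRET',
  'ENCRYPTION_KEY',
];

// Throw at startup if any required variable is missing or empty.
export function assertEnv(
  env: Record<string, string | undefined> = process.env
): void {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(', ')}`
    );
  }
}
```

Call `assertEnv()` as the first line of `main.ts` so a misconfigured container fails its health check immediately.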

Results

After 6 months running on free tier:

  • $0/month hosting costs
  • 99.9% uptime
  • Sub-100ms response times
  • 150MB Docker image
  • 128MB RAM usage
  • Zero security incidents

Cost Breakdown (Free Tier)

| Service | Free Tier | Cost |
| --- | --- | --- |
| Fly.io | 3 shared VMs, 3GB storage | $0 |
| Supabase | 500MB PostgreSQL | $0 |
| Upstash | 10K commands/day Redis | $0 |
| Cloudflare | CDN + DDoS protection | $0 |
| Let's Encrypt | SSL certificates | $0 |
| Total | | $0/month |

Scaling Beyond Free Tier

When you outgrow free tier:

  • Fly.io - $2/month per VM
  • Supabase - $25/month for 8GB database
  • Upstash - $10/month for 1M commands

Total: ~$40/month for production-grade infrastructure


Questions about Docker deployment? Need help with Fly.io? Check the Aegis2FA deployment docs or reach out!