Deploy with Docker
Deploy NFYio using Docker Compose. Complete walkthrough of docker-compose.yml, environment configuration, health checks, and backup strategies.
This guide walks you through deploying NFYio using Docker and Docker Compose. You’ll configure the full stack, start services, verify health, and learn how to update and back up your deployment.
Prerequisites
Before you begin, ensure you have:
- Docker 24+ and Docker Compose v2+
- 4GB RAM minimum (8GB recommended for AI features)
- 20GB disk space for images and data
- Git for cloning the repository
- Basic familiarity with terminal and environment variables
Verify your setup:
docker --version
# Docker version 24.0.0 or higher
docker compose version
# Docker Compose version v2.20.0 or higher
docker-compose.yml Walkthrough
NFYio’s docker-compose.yml defines the full platform stack. Here’s a simplified overview of the key services:
# Core infrastructure
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-nfyio}
      POSTGRES_USER: ${POSTGRES_USER:-nfyio}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-nfyio}"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 5s
      retries: 5

  seaweedfs-master:
    image: chrislusf/seaweedfs:latest
    command: master
    ports:
      - "9333:9333"
    volumes:
      - seaweedfs_master:/data

  seaweedfs-volume:
    image: chrislusf/seaweedfs:latest
    command: volume -mserver=seaweedfs-master:9333
    depends_on:
      seaweedfs-master:
        condition: service_started

  keycloak:
    image: quay.io/keycloak/keycloak:latest
    command: start-dev
    environment:
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN:-admin}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
    depends_on:
      postgres:
        condition: service_healthy

  nfyio-gateway:
    image: nfyio/gateway:latest
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      keycloak:
        condition: service_started
    ports:
      - "3000:3000"

  nfyio-storage:
    image: nfyio/storage:latest
    depends_on:
      seaweedfs-master:
        condition: service_started
    ports:
      - "7007:7007"

  nfyio-agent:
    image: nfyio/agent:latest
    depends_on:
      nfyio-gateway:
        condition: service_started
    ports:
      - "7010:7010"

volumes:
  postgres_data:
  redis_data:
  seaweedfs_master:
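Note that dependents of the SeaweedFS master only wait on service_started, not on actual readiness. If you want stricter ordering, you can give the master its own healthcheck — a sketch, assuming the image ships BusyBox wget and the master's HTTP API answers on its port (/cluster/status is a SeaweedFS master endpoint):

```yaml
# Optional override: lets dependents use condition: service_healthy
# instead of service_started. Assumes wget exists in the image.
services:
  seaweedfs-master:
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "-", "http://localhost:9333/cluster/status"]
      interval: 5s
      timeout: 5s
      retries: 5
```

With this in place, nfyio-storage and seaweedfs-volume could switch their depends_on condition to service_healthy.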
Environment Configuration
Step 1: Clone and Copy Environment
git clone https://github.com/hilaltechnologic/nfyio.git
cd nfyio
cp .env.example .env
Step 2: Edit .env
Open .env and configure the following:
# ── Security (REQUIRED) ─────────────────────────────
# Generate with: openssl rand -hex 64
SESSION_SECRET=your-64-character-hex-secret
# ── Database ───────────────────────────────────────
POSTGRES_PASSWORD=strong-postgres-password
POSTGRES_DB=nfyio
POSTGRES_USER=nfyio
# ── Redis ──────────────────────────────────────────
REDIS_PASSWORD=strong-redis-password
# ── Keycloak ────────────────────────────────────────
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=strong-keycloak-password
# ── AI (optional) ───────────────────────────────────
EMBEDDINGS_ENABLED=true
OPENAI_API_KEY=sk-your-openai-key
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
# ── Production ──────────────────────────────────────
PUBLIC_URL=https://yourdomain.com
ALLOWED_ORIGINS=https://app.yourdomain.com,https://yourdomain.com
Generate a secure session secret:
openssl rand -hex 64
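Missing or empty secrets are the most common cause of first-boot failures. A small sketch (the function name is our own) that checks the required variables from the listing above are present and non-empty before you start the stack:

```shell
# check_env FILE — fail if any required NFYio secret is unset or empty
check_env() {
  file="$1"
  missing=""
  for var in SESSION_SECRET POSTGRES_PASSWORD REDIS_PASSWORD KEYCLOAK_ADMIN_PASSWORD; do
    # Require "VAR=" followed by at least one character
    grep -Eq "^${var}=.+" "$file" || missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all required variables present"
}

# Usage: check_env .env
```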
Step 3: Production Overrides
For production, consider using a separate override file:
# Create docker-compose.prod.yml for production overrides
cat > docker-compose.prod.yml << 'EOF'
services:
  nfyio-gateway:
    restart: always
    deploy:
      resources:
        limits:
          memory: 1G
  nfyio-storage:
    restart: always
  nfyio-agent:
    restart: always
EOF
Starting Services
Start All Services
docker compose up -d
For production overrides:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Verify Startup
Wait 30–60 seconds for services to initialize, then check status:
docker compose ps
Expected output (all services Up or healthy):
NAME               STATUS         PORTS
nfyio-agent        Up (healthy)   0.0.0.0:7010->7010/tcp
nfyio-gateway      Up (healthy)   0.0.0.0:3000->3000/tcp
nfyio-storage      Up (healthy)   0.0.0.0:7007->7007/tcp
keycloak           Up             8080/tcp
postgres           Up (healthy)   5432/tcp
redis              Up (healthy)   6379/tcp
seaweedfs-master   Up             0.0.0.0:9333->9333/tcp
seaweedfs-volume   Up             8080/tcp
Health Checks
Manual Health Verification
# Gateway (API, auth, dashboard)
curl -s http://localhost:3000/health | jq .
# {"status":"ok","version":"0.9.0"}
# Storage proxy (S3-compatible)
curl -s http://localhost:7007/health | jq .
# {"status":"ok","backend":"seaweedfs"}
# Agent service (RAG, LLM)
curl -s http://localhost:7010/health | jq .
# {"status":"ok","model":"gpt-4o"}
Automated Health Script
Create a simple health check script:
#!/bin/bash
echo "Checking NFYio services..."
for url in "http://localhost:3000/health" "http://localhost:7007/health" "http://localhost:7010/health"; do
  # "|| true" keeps a connection failure from aborting the substitution;
  # curl prints 000 when it cannot connect at all
  status=$(curl -s -o /dev/null -w "%{http_code}" "$url" || true)
  if [ "$status" = "200" ]; then
    echo "✓ $url"
  else
    echo "✗ $url (HTTP $status)"
    exit 1
  fi
done
echo "All services healthy."
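The script above fails immediately if a service is still booting (Keycloak in particular can take a minute or two). A generic polling helper — our own sketch, not part of NFYio — retries a check command until it succeeds or a timeout elapses:

```shell
# wait_for "CMD" TIMEOUT_SECONDS — retry CMD every 2s until it succeeds
wait_for() {
  cmd="$1"
  timeout="${2:-60}"
  elapsed=0
  until eval "$cmd" >/dev/null 2>&1; do
    elapsed=$((elapsed + 2))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timeout waiting for: $cmd"
      return 1
    fi
    sleep 2
  done
  echo "ready: $cmd"
}

# Usage against the endpoints above:
# wait_for "curl -sf http://localhost:3000/health" 120
```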
Logs
View Logs
# All services
docker compose logs -f
# Specific service
docker compose logs -f nfyio-gateway
# Last 100 lines
docker compose logs --tail=100 nfyio-storage
# Since timestamp
docker compose logs --since 2026-03-01 nfyio-agent
Log Drivers (Production)
For production, consider JSON file logging with rotation:
# In docker-compose.prod.yml
services:
  nfyio-gateway:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
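Repeating the logging stanza per service gets tedious. Compose supports YAML anchors and x- extension fields, so one definition can be shared — a sketch applying the same rotation settings to all three NFYio services:

```yaml
# docker-compose.prod.yml — one shared logging config via a YAML anchor
x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"
    max-file: "3"

services:
  nfyio-gateway:
    logging: *default-logging
  nfyio-storage:
    logging: *default-logging
  nfyio-agent:
    logging: *default-logging
```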
Updating
Pull and Restart
# Pull latest images
docker compose pull
# Recreate containers with new images
docker compose up -d
# Run database migrations
docker compose exec nfyio-gateway deno task migrate
Zero-Downtime Update (with multiple replicas)
If using Docker Swarm or multiple replicas, use rolling updates. For single-node Docker Compose, expect brief downtime during up -d.
Backup and Restore
PostgreSQL Backup
# Create backup (-T disables TTY allocation so piped output stays clean)
docker compose exec -T postgres pg_dump -U nfyio nfyio > backup_$(date +%Y%m%d_%H%M%S).sql
# Or with compression
docker compose exec -T postgres pg_dump -U nfyio nfyio | gzip > backup_$(date +%Y%m%d).sql.gz
PostgreSQL Restore
# Restore from backup (stop gateway first to avoid connections)
docker compose stop nfyio-gateway nfyio-agent
# Restore
gunzip -c backup_20260301.sql.gz | docker compose exec -T postgres psql -U nfyio nfyio
# Restart
docker compose start nfyio-gateway nfyio-agent
Redis Backup
# Trigger Redis save (source .env first so $REDIS_PASSWORD is set in your shell)
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" BGSAVE
# Copy RDB file
docker compose cp redis:/data/dump.rdb ./redis_backup_$(date +%Y%m%d).rdb
SeaweedFS (Object Storage) Backup
SeaweedFS stores objects on disk. Back up the volume data directory:
# Find volume data path
docker compose exec seaweedfs-volume ls -la /data/
# Create tar backup (-T avoids TTY corruption of binary output; adjust path as needed)
docker compose exec -T seaweedfs-volume tar czf - /data > seaweedfs_backup_$(date +%Y%m%d).tar.gz
Full Backup Script
#!/bin/bash
set -euo pipefail
# Load credentials (REDIS_PASSWORD) from the project .env
set -a; . ./.env; set +a
BACKUP_DIR="./backups/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
echo "Backing up PostgreSQL..."
docker compose exec -T postgres pg_dump -U nfyio nfyio | gzip > "$BACKUP_DIR/postgres.sql.gz"
echo "Backing up Redis..."
docker compose exec redis redis-cli -a "${REDIS_PASSWORD}" BGSAVE
sleep 2  # BGSAVE is asynchronous; poll LASTSAVE instead for large datasets
docker compose cp redis:/data/dump.rdb "$BACKUP_DIR/redis.rdb"
echo "Backup complete: $BACKUP_DIR"
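Timestamped backup directories accumulate quickly. A retention sketch of our own (assumes GNU coreutils, so `head -n -N` works — Linux rather than macOS) that keeps only the newest N directories:

```shell
# prune_backups DIR KEEP — delete all but the KEEP newest timestamped backup dirs
prune_backups() {
  dir="$1"
  keep="${2:-7}"
  # Timestamped names sort chronologically, so the oldest come first;
  # head -n -KEEP drops the newest KEEP entries from the removal list
  ls -1d "$dir"/*/ 2>/dev/null | sort | head -n -"$keep" | while read -r old; do
    echo "removing $old"
    rm -rf "$old"
  done
}

# Usage: prune_backups ./backups 7
```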
Troubleshooting
| Issue | Solution |
|---|---|
| Port already in use | Change port mappings in docker-compose.yml or stop conflicting services |
| Out of memory | Increase Docker memory limit or reduce resource usage |
| Database connection refused | Wait for PostgreSQL healthcheck; check POSTGRES_* env vars |
| Storage proxy fails | Ensure SeaweedFS master is running; check seaweedfs-master:9333 |
| Keycloak slow startup | First boot can take 1–2 minutes; retry health checks |
What’s Next
- Deploy with Kubernetes — Production-grade orchestration with Helm
- Monitoring & Observability — Metrics, logs, and alerts
- Migration from AWS S3 — Move existing S3 data to NFYio