
Backup and Disaster Recovery for nfyio

Automate backups for PostgreSQL, SeaweedFS, Redis, and Keycloak. Build a disaster recovery plan that protects your self-hosted nfyio infrastructure.


nfyio Team

Talya Smart & Technoplatz JV


Self-hosted means you own your data — and the responsibility to protect it. This guide covers automated backup strategies for every component in the nfyio stack and a tested recovery procedure.

What Needs Backing Up

| Component  | Data Type                         | Backup Method                  | RPO Target     |
| ---------- | --------------------------------- | ------------------------------ | -------------- |
| PostgreSQL | Metadata, embeddings, RLS policies | pg_dump / WAL archiving        | 5 minutes      |
| SeaweedFS  | Object storage blobs              | Volume replication + snapshots | 1 hour         |
| Redis      | Job queues, cache                 | RDB snapshots                  | Tolerable loss |
| Keycloak   | Users, realms, roles              | Database export + realm JSON   | 1 hour         |
| Config     | .env, Docker Compose, Helm values | Git repository                 | Committed      |
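All the examples that follow write under /backups. A small helper can create that layout up front — the subdirectory names mirror the paths used in this guide, but the helper itself is just a sketch:

```shell
# Create the backup directory layout assumed by the examples below.
make_backup_dirs() {
  local root="$1"   # e.g. /backups
  mkdir -p "$root"/pg "$root"/wal "$root"/seaweedfs "$root"/redis "$root"/keycloak
}

# Usage: make_backup_dirs /backups
```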

PostgreSQL Backup

Logical Backup with pg_dump

For databases under 50 GB:

# Full database dump (compressed)
pg_dump -h localhost -U nfyio -d nfyio \
  --format=custom \
  --compress=9 \
  --file=/backups/nfyio-$(date +%Y%m%d-%H%M%S).dump

# Verify the dump
pg_restore --list /backups/nfyio-20260306-120000.dump | head -20
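Daily dumps pile up even at compression level 9, so pair the backup job with a retention helper. This is a sketch; the 14-day window in the usage line is an example, not an nfyio requirement:

```shell
# Delete .dump files older than a retention window (in days).
prune_dumps() {
  local dir="$1" days="$2"
  find "$dir" -name '*.dump' -type f -mtime +"$days" -delete
}

# Usage: prune_dumps /backups/pg 14
```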

Continuous WAL Archiving

For near-zero RPO, enable WAL archiving in postgresql.conf:

wal_level = replica
archive_mode = on
archive_command = 'test ! -f /backups/wal/%f && cp %p /backups/wal/%f'
max_wal_senders = 3

Configure a base backup:

pg_basebackup -h localhost -U replication -D /backups/base \
  --format=tar --gzip --checkpoint=fast --wal-method=stream
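The payoff of WAL archiving is point-in-time recovery. A sketch of the restore side, assuming the archive path from the archive_command above (the target timestamp is purely illustrative): unpack the base backup into an empty data directory, point restore_command at the archive, and create recovery.signal before starting PostgreSQL.

```
# postgresql.conf on the restore host
restore_command = 'cp /backups/wal/%f %p'
recovery_target_time = '2026-03-06 11:55:00'   # illustrative target

# Then arm recovery mode and start the server:
#   touch /var/lib/postgresql/data/recovery.signal
```

PostgreSQL replays WAL up to the target time and pauses by default, so you can inspect the data before promoting.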

Automated Cron Schedule

# /etc/cron.d/nfyio-backup
# Daily full backup at 2 AM (cron.d entries must be one physical line;
# cron does not support backslash continuation)
0 2 * * * root pg_dump -h localhost -U nfyio -d nfyio --format=custom --compress=9 --file=/backups/pg/nfyio-$(date +\%Y\%m\%d).dump 2>&1 | logger -t nfyio-backup

# Hourly WAL archive cleanup (keep 48 hours; for segment-aware pruning that
# never deletes WAL the latest base backup still needs, see pg_archivecleanup)
0 * * * * root find /backups/wal -type f -mtime +2 -delete

SeaweedFS Backup

SeaweedFS replicates data across volume servers. For off-site backup:

Volume Snapshot

# List volumes
curl -s http://localhost:9333/cluster/status | jq '.Topology.DataCenters'

# Export a specific volume
weed export -dir=/data/seaweedfs -volumeId=1 \
  -o /backups/seaweedfs/vol1-$(date +%Y%m%d).tar

S3-to-S3 Mirror

Use mc mirror to replicate nfyio storage to an off-site target:

# Configure remote backup target
mc alias set backup-remote https://backup.example.com $BACKUP_KEY $BACKUP_SECRET

# Mirror all buckets
mc mirror nfyio/ backup-remote/nfyio-backup/ \
  --overwrite --preserve --watch

Replication Factor

In your SeaweedFS config, set replication for fault tolerance:

# Replication: datacenter,rack,node
# "001" = replicate to 1 other server in the same rack
weed master -defaultReplication=001

Redis Backup

Redis data in nfyio is mostly job queues and ephemeral cache. A periodic RDB snapshot is sufficient:

# Record the last completed save, then trigger a background save
BEFORE=$(redis-cli LASTSAVE)
redis-cli BGSAVE

# BGSAVE runs asynchronously -- wait until LASTSAVE advances
while [ "$(redis-cli LASTSAVE)" = "$BEFORE" ]; do sleep 1; done

# Copy the completed dump file
cp /var/lib/redis/dump.rdb /backups/redis/dump-$(date +%Y%m%d-%H%M%S).rdb

For AOF persistence (append-only file):

# redis.conf
appendonly yes
appendfsync everysec

Keycloak Backup

Realm Export

# Export all realms
docker exec keycloak /opt/keycloak/bin/kc.sh export \
  --dir /tmp/keycloak-export \
  --users same_file

# Copy out
docker cp keycloak:/tmp/keycloak-export /backups/keycloak/

Database Backup

Keycloak uses its own PostgreSQL schema. Include it in your database backup:

pg_dump -h localhost -U keycloak -d keycloak \
  --format=custom --compress=9 \
  --file=/backups/keycloak/keycloak-db-$(date +%Y%m%d).dump

Disaster Recovery Procedure

Full Recovery Steps

1. Provision new infrastructure:

# Clone your IaC repo
git clone https://github.com/your-org/nfyio-infra.git
cd nfyio-infra

# Spin up base services
docker compose up -d postgresql redis

2. Restore PostgreSQL:

# Create database
createdb -h localhost -U postgres nfyio

# Restore from dump
pg_restore -h localhost -U postgres -d nfyio \
  --clean --if-exists --no-owner \
  /backups/pg/nfyio-20260306.dump

3. Restore SeaweedFS volumes:

# Start SeaweedFS master
docker compose up -d seaweedfs-master seaweedfs-volume

# Restore object data from the off-site mirror configured earlier
mc mirror backup-remote/nfyio-backup/ nfyio/ --overwrite --preserve

4. Restore Keycloak:

docker compose up -d keycloak

# Copy the export back into the container, then import
docker cp /backups/keycloak/keycloak-export keycloak:/tmp/
docker exec keycloak /opt/keycloak/bin/kc.sh import \
  --dir /tmp/keycloak-export

5. Start nfyio services:

docker compose up -d gateway storage agents

6. Verify:

curl -s http://localhost:3000/health | jq

# Check data integrity
curl -s http://localhost:3000/api/v1/buckets \
  -H "Authorization: Bearer $JWT" | jq '.[] | .name'

Backup Verification Script

Run this weekly to test backup integrity:

#!/bin/bash
set -euo pipefail

BACKUP_DIR="/backups"
ERRORS=0

echo "=== nfyio Backup Verification ==="

# Check PostgreSQL dump
LATEST_PG=$(ls -t $BACKUP_DIR/pg/*.dump 2>/dev/null | head -1)
if [ -z "$LATEST_PG" ]; then
  echo "FAIL: No PostgreSQL backup found"
  ERRORS=$((ERRORS + 1))
else
  AGE=$(( ($(date +%s) - $(stat -c %Y "$LATEST_PG")) / 3600 ))
  if [ $AGE -gt 25 ]; then
    echo "WARN: PostgreSQL backup is ${AGE}h old"
  else
    echo "OK: PostgreSQL backup (${AGE}h old)"
  fi
  pg_restore --list "$LATEST_PG" > /dev/null 2>&1 || {
    echo "FAIL: PostgreSQL backup is corrupted"
    ERRORS=$((ERRORS + 1))
  }
fi

# Check SeaweedFS backup
LATEST_SW=$(ls -t $BACKUP_DIR/seaweedfs/*.tar 2>/dev/null | head -1)
if [ -z "$LATEST_SW" ]; then
  echo "FAIL: No SeaweedFS backup found"
  ERRORS=$((ERRORS + 1))
else
  echo "OK: SeaweedFS backup exists"
fi

# Check Keycloak export
if [ -d "$BACKUP_DIR/keycloak" ] && [ "$(ls -A $BACKUP_DIR/keycloak/)" ]; then
  echo "OK: Keycloak export exists"
else
  echo "FAIL: No Keycloak export found"
  ERRORS=$((ERRORS + 1))
fi

echo "=== Verification complete: $ERRORS errors ==="
exit $ERRORS
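A cron entry can run the check automatically; the install path below is an example, put the script wherever suits your host:

```
# /etc/cron.d/nfyio-verify
# Weekly backup verification, Sunday 3 AM
0 3 * * 0 root /usr/local/bin/nfyio-verify-backups.sh 2>&1 | logger -t nfyio-verify
```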

Key Takeaways

  • Back up PostgreSQL daily with pg_dump and continuously with WAL archiving for near-zero RPO
  • SeaweedFS built-in replication handles node failures; use mc mirror for off-site backup
  • Redis data is ephemeral in nfyio — RDB snapshots are sufficient
  • Export Keycloak realms as JSON and back up its database separately
  • Test your recovery procedure monthly — an untested backup is not a backup
  • Keep your .env, Docker Compose, and Helm values files in version control

For deployment options, see the installation guide.
