

Database Migration Guide

This guide explains how to use the database migration script to export and import PostgreSQL databases running in Docker containers.

⚠️ CRITICAL: Always Use the Migration Script

DO NOT import using psql directly with the -f flag:

# ❌ WRONG - Will cause errors!
docker compose exec -u postgres postgres psql -d edh_stats -f /backups/backup.sql

Always use the migration script which automatically clears the database first:

# ✅ RIGHT - Works perfectly!
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/backup.sql \
  --skip-export

Why? Direct psql import tries to create tables that already exist, causing:

  • ERROR: relation "commanders" already exists
  • ERROR: multiple primary keys for table
  • ERROR: trigger already exists

The migration script automatically clears all data first, preventing these conflicts.

Overview

The scripts/migrate-database.sh script runs directly inside the PostgreSQL container and provides:

  • Exporting data from one database to a file
  • Importing data from a file into another database
  • Verifying that the import was successful

This approach is simple because:

  • No need to install PostgreSQL client tools on your host
  • Runs directly inside the container with full access
  • All tools (pg_dump, psql) are already available
  • Works seamlessly with Docker Compose

Prerequisites

Required

  • Docker Compose running with PostgreSQL service
  • The scripts/migrate-database.sh file in your project

Not Required

  • PostgreSQL client tools on host machine
  • SSH access to servers
  • Network connectivity setup

Basic Usage

Important: Run as postgres user

All commands must be run with -u postgres to authenticate with PostgreSQL:

docker compose exec -u postgres postgres /scripts/migrate-database.sh [OPTIONS]

1. Export Database (Backup)

Export the current database to a backup file:

# Export to file inside container (default: /backups/edh_stats_backup_TIMESTAMP.sql)
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --skip-import

# Export to custom location
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/my_backup.sql \
  --skip-import

2. Export and Import to Different Database

Migrate data from one database to another (both in same container):

docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --target-db edh_stats_new

3. Import from Existing Backup (Import-Only)

Import data from an existing backup file without exporting:

docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/edh_stats_backup_20250118_120000.sql \
  --skip-export

This is useful when:

  • You have a backup file already (from previous export)
  • You want to import without re-exporting
  • You're restoring from a backup file
  • You're importing from external source

4. Export from Production to Development

Copy production data to your local development environment:

# On production server
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/prod_backup.sql \
  --skip-import

# The backup is also available on the production host at ./backups (volume mount);
# copy it to your local machine, e.g.:
scp prod-server:/path/to/project/backups/prod_backup.sql ./backups/

# Import locally (import-only mode)
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/prod_backup.sql \
  --skip-export

Optional: Create an Alias for Convenience

To avoid typing -u postgres every time, add this to your shell profile:

# Add to ~/.bash_profile, ~/.bashrc, or ~/.zshrc
alias pg-migrate='docker compose exec -u postgres postgres /scripts/migrate-database.sh'

Then reload your shell:

source ~/.bashrc  # or ~/.zshrc for zsh

Now use it simply:

pg-migrate --source-db edh_stats --skip-import
pg-migrate --source-db edh_stats --target-db edh_stats_new
pg-migrate --target-db edh_stats --output-file /backups/backup.sql --skip-export
pg-migrate --help

Command Line Options

--source-db DATABASE      Source database name (default: edh_stats)
--target-db DATABASE      Target database name (default: edh_stats)
--output-file FILE        Backup file path (default: /backups/edh_stats_backup_TIMESTAMP.sql)
--skip-import             Export only, don't import (backup mode)
--skip-export             Import only, don't export (restore mode - requires existing file)
--help                    Show help message

Mode Combinations

  • Export + Import (Default): --source-db X --target-db Y - Export from X, import to Y
  • Export Only: --source-db X --skip-import - Backup database X to file
  • Import Only: --target-db Y --output-file backup.sql --skip-export - Restore file to Y
  • Both flags (--skip-import + --skip-export): Error - not allowed
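The mutual-exclusion rule in the last bullet can be sketched as a small shell check. This is an illustrative reconstruction only; the variable and function names (SKIP_IMPORT, SKIP_EXPORT, validate_modes) are hypothetical, not the script's actual code:

```shell
#!/bin/sh
# Hypothetical sketch of the script's mode validation:
# --skip-import and --skip-export must not be combined.
SKIP_IMPORT=false
SKIP_EXPORT=false

validate_modes() {
  if [ "$SKIP_IMPORT" = true ] && [ "$SKIP_EXPORT" = true ]; then
    echo "ERROR: --skip-import and --skip-export cannot be used together" >&2
    return 1
  fi
}
```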

Common Scenarios

Scenario 1: Daily Backup

Create a daily backup of the production database:

#!/bin/bash
# backup-prod.sh

BACKUP_DATE=$(date +%Y%m%d)
BACKUP_DIR="./backups"

mkdir -p "$BACKUP_DIR"

docker compose -f docker-compose.prod.deployed.yml exec -u postgres postgres \
  /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/prod_${BACKUP_DATE}.sql \
  --skip-import

echo "Backup completed: $BACKUP_DIR/prod_${BACKUP_DATE}.sql"

Run daily with cron:

0 2 * * * cd /path/to/project && ./backup-prod.sh

Scenario 2: Test Database Refresh

Refresh your test database with latest production data:

# Export from production
docker compose -f docker-compose.prod.deployed.yml exec -u postgres postgres \
  /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/test_refresh.sql \
  --skip-import

# The file appears on the production host at ./backups (volume mount);
# copy it to your local machine, e.g.:
scp prod-server:/path/to/project/backups/test_refresh.sql ./backups/

# Import to local test database (import-only mode)
docker compose exec -u postgres postgres \
  /scripts/migrate-database.sh \
  --target-db edh_stats_test \
  --output-file /backups/test_refresh.sql \
  --skip-export

Scenario 3: Database Upgrade

Backup before upgrading PostgreSQL version:

# Backup current database
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/pre_upgrade.sql \
  --skip-import

# Stop services and upgrade PostgreSQL in docker-compose.yml

# Then restore if needed (import-only mode)
# NOTE: Existing data will be cleared before import
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/pre_upgrade.sql \
  --skip-export

Scenario 4: Development Environment Setup

Setup development with production data:

# Export from production
ssh prod-server "cd /edh-stats && docker compose exec -u postgres postgres /scripts/migrate-database.sh --source-db edh_stats --output-file /backups/dev_setup.sql --skip-import"

# Copy to local machine
scp prod-server:/path/to/backups/dev_setup.sql ./backups/

# Import locally (import-only mode)
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/dev_setup.sql \
  --skip-export

Scenario 5: Quick Restore from Backup

Restore from a backup file without re-exporting:

# List available backups
docker compose exec postgres ls -lh /backups/

# Restore from specific backup (import-only)
# NOTE: Existing data will be cleared before import
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/edh_stats_backup_20250118_120000.sql \
  --skip-export

What Gets Migrated

Included

  • All tables and schemas
  • All data (users, commanders, games)
  • Primary keys and foreign keys
  • Indexes and constraints
  • Sequences

Not Included

  • PostgreSQL roles/users (database-level)
  • Database server settings
  • Extension configurations
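If you also need the cluster-level roles, they can be captured separately with pg_dumpall. This is outside the migration script; the file name below is illustrative:

```shell
# pg_dump-based backups omit cluster-wide objects (roles, tablespaces);
# pg_dumpall --globals-only captures just those. File name is illustrative.
GLOBALS_FILE="backups/globals_$(date +%Y%m%d).sql"

# With the compose stack running:
# docker compose exec -u postgres postgres pg_dumpall --globals-only > "$GLOBALS_FILE"
```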

File Locations

Inside the PostgreSQL container:

  • Default backup: /backups/edh_stats_backup_TIMESTAMP.sql
  • Custom locations: pass --output-file with a path under /backups

From host machine:

  • Host backups: ./backups/ (mounted volume)
  • Copy from container: docker compose cp postgres:/backups/file ./
  • Copy to container: docker compose cp ./file postgres:/backups/

Files in /backups are automatically synced between container and host via volume mount.

Access Backup Files from Host

Option 1: Copy from Container

# List backups in container
docker compose exec postgres ls -lh /backups/

# Copy a backup to the host (first argument is the compose service name)
docker compose cp postgres:/backups/backup.sql ./

Option 2: Use Docker Volumes (Already Configured)

The docker-compose.yml already has this configured:

services:
  postgres:
    volumes:
      - ./backups:/backups

Backups are automatically accessible in ./backups on host. No additional setup needed!

The /backups directory is already mounted to ./backups on your host:

# Export to /backups (automatically synced to ./backups on host)
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/my_backup.sql \
  --skip-import

# File automatically appears at: ./backups/my_backup.sql on host

Import Process

When importing, the script follows this sequence:

  1. Validates backup file and target database exist
  2. Confirms with user before proceeding
  3. Clears all existing data from target database (drops/recreates schema)
  4. Imports data from backup file
  5. Verifies that import was successful

Data Clearing

Before importing, the script:

  • Drops the entire public schema (removes all tables, views, sequences)
  • Recreates the public schema with proper permissions
  • Ensures a clean slate for the imported data

This means all existing data in the target database will be deleted. The script asks for confirmation before proceeding.
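The clearing step is roughly equivalent to the following reconstruction. clear_database_sql is a hypothetical helper for illustration, not the script's actual function; the GRANTs restore PostgreSQL's stock permissions on a fresh public schema:

```shell
# SQL roughly matching the script's drop/recreate of the public schema.
clear_database_sql() {
  cat <<'SQL'
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO public;
SQL
}

# With the stack running, this would apply it to the target database:
# clear_database_sql | docker compose exec -T -u postgres postgres psql -d edh_stats
```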

Verification

The script automatically verifies after import:

════════════════════════════════════════════════════════════
  Verifying Data Import
════════════════════════════════════════════════════════════

 Checking table counts...
 Source tables: 5
 Target tables: 5
✓ Table counts match

 Checking row counts...
✓ Table 'users': 5 rows (✓ matches)
✓ Table 'commanders': 12 rows (✓ matches)
✓ Table 'games': 48 rows (✓ matches)

Manual Commands (If Needed)

If the script fails, you can run commands directly:

# Backup manually
docker compose exec -u postgres postgres pg_dump edh_stats > backup.sql

# Restore manually
docker compose exec -T -u postgres postgres psql edh_stats < backup.sql

# List databases
docker compose exec -u postgres postgres psql -l

# Get database size
docker compose exec -u postgres postgres psql -c "SELECT pg_size_pretty(pg_database_size('edh_stats'));"
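The table counts from the verification step can also be spot-checked by hand. One way to count user tables is a query against information_schema; the helper name here is hypothetical:

```shell
# SQL that counts user tables in a database, one way to reproduce the
# verification step's table-count check by hand.
table_count_sql() {
  echo "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public';"
}

# With the stack running:
# docker compose exec -u postgres postgres psql -d edh_stats -tAc "$(table_count_sql)"
```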

Troubleshooting

"relation already exists" or "multiple primary keys" errors during import

Cause: You're using psql directly instead of the migration script:

# ❌ WRONG
docker compose exec -u postgres postgres psql -d edh_stats -f /backups/backup.sql

Solution: Use the migration script which clears the database first:

# ✅ RIGHT
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/backup.sql \
  --skip-export

The script automatically:

  1. Removes pg_dump restrict commands (which would otherwise block data loading)
  2. Drops the existing schema
  3. Recreates the schema
  4. Imports the backup file

This prevents "already exists" conflicts.

No data imported (users, games, commanders empty)

Cause: Your backup file contains pg_dump security commands:

\restrict IdvbAL1gCAhQZc4dsPYgIzErSH0gRztgmxsbr3dcnr1I1Wymp9VCK54cbXqCR5P

These \restrict and \unrestrict commands tell psql to enter restricted mode, which blocks data loading.

Solution: Use the migration script (v2.4+) which automatically removes these:

docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/backup.sql \
  --skip-export

The script now:

  1. ✓ Detects restrict commands
  2. ✓ Removes them automatically
  3. ✓ Imports all data successfully
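For old backup files you want to import manually, the same cleanup can be done with sed. This is an illustrative sketch of the idea, not the script's exact code; backup_demo.sql stands in for a real backup file:

```shell
# Demo: strip \restrict / \unrestrict lines from a backup before manual import.
printf '%s\n' \
  '\restrict EXAMPLETOKEN' \
  'CREATE TABLE users (id serial PRIMARY KEY);' \
  '\unrestrict EXAMPLETOKEN' > backup_demo.sql

# Delete any line beginning with \restrict or \unrestrict (a .bak copy is kept)
sed -i.bak -e '/^\\restrict/d' -e '/^\\unrestrict/d' backup_demo.sql
# backup_demo.sql now contains only the CREATE TABLE line.
```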

"psql is not available"

The script must run inside the PostgreSQL container. Use:

docker compose exec -u postgres postgres /scripts/migrate-database.sh

Not just:

./scripts/migrate-database.sh  # Wrong - runs on host

"Source database does not exist"

Check available databases:

docker compose exec -u postgres postgres psql -l

Make sure the database name is correct and exists.

"Target database does not exist"

Create the target database first:

docker compose exec -u postgres postgres createdb edh_stats_new

Then run the migration.

"Permission denied" on output file

Ensure the directory exists and is writable:

# Check directory
docker compose exec postgres ls -ld /backups/

# Create if needed
docker compose exec postgres mkdir -p /backups/

Import takes too long

For large databases, the import can take several minutes. Monitor progress from another terminal:

# Monitor progress
docker compose logs -f postgres

File not found after export

Check where the file was written:

docker compose exec postgres ls -lh /backups/edh_stats_backup_*

Files are automatically synced to host at ./backups/ via volume mount.

Backup file has "pg_dump" syntax errors during import

This issue was fixed in v2.3. If you have old backup files from earlier versions that contain pg_dump comments (like pg_dump: creating TABLE), they may cause import errors.

Solution: Create a new backup with the updated script:

docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --skip-import

Integration Examples

Backup Before Deployment

#!/bin/bash
# deploy.sh

# Create backup before deploying
echo "Creating backup..."
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/pre_deploy_$(date +%s).sql \
  --skip-import

# Deploy application
echo "Deploying..."
docker compose -f docker-compose.prod.deployed.yml pull
docker compose -f docker-compose.prod.deployed.yml up -d

# Run schema migrations if needed
docker compose -f docker-compose.prod.deployed.yml exec backend npm run migrate

echo "Deployment complete!"

Continuous Backup Job

#!/bin/bash
# cron-backup.sh

BACKUP_DIR="./backups"
RETENTION_DAYS=30

mkdir -p "$BACKUP_DIR"

# Create backup (automatically goes to ./backups on host)
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --source-db edh_stats \
  --output-file /backups/edh_stats_$(date +%Y%m%d_%H%M%S).sql \
  --skip-import

# Keep only last N days of backups
find "$BACKUP_DIR" -name "edh_stats_*.sql" -mtime +$RETENTION_DAYS -delete

echo "Backup completed at $(date)"

Recovery Procedure

#!/bin/bash
# recover.sh - Restore from backup (import-only)

BACKUP_FILE="${1:?Usage: $0 <backup_file>}"

if [ ! -f "$BACKUP_FILE" ]; then
    echo "Error: Backup file not found: $BACKUP_FILE"
    exit 1
fi

# Copy file into container
docker compose cp "$BACKUP_FILE" postgres:/backups/restore.sql

# Import (import-only mode - skip export)
docker compose exec -u postgres postgres /scripts/migrate-database.sh \
  --target-db edh_stats \
  --output-file /backups/restore.sql \
  --skip-export

echo "Recovery complete from: $BACKUP_FILE"

Security Considerations

Backup Files

  • Backups contain all data including sensitive information
  • Keep backup files secure
  • Delete old backups
  • Consider encrypting backups

# Secure permissions on backups (run on the host, where ./backups is mounted)
chmod 600 ./backups/*.sql

# Encrypt a backup on the host (requires gpg on the host)
gpg --symmetric --cipher-algo AES256 ./backups/backup.sql

Container Access

  • Only authorized users should run this script
  • Audit backup and restore operations
  • Use Docker Compose for local development only

Performance Tips

  • Run exports during off-peak hours
  • For large databases (>1GB), export-only mode is faster
  • Monitor container resources during import
  • Disable unnecessary services during import

Version

Script version: 2.4 (Container Edition - Fixed pg_dump Restrictions)
Last updated: 2026-01-18
Compatible with: PostgreSQL 10+ in Docker, EDH Stats v1.0+

Version History

  • v2.4: Remove pg_dump restrict/unrestrict commands that block data import
  • v2.3: Fixed pg_dump verbose output causing SQL syntax errors during import
  • v2.2: Auto-clear existing data before import (drop/recreate schema)
  • v2.1: Added --skip-export flag for import-only operations
  • v2.0: Initial container-based version with export/import
  • v1.0: Original version