I need a place to store my notes where I’ll be able to find them.

Hopefully, this is it.

MacPorts Over Homebrew

Before reaching for Homebrew, check whether MacPorts already has what you need (e.g. run port search <name>).

Moving Into A Docker Container

Get the container ID (or name):

docker ps

Open a shell inside the container:

docker exec -it <container-id> bash

Pin requirements.txt To Exact Versions For Production

To keep a dependency upgrade from breaking the app, pin exact version numbers (pip freeze shows what is currently installed):

Flask==2.3.2
requests==2.28.1
numpy==1.24.2

When using watchdog to monitor folders, pair it with Celery and Redis/PostgreSQL.

Celery is a distributed task queue that can track file-processing jobs. It has built-in mechanisms for task states and results that persist across restarts, and it can be configured to avoid duplicate processing.

This guards against the problem that watchdog forgets which files it has already seen whenever the process restarts.
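
A minimal sketch of that handoff, assuming a Redis broker on localhost and a hypothetical process_file task (names, paths, and URLs are illustrative):

# watcher.py - sketch: watchdog only *detects* files and hands them to
# Celery; the queue and task state live in Redis, not in this process.
import time

from celery import Celery
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

# Hypothetical broker/backend URLs - adjust to your environment.
app = Celery("media", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task(name="media.process_file")
def process_file(path):
    ...  # real processing runs in a Celery worker, not in the watcher

class EnqueueHandler(FileSystemEventHandler):
    def on_created(self, event):
        if not event.is_directory:
            process_file.delay(event.src_path)  # enqueue, don't process

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(EnqueueHandler(), "/app/nas_media", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()

Because the queue and task state live in Redis rather than in the watcher process, restarting the watcher loses nothing that was already enqueued.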

Simple Security With Guest Mode

  1. A system with a “non-destructive guest role”
  2. The system has an emergency password-reset file
  3. JWT-based authentication with a simplified setup: simple long-lived tokens (weeks/months) to minimize login frequency (see the sketch after this list)
  4. A single-user system only needs an “owner” role and a “guest” role with no delete capability, for when you occasionally need to share access
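
A minimal sketch of the long-lived-token idea, assuming PyJWT; the secret, lifetime, and role names are placeholders:

# guest_auth.py - sketch of long-lived tokens with an owner/guest split.
# Assumes PyJWT (pip install PyJWT); secret and lifetime are placeholders.
import datetime

import jwt

SECRET = "change-me"   # load from an environment variable in practice
ALGORITHM = "HS256"

def issue_token(role, weeks=12):
    # A long expiry (weeks/months) keeps login frequency low.
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {"role": role,
              "iat": now,
              "exp": now + datetime.timedelta(weeks=weeks)}
    return jwt.encode(claims, SECRET, algorithm=ALGORITHM)

def can_delete(token):
    # Guests are non-destructive: only the owner role may delete.
    claims = jwt.decode(token, SECRET, algorithms=[ALGORITHM])  # also verifies exp
    return claims["role"] == "owner"

With only two roles, a single role claim in the token is enough; destructive routes simply refuse anything that is not “owner”.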

Recommended Celery Packages With Redis

  1. celery-redbeat
    • Reliable Redis-based scheduler for your overnight processing
    • More efficient than default scheduler for your Docker environment
  2. Flower
    • Lightweight monitoring dashboard to track overnight processing
    • Helps identify bottlenecks in your media processing pipeline
  3. Duplicate-task prevention - use a lock mechanism or idempotent tasks so that only one instance of a task with the same name and arguments runs at a time (see the sketch after this list)
    • Prevents duplicate processing of the same files
    • Essential when using watchdog for file detection
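
A minimal sketch of the lock idea with redis-py (key names and TTL are placeholders):

# dedupe.py - sketch of a Redis lock that prevents duplicate processing
# of the same file. Assumes redis-py; key names and TTL are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def acquire_once(path, ttl=3600):
    # SET with nx=True and an expiry is atomic: only the first caller
    # for this path wins, and the TTL frees the lock if a worker dies.
    return r.set(f"lock:process:{path}", "1", nx=True, ex=ttl)

def process_file(path):
    # In a real setup this would be the body of the Celery task.
    if not acquire_once(path):
        return "duplicate, skipped"
    ...  # actual processing here

Because SET with nx/ex is a single atomic Redis command, two workers racing on the same path cannot both acquire the lock.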

Task Management Helpers

  1. kombu
    • Already a Celery dependency, but worth configuring properly
    • Set the message visibility timeout appropriately for long-running file processing
  2. SQLAlchemy Result Backend
    • Store task results in your PostgreSQL database
    • Enables tracking of processing history and searching by task status

Configuration Pattern
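
A minimal sketch of how these pieces could be wired together (the broker URL, PostgreSQL DSN, and timeout value are placeholders):

# celeryconfig.py - sketch combining the pieces above.
broker_url = "redis://localhost:6379/0"

# SQLAlchemy result backend: task states/results land in PostgreSQL,
# so processing history survives restarts and can be queried.
result_backend = "db+postgresql://user:pass@localhost/celery_results"

# Kombu/Redis visibility timeout: set it longer than your longest task,
# or Redis will re-deliver (and duplicate) still-running tasks.
broker_transport_options = {"visibility_timeout": 6 * 3600}  # 6 hours

# celery-redbeat as the beat scheduler for overnight jobs.
beat_scheduler = "redbeat.RedBeatScheduler"
redbeat_redis_url = "redis://localhost:6379/2"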

Connecting An APP To A NAS

  1. Make sure the NAS user assigned to the APP has READ-ONLY privileges on ONLY the specific data that the app needs to process.
  2. Have backups of the NAS before connecting the APP.
  3. Creating a separate backup service in your Docker Compose file gives better isolation for any app backup process to the NAS. Below is how you would do it.

docker-compose.yml For Backups

version: '3.8'

services:
  media_processor:
    image: your-media-processor-image
    restart: unless-stopped
    user: "1000:1000"  # Run as non-root user (adjust UID/GID as needed)
    volumes:
      # NAS source (read-only)
      - type: bind
        source: /mnt/nas_media
        target: /app/nas_media
        read_only: true
        
      # SSD processing area (temporary, Docker-managed)
      - ssd_processing:/app/processing
      
      # Permanent storage for playback
      - type: bind
        source: /mnt/permanent_storage
        target: /app/permanent_storage
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"] # Adjust to your app's health endpoint
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 30s
  
  backup_service:
    image: alpine:latest  # Note: stock alpine has no rsync; use an image with rsync baked in (e.g. 'apk add --no-cache rsync' in a small Dockerfile)
    restart: unless-stopped
    user: "1000:1000"  # Run as non-root user (adjust UID/GID as needed)
    entrypoint: ["/bin/sh", "/backup/scripts/backup.sh"]
    volumes:
      # Read-only access to permanent storage
      - type: bind
        source: /mnt/permanent_storage
        target: /backup/source
        read_only: true
        
      # Write access to NAS backup location
      - type: bind
        source: /mnt/nas_backup
        target: /backup/destination
        
      # Backup scripts
      - type: bind
        source: ./backup-scripts
        target: /backup/scripts

      # Writable log area for backup.log and the last_success marker
      # (create ./backup-logs on the host, writable by UID 1000)
      - type: bind
        source: ./backup-logs
        target: /backup/logs
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
    healthcheck:
      test: ["CMD", "sh", "-c", "pgrep rsync || grep 'Last successful backup' /backup/logs/backup.log | grep -v 'older than 2 hours'"]
      interval: 5m
      timeout: 30s
      retries: 3
      start_period: 1m

volumes:
  ssd_processing:
    driver: local
    driver_opts:
      type: none
      device: /path/to/ssd/directory
      o: bind

Backup Script

#!/bin/sh
# Enhanced backup script with error handling, logging, and retention

# Configuration
BACKUP_SOURCE="/backup/source"
BACKUP_DEST="/backup/destination"
BACKUP_LOGS="/backup/logs"
BACKUP_FREQ=3600  # Backup frequency in seconds (1 hour)
MAX_BACKUPS=7     # Number of daily backups to keep
LOG_FILE="${BACKUP_LOGS}/backup.log"

# Create logs directory if it doesn't exist
mkdir -p "${BACKUP_LOGS}"
touch "${LOG_FILE}"

# Helper functions
log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "${LOG_FILE}"
}

log_error() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] ERROR: $1" | tee -a "${LOG_FILE}" >&2
}

update_last_success() {
  echo "Last successful backup: $(date '+%Y-%m-%d %H:%M:%S')" > "${BACKUP_LOGS}/last_success"
}

check_source() {
  if [ ! -d "${BACKUP_SOURCE}" ]; then
    log_error "Backup source directory doesn't exist or isn't accessible"
    return 1
  fi
  return 0
}

check_destination() {
  if [ ! -d "${BACKUP_DEST}" ]; then
    log "Backup destination doesn't exist, attempting to create it"
    if ! mkdir -p "${BACKUP_DEST}"; then
      log_error "Failed to create backup destination directory"
      return 1
    fi
  fi
  
  # Test write access
  if ! touch "${BACKUP_DEST}/.write_test"; then
    log_error "Cannot write to backup destination"
    return 1
  fi
  rm "${BACKUP_DEST}/.write_test"
  return 0
}

rotate_backups() {
  # Remove backups older than MAX_BACKUPS days
  find "${BACKUP_DEST}/previous_versions" -maxdepth 1 -type d -name "20*" -mtime +${MAX_BACKUPS} -exec rm -rf {} \; 2>/dev/null || true
}

# Main backup function
perform_backup() {
  # Create timestamped backup directory
  BACKUP_DATE=$(date '+%Y-%m-%d')
  BACKUP_TIME=$(date '+%H-%M-%S')
  BACKUP_SUBDIR="previous_versions/${BACKUP_DATE}_${BACKUP_TIME}"
  
  log "Starting backup to ${BACKUP_SUBDIR}"
  
  # Ensure previous_versions directory exists
  mkdir -p "${BACKUP_DEST}/previous_versions"
  
  # Create a hard-link based backup for efficiency
  # --link-dest makes incremental backups that only store changed files
  if rsync -av --delete \
     --link-dest="${BACKUP_DEST}/latest" \
     "${BACKUP_SOURCE}/" \
     "${BACKUP_DEST}/${BACKUP_SUBDIR}/"; then
    
    log "Backup completed successfully"
    
    # Update the "latest" symbolic link
    rm -f "${BACKUP_DEST}/latest"
    ln -sf "${BACKUP_SUBDIR}" "${BACKUP_DEST}/latest"
    
    # Record success
    update_last_success
    
    # Rotate old backups
    rotate_backups
    
    return 0
  else
    log_error "Backup failed with error code $?"
    return 1
  fi
}

# Main execution loop
log "Backup service started"

while true; do
  if check_source && check_destination; then
    perform_backup
  else
    log_error "Pre-backup checks failed, skipping this backup cycle"
  fi
  
  log "Sleeping for ${BACKUP_FREQ} seconds until next backup"
  sleep "${BACKUP_FREQ}"
done

Backup System Setup and Deployment Instructions

This document explains how to set up and deploy the improved backup solution for your media processing system.

Directory Structure

Create the following directory structure:

/your-project-directory/
├── docker-compose.yml
├── backup-scripts/
│   └── backup.sh
└── backup-logs/

Preparation Steps

  1. Create the necessary directories on your Mac mini and NAS:
# On Mac mini (note: on modern macOS the root filesystem is read-only,
# so you may need paths under /Users/Shared or /Volumes instead of /mnt)
sudo mkdir -p /mnt/nas_media
sudo mkdir -p /mnt/nas_backup
sudo mkdir -p /mnt/permanent_storage
sudo mkdir -p /path/to/ssd/directory  # For SSD processing area

# Create backup scripts and logs directories in your project folder
mkdir -p ./backup-scripts ./backup-logs
  2. Set up proper permissions:
# Create a dedicated user for backups (optional but recommended;
# useradd is Linux-only - on macOS create the user with sysadminctl)
sudo useradd -m backup-user -u 1000

# Set permissions
sudo chown -R backup-user:backup-user /mnt/permanent_storage
sudo chown -R backup-user:backup-user /path/to/ssd/directory
sudo chmod 750 /mnt/permanent_storage
sudo chmod 750 /path/to/ssd/directory

# Ensure NAS mount points have proper permissions
# This will depend on your NAS setup
  3. Mount your NAS shares:
# Example for NFS mounts (adjust according to your NAS setup)
sudo mount -t nfs your-nas-ip:/media /mnt/nas_media
sudo mount -t nfs your-nas-ip:/backups /mnt/nas_backup

# Add to /etc/fstab for persistence
# your-nas-ip:/media /mnt/nas_media nfs ro,noatime 0 0
# your-nas-ip:/backups /mnt/nas_backup nfs rw,noatime 0 0

Deployment

  1. Copy the Docker Compose file to your project directory
  2. Copy the backup script to the backup-scripts directory and make it executable:
chmod +x ./backup-scripts/backup.sh
  3. Start the services:
docker-compose up -d
  4. Verify the services are running:
docker-compose ps
docker-compose logs backup_service

Testing the Backup System

  1. Create some test files in the permanent storage directory:
echo "Test file" > /mnt/permanent_storage/test.txt
  2. Trigger an immediate backup cycle (optional). The script has no signal handler, but it runs a backup as soon as it starts, so restarting the service kicks one off:
docker-compose restart backup_service
  3. Check the backup logs:
docker-compose logs backup_service
  4. Verify the backup exists on the NAS:
ls -la /mnt/nas_backup/latest/

Security Considerations

  1. Network Security: Ensure your NAS is not exposed to the internet directly
  2. Encryption: Consider encrypting sensitive backup data
  3. Access Control: Restrict access to backup directories
  4. Monitoring: Set up alerts for backup failures

Customization

You can customize the backup behavior by modifying the following variables in the backup script:

  • BACKUP_FREQ: Time between backups (in seconds)
  • MAX_BACKUPS: Number of daily backups to retain

Troubleshooting

If you encounter issues:

  1. Check the container logs: docker-compose logs
  2. Verify volume mount permissions
  3. Check network connectivity to the NAS
  4. Review the backup logs in /backup/logs/backup.log

For more detailed troubleshooting, access the container directly:

docker-compose exec backup_service /bin/sh
