
Deployment

Spooled Cloud can be deployed as a managed SaaS offering or self-hosted on your own infrastructure. This guide covers all deployment options and production best practices.

☁️

Managed (SaaS)

Fully managed at spooled.cloud. Zero infrastructure to maintain.

🏠

Self-Hosted

Deploy on your infrastructure with full control over data and scaling.

Deployment Architecture

A production Spooled deployment consists of stateless API servers behind a load balancer, with PostgreSQL for persistence and Redis for real-time pub/sub.

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#ecfdf5', 'primaryTextColor': '#065f46', 'primaryBorderColor': '#10b981', 'lineColor': '#6b7280'}}}%%
flowchart TB
  subgraph lb["Load Balancer"]
    LB["Nginx / Cloud LB"]
  end

  subgraph api["API Tier (Stateless)"]
    A1["Spooled API #1"]
    A2["Spooled API #2"]
    A3["Spooled API #N"]
  end

  subgraph data["Data Tier"]
    PG[(PostgreSQL)]
    RD[(Redis)]
  end

  subgraph workers["Worker Pool"]
    W1["Worker 1"]
    W2["Worker 2"]
    WN["Worker N"]
  end

  LB --> A1
  LB --> A2
  LB --> A3
  
  A1 --> PG
  A2 --> PG
  A3 --> PG
  
  A1 --> RD
  A2 --> RD
  A3 --> RD
  
  W1 --> LB
  W2 --> LB
  WN --> LB

Managed (spooled.cloud)

The easiest way to get started. Sign up at spooled.cloud/signup and start queuing jobs immediately.

  • ✓ No infrastructure to manage
  • ✓ Automatic scaling and updates
  • ✓ Built-in monitoring and alerting
  • ✓ 99.9% SLA (Pro and Enterprise plans)
  • ✓ SOC 2 Type II compliance

Self-Hosted: Docker

The quickest way to self-host Spooled for development or small deployments.

Workers are separate

Spooled runs the queue + APIs, but it does not execute your business logic. You run one or more worker processes (your app code) that claim jobs and complete/fail them. Docker Compose starts Spooled + PostgreSQL + Redis; you add your own worker container(s) separately.
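
For example, a worker can join the same Compose project as an extra service. Everything in the sketch below other than the dependency on spooled-api is a placeholder for your own application: the image, environment variable names, and key handling are assumptions, not part of the shipped stack.

# docker-compose.override.yml -- sketch only; service, image, and env names
# are placeholders for your own worker code
services:
  worker:
    build: ./worker                                # your job-processing code
    environment:
      SPOOLED_API_URL: http://spooled-api:8080     # hypothetical variable your worker reads
      SPOOLED_API_KEY: ${WORKER_API_KEY}           # key your worker authenticates with
    depends_on:
      - spooled-api
    restart: unless-stopped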

Quick Start

# Clone the repository
git clone https://github.com/spooled-cloud/spooled-backend.git
cd spooled-backend

# Copy example environment
cp .env.example .env

# Start with Docker Compose
docker compose up -d

# Check status
docker compose ps

# View logs
docker compose logs -f spooled-api

Docker Compose Configuration

The docker-compose.yml file configures the following services:

  • spooled-api — REST API on port 8080, gRPC API on port 50051
  • postgres — PostgreSQL database for queue storage
  • redis — Redis for pub/sub and rate limiting
  • dashboard — Web UI on port 3000

See the GitHub repository for the full docker-compose.yml.
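
As a rough sketch of how those services fit together (image names and the volume are assumptions; treat the repository's docker-compose.yml as the source of truth):

# Sketch only -- image names and volume layout are illustrative
services:
  spooled-api:
    image: ghcr.io/spooled-cloud/spooled-backend:latest    # assumed image name
    ports:
      - "8080:8080"      # REST API
      - "50051:50051"    # gRPC API
    env_file: .env
    depends_on:
      - postgres
      - redis

  dashboard:
    image: ghcr.io/spooled-cloud/spooled-dashboard:latest  # assumed image name
    ports:
      - "3000:3000"      # Web UI

  postgres:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7

volumes:
  pgdata: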

Self-Hosted: Kubernetes

For production deployments, run Spooled on Kubernetes. An official Helm chart is planned but not yet published; see the installation notes below.

Prerequisites

  • Kubernetes 1.24+
  • Helm 3.10+
  • PostgreSQL 14+ (managed, e.g. RDS or Cloud SQL, or self-hosted)
  • Redis 7+ (managed, e.g. ElastiCache or Memorystore, or self-hosted)

Installation

# Helm chart status
# The official Helm chart is not published yet.
# Track progress / updates in GitHub Discussions:
# https://github.com/orgs/Spooled-Cloud/discussions
#
# For Kubernetes today, use Docker Compose as a baseline and translate to:
# - Deployment for spooled-api (stateless)
# - Service + Ingress (WebSocket/SSE compatible)
# - Managed PostgreSQL + Redis (recommended)
#
# If you need a chart urgently, open a discussion and we'll prioritize it.
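
Until the chart lands, the translation described above can look roughly like the sketch below. This is not an official manifest: the image name, secret name, and replica count are assumptions.

# Sketch: Deployment + Service for spooled-api (add an Ingress with
# WebSocket/SSE support in front of the http port)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spooled-api
spec:
  replicas: 3
  selector:
    matchLabels: { app: spooled-api }
  template:
    metadata:
      labels: { app: spooled-api }
    spec:
      containers:
        - name: spooled-api
          image: ghcr.io/spooled-cloud/spooled-backend:latest   # assumed image name
          ports:
            - containerPort: 8080     # REST
            - containerPort: 50051    # gRPC
          envFrom:
            - secretRef: { name: spooled-env }   # DATABASE_URL, REDIS_URL, JWT_SECRET, ...
          readinessProbe:
            httpGet: { path: /health, port: 8080 }
---
apiVersion: v1
kind: Service
metadata:
  name: spooled-api
spec:
  selector: { app: spooled-api }
  ports:
    - { name: http, port: 8080 }
    - { name: grpc, port: 50051 }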

Configuration

Once the chart is published, create a custom values.yaml to configure (a sketch follows the list):

  • replicaCount — Number of API replicas
  • ingress — Ingress settings for external access
  • resources — CPU and memory limits
  • autoscaling — HPA configuration
  • secrets — External secrets reference
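
As a placeholder until the chart is available, a values.yaml covering those keys might look like this; the exact key names and defaults are assumptions and may change:

# Hypothetical values.yaml -- key names mirror the list above
replicaCount: 3

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: spooled.example.com

resources:
  requests: { cpu: 250m, memory: 256Mi }
  limits:   { cpu: "1",  memory: 512Mi }

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

secrets:
  existingSecret: spooled-env   # DATABASE_URL, REDIS_URL, JWT_SECRET, ADMIN_API_KEY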

Self-Hosted: Bare Metal

For maximum control, run Spooled directly on servers.

Build from Source

# Requirements: Rust 1.75+
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone and build
git clone https://github.com/spooled-cloud/spooled-backend.git
cd spooled-backend
cargo build --release

# Binary at target/release/spooled-backend

System Service

Configure systemd to manage the Spooled service with automatic restart on failure. See the repository for example systemd unit files.
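
A minimal unit file might look like the sketch below; the install path, environment file location, and service user are assumptions, and the repository's unit files take precedence.

# /etc/systemd/system/spooled.service -- illustrative only
[Unit]
Description=Spooled API
After=network-online.target
Wants=network-online.target

[Service]
User=spooled
EnvironmentFile=/etc/spooled/spooled.env
ExecStart=/usr/local/bin/spooled-backend
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target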

Reverse Proxy

Configure Nginx or Caddy as a reverse proxy with TLS termination. Ensure the following (an Nginx sketch follows the list):

  • WebSocket upgrade support for /api/v1/ws
  • Buffering disabled for SSE at /api/v1/events
  • Proper proxy headers (Host, X-Real-IP, X-Forwarded-For)
  • gRPC proxying on port 50051 (HTTP/2 required)
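
A condensed Nginx sketch covering those four points; hostnames, upstream addresses, and certificate paths are placeholders:

# Sketch only -- adjust names, addresses, and TLS material for your environment
upstream spooled_api  { server 127.0.0.1:8080; }
upstream spooled_grpc { server 127.0.0.1:50051; }

server {
  listen 443 ssl http2;
  server_name spooled.example.com;

  ssl_certificate     /etc/ssl/certs/spooled.pem;
  ssl_certificate_key /etc/ssl/private/spooled.key;

  # WebSocket upgrade for /api/v1/ws
  location /api/v1/ws {
    proxy_pass http://spooled_api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }

  # SSE: disable buffering so events flush immediately
  location /api/v1/events {
    proxy_pass http://spooled_api;
    proxy_buffering off;
    proxy_read_timeout 1h;
  }

  location / {
    proxy_pass http://spooled_api;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

# gRPC on its own listener (HTTP/2 required)
server {
  listen 50051 ssl http2;
  server_name spooled.example.com;

  ssl_certificate     /etc/ssl/certs/spooled.pem;
  ssl_certificate_key /etc/ssl/private/spooled.key;

  location / {
    grpc_pass grpc://spooled_grpc;
  }
}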

Environment Variables

Variable           Required  Description
DATABASE_URL       Yes       PostgreSQL connection string
REDIS_URL          Yes*      Redis connection string (* optional for single-node)
JWT_SECRET         Yes       256-bit secret for JWT signing
ADMIN_API_KEY      Yes       Initial admin API key
HOST               No        Bind address (default: 0.0.0.0)
PORT               No        REST API port (default: 8080)
GRPC_PORT          No        gRPC API port (default: 50051)
LOG_LEVEL          No        Logging level: trace, debug, info, warn, error
LOG_FORMAT         No        Log format: json or pretty
METRICS_ENABLED    No        Enable Prometheus metrics (default: true)
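
For example, a production environment file might look like this. Every value is a placeholder; generate the secrets yourself (e.g. with openssl rand -hex 32 for a 256-bit value) rather than copying these:

# Example environment (placeholder values only)
DATABASE_URL=postgres://spooled:CHANGE_ME@db.internal:5432/spooled
REDIS_URL=redis://redis.internal:6379
JWT_SECRET=CHANGE_ME_256_BIT_HEX
ADMIN_API_KEY=CHANGE_ME
LOG_LEVEL=info
LOG_FORMAT=json
METRICS_ENABLED=true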

Production Checklist

  • TLS/HTTPS — All traffic encrypted
  • Database backups — Automated daily backups with point-in-time recovery
  • Monitoring — Prometheus metrics scraped, Grafana dashboards configured
  • Alerting — Alerts for high error rates, queue depth, and DLQ growth
  • Log aggregation — Centralized logging (Loki, CloudWatch, Datadog)
  • Secret management — Secrets in Vault, AWS Secrets Manager, or K8s secrets
  • Health checks — Load balancer configured with /health endpoint (probe example after this list)
  • Horizontal scaling — Multiple API instances behind load balancer
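
For the health-check item, the documented /health endpoint can back a container or load-balancer probe. A Compose-style example (intervals are arbitrary, and it assumes curl is available in the image):

# Example container health check against /health
services:
  spooled-api:
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:8080/health"]
      interval: 15s
      timeout: 3s
      retries: 3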

Database Requirements

PostgreSQL

  • Version: 14+ (16 recommended)
  • Extensions: pgcrypto (auto-enabled)
  • Connections: At least 20 per API instance (see the sizing example below)
  • Storage: SSD recommended for queue performance
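
As a worked example of the connection budget: three API instances at 20 connections each need 60, so max_connections should leave headroom above that. The exact figure below is illustrative:

-- Illustrative sizing: 3 API instances x 20 connections = 60, plus headroom
-- for workers, migrations, and ad-hoc sessions
ALTER SYSTEM SET max_connections = 100;
-- Note: max_connections only takes effect after a PostgreSQL restart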

Redis

  • Version: 7+ (for function support)
  • Memory: At least 256MB for small deployments
  • Persistence: RDB or AOF for durability
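
An illustrative redis.conf for a small deployment matching the figures above; tune persistence and memory to your own recovery objectives:

# Illustrative redis.conf for a small deployment
appendonly yes                  # AOF for durability
appendfsync everysec
save 900 1                      # RDB snapshot every 15 min if at least 1 change
maxmemory 256mb
maxmemory-policy noeviction     # avoid silently evicting queue/pub-sub data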