Open-source • Apache 2.0 License

Reliable queueing for background work at any scale

Open-source job queue infrastructure: durable Postgres-backed jobs, cron schedules, and simple workflows. Bulk enqueue, automatic retries (with backoff), idempotency keys, and dead-letter queues—failed work stays visible and replayable.

Star on GitHub
enqueue-job.ts
// Any service → Spooled queue → Your workers
const response = await fetch('https://api.spooled.cloud/api/v1/jobs', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    queue_name: 'critical-jobs',
    payload: jobPayload,
    idempotency_key: requestId,
    tags: { source: 'api' },
  })
});

// ✓ Queued instantly (bulk up to 100 jobs)
// ✓ Retries with exponential backoff
// ✓ Deduplicated by idempotency key
// ✓ Filterable by tags, real-time status
Quick Start

Up and running in minutes

Enqueue jobs, process them with workers, done. Use idempotency keys to prevent duplicates. All examples work with cURL, Node.js, Python, and Go.

1. Enqueue a Job
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

image_id = "img_123"

# Create a background job
result = client.jobs.create({
    "queue_name": "image-processing",
    "payload": {
        "image_url": "https://example.com/image.jpg",
        "operations": ["resize", "compress"],
        "output_format": "webp"
    },
    "idempotency_key": f"process-image-{image_id}",
    "max_retries": 3
})

print(f"Created job: {result.id}")

client.close()
2. Process Jobs
from spooled import SpooledClient
from spooled.worker import SpooledWorker
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
worker = SpooledWorker(client, queue_name="email-notifications", concurrency=10)

@worker.process
def handle_job(ctx):
    to = ctx.payload["to"]
    subject = ctx.payload["subject"]
    
    # Process the job
    send_email(to, subject)
    
    return {"sent": True}

worker.start()  # Blocking
Bonus: Real-time Updates
from spooled import SpooledClient
from spooled.realtime import SubscriptionFilter
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

realtime = client.realtime(type="sse")

@realtime.on("job.created")
def on_job_created(data):
    print("job.created:", data)

realtime.connect()
realtime.subscribe(SubscriptionFilter(queue="orders"))
1. Queue a job
Send a job via REST or gRPC. We store it durably with idempotency and scheduling.

2. Workers process
Your workers claim work, run your code, then complete/fail. Retries happen automatically.

3. Monitor
Track throughput, latency, failures, and DLQ replays in the dashboard.
Dashboard

Everything you need to monitor your queues

A real-time view into jobs, retries, queues, and workers—built for debugging and operations.

Job timeline
Follow a job end-to-end, including retries and outcomes.
DLQ replay
Inspect failures and replay jobs after you’ve fixed the root cause.
Queue stats
Throughput, pending counts, and latency at a glance.
Worker health
See active workers, leases, and heartbeats in real time.
Real-World Examples

See it in action

Copy-paste these examples into your project. Each one shows exactly how to integrate Spooled.

Spooled handles

  • Reliable job queuing & storage
  • Automatic retries with backoff
  • Job deduplication (idempotency)
  • Dead-letter queues
  • • Real-time job monitoring

You provide

  • Workers (your code, your servers)
  • Email service (Resend, SendGrid...)
  • Storage (S3, Cloudflare R2...)
  • Any external APIs you need
  • Your business logic

🛒 E-Commerce Order Processing

Process Stripe payments reliably

When a customer completes checkout, Stripe sends a webhook. Spooled queues it; your worker processes the payment, sends the email, and updates inventory (with retries and deduplication).

💳 Stripe Webhook → 📥 Queue Job → Process Payment → 📧 Send Email

1. Queue the job

JavaScript
// Your Stripe webhook endpoint (runs on YOUR server)
app.post('/webhooks/stripe', async (req, res) => {
  const event = req.body;
  
  // Queue the job in Spooled (just stores the data)
  await fetch('https://api.spooled.cloud/api/v1/jobs', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer sp_live_...',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      queue_name: 'payments',
      payload: {
        event_type: event.type,
        customer_id: event.data.object.customer,
        amount: event.data.object.amount,
        order_id: event.data.object.metadata.order_id
      },
      idempotency_key: event.id  // Prevents duplicate processing
    })
  });
  
  res.status(200).send('Queued');  // Respond fast to Stripe
});

2. Process in worker

JavaScript
// Your worker (runs on YOUR server - could be a cron, container, etc.)
const WORKER_ID = 'worker-1';

async function processPayments() {
  // 1. Claim jobs from Spooled
  const res = await fetch('https://api.spooled.cloud/api/v1/jobs/claim', {
    method: 'POST',
    headers: { 
      'Authorization': 'Bearer sp_live_...',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ queue_name: 'payments', worker_id: WORKER_ID, limit: 10 })
  });
  const { jobs } = await res.json();
  
  for (const job of jobs) {
    try {
      // 2. YOUR code does the actual work
      await db.orders.update(job.payload.order_id, { status: 'paid' });
      const customerEmail = await lookupCustomerEmail(job.payload.customer_id);
      await resend.emails.send({  // YOUR Resend/SendGrid account
        to: customerEmail,
        subject: 'Order Confirmed!'
      });
      
      // 3. Tell Spooled: success!
      await fetch(`https://api.spooled.cloud/api/v1/jobs/${job.id}/complete`, {
        method: 'POST',
        headers: { 
          'Authorization': 'Bearer sp_live_...',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ worker_id: WORKER_ID })
      });
    } catch (err) {
      // Job will auto-retry later
      await fetch(`https://api.spooled.cloud/api/v1/jobs/${job.id}/fail`, {
        method: 'POST',
        headers: { 
          'Authorization': 'Bearer sp_live_...',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ worker_id: WORKER_ID, error: err.message })
      });
    }
  }
}
Use Cases

Built for any async workload

From simple webhook handlers to complex event-driven architectures, Spooled scales with your needs.

Webhook Processing

Queue events from any source. For providers like Stripe/GitHub/Shopify, use a tiny webhook adapter to verify signatures and enqueue jobs.

Example: Process payment events, deploy on push, handle SMS responses

Stripe (via adapter) GitHub (via adapter) Shopify (via adapter) Custom JSON

High-Volume Jobs

Bulk enqueue up to 100 jobs at once. Check batch status in a single call. Tag jobs for filtering and analytics across millions of records.

Example: Mass email campaigns, data migrations, bulk notifications

bulk enqueue batch status job tags high throughput
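
A rough sketch of bulk enqueueing over REST, assuming a bulk endpoint at /api/v1/jobs/bulk that accepts a jobs array (the exact path and request shape may differ; check the API reference):

Bulk enqueue (sketch)
import os
import requests

API_KEY = os.environ["SPOOLED_API_KEY"]

# Build up to 100 jobs, each with its own idempotency key and tags
jobs = [
    {
        "queue_name": "email-campaign",
        "payload": {"user_id": f"usr_{i}", "template": "spring-sale"},
        "idempotency_key": f"spring-sale-usr_{i}",
        "tags": {"campaign": "spring-sale"},
    }
    for i in range(100)
]

# Hypothetical bulk endpoint -- verify the exact path in the API docs
resp = requests.post(
    "https://api.spooled.cloud/api/v1/jobs/bulk",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"jobs": jobs},
)
resp.raise_for_status()
print(f"Enqueued {len(jobs)} jobs in one call")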

Scheduled Tasks

Schedule jobs with second-precision cron (6-field expressions). Timezone-aware execution with manual trigger option and execution history.

Example: Daily reports, subscription renewals, cleanup tasks

cron jobs second precision timezone support recurring jobs

Event-Driven Workflows

Chain jobs with dependencies—run steps in parallel or sequence. Retry failed workflows end-to-end with full error handling and visibility.

Example: Order processing, user onboarding, approval flows

job dependencies parallel execution workflow retry orchestration
Performance

Built for scale

100

Jobs per bulk enqueue

6-field

Cron (second precision)

SSE + WS

Real-time streaming

2 APIs

REST + gRPC (HTTP/2)

Built for production workloads
Features

Everything you need for reliable processing

Production-grade features built into the platform. Focus on your business logic while Spooled handles the infrastructure.

Queue Anything

Webhooks, background jobs, scheduled tasks, and workflows — all handled as jobs. Bulk enqueue up to 100 jobs in a single API call.

Learn more

Automatic Retries

Failed jobs automatically retry with exponential backoff. Configure max attempts (up to 100), delays, and get per-job completion webhooks.

Learn more

Idempotency Built-in

Prevent duplicate processing with built-in idempotency keys. Safe webhook retries guaranteed—even if your source sends the same event twice.

Learn more
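
A small sketch of the deduplication behavior, assuming a repeated idempotency key is treated as a no-op and the existing job is returned instead of a duplicate:

Deduplication (sketch)
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

event_id = "evt_123"  # e.g. the provider's event ID

# First delivery creates the job
first = client.jobs.create({
    "queue_name": "payments",
    "payload": {"event_id": event_id},
    "idempotency_key": event_id,
})

# The provider retries and sends the same event again; the same
# idempotency key means no second job is created (assumed behavior:
# the existing job is returned)
second = client.jobs.create({
    "queue_name": "payments",
    "payload": {"event_id": event_id},
    "idempotency_key": event_id,
})

print(first.id == second.id)  # expected: True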

Dead-Letter Queue

Jobs that exhaust retries land in the DLQ. Inspect payloads, debug failures, bulk retry or purge with filters—full control over failed work.

Learn more

API Rate Limits

Fair usage limits per plan with clear errors and upgrade paths. Track your current usage and limits via the API.

Learn more
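
A hedged sketch of handling a rate-limit response when enqueueing over REST, assuming limits are signalled with HTTP 429 and an optional Retry-After header (check the documented behavior for your plan):

Rate-limit handling (sketch)
import os
import time
import requests

API_KEY = os.environ["SPOOLED_API_KEY"]
JOBS_URL = "https://api.spooled.cloud/api/v1/jobs"

def enqueue_with_backoff(job: dict, max_attempts: int = 5) -> dict:
    for attempt in range(max_attempts):
        resp = requests.post(
            JOBS_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=job,
        )
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Respect Retry-After if present, otherwise back off exponentially
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("rate limited: gave up after retries")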

Queue Control

Pause and resume queues instantly during deployments or incidents. Boost job priority on the fly. Check batch status for up to 100 jobs at once.

Learn more
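
A sketch of pausing a queue around a deploy, assuming the Python SDK exposes pause/resume methods on client.queues (the method names are illustrative; check the SDK reference):

Pause/resume a queue (sketch)
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Pause the queue before a deploy or during an incident
# (method name assumed)
client.queues.pause("payments")

# ... roll out new worker code here ...

# Resume once the new workers are healthy (method name assumed)
client.queues.resume("payments")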

Real-time Observability

Monitor job throughput, latency, and error rates. Stream live updates via WebSocket and SSE. Export Prometheus metrics for your dashboards.

Learn more

REST + gRPC APIs

Use REST for web apps and simple integrations. Use gRPC with HTTP/2 bidirectional streaming for high-throughput workers processing thousands of jobs/sec.

Learn more

Tags & Filtering

Add searchable tags to jobs for filtering and analytics. Track jobs by customer, environment, or any custom dimension you need.

Learn more
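
A sketch of tagging jobs and filtering on those tags later, assuming the SDK's list call accepts a tags filter (the filter parameter name is an assumption):

Tag and filter jobs (sketch)
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Tag the job with the dimensions you want to query on later
client.jobs.create({
    "queue_name": "exports",
    "payload": {"report": "monthly"},
    "tags": {"customer": "acct_42", "env": "production"},
})

# Later: filter jobs by tag (filter shape assumed)
jobs = client.jobs.list({
    "queue_name": "exports",
    "tags": {"customer": "acct_42"},
})
for job in jobs:
    print(job.id, job.status)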

Official SDKs

Production-ready Node.js, Python, Go, and PHP SDKs with full type safety. gRPC streaming, worker abstractions, and circuit breakers built-in.

Learn more
Live Demo

Watch jobs flow through the queue

Spooled stores your jobs. Your workers claim and process them. Failed jobs auto-retry.

[Interactive demo: jobs flow from Incoming to Processing to Completed, with live counters for jobs/sec, average latency, success rate, and retries.]
Reliability

Automatic retries with exponential backoff

Failed jobs automatically retry with intelligent backoff. After all retries, jobs move to the Dead Letter Queue for inspection.

Initial Attempt (t = 0s): job is picked up by a worker for processing

Retry #1 (t + 1m): first retry after a 1 minute delay

Retry #2 (t + 3m): exponential backoff, 2 minute delay

Retry #3 (t + 7m): exponential backoff, 4 minute delay

Backoff Formula (timings shown above are real)
delay_minutes = min(2^retry_count, 60)

Default job retries are scheduled in minutes (1m, 2m, 4m, ...), capped at 60m. (Queue/job config can override max retries.)
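
The timeline above falls straight out of that formula; a quick sketch that reproduces the schedule:

Backoff schedule (sketch)
# delay_minutes = min(2^retry_count, 60)
def backoff_minutes(retry_count: int) -> int:
    return min(2 ** retry_count, 60)

elapsed = 0
for retry in range(3):
    delay = backoff_minutes(retry)
    elapsed += delay
    print(f"Retry #{retry + 1}: delay {delay}m, runs at t + {elapsed}m")

# Retry #1: delay 1m, runs at t + 1m
# Retry #2: delay 2m, runs at t + 3m
# Retry #3: delay 4m, runs at t + 7m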

List DLQ jobs
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# List jobs in the dead-letter queue
dlq_jobs = client.jobs.dlq.list({
    "queue_name": "payment-processing",
    "limit": 100
})

for job in dlq_jobs:
    print(f"DLQ Job: {job.id}, status: {job.status}")
    print(f"Retry count: {job.retry_count}, created: {job.created_at}")
Retry DLQ jobs
# Retry DLQ jobs
result = client.jobs.dlq.retry({
    "queue_name": "payment-processing",
    "limit": 50
})

print(f"Retried {result.retried_count} jobs")
Real-time Streaming

Publish once, stream updates everywhere

Queue jobs via REST. Stream queue/job updates via SSE (and WebSocket for dashboards).

Publisher: Your Application

# Enqueue a job
POST /api/v1/jobs
{
  "queue_name": "orders",
  "payload": {...}
}

Spooled: Queue & Store

Stream Client: SSE queue stats

# Live event stream
GET /api/v1/events/queues/orders
{
  "event": "queue.stats",
  "data": { "pending": ..., "processing": ... }
}
Stream live queue stats (SSE)
from spooled import SpooledClient
from spooled.realtime import SubscriptionFilter
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

realtime = client.realtime(type="sse")

@realtime.on("job.created")
def on_job_created(data):
    print("job.created:", data)

realtime.connect()
realtime.subscribe(SubscriptionFilter(queue="orders"))
Distributed Processing

Scale with your worker pool

Run multiple workers on your servers. Spooled distributes jobs automatically—no coordination needed.

You run: Workers on your infrastructure  •  Spooled provides: Queue coordination & job locking

Your Worker Pool

Your servers claim jobs from Spooled

[Interactive demo: three of your servers sit idle and poll the job queue, with live counters for workers, claimed jobs, and queued jobs.]
Cron Schedules

Recurring jobs made simple

Create jobs that run on a schedule. Daily reports, subscription renewals, cleanup tasks—set it once and forget it.

Create Recurring Schedule
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Create a cron schedule (6-field expression, seconds first)
schedule = client.schedules.create({
    "name": "Daily Report",
    "cron_expression": "0 0 9 * * *",
    "timezone": "America/New_York",
    "queue_name": "reports",
    "payload_template": {"type": "daily_report"}
})
Schedule for Later
from spooled import SpooledClient
from datetime import datetime, timedelta
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

user_id = "usr_123"

# Schedule a job to run in 24 hours
client.jobs.create({
    "queue_name": "reminders",
    "payload": {"user_id": user_id, "type": "cart-abandoned"},
    "scheduled_at": (datetime.utcnow() + timedelta(hours=24)).isoformat() + "Z",
    "idempotency_key": f"reminder-{user_id}-cart"
})

Common Cron Patterns (6-field, seconds first)

0 0 * * * * Every hour
0 0 9 * * * Daily at 9 AM
0 0 0 * * 0 Weekly on Sunday
0 0 0 1 * * Monthly on the 1st
0 */15 * * * * Every 15 minutes
0 0 9 * * 1-5 Weekdays at 9 AM
Workflows

Orchestrate complex workflows

Chain jobs together with dependencies. Build multi-step processes where each job waits for its dependencies to complete.

Create Account
Step 1
Send Welcome
Step 2 (waits for Step 1)
Setup Defaults
Step 3 (waits for Step 1)
User Onboarding Workflow
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Create a workflow with job dependencies
workflow = client.workflows.create({
    "name": "user-onboarding",
    "jobs": [
        {
            "key": "create-account",
            "queue_name": "users",
            "payload": {"email": "user@example.com", "plan": "pro"}
        },
        {
            "key": "send-welcome-email",
            "queue_name": "emails",
            "depends_on": ["create-account"],  # Waits for this job
            "payload": {"template": "welcome"}
        },
        {
            "key": "setup-defaults",
            "queue_name": "users",
            "depends_on": ["create-account"],  # Also waits
            "payload": {"settings": {}}
        }
    ]
})

print(f"Created workflow: {workflow.workflow_id}")

Automatic Ordering

Jobs run in the correct order based on dependencies. No manual coordination needed.

Failure Handling

If a parent job fails, dependent children are automatically cancelled.

Parallel Execution

Independent jobs run in parallel for maximum throughput.

Job Priority

Process urgent jobs first

Higher priority jobs jump the queue. Perfect for VIP customers, critical alerts, and time-sensitive tasks.

Priority Levels
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# High priority - VIP customer order (processed first)
client.jobs.create({
    "queue_name": "orders",
    "payload": {"order_id": 789, "customer_tier": "vip"},
    "priority": 10   # High priority
})

# Normal priority (default)
client.jobs.create({
    "queue_name": "orders",
    "payload": {"order_id": 790},
    "priority": 0    # Default
})

# Low priority - background cleanup
client.jobs.create({
    "queue_name": "maintenance",
    "payload": {"task": "cleanup"},
    "priority": -10  # Low priority
})

# Workers claim jobs: High → Normal → Low

Processing Order

VIP Order (priority: 10) → processed first
Normal (priority: 0) → processed second
Cleanup (priority: -10) → processed last
Outgoing Webhooks

Get notified when events occur

Spooled POSTs to your configured URLs when jobs complete, fail, or queues pause. Connect to Slack, Discord, your own app, or any webhook endpoint.

Configure Notifications
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Setup webhook for job events
client.webhooks.create({
    "name": "Slack Notifications",
    "url": "https://hooks.slack.com/...",
    "events": ["job.completed", "job.failed", "queue.paused"],
    "secret": "your-hmac-secret"  # For signature verification
})

# ✓ Spooled POSTs to your URL
# ✓ Automatic retries
# ✓ Delivery history in dashboard
Per-Job Completion Webhook
from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Get notified when THIS job completes
client.jobs.create({
    "queue_name": "exports",
    "payload": {"report_id": 12345},
    "completion_webhook": "https://your-app.com/webhooks/export-done"
})

# When job completes, Spooled POSTs to your URL:
# POST https://your-app.com/webhooks/export-done
# {"status": "completed", "job_id": "...", "result": {...}}

Available Events

job.created
job.started
job.completed
job.failed
job.cancelled
queue.paused
queue.resumed
worker.registered
worker.deregistered
schedule.triggered
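
On the receiving side, verify the signature with the secret you configured when creating the webhook. A minimal sketch, assuming Spooled signs the raw request body with HMAC-SHA256 and sends a hex digest in a signature header (the header name and signing scheme are assumptions; check the webhooks docs):

Verify webhook signature (sketch)
import hashlib
import hmac

WEBHOOK_SECRET = "your-hmac-secret"  # the secret set when creating the webhook

def verify_signature(raw_body: bytes, signature_header: str) -> bool:
    # Assumes the signature is a hex HMAC-SHA256 of the raw body;
    # adjust to the documented scheme
    expected = hmac.new(WEBHOOK_SECRET.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# In your HTTP handler: read the raw body and the signature header,
# and reject the request (e.g. 401) if verify_signature(...) is False.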
💬 Slack Alerts: get notified in Slack when jobs fail

📊 Analytics: send events to your analytics platform

🎯 Custom Logic: trigger your own workflows

📧 Email Notifications: alert teams about important events

Architecture

Production-ready from day one

Built with Rust for performance and reliability. PostgreSQL for durability. Redis for real-time pub/sub.

System Architecture

Multi-tenant queue with PostgreSQL RLS, Redis pub/sub, and real-time updates.

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#ecfdf5', 'primaryTextColor': '#065f46', 'primaryBorderColor': '#10b981', 'lineColor': '#6b7280', 'secondaryColor': '#eff6ff', 'tertiaryColor': '#faf5ff', 'fontSize': '18px', 'fontFamily': 'Inter, system-ui, sans-serif'}}}%%
flowchart LR
  subgraph sources[" 📥 Webhook Sources "]
    GH["GitHub"]
    ST["Stripe"]
    CU["Custom HTTP"]
    HC["HTTP Clients"]
  end

  subgraph backendSvc[" ⚡ Spooled Backend "]
    API["REST API"]
    GRPC["gRPC API"]
    RT["WebSocket/SSE"]
  end

  subgraph storage[" 💾 Data Plane "]
    PG[("PostgreSQL")]
    RD[("Redis")]
  end

  subgraph obs[" 📊 Observability "]
    PR["Prometheus /metrics"]
    GF["Grafana (optional)"]
  end

  subgraph dashboard[" 🖥️ Dashboard "]
    DB["Realtime UI"]
  end

  GH --> API
  ST --> API
  CU --> API
  HC --> API
  HC --> GRPC

  API --> PG
  GRPC --> PG
  RT --> RD
  PG --> DB
  RD --> DB
  API --> PR
  PR --> GF
Rust (Backend) · PostgreSQL (Database) · Redis (Pub/Sub) · RLS (Multi-tenant)
Universal Webhooks

Accept events from any source

Spooled provides a unique incoming endpoint for your organization. It accepts Spooled-formatted JSON (queue + payload). For providers like Stripe/GitHub/Shopify, use a tiny adapter/relay to verify signatures and enqueue jobs.

Stripe

Via a tiny adapter (verify → enqueue)

GitHub

Via Actions or a tiny adapter

Shopify

Via a tiny adapter (verify → enqueue)

Custom

Direct (send Spooled JSON format)

# Your unique webhook endpoint (secured with X-Webhook-Token)
POST https://api.spooled.cloud/api/v1/webhooks/{org_id}/custom
X-Webhook-Token: whk_your_secret_token
Content-Type: application/json

{
  "queue_name": "payments",
  "event_type": "stripe.payment_succeeded",
  "idempotency_key": "evt_123",
  "payload": { ... }
}
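
A minimal adapter sketch using Flask and the official stripe library: verify the provider's signature, then forward a Spooled-formatted payload to your org's incoming endpoint. The environment variable names are placeholders:

Stripe adapter (sketch)
import os
import requests
import stripe
from flask import Flask, request

app = Flask(__name__)

SPOOLED_WEBHOOK_URL = (
    f"https://api.spooled.cloud/api/v1/webhooks/{os.environ['SPOOLED_ORG_ID']}/custom"
)
SPOOLED_WEBHOOK_TOKEN = os.environ["SPOOLED_WEBHOOK_TOKEN"]
STRIPE_SIGNING_SECRET = os.environ["STRIPE_SIGNING_SECRET"]

@app.post("/webhooks/stripe")
def stripe_adapter():
    # 1. Verify the Stripe signature on the raw request body
    try:
        event = stripe.Webhook.construct_event(
            request.get_data(),
            request.headers.get("Stripe-Signature", ""),
            STRIPE_SIGNING_SECRET,
        )
    except Exception:
        return "invalid signature", 400

    # 2. Enqueue a Spooled-formatted job on the incoming endpoint
    requests.post(
        SPOOLED_WEBHOOK_URL,
        headers={"X-Webhook-Token": SPOOLED_WEBHOOK_TOKEN},
        json={
            "queue_name": "payments",
            "event_type": f"stripe.{event['type']}",
            "idempotency_key": event["id"],
            "payload": event["data"]["object"],
        },
    )
    return "queued", 200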

REST + gRPC APIs

Use REST (POST /api/v1/jobs) for simple integrations, or gRPC (:50051) for high-throughput workers with bidirectional streaming.

View Docs →
Open Source

Built in the open, for the community

Spooled is 100% open-source under the Apache 2.0 license. Inspect the code, contribute features, or deploy on your own infrastructure.

TypeScript Rust PostgreSQL Redis Docker Apache 2.0

No vendor lock-in

Self-host on your infrastructure or use our managed cloud. Your data, your control.

Transparent & auditable

Review the source code. Run security audits. Know exactly how your data is handled.

Community-driven

Built by developers, for developers. Contribute features, report bugs, shape the roadmap.

Ready to make your webhooks reliable?

Get started for free. No credit card required. Upgrade when you need more capacity.

No credit card required · Free tier forever · Open source