Reliable queueing for background work at any scale
Open-source job queue infrastructure: durable Postgres-backed jobs, cron schedules, and simple workflows. Bulk enqueue, automatic retries (with backoff), idempotency keys, and dead-letter queues—failed work stays visible and replayable.
// Any service → Spooled queue → Your workers
const response = await fetch('https://api.spooled.cloud/api/v1/jobs', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    queue_name: 'critical-jobs',
    payload: jobPayload,
    idempotency_key: requestId,
    tags: { source: 'api' },
  })
});
// ✓ Queued instantly (bulk up to 100 jobs)
// ✓ Retries with exponential backoff
// ✓ Deduplicated by idempotency key
// ✓ Filterable by tags, real-time status
Up and running in minutes
Enqueue jobs, process them with workers, done. Use idempotency keys to prevent duplicates. All examples work with cURL, Node.js, Python, and Go.
from spooled import SpooledClient
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
image_id = "img_123"
# Create a background job
result = client.jobs.create({
    "queue_name": "image-processing",
    "payload": {
        "image_url": "https://example.com/image.jpg",
        "operations": ["resize", "compress"],
        "output_format": "webp"
    },
    "idempotency_key": f"process-image-{image_id}",
    "max_retries": 3
})
print(f"Created job: {result.id}")
client.close()

from spooled import SpooledClient
from spooled.worker import SpooledWorker
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
worker = SpooledWorker(client, queue_name="email-notifications", concurrency=10)
@worker.process
def handle_job(ctx):
    to = ctx.payload["to"]
    subject = ctx.payload["subject"]
    # Process the job
    send_email(to, subject)
    return {"sent": True}

worker.start()  # Blocking

from spooled import SpooledClient
from spooled.realtime import SubscriptionFilter
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
realtime = client.realtime(type="sse")
@realtime.on("job.created")
def on_job_created(data):
    print("job.created:", data)

realtime.connect()
realtime.subscribe(SubscriptionFilter(queue="orders"))

Everything you need to monitor your queues
A real-time view into jobs, retries, queues, and workers—built for debugging and operations.
See it in action
Copy-paste these examples into your project. Each one shows exactly how to integrate Spooled.
✓ Spooled handles
- Reliable job queuing & storage
- Automatic retries with backoff
- Job deduplication (idempotency)
- Dead-letter queues
- Real-time job monitoring
→ You provide
- Workers (your code, your servers)
- Email service (Resend, SendGrid...)
- Storage (S3, Cloudflare R2...)
- Any external APIs you need
- Your business logic
🛒 E-Commerce Order Processing
Process Stripe payments reliably
When a customer completes checkout, Stripe sends a webhook. Spooled queues it; your worker processes the payment, sends the email, and updates inventory (with retries and deduplication).
1 Queue the job
// Your Stripe webhook endpoint (runs on YOUR server)
app.post('/webhooks/stripe', async (req, res) => {
  const event = req.body;

  // Queue the job in Spooled (just stores the data)
  await fetch('https://api.spooled.cloud/api/v1/jobs', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer sp_live_...',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      queue_name: 'payments',
      payload: {
        event_type: event.type,
        customer_id: event.data.object.customer,
        amount: event.data.object.amount,
        order_id: event.data.object.metadata.order_id
      },
      idempotency_key: event.id // Prevents duplicate processing
    })
  });

  res.status(200).send('Queued'); // Respond fast to Stripe
});

2 Process in worker
// Your worker (runs on YOUR server - could be a cron, container, etc.)
const WORKER_ID = 'worker-1';

async function processPayments() {
  // 1. Claim jobs from Spooled
  const res = await fetch('https://api.spooled.cloud/api/v1/jobs/claim', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer sp_live_...',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ queue_name: 'payments', worker_id: WORKER_ID, limit: 10 })
  });
  const { jobs } = await res.json();

  for (const job of jobs) {
    try {
      // 2. YOUR code does the actual work
      await db.orders.update(job.payload.order_id, { status: 'paid' });
      const customerEmail = await lookupCustomerEmail(job.payload.customer_id);
      await resend.emails.send({ // YOUR Resend/SendGrid account
        to: customerEmail,
        subject: 'Order Confirmed!'
      });

      // 3. Tell Spooled: success!
      await fetch(`https://api.spooled.cloud/api/v1/jobs/${job.id}/complete`, {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer sp_live_...',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ worker_id: WORKER_ID })
      });
    } catch (err) {
      // Job will auto-retry later
      await fetch(`https://api.spooled.cloud/api/v1/jobs/${job.id}/fail`, {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer sp_live_...',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ worker_id: WORKER_ID, error: err.message })
      });
    }
  }
}

Built for any async workload
From simple webhook handlers to complex event-driven architectures, Spooled scales with your needs.
Webhook Processing
Queue events from any source. For providers like Stripe/GitHub/Shopify, use a tiny webhook adapter to verify signatures and enqueue jobs.
Example: Process payment events, deploy on push, handle SMS responses
High-Volume Jobs
Bulk enqueue up to 100 jobs at once. Check batch status in a single call. Tag jobs for filtering and analytics across millions of records.
Example: Mass email campaigns, data migrations, bulk notifications
Scheduled Tasks
Schedule jobs with second-precision cron (6-field expressions). Timezone-aware execution with manual trigger option and execution history.
Example: Daily reports, subscription renewals, cleanup tasks
Event-Driven Workflows
Chain jobs with dependencies—run steps in parallel or sequence. Retry failed workflows end-to-end with full error handling and visibility.
Example: Order processing, user onboarding, approval flows
Built for scale
- Up to 100 jobs per bulk enqueue
- Second-precision cron scheduling
- Real-time streaming (WebSocket + SSE)
- REST + gRPC (HTTP/2) APIs
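Bulk enqueue is mentioned throughout this page but never shown in code, so here is a minimal sketch. The batch helper name (client.jobs.create_bulk) is a hypothetical placeholder for illustration; only the 100-jobs-per-call limit comes from the copy above, so check the SDK reference for the real method.

from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Build up to 100 jobs for a single enqueue call (limit per the copy above)
jobs = [
    {
        "queue_name": "email-notifications",
        "payload": {"user_id": user_id, "template": "digest"},
        # One idempotency key per logical job keeps retries from duplicating work
        "idempotency_key": f"digest-{user_id}",
    }
    for user_id in ["usr_1", "usr_2", "usr_3"]
]

# NOTE: hypothetical helper name; the real SDK method may differ
client.jobs.create_bulk(jobs)
print(f"Enqueued {len(jobs)} jobs in one call")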
Everything you need for reliable processing
Production-grade features built into the platform. Focus on your business logic while Spooled handles the infrastructure.
Queue Anything
Webhooks, background jobs, scheduled tasks, and workflows — all handled as jobs. Bulk enqueue up to 100 jobs in a single API call.
Automatic Retries
Failed jobs automatically retry with exponential backoff. Configure max attempts (up to 100), delays, and get per-job completion webhooks.
Idempotency Built-in
Prevent duplicate processing with built-in idempotency keys. Safe webhook retries guaranteed—even if your source sends the same event twice.
Dead-Letter Queue
Jobs that exhaust retries land in the DLQ. Inspect payloads, debug failures, bulk retry or purge with filters—full control over failed work.
API Rate Limits
Fair usage limits per plan with clear errors and upgrade paths. Track your current usage and limits via the API.
Queue Control
Pause and resume queues instantly during deployments or incidents. Boost job priority on the fly. Check batch status for up to 100 jobs at once (a sketch follows this feature list).
Real-time Observability
Monitor job throughput, latency, and error rates. Stream live updates via WebSocket and SSE. Export Prometheus metrics for your dashboards.
REST + gRPC APIs
Use REST for web apps and simple integrations. Use gRPC with HTTP/2 bidirectional streaming for high-throughput workers processing thousands of jobs/sec.
Tags & Filtering
Add searchable tags to jobs for filtering and analytics. Track jobs by customer, environment, or any custom dimension you need.
Official SDKs
Production-ready Node.js, Python, Go, and PHP SDKs with full type safety. gRPC streaming, worker abstractions, and circuit breakers built-in.
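The queue-control operations above (pause/resume around a deploy) are not shown in code elsewhere on this page. The sketch below is illustrative only: the method names client.queues.pause / client.queues.resume and the run_deployment helper are hypothetical placeholders, so check the SDK reference for the real calls.

from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

# Hypothetical helpers: pause a queue before a deploy, resume it afterwards.
# The real SDK method names may differ.
client.queues.pause("payments")    # workers stop claiming new jobs
run_deployment()                   # your deploy/incident procedure (placeholder)
client.queues.resume("payments")   # processing picks up where it left off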
Watch jobs flow through the queue
Spooled stores your jobs. Your workers claim and process them. Failed jobs auto-retry.
Incoming → Processing → Completed
Automatic retries with exponential backoff
Failed jobs automatically retry with intelligent backoff. After all retries, jobs move to the Dead Letter Queue for inspection.
- Initial attempt (t = 0s): job is picked up by a worker for processing
- Retry #1 (t + 1m): first retry after a 1-minute delay
- Retry #2 (t + 3m): exponential backoff, 2-minute delay
- Retry #3 (t + 7m): exponential backoff, 4-minute delay
delay_minutes = min(2^retry_count, 60)
Default job retries are scheduled in minutes (1m, 2m, 4m, ...), capped at 60m. (Queue/job config can override max retries.)
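A small, illustrative sketch of the default backoff curve described by the formula above (plain arithmetic, not SDK code):

# Default backoff from the formula above: delay_minutes = min(2^retry_count, 60)
def retry_delay_minutes(retry_count: int, cap_minutes: int = 60) -> int:
    return min(2 ** retry_count, cap_minutes)

# Delays for the first attempts: 1m, 2m, 4m, 8m, ... capped at 60m
print([retry_delay_minutes(n) for n in range(8)])  # [1, 2, 4, 8, 16, 32, 60, 60]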
# List jobs in dead-letter queue
dlq_jobs = client.jobs.dlq.list({
    "queue_name": "payment-processing",
    "limit": 100
})
for job in dlq_jobs:
    print(f"DLQ Job: {job.id}, status: {job.status}")
    print(f"Retry count: {job.retry_count}, created: {job.created_at}")

# Retry DLQ jobs
result = client.jobs.dlq.retry({
    "queue_name": "payment-processing",
    "limit": 50
})
print(f"Retried {result.retried_count} jobs")

Publish once, stream updates everywhere
Queue jobs via REST. Stream queue/job updates via SSE (and WebSocket for dashboards).
Publisher (Your Application) → Spooled (Queue & Store) → Stream Client (SSE: queue stats)
from spooled import SpooledClient
from spooled.realtime import SubscriptionFilter
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
realtime = client.realtime(type="sse")
@realtime.on("job.created")
def on_job_created(data):
    print("job.created:", data)

realtime.connect()
realtime.subscribe(SubscriptionFilter(queue="orders"))

Scale with your worker pool
Run multiple workers on your servers. Spooled distributes jobs automatically—no coordination needed.
You run: Workers on your infrastructure • Spooled provides: Queue coordination & job locking
Your worker pool: your servers claim jobs from Spooled.
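A minimal sketch of a worker pool, reusing the SpooledWorker API shown earlier; the queue name, concurrency, and process count here are illustrative assumptions:

import os
from multiprocessing import Process

from spooled import SpooledClient
from spooled.worker import SpooledWorker

def run_worker():
    client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
    worker = SpooledWorker(client, queue_name="orders", concurrency=10)

    @worker.process
    def handle_job(ctx):
        # Your business logic goes here
        return {"processed": True}

    worker.start()  # Blocking; runs until the process exits

if __name__ == "__main__":
    # Several worker processes on one machine (or one per container);
    # Spooled's queue coordination and job locking ensure each job is claimed once.
    procs = [Process(target=run_worker) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()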
Recurring jobs made simple
Create jobs that run on a schedule. Daily reports, subscription renewals, cleanup tasks—set it once and forget it.
# Create a cron schedule
schedule = client.schedules.create({
    "name": "Daily Report",
    "cron_expression": "0 0 9 * * *",  # 6-field cron: every day at 9:00 AM
    "timezone": "America/New_York",
    "queue_name": "reports",
    "payload_template": {"type": "daily_report"}
})

from datetime import datetime, timedelta

# Schedule a job to run in 24 hours
user_id = "usr_123"
client.jobs.create({
    "queue_name": "reminders",
    "payload": {"user_id": user_id, "type": "cart-abandoned"},
    "scheduled_at": (datetime.utcnow() + timedelta(hours=24)).isoformat() + "Z",
    "idempotency_key": f"reminder-{user_id}-cart"
})

Common Cron Patterns
- 0 0 * * * *: Every hour
- 0 0 9 * * *: Daily at 9 AM
- 0 0 0 * * 0: Weekly on Sunday
- 0 0 0 1 * *: Monthly on the 1st
- 0 */15 * * * *: Every 15 minutes
- 0 0 9 * * 1-5: Weekdays at 9 AM

Orchestrate complex workflows
Chain jobs together with dependencies. Build multi-step processes where each job waits for its dependencies to complete.
from spooled import SpooledClient
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
# Create a workflow with job dependencies
workflow = client.workflows.create({
    "name": "user-onboarding",
    "jobs": [
        {
            "key": "create-account",
            "queue_name": "users",
            "payload": {"email": "user@example.com", "plan": "pro"}
        },
        {
            "key": "send-welcome-email",
            "queue_name": "emails",
            "depends_on": ["create-account"],  # Waits for this job
            "payload": {"template": "welcome"}
        },
        {
            "key": "setup-defaults",
            "queue_name": "users",
            "depends_on": ["create-account"],  # Also waits
            "payload": {"settings": {}}
        }
    ]
})
print(f"Created workflow: {workflow.workflow_id}")

Automatic Ordering
Jobs run in the correct order based on dependencies. No manual coordination needed.
Failure Handling
If a parent job fails, dependent children are automatically cancelled.
Parallel Execution
Independent jobs run in parallel for maximum throughput.
Process urgent jobs first
Higher priority jobs jump the queue. Perfect for VIP customers, critical alerts, and time-sensitive tasks.
from spooled import SpooledClient
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
# High priority - VIP customer order (processed first)
client.jobs.create({
    "queue_name": "orders",
    "payload": {"order_id": 789, "customer_tier": "vip"},
    "priority": 10  # High priority
})

# Normal priority (default)
client.jobs.create({
    "queue_name": "orders",
    "payload": {"order_id": 790},
    "priority": 0  # Default
})

# Low priority - background cleanup
client.jobs.create({
    "queue_name": "maintenance",
    "payload": {"task": "cleanup"},
    "priority": -10  # Low priority
})

# Workers claim jobs: High → Normal → Low

Processing Order
Priority: 10 → Priority: 0 → Priority: -10
Get notified when events occur
Spooled POSTs to your configured URLs when jobs complete, fail, or queues pause. Connect to Slack, Discord, your own app, or any webhook endpoint.
from spooled import SpooledClient
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
# Setup webhook for job events
client.webhooks.create({
    "name": "Slack Notifications",
    "url": "https://hooks.slack.com/...",
    "events": ["job.completed", "job.failed", "queue.paused"],
    "secret": "your-hmac-secret"  # For signature verification
})
# ✓ Spooled POSTs to your URL
# ✓ Automatic retries
# ✓ Delivery history in dashboard

from spooled import SpooledClient
import os
client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])
# Get notified when THIS job completes
client.jobs.create({
    "queue_name": "exports",
    "payload": {"report_id": 12345},
    "completion_webhook": "https://your-app.com/webhooks/export-done"
})
# When job completes, Spooled POSTs to your URL:
# POST https://your-app.com/webhooks/export-done
# {"status": "completed", "job_id": "...", "result": {...}}

Available Events
job.created, job.started, job.completed, job.failed, job.cancelled, queue.paused, queue.resumed, worker.registered, worker.deregistered, schedule.triggered

Slack Alerts
Get notified in Slack when jobs fail
Analytics
Send events to your analytics platform
Custom Logic
Trigger your own workflows
Email Notifications
Alert teams about important events
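The webhook configuration above includes a secret for signature verification, but the receiving side isn't shown. Here is a minimal sketch of an endpoint that checks an HMAC signature before trusting the event; the header name (X-Spooled-Signature) and HMAC-SHA256 over the raw body are assumptions, so confirm the actual scheme in the Spooled webhook docs.

import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["SPOOLED_WEBHOOK_SECRET"]  # same value as the "secret" above

@app.post("/webhooks/spooled")
def spooled_webhook():
    # Assumed header name and HMAC-SHA256 over the raw body; verify against the docs
    signature = request.headers.get("X-Spooled-Signature", "")
    expected = hmac.new(WEBHOOK_SECRET.encode(), request.get_data(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        abort(401)

    event = request.get_json()
    print("event:", event.get("status"), event.get("job_id"))
    return "", 204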
Production-ready from day one
Built with Rust for performance and reliability. PostgreSQL for durability. Redis for real-time pub/sub.
System Architecture
Multi-tenant queue with PostgreSQL RLS, Redis pub/sub, and real-time updates.
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#ecfdf5', 'primaryTextColor': '#065f46', 'primaryBorderColor': '#10b981', 'lineColor': '#6b7280', 'secondaryColor': '#eff6ff', 'tertiaryColor': '#faf5ff', 'fontSize': '18px', 'fontFamily': 'Inter, system-ui, sans-serif'}}}%%
flowchart LR
subgraph sources[" 📥 Webhook Sources "]
GH["GitHub"]
ST["Stripe"]
CU["Custom HTTP"]
HC["HTTP Clients"]
end
subgraph backendSvc[" ⚡ Spooled Backend "]
API["REST API"]
GRPC["gRPC API"]
RT["WebSocket/SSE"]
end
subgraph storage[" 💾 Data Plane "]
PG[("PostgreSQL")]
RD[("Redis")]
end
subgraph obs[" 📊 Observability "]
PR["Prometheus /metrics"]
GF["Grafana (optional)"]
end
subgraph dashboard[" 🖥️ Dashboard "]
DB["Realtime UI"]
end
GH --> API
ST --> API
CU --> API
HC --> API
HC --> GRPC
API --> PG
GRPC --> PG
RT --> RD
PG --> DB
RD --> DB
API --> PR
PR --> GF

Accept events from any source
Spooled provides a unique incoming endpoint for your organization. It accepts Spooled-formatted JSON (queue + payload). For providers like Stripe/GitHub/Shopify, use a tiny adapter/relay to verify signatures and enqueue jobs.
Stripe: via a tiny adapter (verify → enqueue)
GitHub: via Actions or a tiny adapter
Shopify: via a tiny adapter (verify → enqueue)
Custom: direct (send Spooled JSON format)
# Your unique webhook endpoint (secured with X-Webhook-Token)
POST https://api.spooled.cloud/api/v1/webhooks/{org_id}/custom
X-Webhook-Token: whk_your_secret_token
Content-Type: application/json
{
  "queue_name": "payments",
  "event_type": "stripe.payment_succeeded",
  "idempotency_key": "evt_123",
  "payload": { ... }
}
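A quick sketch of sending that request from Python; the URL, token, and payload values are placeholders taken from the example above, so substitute your own org ID, webhook token, and payload:

import requests

# Placeholders from the example above: your own org ID and X-Webhook-Token value
url = "https://api.spooled.cloud/api/v1/webhooks/{org_id}/custom".format(org_id="your_org_id")
headers = {"X-Webhook-Token": "whk_your_secret_token"}

body = {
    "queue_name": "payments",
    "event_type": "stripe.payment_succeeded",
    "idempotency_key": "evt_123",
    "payload": {"amount": 4200, "currency": "usd"},  # example payload for illustration
}

resp = requests.post(url, json=body, headers=headers, timeout=10)
resp.raise_for_status()
print("enqueued:", resp.status_code)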
REST + gRPC APIs
Use REST (POST /api/v1/jobs) for simple integrations, or gRPC (:50051) for high-throughput workers with bidirectional streaming.
Built in the open, for the community
Spooled is 100% open-source under the Apache 2.0 license. Inspect the code, contribute features, or deploy on your own infrastructure.
No vendor lock-in
Self-host on your infrastructure or use our managed cloud. Your data, your control.
Transparent & auditable
Review the source code. Run security audits. Know exactly how your data is handled.
Community-driven
Built by developers, for developers. Contribute features, report bugs, shape the roadmap.
Ready to make your webhooks reliable?
Get started for free. No credit card required. Upgrade when you need more capacity.