Reliable webhooks and
background jobs
at any scale
Open-source queue infrastructure for webhooks, background jobs, and workflows. Built-in retries, idempotency, and dead-letter queues—so failed work is visible, recoverable, and re-playable.
// Stripe webhook → Spooled → Your worker
const response = await fetch('https://api.spooled.cloud/api/v1/jobs', {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
queue_name: 'stripe-events',
payload: stripeEvent,
idempotency_key: stripeEvent.id,
})
});
// ✓ Queued instantly
// ✓ Retries on failure (exponential backoff)
// ✓ Deduplicated by idempotency key
// ✓ Live status in dashboard

Up and running in minutes
Enqueue a job, process it, done. It's that simple.
// Your app: Send a job to the queue
const response = await fetch(
'https://api.spooled.cloud/api/v1/jobs',
{
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
queue_name: 'emails',
payload: {
to: 'user@example.com',
template: 'welcome',
},
}),
}
);
// Job is now queued! ✓

// Your worker: Claim → process → complete/fail
const WORKER_ID = 'worker-1';
while (true) {
const res = await fetch('.../api/v1/jobs/claim', {
method: 'POST',
headers: { /* auth */ },
body: JSON.stringify({
queue_name: 'emails',
worker_id: WORKER_ID,
limit: 5,
}),
});
const { jobs } = await res.json();
for (const job of jobs) {
await processJob(job);
}
}

// Stream queue stats (pending/processing/completed) in near real time
curl -N https://api.spooled.cloud/api/v1/events/queues/emails \
  -H "Authorization: Bearer sk_live_..."

Simple API, powerful infrastructure
Get started in minutes. Queue jobs with a single API call and let Spooled handle retries, deduplication, and observability.
Queue a Job
Send jobs to Spooled via REST API. We store them reliably with deduplication and scheduling.
Your Workers Process
Your code (on your servers) polls for jobs, processes them, and reports success/failure. Failed jobs auto-retry.
Monitor Everything
Real-time dashboard shows all job states. Track success rates, latency, and failures.
# Queue a webhook event
curl -X POST https://api.spooled.cloud/api/v1/jobs \
-H "Authorization: Bearer sk_live_..." \
-H "Content-Type: application/json" \
-d '{
"queue_name": "stripe-webhooks",
"payload": {
"type": "invoice.paid",
"customer_id": "cus_123"
},
"idempotency_key": "evt_abc123"
}'

# Claim jobs
curl -X POST https://api.spooled.cloud/api/v1/jobs/claim \
-H "Authorization: Bearer sk_live_..." \
-H "Content-Type: application/json" \
-d '{
"queue_name": "stripe-webhooks",
"worker_id": "worker-1",
"limit": 5
}'
# Complete a job (ack)
curl -X POST https://api.spooled.cloud/api/v1/jobs/job_xyz/complete \
-H "Authorization: Bearer sk_live_..." \
-H "Content-Type: application/json" \
-d '{"worker_id":"worker-1"}'

See it in action
Copy-paste these examples into your project. Each one shows exactly how to integrate Spooled.
✓ Spooled handles
- Reliable job queuing & storage
- Automatic retries with backoff
- Job deduplication (idempotency)
- Dead-letter queues
- Real-time job monitoring
→ You provide
- Workers (your code, your servers)
- Email service (Resend, SendGrid...)
- Storage (S3, Cloudflare R2...)
- Any external APIs you need
- Your business logic
🛒 E-Commerce Order Processing
Process Stripe payments reliably
When a customer completes checkout, Stripe sends a webhook. Spooled queues it; your worker processes the payment, sends the email, and updates inventory (with retries and deduplication).
1 Queue the job
// Your Stripe webhook endpoint (runs on YOUR server)
app.post('/webhooks/stripe', async (req, res) => {
const event = req.body;
// Queue the job in Spooled (just stores the data)
await fetch('https://api.spooled.cloud/api/v1/jobs', {
method: 'POST',
headers: {
'Authorization': 'Bearer sk_live_...',
'Content-Type': 'application/json'
},
body: JSON.stringify({
queue_name: 'payments',
payload: {
event_type: event.type,
customer_id: event.data.object.customer,
amount: event.data.object.amount,
order_id: event.data.object.metadata.order_id
},
idempotency_key: event.id // Prevents duplicate processing
})
});
res.status(200).send('Queued'); // Respond fast to Stripe
});

2 Process in worker
// Your worker (runs on YOUR server - could be a cron, container, etc.)
const WORKER_ID = 'worker-1';
async function processPayments() {
// 1. Claim jobs from Spooled
const res = await fetch('https://api.spooled.cloud/api/v1/jobs/claim', {
method: 'POST',
headers: {
'Authorization': 'Bearer sk_live_...',
'Content-Type': 'application/json'
},
body: JSON.stringify({ queue_name: 'payments', worker_id: WORKER_ID, limit: 10 })
});
const { jobs } = await res.json();
for (const job of jobs) {
try {
// 2. YOUR code does the actual work
await db.orders.update(job.payload.order_id, { status: 'paid' });
const customerEmail = await lookupCustomerEmail(job.payload.customer_id);
await resend.emails.send({ // YOUR Resend/SendGrid account
to: customerEmail,
subject: 'Order Confirmed!'
});
// 3. Tell Spooled: success!
await fetch(`https://api.spooled.cloud/api/v1/jobs/${job.id}/complete`, {
method: 'POST',
headers: {
'Authorization': 'Bearer sk_live_...',
'Content-Type': 'application/json'
},
body: JSON.stringify({ worker_id: WORKER_ID })
});
} catch (err) {
// Job will auto-retry later
await fetch(`https://api.spooled.cloud/api/v1/jobs/${job.id}/fail`, {
method: 'POST',
headers: {
'Authorization': 'Bearer sk_live_...',
'Content-Type': 'application/json'
},
body: JSON.stringify({ worker_id: WORKER_ID, error: err.message })
});
}
}
}

Built for any async workload
From simple webhook handlers to complex event-driven architectures, Spooled scales with your needs.
Webhook Processing
Route webhooks from any HTTP source. Reduce failures with retries, idempotency keys, and a clear audit trail.
Example: Process payment events, deploy on push, handle SMS responses
Background Jobs
Offload slow operations like email sending, image processing, and report generation to background workers.
Example: Send welcome emails, generate invoices, resize images
Scheduled Tasks
Schedule jobs for later execution with precise timing. Perfect for reminders, reports, and recurring tasks.
Example: Daily reports, subscription renewals, cleanup tasks
Event-Driven Workflows
Chain jobs together to build complex workflows. Each step can trigger the next with full error handling.
Example: Order processing, user onboarding, approval flows
Built for scale
Max retries (configurable)
Max job lease (visibility)
Live queue/job streams
API keys / dashboard auth
Everything you need for reliable processing
Production-grade features built into the platform. Focus on your business logic while Spooled handles the infrastructure.
Queue Anything
Webhooks, background jobs, scheduled tasks, and event-driven workflows — all handled as jobs with automatic retries and idempotency.
Automatic Retries
Failed jobs automatically retry with exponential backoff. Configure max attempts, delay strategies, and custom backoff curves per job.
Idempotency Built-in
Prevent duplicate processing with built-in idempotency keys. Safe webhook retries guaranteed at the infrastructure level.
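Conceptually, idempotency-key deduplication behaves like a first-write-wins lookup keyed by queue and key. This is only an illustrative sketch of the semantics — the real deduplication happens server-side inside Spooled:

```javascript
// Illustrative sketch of idempotency-key dedup semantics (Spooled does
// this server-side; this code only models the behavior).
const seen = new Map();

function enqueue(queue, payload, idempotencyKey) {
  const dedupKey = `${queue}:${idempotencyKey}`;
  if (seen.has(dedupKey)) {
    // Same key seen before: return the existing job, create nothing new
    return { deduplicated: true, job: seen.get(dedupKey) };
  }
  const job = { id: `job_${seen.size + 1}`, queue, payload };
  seen.set(dedupKey, job);
  return { deduplicated: false, job };
}

// Stripe may deliver the same webhook twice — only one job is created.
const first = enqueue('stripe-events', { type: 'invoice.paid' }, 'evt_abc123');
const retry = enqueue('stripe-events', { type: 'invoice.paid' }, 'evt_abc123');
console.log(retry.deduplicated);           // true
console.log(first.job.id === retry.job.id); // true
```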
Dead-Letter Queue
Jobs that exhaust retries land in the DLQ. Inspect payloads, debug failures, and replay jobs with a single API call.
API Rate Limits
Plan-based API rate limiting protects the Spooled API itself. For downstream limits (e.g. Stripe, Slack), control throughput in your worker pool.
Real-time Observability
Monitor job throughput, latency, and error rates. Stream live updates via WebSocket and SSE for instant feedback.
Automatic retries with exponential backoff
Failed jobs automatically retry with intelligent backoff. After all retries, jobs move to the Dead Letter Queue for inspection.
Initial Attempt
t = 0s: Job is picked up by worker for processing
Retry #1
t + 1m: First retry after 1 minute delay
Retry #2
t + 3m: Exponential backoff, 2 minute delay
Retry #3
t + 7m: Exponential backoff, 4 minute delay
delay_minutes = min(2^retry_count, 60)
Default job retries are scheduled in minutes (1m, 2m, 4m, ...), capped at 60m. (Queue/job config can override max retries.)
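The default schedule above can be sketched in a few lines (illustrative only — the real scheduling happens inside Spooled):

```javascript
// delay_minutes = min(2^retry_count, 60), matching the timeline above
function backoffMinutes(retryCount) {
  return Math.min(2 ** retryCount, 60);
}

// First few delays: 1m, 2m, 4m, 8m, ... capped at 60m
const delays = [0, 1, 2, 3, 4, 5, 6].map(backoffMinutes);
console.log(delays); // [1, 2, 4, 8, 16, 32, 60]
```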
Watch jobs flow through the queue
Spooled stores your jobs. Your workers claim and process them. Failed jobs auto-retry.
Incoming → Processing → Completed

Publish once, stream updates everywhere
Queue jobs via REST. Stream queue/job updates via SSE (and WebSocket for dashboards).
Publisher
Your Application
Spooled
Queue & Store
Stream Client
SSE: Queue stats
// Stream live queue stats via SSE
curl -N https://api.spooled.cloud/api/v1/events/queues/orders \
-H "Authorization: Bearer sk_live_..."
// Dashboards can also use WebSocket (JWT auth via query param):
// GET /api/v1/ws?token=JWT_ACCESS_TOKEN&queue=orders

Scale with your worker pool
Run multiple workers on your servers. Spooled distributes jobs automatically—no coordination needed.
You run: Workers on your infrastructure • Spooled provides: Queue coordination & job locking
Your Worker Pool
Your servers claim jobs from Spooled
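The job locking mentioned above works like a lease: a claimed job is hidden from other workers until its lease (visibility window) expires. A minimal sketch of the semantics, with a hypothetical lease duration (Spooled implements this server-side):

```javascript
// Illustrative claim-with-lease model; LEASE_MS is a made-up value.
const LEASE_MS = 30_000;
const jobs = [{ id: 'job_1', leasedUntil: 0 }];

function claim(workerId, now) {
  // Only jobs whose lease has expired are visible to claim
  const job = jobs.find((j) => j.leasedUntil <= now);
  if (!job) return null;
  job.leasedUntil = now + LEASE_MS; // locked for this worker
  job.claimedBy = workerId;
  return job;
}

const t0 = 1_000_000;
const firstClaim = claim('worker-1', t0);                 // worker-1 locks job_1
const whileLeased = claim('worker-2', t0 + 1_000);        // null: still leased
const afterLease = claim('worker-2', t0 + LEASE_MS + 1);  // lease expired, reclaimable
```

If a worker crashes mid-job, the lease simply expires and another worker picks the job up — no coordination between workers is needed.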
Recurring jobs made simple
Create jobs that run on a schedule. Daily reports, subscription renewals, cleanup tasks—set it once and forget it.
// Run daily at 9 AM
await fetch('https://api.spooled.cloud/api/v1/schedules', {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
name: 'daily-sales-report',
cron_expression: '0 9 * * *',
timezone: 'America/New_York',
queue_name: 'reports',
payload_template: {
report_type: 'daily_sales',
format: 'pdf'
}
})
});
// ✓ Runs automatically every day
// ✓ Retries on failure
// ✓ Timezone-aware

// Send reminder in 1 hour
await fetch('https://api.spooled.cloud/api/v1/jobs', {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
queue_name: 'notifications',
payload: {
type: 'reminder',
user_id: 123
},
scheduled_at: new Date(
Date.now() + 3600000 // +1 hour
).toISOString()
})
});
// ✓ Runs at exact time
// ✓ Perfect for reminders, trials, renewals

Common Cron Patterns
0 * * * *      Every hour
0 9 * * *      Daily at 9 AM
0 0 * * 0      Weekly on Sunday
0 0 1 * *      Monthly on the 1st
*/15 * * * *   Every 15 minutes
0 9 * * 1-5    Weekdays at 9 AM

Orchestrate complex workflows
Chain jobs together with dependencies. Build multi-step processes where each job waits for its dependencies to complete.
// Create a workflow with job dependencies
await fetch('https://api.spooled.cloud/api/v1/workflows', {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
name: 'user-onboarding',
jobs: [
{
name: 'create-account',
queue_name: 'users',
payload: {
email: 'user@example.com',
plan: 'pro'
}
},
{
name: 'send-welcome-email',
queue_name: 'emails',
depends_on: ['create-account'], // Waits for this job
payload: {
template: 'welcome'
}
},
{
name: 'setup-defaults',
queue_name: 'users',
depends_on: ['create-account'], // Also waits
payload: {
settings: {...}
}
}
]
})
});
// ✓ Jobs run in dependency order
// ✓ If parent fails, children are cancelled
// ✓ Parallel execution where possible

Automatic Ordering
Jobs run in the correct order based on dependencies. No manual coordination needed.
Failure Handling
If a parent job fails, dependent children are automatically cancelled.
Parallel Execution
Independent jobs run in parallel for maximum throughput.
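Dependency ordering amounts to a topological traversal: at each step, the runnable jobs are those whose dependencies have all completed. A sketch of the semantics for the onboarding workflow above (Spooled computes this server-side):

```javascript
// Which jobs can start, given the set of completed job names?
function runnableAfter(jobs, completed) {
  return jobs
    .filter((j) => !completed.has(j.name))
    .filter((j) => (j.depends_on ?? []).every((d) => completed.has(d)))
    .map((j) => j.name);
}

const workflow = [
  { name: 'create-account' },
  { name: 'send-welcome-email', depends_on: ['create-account'] },
  { name: 'setup-defaults', depends_on: ['create-account'] },
];

console.log(runnableAfter(workflow, new Set()));
// ['create-account'] — only the root can start
console.log(runnableAfter(workflow, new Set(['create-account'])));
// ['send-welcome-email', 'setup-defaults'] — both run in parallel
```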
Process urgent jobs first
Higher priority jobs jump the queue. Perfect for VIP customers, critical alerts, and time-sensitive tasks.
// High priority - processed first (VIP customer order)
await fetch('https://api.spooled.cloud/api/v1/jobs', {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
queue_name: 'orders',
payload: {
order_id: 789,
customer_tier: 'vip'
},
priority: 10 // High priority
})
});
// Normal priority (default)
await createJob({
queue_name: 'orders',
payload: {...order},
priority: 0 // Default
});
// Low priority - background cleanup
await createJob({
queue_name: 'maintenance',
payload: {...cleanup},
priority: -10 // Low priority
});
// Workers claim jobs: High → Normal → Low

Processing Order
Priority: 10
Priority: 0
Priority: -10
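The High → Normal → Low claim order can be illustrated with a simple descending sort by priority (Spooled orders jobs server-side; this only mirrors the behavior):

```javascript
const queued = [
  { id: 'cleanup', priority: -10 },
  { id: 'vip-order', priority: 10 },
  { id: 'order', priority: 0 },
];

// Higher priority first; ties would fall back to enqueue order
const claimOrder = [...queued]
  .sort((a, b) => b.priority - a.priority)
  .map((j) => j.id);

console.log(claimOrder); // ['vip-order', 'order', 'cleanup']
```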
Get notified when events occur
Spooled POSTs to your configured URLs when jobs complete, fail, or queues pause. Connect to Slack, Discord, your own app, or any webhook endpoint.
// Setup webhook for job events
await fetch('https://api.spooled.cloud/api/v1/outgoing-webhooks', {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
name: 'Slack Notifications',
url: 'https://hooks.slack.com/...',
events: [
'job.completed',
'job.failed',
'queue.paused'
],
secret: 'your-hmac-secret' // For signature verification
})
});
// ✓ Spooled POSTs to your URL
// ✓ Automatic retries
// ✓ Delivery history in dashboard

// Get notified when THIS job completes
await fetch('https://api.spooled.cloud/api/v1/jobs', {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({
queue_name: 'exports',
payload: {
report_id: 12345
},
completion_webhook: 'https://your-app.com/webhooks/export-done'
})
});
// When job completes, Spooled POSTs to your URL:
// POST https://your-app.com/webhooks/export-done
// {status: "completed", job_id: "...", result: {...}}

Available Events
job.created · job.started · job.completed · job.failed · job.cancelled · queue.paused · queue.resumed · worker.registered · worker.deregistered · schedule.triggered

Slack Alerts
Get notified in Slack when jobs fail
Analytics
Send events to your analytics platform
Custom Logic
Trigger your own workflows
Email Notifications
Alert teams about important events
Production-ready from day one
Built with Rust for performance and reliability. PostgreSQL for durability. Redis for real-time pub/sub.
System Architecture
Multi-tenant queue with PostgreSQL RLS, Redis pub/sub, and real-time updates.
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#ecfdf5', 'primaryTextColor': '#065f46', 'primaryBorderColor': '#10b981', 'lineColor': '#6b7280', 'secondaryColor': '#eff6ff', 'tertiaryColor': '#faf5ff', 'fontSize': '18px', 'fontFamily': 'Inter, system-ui, sans-serif'}}}%%
flowchart LR
subgraph sources[" 📥 Webhook Sources "]
GH["GitHub"]
ST["Stripe"]
CU["Custom HTTP"]
HC["HTTP Clients"]
end
subgraph backendSvc[" ⚡ Spooled Backend "]
API["REST API"]
GRPC["gRPC (optional)"]
RT["WebSocket/SSE"]
end
subgraph storage[" 💾 Data Plane "]
PG[("PostgreSQL")]
RD[("Redis")]
end
subgraph obs[" 📊 Observability "]
PR["Prometheus /metrics"]
GF["Grafana (optional)"]
end
subgraph dashboard[" 🖥️ Dashboard "]
DB["Realtime UI"]
end
GH --> API
ST --> API
CU --> API
HC --> API
HC --> GRPC
API --> PG
GRPC --> PG
RT --> RD
PG --> DB
RD --> DB
API --> PR
PR --> GF

Accept webhooks from any source
Receive HTTP POST requests from any service. No vendor lock-in, no special configuration.
Payment events
Repository events
Order updates
Any HTTP POST
Simple HTTP Interface
Just POST to /api/v1/jobs with your payload.
No SDKs required—works with cURL or any HTTP client. SDKs are planned.
Built in the open,
for the community
Spooled is 100% open-source under the Apache 2.0 license. Inspect the code, contribute features, or deploy on your own infrastructure.
No vendor lock-in
Self-host on your infrastructure or use our managed cloud. Your data, your control.
Transparent & auditable
Review the source code. Run security audits. Know exactly how your data is handled.
Community-driven
Built by developers, for developers. Contribute features, report bugs, shape the roadmap.
Ready to make your webhooks reliable?
Get started for free. No credit card required. Upgrade when you need more capacity.