# Workers
Workers are processes that claim jobs from queues and execute them. Use our SDKs, the REST API, or gRPC API to build workers in any language.
## How Workers Work
Workers follow a simple claim-process-complete pattern. They poll for available jobs, process them, and report the result back to Spooled.
```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#ecfdf5', 'primaryTextColor': '#065f46', 'primaryBorderColor': '#10b981', 'lineColor': '#6b7280', 'signalColor': '#10b981'}}}%%
sequenceDiagram
    participant W as Worker
    participant API as Spooled API
    participant Q as Queue

    loop Every poll interval
        W->>API: POST /jobs/claim
        API->>Q: SELECT FOR UPDATE SKIP LOCKED
        Q-->>API: Job(s)
        API-->>W: job data
        W->>W: Process job
        alt Success
            W->>API: POST /jobs/complete
        else Failure
            W->>API: POST /jobs/fail
        end
    end
```

### Worker Lifecycle
1. **Claim** — Request one or more jobs from a queue
2. **Process** — Execute business logic with the job payload
3. **Complete or Fail** — Report the result back to Spooled
4. **Repeat** — Continue polling for more jobs
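The lifecycle above can be sketched as a minimal REST polling step. This is an illustrative sketch, not SDK code: `request(method, path, body)` is an assumed HTTP helper, injected so the loop can also be exercised without a network; the endpoint paths follow the curl examples later on this page.

```javascript
// Minimal claim-process-complete step (illustrative sketch).
// `request(method, path, body)` is an assumed, injected HTTP helper.
async function workOnce(request, queue, handler) {
  // 1. Claim: request up to one job from the queue
  const jobs = await request('POST', '/api/v1/jobs/claim', { queue, limit: 1 });
  for (const job of jobs) {
    try {
      // 2. Process: run business logic with the job payload
      await handler(job.payload);
      // 3a. Complete: report success back to Spooled
      await request('POST', `/api/v1/jobs/${job.id}/complete`);
    } catch (err) {
      // 3b. Fail: report the error so the job can be retried
      await request('POST', `/api/v1/jobs/${job.id}/fail`, { error: String(err) });
    }
  }
  return jobs.length; // 4. Repeat: the caller decides when to poll again
}
```

A real worker would wrap `workOnce` in a loop with a poll interval, backing off when the queue returns no jobs.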
## Using SDKs
Note: SDKs are under active development. The example below shows the planned API. For production today, use the REST API directly.
SDKs provide a high-level worker framework that handles polling, concurrency, and error handling automatically. Here's an example using the Node.js SDK:
```javascript
import { SpooledWorker } from '@spooled/sdk';

const worker = new SpooledWorker({
  apiKey: process.env.SPOOLED_API_KEY,
  queue: 'email-notifications',
  concurrency: 10,
});

worker.on('job', async (job) => {
  const { to, subject, body } = job.payload;
  await sendEmail(to, subject, body);
  await job.complete();
});

worker.start();
```

See the SDKs documentation for language-specific examples:
- Node.js: `SpooledWorker` class with event handlers
- Python: `@worker.handler` decorator pattern
- Go: `worker.Handle` with context-based processing
## Using the gRPC API
For high-throughput workers, use the gRPC API with streaming. gRPC provides lower latency and enables real-time job streaming without polling.
| Feature | REST API | gRPC API |
|---|---|---|
| Protocol | HTTP/1.1 + JSON | HTTP/2 + Protobuf |
| Job fetching | Polling (`/jobs/claim`) | Streaming (`StreamJobs`) |
| Latency | ~10-50ms per request | ~1-5ms, persistent connection |
| Best for | Web apps, simple integrations | High-throughput workers |
See the gRPC API documentation for examples using `grpcurl` or native gRPC clients in Node.js, Python, and Go.
## Using the REST API
For custom implementations or languages without an SDK, use the REST API directly.
### Claim Jobs

```shell
curl -X POST https://api.spooled.cloud/api/v1/jobs/claim \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"queue": "emails", "limit": 10}'
```

### Complete Job
```shell
curl -X POST https://api.spooled.cloud/api/v1/jobs/JOB_ID/complete \
  -H "Authorization: Bearer YOUR_API_KEY"
```

### Fail Job
```shell
curl -X POST https://api.spooled.cloud/api/v1/jobs/JOB_ID/fail \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"error": "Processing failed"}'
```

### Extend Visibility (Heartbeat)
For long-running jobs, extend the visibility timeout to prevent re-delivery:
```shell
curl -X POST https://api.spooled.cloud/api/v1/jobs/JOB_ID/heartbeat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"visibility_timeout": 300}'
```

## Visibility Timeout
When a job is claimed, it becomes invisible to other workers for the duration of the visibility timeout. If the worker doesn't complete or fail the job within that window, the job becomes visible again and another worker can pick it up.
| Setting | Value | Description |
|---|---|---|
| Default timeout | 30 seconds | Good for most short-running jobs |
| Maximum timeout | 12 hours | For very long-running jobs |
| Heartbeat interval | visibility / 2 | Recommended heartbeat frequency |
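The heartbeat schedule in the table above can be sketched like this. All names here are illustrative assumptions: `sendHeartbeat` stands in for a POST to the heartbeat endpoint shown earlier.

```javascript
// Recommended heartbeat frequency is visibility / 2
// (e.g. a 300s timeout means a heartbeat every 150s).
function heartbeatIntervalMs(visibilityTimeoutSec) {
  return (visibilityTimeoutSec / 2) * 1000;
}

// Start heartbeating a claimed job; `sendHeartbeat(jobId, timeoutSec)`
// is an injected function that would call the heartbeat endpoint.
function startHeartbeat(jobId, visibilityTimeoutSec, sendHeartbeat) {
  const timer = setInterval(
    () => sendHeartbeat(jobId, visibilityTimeoutSec),
    heartbeatIntervalMs(visibilityTimeoutSec),
  );
  // Return a stop function to call after the job completes or fails
  return () => clearInterval(timer);
}
```

Stop the heartbeat as soon as you report complete or fail, so the timer doesn't keep a finished job invisible.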
## Scaling Workers
Scale horizontally by running multiple worker instances. Workers coordinate automatically through Spooled — no external coordination (like Redis locks) needed.
```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#eff6ff', 'primaryTextColor': '#1e40af', 'primaryBorderColor': '#3b82f6', 'lineColor': '#6b7280'}}}%%
flowchart LR
    subgraph workers["Worker Pool"]
        W1["Worker 1"]
        W2["Worker 2"]
        W3["Worker 3"]
        WN["Worker N"]
    end
    Q[(Queue)] --> W1
    Q --> W2
    Q --> W3
    Q --> WN
    W1 --> R1[Result]
    W2 --> R2[Result]
    W3 --> R3[Result]
    WN --> RN[Result]
```

### Concurrency
Control how many jobs a single worker processes in parallel:
- I/O-bound jobs (API calls, email sending): Higher concurrency (10-100)
- CPU-bound jobs (image processing): Match to available CPU cores
- Memory-intensive jobs: Limit based on available memory
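A minimal sketch of what a concurrency limit does, independent of any SDK (the function name and shape here are illustrative, not the SDK's API):

```javascript
// Run `handler` over `jobs`, never allowing more than `limit`
// handlers in flight at once.
async function runWithConcurrency(jobs, limit, handler) {
  const running = new Set();
  for (const job of jobs) {
    // Track each in-flight handler; remove it from the set when it settles
    const p = handler(job).finally(() => running.delete(p));
    running.add(p);
    // At capacity: wait for any one slot to free up before claiming more
    if (running.size >= limit) await Promise.race(running);
  }
  await Promise.all(running); // drain the remaining in-flight jobs
}
```

The SDK's `concurrency` option provides this kind of control for you; the sketch just shows the mechanism.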
## Best Practices
### Idempotency
Jobs may be delivered more than once in edge cases (worker crash after processing but before completing). Design handlers to be idempotent by checking if the work was already done.
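One way to sketch that check, keyed by job ID. An in-memory `Set` stands in for what would be a durable store (e.g. a database unique key) in production; all names here are illustrative.

```javascript
// Wrap a work function so duplicate deliveries of the same job are skipped.
function makeIdempotentHandler(processedIds, doWork) {
  return async (job) => {
    // Was this job already processed? Then a duplicate delivery: skip it.
    if (processedIds.has(job.id)) return 'skipped';
    await doWork(job.payload);
    // Record success only after the work finishes, so a crash mid-work
    // still lets a retry run the job.
    processedIds.add(job.id);
    return 'done';
  };
}
```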
### Graceful Shutdown
Handle shutdown signals (SIGTERM, SIGINT) to finish processing current jobs before exiting. SDKs provide built-in graceful shutdown support.
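A hand-rolled sketch of that pattern, assuming a worker that keeps its in-flight job promises in a shared `state` object (the names are illustrative, not SDK API):

```javascript
// Install SIGTERM/SIGINT handlers that stop claiming and drain in-flight jobs.
// `state` is shared with the polling loop, which should check `state.stopping`
// before each claim; `onDrained` is called once all current jobs finish
// (e.g. process.exit(0)).
function installShutdownHandlers(state, onDrained) {
  const shutdown = async () => {
    state.stopping = true;             // polling loop stops claiming new jobs
    await Promise.all(state.inFlight); // let current jobs finish
    onDrained();
  };
  process.on('SIGTERM', shutdown);
  process.on('SIGINT', shutdown);
  return shutdown; // also exposed for direct invocation
}
```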
### Structured Logging
Use structured logging for observability. Include job IDs and queue names in all log entries to make debugging easier.
### Health Checks
Expose a health endpoint for orchestrators like Kubernetes. Workers should report unhealthy if they can't connect to Spooled.
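A minimal sketch using Node's built-in `http` module. `isHealthy` is an assumed, injected check (for example, whether the worker's last Spooled API call succeeded recently); the `/healthz` path is a common convention, not a Spooled requirement.

```javascript
import { createServer } from 'node:http';

// Start a tiny health endpoint for orchestrator probes.
// Returns the server so the caller can close it on shutdown.
function startHealthServer(isHealthy, port = 0) {
  const server = createServer((req, res) => {
    if (req.url === '/healthz') {
      const ok = isHealthy();
      // 503 tells the orchestrator to restart the worker or route away
      res.writeHead(ok ? 200 : 503, { 'Content-Type': 'text/plain' });
      res.end(ok ? 'ok' : 'unhealthy');
    } else {
      res.writeHead(404);
      res.end();
    }
  });
  server.listen(port);
  return server;
}
```

In Kubernetes, point a liveness or readiness probe at this port and path.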