Workers
Workers are processes that claim jobs from queues and execute them. Use our SDKs, the REST API, or the gRPC API to build workers in any language.
How Workers Work
Workers follow a simple claim-process-complete pattern. They poll for available jobs, process them, and report the result back to Spooled.
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#ecfdf5', 'primaryTextColor': '#065f46', 'primaryBorderColor': '#10b981', 'lineColor': '#6b7280', 'signalColor': '#10b981'}}}%%
sequenceDiagram
participant W as Worker
participant API as Spooled API
participant Q as Queue
loop Every poll interval
W->>API: POST /jobs/claim
API->>Q: SELECT FOR UPDATE SKIP LOCKED
Q-->>API: Job(s)
API-->>W: job data
W->>W: Process job
alt Success
W->>API: POST /jobs/complete
else Failure
W->>API: POST /jobs/fail
end
end
Worker Lifecycle
- Claim — Request one or more jobs from a queue
- Process — Execute business logic with the job payload
- Complete or Fail — Report the result back to Spooled
- Repeat — Continue polling for more jobs
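In a custom implementation, that loop looks roughly like the sketch below, written against the REST endpoints described later on this page. The response shape and the processJob handler are assumptions, so treat it as a pattern rather than a drop-in client.
// Minimal claim-process-complete loop against the REST API.
// The response shape (jobs array, id and payload fields) is an assumption — adjust to the actual API.
const API = 'https://api.spooled.cloud/api/v1';
const headers = {
  Authorization: `Bearer ${process.env.SPOOLED_API_KEY}`,
  'Content-Type': 'application/json',
};

async function processJob(payload: unknown): Promise<void> {
  // ...your business logic (placeholder)
}

async function pollOnce(): Promise<void> {
  // 1. Claim up to 5 jobs
  const res = await fetch(`${API}/jobs/claim`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ queue_name: 'my-queue', worker_id: 'worker-1', limit: 5 }),
  });
  const { jobs = [] } = (await res.json()) as { jobs?: Array<{ id: string; payload: unknown }> };

  for (const job of jobs) {
    try {
      // 2. Process the job payload
      await processJob(job.payload);
      // 3a. Report success
      await fetch(`${API}/jobs/${job.id}/complete`, {
        method: 'POST',
        headers,
        body: JSON.stringify({ worker_id: 'worker-1', result: { processed: true } }),
      });
    } catch (err) {
      // 3b. Report failure so the job can be retried
      await fetch(`${API}/jobs/${job.id}/fail`, {
        method: 'POST',
        headers,
        body: JSON.stringify({ worker_id: 'worker-1', error: String(err) }),
      });
    }
  }
}

// 4. Repeat on a fixed poll interval
setInterval(() => pollOnce().catch(console.error), 2000);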
Using SDKs
SDKs provide a high-level worker framework that handles polling, concurrency, and error handling automatically.
import { SpooledClient, SpooledWorker } from '@spooled/sdk';
const client = new SpooledClient({
apiKey: process.env.SPOOLED_API_KEY!,
});
const worker = new SpooledWorker(client, {
queueName: 'my-queue',
concurrency: 10,
});
worker.process(async (ctx) => {
await processJob(ctx.payload);
return { success: true };
});
await worker.start();
See the SDKs documentation for language-specific examples:
- Node.js: SpooledWorker with worker.process(async (ctx) => ...) (optional lifecycle events)
- Python: @worker.process decorator to register a job handler
- Go: worker.New(...) + worker.Process(...) with context cancellation
- PHP: SpooledWorker with a handler callback (closure)
Using the REST API
For custom implementations or languages without an SDK, use the REST API directly.
Claim Jobs
# Claim up to 5 jobs
curl -X POST https://api.spooled.cloud/api/v1/jobs/claim \
-H "Authorization: Bearer sp_live_YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"queue_name": "my-queue",
"worker_id": "worker-1",
"limit": 5
}'
Complete Job
# Complete a job
curl -X POST https://api.spooled.cloud/api/v1/jobs/job_xyz123/complete \
-H "Authorization: Bearer sp_live_YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"worker_id": "worker-1",
"result": {"processed": true}
}'
Fail Job
# Fail a job (will retry if retries remaining)
curl -X POST https://api.spooled.cloud/api/v1/jobs/job_xyz123/fail \
-H "Authorization: Bearer sp_live_YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"worker_id": "worker-1",
"error": "Connection timeout"
}'
Extend Visibility (Heartbeat)
For long-running jobs, extend the visibility timeout to prevent re-delivery:
# Extend job lease (heartbeat)
curl -X POST https://api.spooled.cloud/api/v1/jobs/job_xyz123/heartbeat \
-H "Authorization: Bearer sp_live_YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"worker_id": "worker-1",
"lease_duration_secs": 300
}'
Using the gRPC API
For high-throughput workers, use the gRPC API with streaming. gRPC provides lower latency and enables real-time job streaming without polling.
| Feature | REST API | gRPC API |
|---|---|---|
| Protocol | HTTP/1.1 + JSON | HTTP/2 + Protobuf |
| Job fetching | Polling (/jobs/claim) | Streaming (StreamJobs) |
| Latency | ~10-50ms per request | ~1-5ms, persistent connection |
| Best for | Web apps, simple integrations | High-throughput workers |
See the gRPC API documentation for examples using grpcurl
or native gRPC clients in Node.js, Python, Go, and PHP.
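For a rough picture of what the streaming model looks like from Node.js, here is a sketch using @grpc/grpc-js. The proto file, package and service names, and message fields are assumptions for illustration only; the gRPC API documentation has the actual definitions.
// Hypothetical StreamJobs worker sketch — proto file, service, and field names are assumptions.
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

declare function processJob(payload: unknown): Promise<void>; // placeholder for your business logic

const packageDefinition = protoLoader.loadSync('spooled.proto'); // assumed proto file name
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

const client = new proto.spooled.v1.JobService( // assumed package and service name
  'api.spooled.cloud:443',
  grpc.credentials.createSsl(),
);

const metadata = new grpc.Metadata();
metadata.set('authorization', `Bearer ${process.env.SPOOLED_API_KEY}`);

// Server streaming: jobs are pushed over a persistent HTTP/2 connection instead of being polled.
const stream = client.StreamJobs({ queue_name: 'my-queue', worker_id: 'worker-1' }, metadata);

stream.on('data', async (job: any) => {
  await processJob(job.payload);
  client.CompleteJob({ job_id: job.id, worker_id: 'worker-1' }, metadata, (err: Error | null) => {
    if (err) console.error('complete failed', err);
  });
});
stream.on('error', (err: Error) => console.error('stream error', err));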
Visibility Timeout
When a worker claims a job, the job becomes invisible to other workers for the visibility timeout period. If the worker doesn't complete or fail the job within that period, the job becomes visible again and another worker can pick it up.
| Setting | Value | Description |
|---|---|---|
| Default timeout | 30 seconds | Good for most short-running jobs |
| Maximum timeout | 12 hours | For very long-running jobs |
| Heartbeat interval | visibility timeout / 2 | Recommended heartbeat frequency |
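As a sketch of how a long-running handler might keep its claim alive, the example below heartbeats at half the lease duration using the REST heartbeat endpoint shown above; the 300-second lease and the doExpensiveWork helper are illustrative.
// Keep a long-running job claimed by heartbeating at roughly half the lease duration.
declare function doExpensiveWork(payload: unknown): Promise<void>; // placeholder for your long-running logic

const LEASE_SECS = 300; // illustrative lease length

async function heartbeat(jobId: string): Promise<void> {
  await fetch(`https://api.spooled.cloud/api/v1/jobs/${jobId}/heartbeat`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.SPOOLED_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ worker_id: 'worker-1', lease_duration_secs: LEASE_SECS }),
  });
}

async function processLongJob(jobId: string, payload: unknown): Promise<void> {
  // Send a heartbeat every LEASE_SECS / 2 seconds while the job runs
  const timer = setInterval(() => heartbeat(jobId).catch(console.error), (LEASE_SECS / 2) * 1000);
  try {
    await doExpensiveWork(payload);
  } finally {
    clearInterval(timer); // stop heartbeating before completing or failing the job
  }
}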
Dashboard Tip
What to look for:
- → Active workers and their queues
- → Last heartbeat timestamp
- → Jobs currently processing
- → Worker uptime
Actions:
- ✓ Monitor worker health
- ✓ Identify stale workers
Scaling Workers
Scale horizontally by running multiple worker instances. Workers coordinate automatically through Spooled — no external coordination (like Redis locks) needed.
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#eff6ff', 'primaryTextColor': '#1e40af', 'primaryBorderColor': '#3b82f6', 'lineColor': '#6b7280'}}}%%
flowchart LR
subgraph workers["Worker Pool"]
W1["Worker 1"]
W2["Worker 2"]
W3["Worker 3"]
WN["Worker N"]
end
Q[(Queue)] --> W1
Q --> W2
Q --> W3
Q --> WN
W1 --> R1[Result]
W2 --> R2[Result]
W3 --> R3[Result]
WN --> RN[Result]
Concurrency
Control how many jobs a single worker processes in parallel:
- I/O-bound jobs (API calls, email sending): Higher concurrency (10-100)
- CPU-bound jobs (image processing): Match to available CPU cores
- Memory-intensive jobs: Limit based on available memory
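With the SDK, this is the concurrency option shown earlier. If you are building directly on the REST API, a simple in-flight counter is enough; in the sketch below, claimJob, processJob, completeJob, and failJob stand in for the REST calls shown above.
// Cap how many jobs a single worker processes in parallel.
type Job = { id: string; payload: unknown };
declare function claimJob(): Promise<Job | null>;      // placeholder: POST /jobs/claim
declare function processJob(job: Job): Promise<void>;  // placeholder: your handler
declare function completeJob(job: Job): Promise<void>; // placeholder: POST /jobs/:id/complete
declare function failJob(job: Job, err: unknown): Promise<void>; // placeholder: POST /jobs/:id/fail

const MAX_CONCURRENCY = 10; // raise for I/O-bound work; match CPU cores for CPU-bound work
let inFlight = 0;
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function pollLoop(): Promise<void> {
  while (true) {
    if (inFlight >= MAX_CONCURRENCY) {
      await sleep(100); // all slots busy, wait for one to free up
      continue;
    }
    const job = await claimJob();
    if (!job) {
      await sleep(1000); // queue empty, back off before polling again
      continue;
    }
    inFlight++;
    processJob(job)
      .then(() => completeJob(job))
      .catch((err) => failJob(job, err))
      .finally(() => { inFlight--; });
  }
}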
Best Practices
Idempotency
Jobs may be delivered more than once in edge cases (for example, a worker crashes after processing a job but before reporting completion). Design handlers to be idempotent by checking whether the work was already done.
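One common pattern, sketched below with the worker from the SDK example above, is to key completed work on the job ID and skip anything already recorded; ctx.jobId and the db helpers are assumptions standing in for your SDK context and your own storage.
// Idempotent handler sketch: use the job ID as an idempotency key so re-delivery is harmless.
declare const db: {
  processedJobs: {
    exists(jobId: string): Promise<boolean>;
    insert(jobId: string): Promise<void>;
  };
};

worker.process(async (ctx) => {
  // Skip work that was already recorded for this job ID
  if (await db.processedJobs.exists(ctx.jobId)) {
    return { success: true };
  }

  await processJob(ctx.payload);            // do the actual work
  await db.processedJobs.insert(ctx.jobId); // record completion before acknowledging
  return { success: true };
});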
Graceful Shutdown
Handle shutdown signals (SIGTERM, SIGINT) to finish processing current jobs before exiting. SDKs provide built-in graceful shutdown support.
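With the Node.js SDK, that typically looks like the sketch below; the worker.stop() call is an assumption standing in for whatever drain method your SDK exposes.
// Drain in-flight jobs on SIGTERM/SIGINT before exiting.
async function shutdown(signal: string): Promise<void> {
  console.log(`received ${signal}, finishing in-flight jobs...`);
  await worker.stop(); // assumed method: stop claiming new jobs and wait for current ones to complete
  process.exit(0);
}

process.on('SIGTERM', () => void shutdown('SIGTERM'));
process.on('SIGINT', () => void shutdown('SIGINT'));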
Structured Logging
Use structured logging for observability. Include job IDs and queue names in all log entries to make debugging easier.
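For example, with a structured logger such as pino, a child logger can stamp the job ID and queue name onto every entry for that job (ctx.jobId is an assumed context field):
// Attach job context to every log line; pino is just one example of a structured logger.
import pino from 'pino';

const logger = pino();

worker.process(async (ctx) => {
  // Child logger stamps job_id and queue onto every entry it emits
  const log = logger.child({ job_id: ctx.jobId, queue: 'my-queue' });
  log.info('job started');
  await processJob(ctx.payload);
  log.info('job completed');
  return { success: true };
});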
Health Checks
Expose a health endpoint for orchestrators like Kubernetes. Workers should report unhealthy if they can't connect to Spooled.
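A minimal sketch: run a small HTTP server alongside the worker and return 200 only while the connection to Spooled is healthy (the healthy flag here is a placeholder for your own connectivity check).
// Minimal health endpoint for Kubernetes-style liveness/readiness probes.
import { createServer } from 'node:http';

let healthy = true; // set to false when the worker loses its connection to Spooled (placeholder logic)

createServer((req, res) => {
  if (req.url === '/healthz') {
    res.writeHead(healthy ? 200 : 503, { 'Content-Type': 'text/plain' });
    res.end(healthy ? 'ok' : 'unhealthy');
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);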
Debug Worker Issues
What to look for:
- → Worker heartbeat status (stale = problem)
- → Jobs stuck in processing state
- → Error rates per worker
Actions:
- ✓ Check worker logs
- ✓ Verify network connectivity
- ✓ Restart unhealthy workers