Jobs & Queues
Jobs are the fundamental building blocks of Spooled. Learn how to queue, process, and manage
asynchronous work reliably at any scale.
Job Lifecycle
Every job follows a predictable lifecycle from creation to completion. Understanding this
lifecycle is key to building reliable systems.
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#ecfdf5', 'primaryTextColor': '#065f46', 'primaryBorderColor': '#10b981', 'lineColor': '#6b7280'}}}%%
stateDiagram-v2
[*] --> pending: Job created
pending --> processing: Worker claims
processing --> completed: Success
processing --> failed: Error
failed --> pending: Retry
failed --> deadletter: Max retries exceeded
deadletter --> pending: Retry from DLQ
completed --> [*]

Job States

State        Description                      Transitions
pending      Job is waiting to be processed   → processing (by worker)
processing   Worker is processing the job     → completed, failed
completed    Job finished successfully        Terminal state
failed       Job failed, may retry            → pending (retry), deadletter
deadletter   Job in dead-letter queue         → pending (retry from DLQ)
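The transition rules in the table can be modeled as a small state machine. The sketch below is illustrative only (the transition map mirrors the table; the helper function is not part of any SDK):

```python
# Allowed transitions between job states, mirroring the lifecycle table.
TRANSITIONS = {
    "pending": {"processing"},
    "processing": {"completed", "failed"},
    "failed": {"pending", "deadletter"},  # retry, or move to the DLQ
    "deadletter": {"pending"},            # retry from the DLQ
    "completed": set(),                   # terminal state
}

def can_transition(current: str, target: str) -> bool:
    """Return True if a job may move from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())
```

A check like this is useful in tests or admin tooling that replays job histories.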
Dashboard Tip
📍 Dashboard → Jobs
What to look for:
→ Filter jobs by status (pending, processing, completed, failed, deadletter)
→ View job payload and metadata
→ See retry count and last error
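The failed → pending retry transition typically follows exponential backoff. A generic sketch of how such a delay schedule is computed (the base delay, cap, and full jitter here are illustrative assumptions, not Spooled's actual schedule):

```python
import random

def backoff_delay(attempt: int, base: float = 2.0, cap: float = 300.0) -> float:
    """Delay in seconds before retry number `attempt` (1-based):
    exponential growth, capped, with full jitter to spread retries out."""
    raw = min(cap, base * (2 ** (attempt - 1)))
    return random.uniform(0, raw)
```

With these parameters, retry 1 waits up to 2s, retry 2 up to 4s, and so on until the 300s cap.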
Creating Jobs
Use the SDK or REST API to create jobs. Each job belongs to a queue and carries
a payload that your workers will process.
cURL:

curl -X POST https://api.spooled.cloud/api/v1/jobs \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "queue_name": "my-queue",
    "payload": {
      "event": "user.created",
      "user_id": "usr_123",
      "email": "alice@example.com"
    },
    "idempotency_key": "user-created-usr_123"
  }'

Node.js:

import { SpooledClient } from '@spooled/sdk';
const client = new SpooledClient({
  apiKey: process.env.SPOOLED_API_KEY!,
});

const userId = 'usr_123';

// Create a job
const { id } = await client.jobs.create({
  queueName: 'email-notifications',
  payload: {
    to: 'user@example.com',
    subject: 'Welcome!',
    template: 'welcome',
  },
  idempotencyKey: `welcome-${userId}`,
  maxRetries: 5,
});

console.log(`Created job: ${id}`);

Python:

from spooled import SpooledClient
import os

client = SpooledClient(api_key=os.environ["SPOOLED_API_KEY"])

image_id = "img_123"

# Create a background job
result = client.jobs.create({
    "queue_name": "image-processing",
    "payload": {
        "image_url": "https://example.com/image.jpg",
        "operations": ["resize", "compress"],
        "output_format": "webp"
    },
    "idempotency_key": f"process-image-{image_id}",
    "max_retries": 3
})

print(f"Created job: {result.id}")
client.close()

Go:

package main
import (
    "context"
    "fmt"
    "os"

    "github.com/spooled-cloud/spooled-sdk-go/spooled"
    "github.com/spooled-cloud/spooled-sdk-go/spooled/resources"
)

func ptr[T any](v T) *T { return &v }

func main() {
    client, err := spooled.NewClient(spooled.WithAPIKey(os.Getenv("SPOOLED_API_KEY")))
    if err != nil {
        panic(err)
    }

    resp, err := client.Jobs().Create(context.Background(), &resources.CreateJobRequest{
        QueueName:      "my-queue",
        Payload:        map[string]any{"key": "value"},
        IdempotencyKey: ptr("unique-key"),
        MaxRetries:     ptr(3),
    })
    if err != nil {
        panic(err)
    }

    fmt.Printf("Created job: %s\n", resp.ID)
}

PHP:

<?php
use Spooled\SpooledClient;
use Spooled\Config\ClientOptions;

$client = new SpooledClient(new ClientOptions(
    apiKey: getenv('SPOOLED_API_KEY'),
));

$userId = 'usr_123';

// Create a job
$job = $client->jobs->create([
    'queue' => 'email-notifications',
    'payload' => [
        'to' => 'user@example.com',
        'subject' => 'Welcome!',
        'template' => 'welcome',
    ],
    'idempotencyKey' => "welcome-{$userId}",
    'maxRetries' => 5,
]);

echo "Created job: {$job->id}\n";

Job Properties

Property         Type            Description
queue_name       string          Queue name (alphanumeric + hyphens)
payload          object          JSON payload (up to 1MB on paid plans)
idempotency_key  string          Prevents duplicate jobs (optional but recommended)
max_retries      number          Max retry attempts (default: 3)
scheduled_at     Date            When to process (null = immediately)
priority         number          Higher values processed first (default: 0)
tags             array | object  Optional tags for filtering (stored as JSON)
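Since payloads are capped at 1MB, it can be worth validating size client-side before enqueueing rather than handling a rejection. A sketch (the 1MB constant reflects the documented cap; the helper itself is not part of the SDK):

```python
import json

MAX_PAYLOAD_BYTES = 1 * 1024 * 1024  # documented 1MB payload cap

def payload_size_ok(payload: dict) -> bool:
    """Check the UTF-8 encoded JSON size of a payload against the cap."""
    encoded = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    return len(encoded) <= MAX_PAYLOAD_BYTES
```

Oversized payloads are usually better stored in object storage, with the job carrying only a reference URL.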
Filtering jobs (including tags)
Use query parameters to filter job listings. For tags, you can filter by a single tag value:
# List jobs tagged "urgent"
curl "https://api.spooled.cloud/api/v1/jobs?tag=urgent" \
  -H "Authorization: Bearer YOUR_API_KEY"

# List DLQ jobs tagged "urgent"
curl "https://api.spooled.cloud/api/v1/jobs/dlq?tag=urgent" \
  -H "Authorization: Bearer YOUR_API_KEY"

Claiming & Processing Jobs
Workers claim jobs to process them. Claimed jobs are locked for a visibility timeout to prevent
duplicate processing.
cURL:

# Claim up to 5 jobs
curl -X POST https://api.spooled.cloud/api/v1/jobs/claim \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "queue_name": "my-queue",
    "worker_id": "worker-1",
    "limit": 5
  }'

Node.js:

// Claim jobs for processing
const { jobs } = await client.jobs.claim({
  queueName: 'my-queue',
  workerId: 'worker-1',
  limit: 10,
  leaseDurationSecs: 30,
});

for (const job of jobs) {
  console.log(`Claimed job: ${job.id}`);
  // Process the job...
}

Python:

# Claim jobs for processing
result = client.jobs.claim({
    "queue_name": "my-queue",
    "worker_id": "worker-1",
    "limit": 10,
    "lease_duration_secs": 30
})

for job in result.jobs:
    print(f"Claimed job: {job.id}")
    # Process the job...

Go:

import (
    "context"
    "fmt"

    "github.com/spooled-cloud/spooled-sdk-go/spooled"
    "github.com/spooled-cloud/spooled-sdk-go/spooled/resources"
)

func intPtr(v int) *int { return &v }

client, _ := spooled.NewClient(spooled.WithAPIKey("sp_live_YOUR_API_KEY"))

claimed, err := client.Jobs().Claim(context.Background(), &resources.ClaimJobsRequest{
    QueueName: "my-queue",
    WorkerID:  "worker-1",
    Limit:     intPtr(10),
})
if err != nil {
    panic(err)
}

for _, job := range claimed.Jobs {
    fmt.Printf("Claimed job: %s\n", job.ID)
}
PHP:

<?php
// Claim jobs for processing
$result = $client->jobs->claim([
    'queue' => 'my-queue',
    'workerId' => 'worker-1',
    'limit' => 10,
    'leaseDurationSecs' => 30,
]);

foreach ($result->jobs as $job) {
    echo "Claimed job: {$job->id}\n";
    // Process the job...
}

Important: Visibility Timeout
If a worker doesn't complete or fail a job within the visibility timeout, the job becomes
available for other workers. Set this higher than your expected processing time.
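A common pattern is to heartbeat automatically at a fraction of the lease while the handler runs, so a slow job never loses its claim. A language-agnostic sketch in Python (`process` and `heartbeat` are stand-ins for your handler and the SDK's heartbeat call; this wrapper is illustrative, not part of the SDK):

```python
import threading

def run_with_heartbeat(process, heartbeat, lease_secs: float):
    """Run process() while calling heartbeat() at half the lease interval,
    so the lease is renewed well before it can expire."""
    stop = threading.Event()

    def beat():
        # Event.wait returns False on timeout, so this loops until stop is set.
        while not stop.wait(lease_secs / 2):
            heartbeat()

    t = threading.Thread(target=beat, daemon=True)
    t.start()
    try:
        return process()
    finally:
        stop.set()
        t.join()
```

Half the lease leaves headroom for network latency on the heartbeat call itself.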
Completing Jobs

cURL:

# Complete a job
curl -X POST https://api.spooled.cloud/api/v1/jobs/job_xyz123/complete \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "worker_id": "worker-1",
    "result": {"processed": true}
  }'

Node.js:

// Complete the job after processing
await client.jobs.complete(job.id, {
  workerId: 'worker-1',
  result: { processed: true, outputUrl: 'https://...' },
});

Python:

# Complete the job after processing
client.jobs.complete(job.id, {
    "worker_id": "worker-1",
    "result": {"processed": True, "output_url": "https://..."}
})

Go:

import (
    "context"
    "fmt"

    "github.com/spooled-cloud/spooled-sdk-go/spooled"
    "github.com/spooled-cloud/spooled-sdk-go/spooled/resources"
)

func stringPtr(v string) *string { return &v }

client, _ := spooled.NewClient(spooled.WithAPIKey("sp_live_YOUR_API_KEY"))

err := client.Jobs().Complete(context.Background(), jobID, &resources.CompleteJobRequest{
    WorkerID: stringPtr("worker-1"),
    Result:   map[string]interface{}{"processed": true, "output_url": "https://..."},
})
if err != nil {
    panic(err)
}

fmt.Println("Job completed successfully")

PHP:

<?php
// Complete the job after processing
$client->jobs->complete($job->id, [
    'workerId' => 'worker-1',
    'result' => ['processed' => true, 'outputUrl' => 'https://...'],
]);

Failing Jobs

cURL:
# Fail a job (will retry if retries remaining)
curl -X POST https://api.spooled.cloud/api/v1/jobs/job_xyz123/fail \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "worker_id": "worker-1",
    "error": "Connection timeout"
  }'

Node.js:

// Fail the job (will retry with exponential backoff)
await client.jobs.fail(job.id, {
  workerId: 'worker-1',
  error: 'Connection timeout to downstream service',
});

Python:

# Fail the job (will retry with exponential backoff)
client.jobs.fail(job.id, {
    "worker_id": "worker-1",
    "error": "Connection timeout to downstream service"
})

Go:

import (
" context "
" github.com/spooled-cloud/spooled-sdk-go/spooled "
" github.com/spooled-cloud/spooled-sdk-go/spooled/resources "
)
client := spooled. NewClient (spooled. WithAPIKey ( "sp_live_YOUR_API_KEY" ))
err := client. Jobs (). Fail (context. Background (), jobID, & resources . FailJobRequest {
WorkerID: stringPtr ( "worker-1" ),
Error: stringPtr ( "Connection timeout to downstream service" ),
})
if err != nil {
panic (err)
}
fmt. Println ( "Job failed and will retry" ) <? php
// Fail the job (will retry with exponential backoff)
$client -> jobs -> fail ($job -> id, [
'workerId' => 'worker-1' ,
'error' => 'Connection timeout to downstream service' ,
]); Heartbeat (Extend Lease) For long-running jobs, extend the lease to prevent the job from being released:
cURL:

# Extend job lease (heartbeat)
curl -X POST https://api.spooled.cloud/api/v1/jobs/job_xyz123/heartbeat \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "worker_id": "worker-1",
    "lease_duration_secs": 300
  }'

Node.js:

// Extend lease for long-running jobs
await client.jobs.heartbeat(job.id, {
  workerId: 'worker-1',
  leaseDurationSecs: 300, // 5 minutes
});

Python:

# Extend lease for long-running jobs
client.jobs.heartbeat(job.id, {
    "worker_id": "worker-1",
    "lease_duration_secs": 300  # 5 minutes
})

Go:

import (
" context "
" github.com/spooled-cloud/spooled-sdk-go/spooled "
" github.com/spooled-cloud/spooled-sdk-go/spooled/resources "
)
client, _ := spooled. NewClient (spooled. WithAPIKey ( "sp_live_YOUR_API_KEY" ))
err := client. Jobs (). Heartbeat (context. Background (), jobID, & resources . HeartbeatJobRequest {
WorkerID: stringPtr ( "worker-1" ),
LeaseDurationSec: intPtr ( 300 ), // 5 minutes
})
if err != nil {
panic (err)
}
fmt. Println ( "Lease extended" ) <? php
// Extend lease for long-running jobs
$client -> jobs -> heartbeat ($job -> id, [
'workerId' => 'worker-1' ,
'leaseDurationSecs' => 300 , // 5 minutes
]); Batch Operations
Queue multiple jobs in a single API call for better performance. All jobs in a batch share
the same idempotency key scope.
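Because each bulk call accepts at most 100 jobs, larger lists need to be split first. A small chunking sketch (the 100-job limit comes from the API; the helper is illustrative):

```python
def chunk_jobs(jobs: list, size: int = 100) -> list[list]:
    """Split a job list into batches no larger than `size`."""
    return [jobs[i:i + size] for i in range(0, len(jobs), size)]
```

Each chunk would then be passed to one bulk-enqueue call, as in the examples below.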
cURL:

# Bulk enqueue up to 100 jobs
curl -X POST https://api.spooled.cloud/api/v1/jobs/bulk \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "queue_name": "notifications",
    "jobs": [
      {"payload": {"type": "email", "to": "a@example.com"}},
      {"payload": {"type": "sms", "to": "+1234567890"}},
      {"payload": {"type": "push", "token": "abc123"}}
    ]
  }'

Node.js:

// Bulk enqueue multiple jobs (up to 100)
const result = await client.jobs.bulkEnqueue({
  queueName: 'notifications',
  jobs: [
    { payload: { type: 'email', to: 'a@example.com' } },
    { payload: { type: 'sms', to: '+1234567890' } },
    { payload: { type: 'push', token: 'abc123' } },
  ],
  defaultMaxRetries: 5,
});

console.log(`Enqueued ${result.successCount} jobs`);

Python:

# Bulk enqueue multiple jobs (up to 100)
result = client.jobs.bulk_enqueue({
    "queue_name": "notifications",
    "jobs": [
        {"payload": {"type": "email", "to": "a@example.com"}},
        {"payload": {"type": "sms", "to": "+1234567890"}},
        {"payload": {"type": "push", "token": "abc123"}},
    ],
    "default_max_retries": 5
})

print(f"Enqueued {result.success_count} jobs")

Go:

import (
    "context"
    "fmt"

    "github.com/spooled-cloud/spooled-sdk-go/spooled"
    "github.com/spooled-cloud/spooled-sdk-go/spooled/resources"
)

client, _ := spooled.NewClient(spooled.WithAPIKey("sp_live_YOUR_API_KEY"))

result, err := client.Jobs().BulkEnqueue(context.Background(), &resources.BulkEnqueueRequest{
    QueueName: "notifications",
    Jobs: []resources.BulkJobItem{
        {Payload: map[string]interface{}{"type": "email", "to": "a@example.com"}},
        {Payload: map[string]interface{}{"type": "sms", "to": "+1234567890"}},
        {Payload: map[string]interface{}{"type": "push", "token": "abc123"}},
    },
})
if err != nil {
    panic(err)
}

fmt.Printf("Enqueued %d jobs\n", result.Succeeded)

PHP:

<?php
// Bulk enqueue multiple jobs (up to 100)
$result = $client->jobs->bulkEnqueue([
    'queue' => 'notifications',
    'jobs' => [
        ['payload' => ['type' => 'email', 'to' => 'a@example.com']],
        ['payload' => ['type' => 'sms', 'to' => '+1234567890']],
        ['payload' => ['type' => 'push', 'token' => 'abc123']],
    ],
    'defaultMaxRetries' => 5,
]);

echo "Enqueued {$result->successCount} jobs\n";

Scheduled Jobs
Schedule jobs for future execution. Perfect for reminders, delayed notifications, or recurring tasks.
cURL:

# Schedule a job for later
curl -X POST https://api.spooled.cloud/api/v1/jobs \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "queue_name": "reminders",
    "payload": {"type": "cart-abandoned", "user_id": "usr_123"},
    "scheduled_at": "2024-12-10T09:00:00Z"
  }'

Node.js:

// Schedule a job to run in 24 hours
await client.jobs.create({
  queueName: 'reminders',
  payload: { userId: 'usr_123', type: 'cart-abandoned' },
  scheduledAt: new Date(Date.now() + 24 * 60 * 60 * 1000),
  idempotencyKey: `reminder-${userId}-cart`,
});

Python:

from datetime import datetime, timedelta
user_id = "usr_123"

# Schedule a job to run in 24 hours
client.jobs.create({
    "queue_name": "reminders",
    "payload": {"user_id": user_id, "type": "cart-abandoned"},
    "scheduled_at": (datetime.utcnow() + timedelta(hours=24)).isoformat() + "Z",
    "idempotency_key": f"reminder-{user_id}-cart"
})

Go:

import (
    "context"
    "fmt"
    "time"

    "github.com/spooled-cloud/spooled-sdk-go/spooled"
    "github.com/spooled-cloud/spooled-sdk-go/spooled/resources"
)

client, _ := spooled.NewClient(spooled.WithAPIKey("sp_live_YOUR_API_KEY"))

scheduledTime := time.Now().Add(24 * time.Hour)

resp, err := client.Jobs().Create(context.Background(), &resources.CreateJobRequest{
    QueueName:      "reminders",
    Payload:        map[string]interface{}{"user_id": "usr_123", "type": "cart-abandoned"},
    ScheduledAt:    &scheduledTime,
    IdempotencyKey: stringPtr("reminder-usr_123-cart"),
})
if err != nil {
    panic(err)
}

fmt.Printf("Scheduled job: %s\n", resp.ID)

PHP:

<?php
use DateTime;

$userId = 'usr_123';

// Schedule a job to run in 24 hours
$scheduledAt = (new DateTime('+24 hours'))->format(DateTime::ATOM);

$client->jobs->create([
    'queue' => 'reminders',
    'payload' => ['userId' => $userId, 'type' => 'cart-abandoned'],
    'scheduledAt' => $scheduledAt,
    'idempotencyKey' => "reminder-{$userId}-cart",
]);

Idempotency
Idempotency keys prevent duplicate job processing. When you create a job with an idempotency key:
- If no job exists with that key, a new job is created
- If a job exists and is still pending/processing, the existing job is returned
- If a job exists and completed, the completed job is returned (no duplicate)

Best Practice

Always use meaningful idempotency keys derived from your business logic, like
process-order-{orderId} or send-welcome-email-{userId}.
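A deterministic key builder keeps this convention consistent across services. A sketch (the naming scheme follows the convention suggested above; the 255-character fallback threshold is an illustrative assumption, not a documented limit):

```python
import hashlib

def idempotency_key(action: str, *parts: str, max_len: int = 255) -> str:
    """Build a deterministic key like 'send-welcome-email-usr_123'.
    If the key would exceed max_len, fall back to a stable hash suffix
    so the key stays short but remains deterministic."""
    key = "-".join([action, *parts])
    if len(key) <= max_len:
        return key
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()[:16]
    return f"{action}-{digest}"
```

Deriving the key purely from business identifiers means a retried producer request always yields the same key, which is what makes deduplication work.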
Working with Queues
Queues are logical groupings of related jobs. Use separate queues for different job types
to enable independent scaling and monitoring.
Queue Naming

- Use lowercase letters, numbers, and hyphens
- Maximum 64 characters
- Be descriptive: payment-processing, email-notifications

Recommended Queue Structure

- High-priority queue: Critical operations (payments, security alerts)
- Default queue: Standard background jobs
- Low-priority queue: Analytics, cleanup, non-urgent tasks

Dashboard Tip
📍 Dashboard → Queues
What to look for:
→ Jobs pending vs processing
→ Throughput rate
→ Pause/resume controls
Actions:
✓ Pause a queue during maintenance
✓ Monitor queue depth for scaling decisions
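The naming rules above can be enforced with a quick check before creating queues. A sketch (the regex encodes the documented rules: lowercase letters, numbers, hyphens, max 64 characters; whether leading/trailing hyphens are allowed isn't specified, so this sketch disallows them):

```python
import re

QUEUE_NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_queue_name(name: str) -> bool:
    """Lowercase alphanumerics and hyphens, at most 64 characters."""
    return len(name) <= 64 and bool(QUEUE_NAME_RE.fullmatch(name))
```

Validating early gives a clearer error than a rejected API call deep inside a worker deployment.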
Next Steps