
Real-world examples

These examples are made for “normal” daily work. No buzzwords — just copy/paste setups you can run today.

Tip: If you’re new, start with Quickstart first, then come back here for real-life patterns.

Example 0: A tiny worker that prints jobs

A worker is a small program that pulls jobs from a queue. This one does the simplest thing possible: it prints the job and returns { ok: true }.

Worker: print jobs from a queue
import { SpooledClient, SpooledWorker } from '@spooled/sdk';

const client = new SpooledClient({ apiKey: process.env.SPOOLED_API_KEY! });

const worker = new SpooledWorker(client, {
  queueName: 'my-queue',
  concurrency: 1,
});

worker.process(async (ctx) => {
  console.log('Job ID:', ctx.jobId);
  console.log('Payload:', ctx.payload);
  return { ok: true };
});

await worker.start();

How to run
  1. Set SPOOLED_API_KEY in your environment
  2. Pick a queue name (example: my-queue)
  3. Run the worker, then create jobs in that queue (Example 1 shows how)

Example 1: New user signup → enqueue a job

When a user signs up, your API should respond fast. So you enqueue a job and do slow work in the background (send email, create CRM record, generate PDF, etc.).

Create a job (signup / background task)
curl -X POST https://api.spooled.cloud/api/v1/jobs \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "queue_name": "my-queue",
    "payload": {
      "event": "user.created",
      "user_id": "usr_123",
      "email": "alice@example.com"
    },
    "idempotency_key": "user-created-usr_123"
  }'

Use idempotency_key (or idempotencyKey in SDKs) so retries don’t create duplicates.
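
If you use the SDK instead of curl, the same job can be created with client.jobs.create (the call used in Example 4). A minimal sketch; idempotencyKey is the SDK's camelCase form of idempotency_key:

Create the same job with the SDK
import { SpooledClient } from '@spooled/sdk';

const client = new SpooledClient({ apiKey: process.env.SPOOLED_API_KEY! });

// Same job as the curl example above, created via the SDK
await client.jobs.create({
  queueName: 'my-queue',
  payload: {
    event: 'user.created',
    user_id: 'usr_123',
    email: 'alice@example.com',
  },
  idempotencyKey: 'user-created-usr_123',
});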


Example 2: GitHub issue opened → enqueue a job (no servers needed)

If you don’t want to run a webhook server, GitHub Actions can send jobs to Spooled for you. This is a very common “automation” pattern for small teams.

Step 1: Add repo secrets

  • SPOOLED_WEBHOOK_URL
  • SPOOLED_WEBHOOK_TOKEN

Step 2: Add this workflow

name: Spooled - enqueue GitHub issues

on:
  issues:
    types: [opened, reopened]

jobs:
  enqueue:
    runs-on: ubuntu-latest
    steps:
      - name: Send issue to Spooled
        env:
          SPOOLED_WEBHOOK_URL: ${{ secrets.SPOOLED_WEBHOOK_URL }}
          SPOOLED_WEBHOOK_TOKEN: ${{ secrets.SPOOLED_WEBHOOK_TOKEN }}
        run: |
          python - << 'PY' > payload.json
          import json, os
          event = json.load(open(os.environ["GITHUB_EVENT_PATH"]))
          body = {
            "queue_name": "github-events",
            "event_type": "github.issue.opened",
            "idempotency_key": f"github-issue-{event['issue']['id']}",
            "payload": {
              "repo": os.environ.get("GITHUB_REPOSITORY"),
              "number": event["issue"]["number"],
              "title": event["issue"]["title"],
              "url": event["issue"]["html_url"],
              "author": event["issue"]["user"]["login"],
            },
          }
          print(json.dumps(body))
          PY

          curl -sS -X POST "$SPOOLED_WEBHOOK_URL" \
            -H "Content-Type: application/json" \
            -H "X-Webhook-Token: $SPOOLED_WEBHOOK_TOKEN" \
            --data-binary "@payload.json"

Step 3: Process the jobs

Start the worker from Example 0, but change the queue name to github-events.
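
For reference, here is that worker as a sketch. It assumes the job payload is the payload object built by the workflow above (repo, number, title, url, author); adjust the field names if your setup maps them differently.

Worker: print GitHub issues from the github-events queue
import { SpooledClient, SpooledWorker } from '@spooled/sdk';

const client = new SpooledClient({ apiKey: process.env.SPOOLED_API_KEY! });

const worker = new SpooledWorker(client, {
  queueName: 'github-events',
  concurrency: 1,
});

worker.process(async (ctx) => {
  // Payload fields as built by the workflow above (an assumption about the webhook mapping)
  const { repo, number, title, url, author } = ctx.payload as any;
  console.log(`Issue #${number} in ${repo} by ${author}: ${title} (${url})`);
  return { ok: true };
});

await worker.start();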


Example 3: Run a job every day (cron schedules)

Need daily reports, cleanup, reminders, renewals? Use schedules. Spooled uses 6-field cron: second minute hour day month weekday. For example, 0 0 9 * * * fires every day at 09:00:00 in the schedule's timezone.

Create a cron schedule
# Create a cron schedule
curl -X POST https://api.spooled.cloud/api/v1/schedules \
  -H "Authorization: Bearer sp_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Daily Report",
    "cron_expression": "0 0 9 * * *",
    "timezone": "America/New_York",
    "queue_name": "reports",
    "payload_template": {"type": "daily_report"}
  }'
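
To consume those scheduled jobs, run a worker on the reports queue, same pattern as Example 0. A minimal sketch, assuming each run delivers the payload_template as the job payload:

Worker: process daily report jobs
import { SpooledClient, SpooledWorker } from '@spooled/sdk';

const client = new SpooledClient({ apiKey: process.env.SPOOLED_API_KEY! });

const worker = new SpooledWorker(client, {
  queueName: 'reports',
  concurrency: 1,
});

worker.process(async (ctx) => {
  // Assumption: each scheduled run enqueues a job whose payload is the payload_template
  const { type } = ctx.payload as any;
  if (type === 'daily_report') {
    // Replace this with your real report logic
    console.log('Generating daily report at', new Date().toISOString());
  }
  return { ok: true };
});

await worker.start();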

Example 4: CSV import → one job per row

A very common real-world task: you have a CSV file with hundreds of rows and want to process each row reliably. Spooled is perfect for this.

Read users.csv and enqueue one job per row
import { SpooledClient } from '@spooled/sdk';
import fs from 'node:fs/promises';

const client = new SpooledClient({ apiKey: process.env.SPOOLED_API_KEY! });

// Naive CSV parsing: split lines on commas (fine for simple files, not for quoted fields)
const csv = await fs.readFile('users.csv', 'utf8');
const [headerLine, ...rows] = csv.trim().split(/\r?\n/);
const headers = headerLine.split(',').map((s) => s.trim());

for (const row of rows) {
  if (!row.trim()) continue;
  const values = row.split(',').map((s) => s.trim());
  const payload: Record<string, string> = {};
  headers.forEach((h, i) => (payload[h] = values[i] ?? ''));

  await client.jobs.create({
    queueName: 'csv-import',
    payload,
    idempotencyKey: payload.email ? `csv-${payload.email}` : undefined,
  });
}

console.log('✅ Enqueued CSV jobs');

In production, prefer idempotency_key based on a stable ID (email, user_id, order_id) so reruns don’t double-process.


Example 5: From “toy” to production

  • Start with print: print payloads until you trust the pipeline
  • Add idempotency: always for external events (Stripe IDs, GitHub IDs, etc.)
  • Handle failures: throw an error to fail a job (Spooled retries; see the sketch after this list)
  • Use the dashboard: inspect jobs, errors, retries, DLQ
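
Here is what “throw to fail” looks like in practice: a handler sketch in the Example 0 style that fails the job when a required field is missing, so Spooled can retry it.

Worker: fail a job by throwing
import { SpooledClient, SpooledWorker } from '@spooled/sdk';

const client = new SpooledClient({ apiKey: process.env.SPOOLED_API_KEY! });

const worker = new SpooledWorker(client, {
  queueName: 'my-queue',
  concurrency: 1,
});

worker.process(async (ctx) => {
  const { email } = ctx.payload as any;

  // Throwing marks this attempt as failed; Spooled retries the job
  if (!email) {
    throw new Error(`Job ${ctx.jobId}: payload.email is missing`);
  }

  // ...do the real work here (send the email, call an API, etc.)
  return { ok: true };
});

await worker.start();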

Next steps