Bricqs

Operations: scaling

Most Bricqs integrations never need to think about scale. The ones that do are running tentpole campaigns: festival drops, sports finals, year-end sales. This page covers the headroom Bricqs ships with, the patterns for handling spikes, and the levers when you need more.

Reading time: 8 minutes
Last updated: May 2026

Key takeaways

Quick read
  • Default ingestion handles 1000 req/min/key, 5000 req/min/tenant. Most teams never hit the cap.
  • Use batch ingestion above 50 events per second. Single-event POSTs work; batch is cheaper.
  • Cache leaderboard reads at 5 to 15 second TTL. Most clients can tolerate stale by a few seconds.
  • Webhook fan-out is the most common bottleneck under load. Plan async on your side from day one.
  • Tentpole readiness: rehearse 4x peak load 2 weeks before launch. Discoveries always happen in rehearsal.

Defaults

What every tenant gets

Per-key throughput

1000 req/min sliding window. Burst 100 req/sec. Returns 429 with Retry-After when exceeded.
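A minimal client sketch for staying inside these limits: on a 429, honour Retry-After before resending rather than hammering the key. The single-event ingest path below is an assumption modelled on the batch endpoint; check the API reference for the exact route.

```typescript
// Pure helper: turn a Retry-After header value into a wait in milliseconds,
// falling back to 1 second when the header is missing or malformed.
function retryDelayMs(retryAfter: string | null): number {
  const seconds = Number(retryAfter);
  return Number.isFinite(seconds) && seconds > 0 ? seconds * 1000 : 1000;
}

// Single-event POST that retries on 429. The path is an illustrative
// assumption derived from the documented batch endpoint.
async function postEvent(event: unknown): Promise<Response> {
  for (;;) {
    const res = await fetch("https://api.bricqs.co/api/v1/ingest/events", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.BRICQS_API_KEY!}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(event),
    });
    if (res.status !== 429) return res;
    // Rate-limited: wait the server-specified interval, then retry.
    await new Promise((r) => setTimeout(r, retryDelayMs(res.headers.get("retry-after"))));
  }
}
```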

Per-tenant throughput

5000 req/min aggregate across all keys in the tenant. Designed to absorb spike events without saturating one key.

Ingestion latency

p95 under 500ms for single events. Async path returns 202 quickly; rules-engine evaluation runs in the background.

Webhook delivery

5 retries over ~30 minutes with exponential backoff. Outbound queue absorbs short downstream outages.

Read throughput

Higher than write. State reads (GET /state, GET /leaderboard) are cache-friendly and serve from edge.

Spike patterns

When you cross 100 req/s

The patterns below cover almost every tentpole shape. Pick the one that matches your engagement.

Pattern: Bulk historical replay
When: Importing past behaviour, replaying from a queue.
What to do: Use POST /events/batch (up to 500 per request). Stream serially; do not parallelise above 5 concurrent batches.

Pattern: High-cardinality fanout
When: Awarding a one-shot bonus to a million users.
What to do: Use the bulk admin endpoint POST /admin/points/bulk-grant. Designed for million-row jobs; runs as an async worker.

Pattern: Tentpole live spike
When: Festival drop, sports final, sale launch with high concurrency.
What to do: Pre-warm caches; use sync ingestion sparingly; request a per-tenant cap raise 1 week before.

Pattern: Read-heavy leaderboard
When: Live contest with millions of viewers refreshing.
What to do: Cache the leaderboard at 5s TTL on your edge. Bricqs serves cached reads anyway; an extra layer halves origin traffic.

Pattern: Webhook fan-out spike
When: Reward issuance during a contest closeout.
What to do: Process webhooks asynchronously on your side. Acknowledge in <1s; do the work in a queue.

Default rule: Most tentpoles are read-heavy. Cache aggressively on the read path; the write path is rarely the bottleneck.

Batch ingestion

The right tool above 50 events per second

server-side bulk forwarder (TypeScript)
const BATCH_SIZE = 500;
const CONCURRENCY = 5;

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function flushBatch(events: BricqsEvent[]) {
  const res = await fetch("https://api.bricqs.co/api/v1/ingest/events/batch", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BRICQS_ADMIN_KEY!}`,
      "X-Bricqs-Tenant": process.env.BRICQS_TENANT!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ events }),
  });

  if (res.status === 429) {
    // Rate-limited: honour Retry-After, then resend the same batch.
    const retryAfter = Number(res.headers.get("retry-after") ?? "1");
    await sleep(retryAfter * 1000);
    return flushBatch(events);
  }

  return res.json();
}

// Batch with fixed-size windows; cap concurrency.
const queue = [...allEvents];
const inflight: Promise<unknown>[] = [];
while (queue.length || inflight.length) {
  while (inflight.length < CONCURRENCY && queue.length) {
    const batch = queue.splice(0, BATCH_SIZE);
    // Each promise removes itself from the in-flight set when it settles,
    // so Promise.race below always leaves an accurate set behind.
    const p: Promise<unknown> = flushBatch(batch).finally(() => {
      inflight.splice(inflight.indexOf(p), 1);
    });
    inflight.push(p);
  }
  // Wait for any one batch to finish before refilling the window.
  await Promise.race(inflight);
}

500 events per request is the safe upper bound. Five concurrent batches is the safe upper bound for most tenants. Higher rates need a quota raise.

Tentpole readiness

The two-week pre-launch checklist

Two weeks before:
[ ] Estimate peak: ingestion req/min, leaderboard read req/sec, webhook delivery req/sec.
[ ] If peak > 4x baseline, request a temporary quota raise via support.
[ ] Audit all event sources for idempotency keys (a missing key under load doubles spend).

One week before:
[ ] Run a load rehearsal at 4x estimated peak in test tenant.
[ ] Verify webhook handler holds <500ms p95 under load.
[ ] Pre-warm leaderboard caches with realistic participant volume.

Three days before:
[ ] Freeze non-critical changes to challenge / contest config.
[ ] Confirm on-call rotation for the engagement window.
[ ] Confirm alert thresholds match the expected spike (raise temporarily if needed).

Launch day:
[ ] Watch ingestion lag, webhook delivery success, rules-engine latency.
[ ] Have rollback plan for client-side flags (kill-switch on the spin button, etc.).
[ ] Post-mortem the day after.
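The idempotency-key audit above is easier when keys are derived rather than generated: a deterministic key means a retried send of the same event collapses into one grant. A sketch, assuming your source events carry a stable user id, action, and original timestamp (field names here are illustrative):

```typescript
import { createHash } from "node:crypto";

interface SourceEvent {
  userId: string;
  action: string;
  occurredAt: string; // ISO timestamp of the original occurrence, not the send
}

// Hash the identity of the occurrence, never the send attempt, so every
// retry of the same event produces the same key.
function idempotencyKey(e: SourceEvent): string {
  return createHash("sha256")
    .update(`${e.userId}:${e.action}:${e.occurredAt}`)
    .digest("hex");
}
```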

Caching

Where to cache and where not to

Cache on your edge: leaderboards, contest stats

5 to 15 second TTL. Users tolerate slight staleness. Halves origin traffic at peak.

Cache on your edge: tier and points (carefully)

60 second TTL only if you can invalidate on grant/deduct. Stale balances frustrate users at checkout.

Do not cache: challenge state during active flows

Users expect step completion to register immediately. The 15s SDK poll is the right cadence; a longer cache breaks UX.

Do not cache: reward claims and codes

These are one-shot, sensitive, and personalised. Always live.
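The guidance above reduces to one small mechanism: a TTL cache with explicit invalidation. A minimal in-memory sketch, assuming a real deployment would put this at your edge/CDN; `invalidate()` is what the grant/deduct webhook would call for tier and points:

```typescript
type Fetcher<T> = () => Promise<T>;

// Single-entry TTL cache. The clock is injectable so the expiry logic
// can be exercised without real waits.
class TtlCache<T> {
  private entry?: { value: T; expiresAt: number };

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  // Serve the cached value while fresh; otherwise fetch and re-stamp.
  async get(fetcher: Fetcher<T>): Promise<T> {
    if (this.entry && this.entry.expiresAt > this.now()) return this.entry.value;
    const value = await fetcher();
    this.entry = { value, expiresAt: this.now() + this.ttlMs };
    return value;
  }

  // Call from your grant/deduct webhook handler for balance-type data.
  invalidate() {
    this.entry = undefined;
  }
}
```

With a 5-15s TTL this fronts leaderboard reads; with a 60s TTL plus `invalidate()` wired to grant/deduct it fronts tier/points without the stale-balance risk.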

Common mistakes

What goes wrong under load

Mistake 01

Single-event POSTs at 200 req/sec. Hits per-key rate limit; partial failures everywhere.

Fix

Switch to batch endpoint above 50 req/sec sustained. Five concurrent batches is the safe ceiling.

Mistake 02

Synchronous webhook handlers that call your CRM inline. Timeouts under load.

Fix

Acknowledge in <1s. Push to a queue. Process async. The webhook handler should not depend on downstream latency.
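That fix can be sketched as a two-part handler: enqueue-and-ack in the request path, slow work in a separate worker. The queue here is an in-process array for illustration only (use a durable queue such as SQS or Pub/Sub in production), and the delivery shape is an assumption:

```typescript
interface WebhookDelivery {
  id: string;
  type: string;
  payload: unknown;
}

const queue: WebhookDelivery[] = [];

// Called from your HTTP route: validate and enqueue, nothing else, so the
// response goes back to Bricqs well under one second.
function acknowledge(delivery: WebhookDelivery): { status: number } {
  if (!delivery.id || !delivery.type) return { status: 400 };
  queue.push(delivery);
  return { status: 200 };
}

// Runs separately (worker loop or queue consumer): the slow CRM call lives
// here, where its latency cannot time out the webhook response.
async function drain(process: (d: WebhookDelivery) => Promise<void>) {
  while (queue.length) await process(queue.shift()!);
}
```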

Mistake 03

No load rehearsal before tentpole. Discoveries happen in production at midnight.

Fix

Run 4x peak in test tenant 2 weeks early. Every team finds something; the only question is whether they find it before launch.

Mistake 04

Caching tier/points without invalidation. Users see stale balances at checkout.

Fix

Either skip caching on these endpoints or invalidate on grant/deduct webhooks. A 60s stale cache is not safe in a payment flow.

Mistake 05

Adding capacity in front of webhook fan-out without backpressure. Your handler 200s but your queue grows unbounded.

Fix

Bound the queue; shed load at the edge if the queue exceeds a threshold. Bricqs retries; better a small late delivery than an OOM.
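A sketch of that bound: a fixed-capacity queue whose `offer` fails once full, so the edge can respond 503 and lean on the Bricqs retry schedule instead of growing memory without limit:

```typescript
// Fixed-capacity FIFO. When offer() returns false, the HTTP handler should
// respond 503 so the delivery is retried later rather than buffered here.
class BoundedQueue<T> {
  private items: T[] = [];

  constructor(private capacity: number) {}

  // Sheds load instead of growing past capacity.
  offer(item: T): boolean {
    if (this.items.length >= this.capacity) return false;
    this.items.push(item);
    return true;
  }

  take(): T | undefined {
    return this.items.shift();
  }

  get size(): number {
    return this.items.length;
  }
}
```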


Ready to ship?

Wire it up with the Bricqs SDK or API

Headless SDK for React UIs, REST API for any backend. Same engine behind both.
