Shopify webhook deduplication with X-Shopify-Event-Id

Shopify is at-least-once: the same event will arrive more than once when retries succeed after an initial timeout. The correct dedup key is the X-Shopify-Event-Id header — Shopify's own dedup doc names it explicitly. Here's how to use it without falling into the obvious traps.

Last reviewed · Verified against Shopify dev docs.

How do I deduplicate Shopify webhooks?

Quick answer: use X-Shopify-Event-Id as the dedup key. Same value across retries of the same event. Persist event IDs for at least 4 hours (covers Shopify's retry window); 24 hours is a safer practical default. Note: multiple subscriptions on the same topic produce events with different IDs — dedup by Event-Id handles retries, not subscription fan-out.

Which header is the dedup key

Shopify's own deduplication doc names the header in one sentence:

Get the event ID from the headers. This is the X-Shopify-Event-Id header and the same value across more than one webhook indicates a duplicate.

Source: shopify.dev/docs/apps/build/webhooks/ignore-duplicates. That's the entire authoritative answer. Anything else you read on the internet should defer to this.

What about X-Shopify-Webhook-Id?

It exists. Shopify includes it on some deliveries. Shopify's deduplication page does not define it and does not use it as the dedup key. Treat it as undocumented for idempotency purposes — if Shopify hasn't said what it means, building idempotency on top of it is building on sand.

Implementing dedup correctly

The shape of the check

Three steps cover the whole check:

  1. Read X-Shopify-Event-Id from headers.
  2. Attempt to insert into a uniqueness store, scoped to (topic, event_id) at minimum.
  3. If the insert was a duplicate, ack 200 and skip processing. Otherwise, process.

You ack 200 either way. Shopify only cares about the response code, not whether you actually did work. Returning 5xx on a duplicate triggers a retry and creates a feedback loop.
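
Stripped of any particular datastore, the same shape fits in a few lines of JavaScript. This is a sketch, not Shopify API: an in-memory Set stands in for the uniqueness store, and handleWebhook / processEvent are illustrative names.

```javascript
// Minimal sketch: a Set stands in for the uniqueness store. Inside one
// Node process this check-then-add is safe; against a shared database
// you need an atomic insert backed by a unique index instead.
const seen = new Set();

function handleWebhook(headers, processEvent) {
  const key = `${headers['x-shopify-topic']}:${headers['x-shopify-event-id']}`;
  if (seen.has(key)) return 200;  // duplicate: ack without doing the work
  seen.add(key);
  processEvent();                 // first delivery: do the work
  return 200;                     // ack either way
}
```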

Rails / Sidekiq

class WebhookController < ActionController::API
  def receive
    event_id = request.headers["X-Shopify-Event-Id"]
    topic    = request.headers["X-Shopify-Topic"]

    # A unique index on (topic, event_id) makes this race-safe: the database,
    # not a read-then-write check, decides who wins. A duplicate insert
    # raises RecordNotUnique rather than returning an unpersisted record.
    begin
      WebhookDedup.create!(topic: topic, event_id: event_id)
    rescue ActiveRecord::RecordNotUnique
      return head(:ok)  # duplicate delivery: ack and skip
    end

    ProcessWebhookJob.perform_later(topic, request.raw_post)
    head :ok
  end
end

The unique index is doing the work. Without it, two copies of the same event arriving in the same millisecond both pass a read-then-write existence check and you double-process.

Node.js (Express + Postgres)

app.post('/webhook', express.raw({ type: '*/*' }), async (req, res) => {
  const eventId = req.header('X-Shopify-Event-Id');
  const topic   = req.header('X-Shopify-Topic');

  try {
    await pg.query(
      'INSERT INTO webhook_dedup(topic, event_id) VALUES ($1, $2)',
      [topic, eventId]
    );
  } catch (e) {
    if (e.code === '23505') return res.sendStatus(200);  // unique-violation; duplicate
    throw e;
  }

  await queue.add('process', { topic, body: req.body.toString() });
  res.sendStatus(200);
});

Redis variant (faster, eventually consistent)

const key = `dedup:${topic}:${eventId}`;
const set = await redis.set(key, '1', 'EX', 86400, 'NX');  // 24h TTL
if (set === null) return res.sendStatus(200);  // already saw this

Redis is faster and forgets old IDs automatically. Use it when your daily event volume makes a relational table painful. The 24-hour TTL is opinion, not Shopify policy — pick a TTL longer than your worst-case retry-or-replay window.

How long to keep event IDs

Three windows to think about:

  1. Shopify's own retry window. The same event can show up as many as 8 times across 4 hours. Minimum TTL: 4 hours.
  2. Your reconciliation replay window. Events you replay from the Admin API share IDs with the original failed delivery. Minimum TTL: your reconciliation interval plus a buffer.
  3. Operational replay window. A manually triggered replay during an incident. Minimum TTL: days, sometimes.

Practical default: 24 hours. Long enough that any of the three above land inside the dedup window. Short enough that the table or Redis keyspace doesn't grow unbounded. Adjust up if you do weekly Admin-API reconciliation runs.

Three traps

1. Treating multiple subscriptions like duplicates

If you have two webhook subscriptions on orders/create (perhaps because two apps share the same handler), Shopify sends two events with different IDs. Your dedup table won't catch that — and shouldn't. Those are legitimately separate deliveries. Dedup by Event-Id is for retries of one event, not for collapsing subscription fan-out.

2. Storing the event ID before verifying the HMAC

HMAC must be computed on the raw body before any JSON parser touches it, and the pipeline order matters: read raw body, verify HMAC, then dedup, then queue. Inserting into the dedup table before the HMAC check means a forged or malformed request can poison the table, and a legitimate retry of the real event then gets dropped.

3. Letting the dedup write fail silently

If your dedup insert raises (network blip, pool exhaustion), the safe default is to process the event and let downstream idempotency catch any double-processing. Skipping on a failed dedup write loses the event. Processing twice on a failed dedup write is recoverable if your handler is itself idempotent — which is the model Shopify expects anyway.
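
That fail-open default can be sketched directly; insertDedup, isUniqueViolation, and processEvent are illustrative stand-ins, not library API.

```javascript
// Fail open: a unique-violation means a true duplicate (skip). Any other
// error means the dedup store is unhealthy, so process anyway and rely
// on the handler's own idempotency to absorb a possible double.
async function dedupThenProcess(insertDedup, isUniqueViolation, processEvent) {
  try {
    await insertDedup();
  } catch (e) {
    if (isUniqueViolation(e)) return 'skipped';  // duplicate: ack, no work
    // dedup store down: do NOT drop the event
  }
  await processEvent();
  return 'processed';
}
```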

Where HookRescue fits

Every event that flows through us already has a stable dedup key (X-Shopify-Event-Id if Shopify provided it, a SHA-256 of the raw body otherwise) and the events table has a unique index on (source, dedup_key). Replays — manual or automatic — reuse the same key, so they never produce a duplicate downstream of us. Your handler still needs to be idempotent because of the multi-subscription case above, but the retry-induced duplicates are filtered at our boundary.

Related problems