The Timezone Bug That Quietly Ate Three Weeks of Revenue Data

A Node.js service was writing UTC timestamps to a PostgreSQL database configured for Europe/Berlin. Nobody noticed the mismatch until a DST transition made an entire hour of orders vanish from daily reports.

I got looped into this one because a finance analyst sent an email titled "Numbers are off." Not panicked. Not angry. Just a factual observation that daily revenue totals in the reporting dashboard didn't match what Stripe said. The delta was small — around 2-4% on most days, sometimes less. The kind of discrepancy people wave away with "probably a sync delay" for weeks before anyone looks closer.

Nobody had looked closer. For three weeks.

The setup

The client ran an e-commerce platform on Node.js with PostgreSQL as the primary datastore. Orders came in, got written to an orders table with a created_at timestamp, and a nightly aggregation job summed everything up by day for the reporting dashboard. Straightforward pipeline. It had worked fine for over a year.

The reporting query looked roughly like this:

SELECT
  date_trunc('day', created_at) AS order_date,
  SUM(total_amount) AS daily_revenue
FROM orders
WHERE created_at >= '2026-03-01'
  AND created_at < '2026-04-01'
GROUP BY 1
ORDER BY 1;

Clean. Readable. Also wrong, in a way that only shows up if you know what timezone created_at is actually in.

Two truths that didn't agree

The Node.js application was creating timestamps with new Date(). A JavaScript Date is just an instant (milliseconds since the Unix epoch), and the ORM (Prisma, in this case) serializes it as UTC when writing into a timestamp with time zone column. So far, fine — PostgreSQL normalizes timestamptz to UTC internally regardless of what offset you send it.

The problem was on the read side. The PostgreSQL server had its timezone parameter set to Europe/Berlin. When the reporting query called date_trunc('day', created_at), Postgres first converted the stored UTC value to the session timezone — Europe/Berlin — and then truncated to the day boundary. That meant "day" didn't start at midnight UTC. It started at midnight Berlin time, which is either UTC+1 or UTC+2 depending on the time of year.
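
You can watch the conversion at a psql prompt. The stored instant never changes; only the rendering follows the session timezone (example values, not data from the incident):

SET timezone = 'Europe/Berlin';
SELECT '2026-03-01 23:30:00+00'::timestamptz;  -- 2026-03-02 00:30:00+01

SET timezone = 'UTC';
SELECT '2026-03-01 23:30:00+00'::timestamptz;  -- 2026-03-01 23:30:00+00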

For most of the year this produced a quiet, consistent skew. Midnight in Berlin is 23:00 UTC in winter, so orders placed in the last hour of the UTC day got bucketed into the next day's totals. The daily numbers were always slightly off, but the monthly totals still reconciled because the misallocated orders didn't disappear — they just landed in the wrong daily bucket. Nobody cross-checked at the daily level.
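
The same instant run through date_trunc shows the skew directly: an order from 23:30 UTC on March 1 counts toward March 2 in a Berlin session and toward March 1 in a UTC one.

SET timezone = 'Europe/Berlin';
SELECT date_trunc('day', '2026-03-01 23:30:00+00'::timestamptz);
-- 2026-03-02 00:00:00+01, i.e. bucketed into March 2

SET timezone = 'UTC';
SELECT date_trunc('day', '2026-03-01 23:30:00+00'::timestamptz);
-- 2026-03-01 00:00:00+00, i.e. bucketed into March 1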

Then daylight saving time happened.

The hour that vanished

On March 29, 2026, clocks in Germany jumped from 2:00 to 3:00. That's the spring-forward transition: there was no 2am hour in Europe/Berlin on that date. The skipped hour itself was mostly harmless for the reporting query, since nothing truncates at 2am. What mattered was the offset change that came with it.

The DST shift changed Berlin from UTC+1 to UTC+2, which moved the daily boundary from 23:00 UTC to 22:00 UTC. Orders placed between 22:00 and 23:00 UTC on the transition night were allocated to a different day than the same orders would have been the night before. The aggregation job ran at 04:00 Berlin time and re-summed the previous day. On the transition night, it produced a daily total that was short by about €12,000 — one hour's worth of orders had been bucketed into the next day instead.
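
The boundary move is easy to demonstrate: the same 22:30 UTC order time lands on opposite sides of Berlin midnight before and after the transition.

SET timezone = 'Europe/Berlin';
SELECT date_trunc('day', '2026-03-28 22:30:00+00'::timestamptz);
-- 2026-03-28 00:00:00+01: still the 28th, Berlin midnight was 23:00 UTC

SELECT date_trunc('day', '2026-03-29 22:30:00+00'::timestamptz);
-- 2026-03-30 00:00:00+02: pushed into the 30th, Berlin midnight is now 22:00 UTC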

That €12,000 gap is what made the finance analyst send the email. The 2-4% daily skew had been there for months. The DST jump just made it big enough to notice.

Warning

If your database session timezone differs from the timezone your application writes in, every time-based aggregation query is silently wrong. You might not notice until a DST transition makes the error large enough to see.
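
Checking takes one line in any SQL console:

SHOW timezone;  -- what this session is actually using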

The fix

We did three things.

First, we set the PostgreSQL session timezone explicitly in the application's connection configuration:

// node-postgres: apply the SET on every new pooled connection, not via a
// one-off pool.query(), which would only touch one checked-out client
pool.on('connect', (client) => client.query("SET timezone = 'UTC'"));

This ensured that date_trunc operated on UTC boundaries regardless of what the server's default timezone was set to. The reporting queries now bucketed orders by UTC day, which matched how the application wrote them.
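
On PostgreSQL 14 and later there's an even more robust variant: date_trunc takes an explicit time zone as a third argument, which makes the query correct no matter what the session is set to. A sketch of the reporting query written that way:

SELECT
  date_trunc('day', created_at, 'UTC') AS order_date,  -- UTC boundaries, session-independent
  SUM(total_amount) AS daily_revenue
FROM orders
GROUP BY 1
ORDER BY 1;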

Second, we added a SET timezone = 'UTC' to the reporting query's session preamble in the dashboard tool (Metabase, in their case). The dashboard had its own connection pool, and it was inheriting the server default. We couldn't assume every query tool would respect the application-level setting.

Third — and this was the tedious part — we re-ran the aggregation for every day since the server timezone had been changed. That turned out to be five months of daily totals. The deltas were small on most days, but the finance team wanted accurate numbers, and they were right to. We wrote a backfill script, spot-checked a dozen days against Stripe's records, and shipped corrected data.
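
The core of that backfill is a single re-aggregation under the corrected session timezone. A minimal sketch of the shape it takes, assuming a daily_revenue summary table with a unique key on order_date (both hypothetical names; the date range is illustrative):

-- Recompute daily totals on UTC boundaries and overwrite the stale rows
SET timezone = 'UTC';

INSERT INTO daily_revenue (order_date, revenue)
SELECT date_trunc('day', created_at)::date, SUM(total_amount)
FROM orders
WHERE created_at >= '2025-11-01'   -- illustrative: start of the affected window
  AND created_at <  '2026-04-01'
GROUP BY 1
ON CONFLICT (order_date) DO UPDATE SET revenue = EXCLUDED.revenue;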

How it got there in the first place

The PostgreSQL instance was managed by a cloud provider. When the team had originally provisioned it, someone had set the timezone to Europe/Berlin because "we're in Germany, that makes sense." The Node.js developers had never thought about it because timestamptz handles conversion automatically — which it does, correctly. The mismatch only manifested in queries that interpreted timestamps through the session timezone, and nobody on the team had reason to think about what date_trunc does under the hood.

I don't blame them. I've provisioned databases in local timezones before. It feels like the right default until it isn't.

The boring lesson

Set your database timezone to UTC. Set your application timezone to UTC. Set your reporting tool's session timezone to UTC. Do all your display-layer conversion in the frontend, where the user's locale is actually known. Every layer that introduces a timezone opinion between "event happened" and "human reads a chart" is a layer where silent data skew can hide.
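
On the PostgreSQL side that's one statement per database (database name hypothetical), and new sessions inherit it from then on:

ALTER DATABASE shop SET timezone TO 'UTC';
-- or instance-wide in postgresql.conf:
-- timezone = 'UTC'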

The scarier takeaway for me was the three weeks. The numbers were wrong every single day, and nobody caught it because the error was small and consistent. It took an external shock — a DST transition — to make the problem visible. I keep wondering how many systems out there have a quiet 2% skew in their daily aggregations that just hasn't been bumped hard enough to notice yet.