We Tracked Developer Time for Two Weeks. The Results Were Uncomfortable.
A consulting engagement where we measured where engineering hours actually went — and discovered the biggest productivity killer wasn't what anyone expected.
Last fall, I walked into a client engagement where the CTO's opening line was: "We have 40 engineers and we ship like we have 12." He wasn't wrong. Their feature velocity had cratered over the past year despite steady hiring. The usual suspects had been blamed — technical debt, unclear requirements, too many meetings. But nobody had actually measured anything.
So we did.
The Setup
I proposed a two-week time audit. Not a surveillance tool, not a screen tracker — just a shared spreadsheet where each developer logged their activities in 30-minute blocks. Categories were simple: feature work, bug fixes, code review, waiting (on CI, deploys, environments, approvals), meetings, and "other."
Some engineers pushed back. Fair enough. Nobody likes the feeling of being watched. I made it anonymous and voluntary, and promised the data would be used to fix processes, not evaluate individuals. About 30 of the 40 engineers opted in.
What We Expected
The team leads had a theory: too many meetings were eating into deep work time. The CTO suspected technical debt was the culprit — too much time spent working around legacy code. One senior engineer was convinced it was code review bottlenecks.
They were all partially right. But they all missed the biggest item.
What We Found
After two weeks, I pulled the numbers together. Here's the rough breakdown of an average developer's week:
- Feature work: 14 hours (35%)
- Waiting: 9 hours (22.5%)
- Meetings: 6 hours (15%)
- Bug fixes: 4 hours (10%)
- Code review: 3.5 hours (8.75%)
- Other: 3.5 hours (8.75%)
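Turning the raw spreadsheet into a breakdown like this is a few lines of scripting. A minimal sketch, assuming each logged entry is one 30-minute block tagged with an anonymized engineer id and a category (the data shape and values here are illustrative, not the client's actual export):

```python
from collections import defaultdict

# Hypothetical export of the audit spreadsheet: one entry per logged
# 30-minute block, tagged with an anonymized engineer id and a category.
logs = [
    ("e1", "feature"), ("e1", "feature"), ("e1", "waiting"),
    ("e2", "meetings"), ("e2", "feature"), ("e2", "waiting"),
]

def weekly_breakdown(logs, block_minutes=30):
    """Aggregate logged blocks into hours per category and share of total."""
    hours = defaultdict(float)
    for _, category in logs:
        hours[category] += block_minutes / 60
    total = sum(hours.values())
    return {cat: (h, 100 * h / total) for cat, h in hours.items()}

for cat, (h, pct) in sorted(weekly_breakdown(logs).items()):
    print(f"{cat}: {h:.1f} h ({pct:.1f}%)")
```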
That "waiting" number hit the room like a brick. Nine hours a week per developer. Across 40 engineers, that's 360 hours of paid engineering time spent waiting for something — every single week.
We dug into what "waiting" actually meant. The breakdown within that category was revealing:
- CI pipeline runs: averaging 38 minutes per push, and developers pushed 4-5 times a day
- Staging environment provisioning: a manual process involving a Slack message to the platform team, average turnaround of 3 hours
- PR approval wait times: some PRs sat for a full day before anyone looked at them
- Deploy queue: only one team could deploy at a time through a shared pipeline
The CI Pipeline Problem
The 38-minute CI time was the one that bothered me most, because it had a compounding effect. Developers would push code, context-switch to something else while waiting, then take another 10-15 minutes to reload the original context once CI finished. So the real cost of a 38-minute pipeline wasn't 38 minutes — it was closer to an hour each time.
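The back-of-the-envelope version of that compounding effect, taking the midpoint of the observed 10-15 minute reload range:

```python
ci_minutes = 38        # average pipeline run per push
reload_minutes = 12.5  # midpoint of the observed 10-15 min context reload

# The real cost of a push isn't just the pipeline run: once CI finishes,
# the developer pays again to rebuild the context they switched away from.
effective_minutes = ci_minutes + reload_minutes
print(f"Effective cost per push: ~{effective_minutes} minutes")
```

At four to five pushes a day, that per-push hour is where the waiting category quietly piles up.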
I spent a day profiling their CI configuration. The culprit was almost comically straightforward. They ran their entire test suite on every push to every branch. All 4,200 tests. Unit tests, integration tests, end-to-end tests — the works. No parallelization, no selective test running.
We restructured it into tiers:
```yaml
# .github/workflows/ci.yml (simplified)
on:
  push:
    branches-ignore: [main]
  pull_request:
    branches: [main]

jobs:
  fast-check:
    # Runs on every push - under 5 minutes
    runs-on: ubuntu-latest
    steps:
      - run: npm run lint
      - run: npm run test:unit
      - run: npm run typecheck
  integration:
    # Only runs when fast-check passes
    needs: fast-check
    runs-on: ubuntu-latest
    steps:
      - run: npm run test:integration
  e2e:
    # Only runs on PRs targeting main
    if: github.event_name == 'pull_request'
    needs: integration
    runs-on: ubuntu-latest
    steps:
      - run: npm run test:e2e
```

The fast feedback loop dropped to under 5 minutes. That alone gave back roughly 3-4 hours per developer per week.
The Environment Bottleneck
The staging environment situation was worse than the numbers suggested. Three hours average turnaround, but the median was actually about 45 minutes — the average was dragged up by requests that landed on Friday afternoon and didn't get handled until Monday. Still, the fundamental issue was the same: developers needed a human to provision something that should have been self-service.
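That mean-versus-median gap is worth checking in any latency data. A toy dataset reproduces it: 28 requests handled in about 45 minutes plus a single one that sat from Friday afternoon to Monday morning is enough to drag a 45-minute median up to a 3-hour average (the numbers below are invented for illustration, not the client's request log):

```python
from statistics import mean, median

# Hypothetical request log: 28 requests turned around in ~45 minutes,
# plus one that landed Friday afternoon and waited until Monday (~66 hours).
turnaround_min = [45] * 28 + [66 * 60]

print(f"median: {median(turnaround_min)} min")        # typical experience
print(f"mean:   {mean(turnaround_min) / 60:.0f} h")   # skewed by one outlier
```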
The platform team wasn't being slow. They were a two-person team managing infrastructure for 40 developers. They were drowning.
We didn't build anything fancy. A simple script that spun up a namespaced environment on their existing Kubernetes cluster, seeded with a recent database snapshot. Developers could run it themselves. The platform team reviewed the implementation, added guardrails for resource limits, and moved on to actual platform work instead of fielding Slack requests all day.
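The general shape of such a script is simple. A sketch, assuming namespaced environments on Kubernetes; every resource name, image, and the snapshot-seeding mechanism below is illustrative, not the client's actual tooling:

```python
import subprocess

def provision_env(user: str, snapshot: str, dry_run: bool = False) -> list[list[str]]:
    """Spin up a per-developer namespaced environment seeded from a DB snapshot.

    Returns the kubectl commands it would run; executes them unless dry_run.
    All resource names and images here are hypothetical.
    """
    ns = f"dev-{user}"
    commands = [
        # Isolated namespace per developer
        ["kubectl", "create", "namespace", ns],
        # Deploy the app manifests into that namespace
        ["kubectl", "apply", "-n", ns, "-f", "k8s/dev/"],
        # Seed the database from a recent snapshot via a one-off job
        ["kubectl", "create", "job", "-n", ns, "db-seed",
         f"--image=registry.example.com/db-seed:{snapshot}"],
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands

# Preview what would run, without touching a cluster
for cmd in provision_env("alice", "2024-10-01", dry_run=True):
    print(" ".join(cmd))
```

The guardrails the platform team added (resource quotas, TTLs on idle namespaces) are the part that makes self-service safe to hand out.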
The Part Nobody Wanted to Hear
Here's where the conversation got uncomfortable. When we looked at the "meetings" category more carefully, about half of that time — roughly 3 hours per week per developer — was status update meetings. Standups, sprint reviews, "sync" meetings that existed because people didn't trust the ticketing system to reflect reality.
I asked why the ticketing system didn't reflect reality. Turns out, updating tickets was in the "other" category. Developers found the workflow so cumbersome (their Jira instance had 14 custom fields per ticket) that they'd skip updates and just tell people in meetings instead. The meetings existed to compensate for a broken tool, and the tool stayed broken because everyone relied on meetings.
We trimmed the custom fields down to 5. Updated the board to actually reflect their workflow. Killed two of the three weekly status meetings. Kept the daily standup but capped it at 10 minutes.
Three Months Later
I checked in with the team about twelve weeks after the changes. Feature velocity had roughly doubled — not because people were working harder, but because a higher percentage of their week was spent on actual engineering work. The CI changes alone had eliminated the most frequent context switches. The self-service environments removed a multi-hour dependency. The meeting cleanup gave back meaningful blocks of focus time.
The CTO told me something that stuck with me: "We almost hired 10 more engineers to solve a problem that wasn't about headcount."
What I'd Do Differently
The anonymous spreadsheet approach worked for this team, but it's clunky. It relies on self-reporting, which means people round off, forget, or categorize things inconsistently. If I were doing this again, I'd probably combine it with actual data — CI logs, PR metrics from GitHub, deploy frequency from the pipeline. The subjective experience matters, but pairing it with hard numbers makes the findings harder to dismiss.
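Those hard numbers are easy to compute once you have the event timestamps. A sketch of the kind of metric I mean, time-to-first-review, over PR records (the timestamp format matches what the GitHub API returns, but the data below is invented):

```python
from datetime import datetime
from statistics import median

def hours_between(start: str, end: str) -> float:
    """Hours between two ISO-8601 timestamps in GitHub API format."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Invented sample: (PR created_at, first review submitted_at)
prs = [
    ("2024-10-01T09:00:00Z", "2024-10-01T10:30:00Z"),
    ("2024-10-01T14:00:00Z", "2024-10-02T15:00:00Z"),  # sat for over a day
    ("2024-10-02T11:00:00Z", "2024-10-02T11:45:00Z"),
]

waits = [hours_between(created, reviewed) for created, reviewed in prs]
print(f"median time-to-first-review: {median(waits):.2f} h")
```

The same approach works for CI duration (workflow run timestamps) and deploy frequency (pipeline logs), which is exactly the triangulation I'd want next time.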
I also wouldn't wait two weeks. One week probably would have shown the same patterns. Two weeks gave us more statistical confidence, but developer patience for self-tracking has a half-life of about five days.
The broader lesson is boring but true: before you try to fix developer productivity, measure where the time actually goes. Your intuition about the bottleneck is probably wrong — or at least incomplete. The fix is rarely "work harder" or "hire more." It's usually "stop making people wait for things that should be instant."
What's the biggest time sink you've seen on your team that nobody talks about?