

How to Set Engineering KPIs as a Non-Technical Founder

You just hired engineers and have no idea how to measure productivity. Lines of code? Velocity? Story points? None of those work. Here is what does.

Mike Tempest · 10 min read

Why Business KPIs Do Not Work for Engineering

You know how to measure sales: revenue, pipeline, conversion rates. You know how to measure marketing: CAC, CPL, MQL to SQL conversion. These are clean, simple numbers that tell you whether the function is working.

So you try to apply the same thinking to engineering. Revenue per engineer. Lines of code per day. Features shipped per sprint. Story points completed. Hours logged. And then you wonder why your engineers get defensive when you ask about productivity.

The problem is that typical business KPIs assume a linear relationship between input and output. More sales calls equals more revenue. More ad spend equals more leads. But software does not work that way. The relationship between engineering effort and business value is non-linear, delayed, and context-dependent.

A feature that takes three days to build might generate ten times the revenue of a feature that took three weeks. An engineer who writes 100 lines of code might deliver more value than an engineer who writes 1,000, because those 100 lines removed unnecessary complexity. Story points measure estimation accuracy, not delivered value. Hours logged measure presence, not progress.

At Risika, we learned this the hard way. Early attempts to track velocity and story points created perverse incentives. Engineers started inflating estimates to hit targets. Features got scoped to fit the sprint rather than to solve the customer problem. The metrics were green but the business outcomes were not improving.

The shift came when we stopped trying to measure activity and started measuring outcomes. Not how busy engineering was, but whether engineering was delivering the business results we needed. That required different metrics entirely.

The Five Engineering KPIs That Actually Matter

These metrics connect engineering activity to business outcomes. They tell you whether your team can ship quickly, reliably, and sustainably.

1. Deployment Frequency

How often do you ship code to production? Daily? Weekly? Monthly? Deployment frequency is the single best leading indicator of engineering effectiveness. Teams that deploy frequently can respond to customer feedback quickly, fix bugs fast, and iterate based on real-world data rather than assumptions.

The magic of frequent deployments is not speed for its own sake. It is that each deployment is smaller, lower risk, and easier to reason about. If something breaks, you know exactly what changed. If a feature underperforms, you can pivot quickly. If a customer requests something urgent, you can ship it this week rather than waiting for the next quarterly release.

Elite teams deploy multiple times per day. High performers deploy daily or several times per week. Medium performers deploy weekly or fortnightly. Low performers deploy monthly or less. Where you sit on that spectrum tells you a lot about whether your engineering culture prioritises speed and customer responsiveness or process and bureaucracy.

For a non-technical founder, this is the easiest metric to track. Just count how many times code went to production this week. No complex tooling required. If the answer is "I do not know," that is a red flag in itself.
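If your deploys are logged anywhere with dates, counting them per week really is a few lines of work. A minimal sketch, assuming you have a list of deploy dates (the dates below are purely illustrative):

```python
from collections import Counter
from datetime import date

# Hypothetical deploy log: one entry per production deploy.
deploys = [
    date(2024, 3, 4), date(2024, 3, 5), date(2024, 3, 5),
    date(2024, 3, 12), date(2024, 3, 20),
]

# Group by ISO (year, week) to see the weekly trend.
per_week = Counter(d.isocalendar()[:2] for d in deploys)

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deploys")
```

The point is not the tooling: a spreadsheet column of deploy dates gives you the same trend.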

2. Lead Time for Changes

How long does it take for a code commit to reach production? This is your end-to-end delivery speed. Lead time measures the entire pipeline from when an engineer writes code to when that code is running in production serving customers.

Short lead times mean you can respond to market opportunities quickly. A competitor launches a feature? You can match it this week. A customer finds a critical bug? You can fix it today. A regulatory requirement changes? You can adapt before the deadline, not after.

Long lead times are a symptom of process debt. Too many manual steps. Too many approval gates. Flaky tests that need re-running. Deployment processes that only work on Tuesdays when Dave is available. Each delay compounds, and by the time code reaches customers, the world has moved on.

Elite teams have lead times measured in hours. High performers measure in days. Medium performers measure in weeks. If your lead time is measured in months, you have a serious bottleneck somewhere in the system. Finding and fixing that bottleneck is often the highest-leverage improvement you can make.
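Measuring lead time does not require a platform either: sample a handful of recent changes and subtract the commit timestamp from the deploy timestamp. A sketch, with illustrative timestamp pairs:

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit time, deploy time) pairs for a sample of recent changes.
changes = [
    (datetime(2024, 3, 4, 9, 0),  datetime(2024, 3, 4, 15, 30)),
    (datetime(2024, 3, 5, 11, 0), datetime(2024, 3, 6, 10, 0)),
    (datetime(2024, 3, 6, 14, 0), datetime(2024, 3, 8, 9, 0)),
]

# Lead time per change, in hours.
lead_times = [
    (deployed - committed).total_seconds() / 3600
    for committed, deployed in changes
]

print(f"median lead time: {median(lead_times):.1f} hours")
```

The median is usually more informative than the mean here, because one stuck change can dominate an average.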

3. Cycle Time

How long does it take to ship a feature from start to finish? Cycle time measures from when work starts to when it is shipped to customers. This is different from lead time, which measures code-to-production. Cycle time measures idea-to-customer.

Long cycle times usually mean scope creep, unclear requirements, or too much work in progress. If features take months to ship, they are either too large (break them down) or blocked by dependencies (fix the blockers) or waiting in queues (limit work in progress).

The relationship between cycle time and business outcomes is straightforward. Shorter cycles mean you learn faster. You can test hypotheses, see what works, double down or pivot. Longer cycles mean you are making bets based on old assumptions and only finding out months later whether you were right.

This is the metric that connects most directly to roadmap execution. If your sales team is promising features to close deals and those features take six months to ship, cycle time is your problem. Reduce it, and suddenly your commercial team has far more leverage.
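Cycle time is tracked the same way, just at the feature level: record when work started and when it reached customers. A sketch with made-up feature names and dates:

```python
from datetime import date

# Hypothetical features with (work started, shipped to customers) dates.
features = [
    ("billing export", date(2024, 2, 1), date(2024, 2, 9)),
    ("SSO login",      date(2024, 2, 5), date(2024, 3, 1)),
]

# Cycle time in calendar days, start-to-ship.
cycle_days = {name: (shipped - started).days
              for name, started, shipped in features}

for name, days in cycle_days.items():
    print(f"{name}: {days} days")
```

Calendar days, not working days, is the simpler and more honest measure: customers experience calendar time.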

4. Escaped Defects

How many bugs reach customers? This is your quality signal. Every engineering team creates bugs. The question is whether you catch them before customers do or whether customers catch them for you.

Escaped defects are expensive. They damage customer trust. They create support tickets. They interrupt engineering work because bugs in production always take priority over new features. A team that ships fast but ships broken features is not actually shipping value.

Track this as a rate, not an absolute number. Escaped defects per deployment gives you a quality metric that accounts for how much you are shipping. A team that deploys once a month with zero bugs is not better than a team that deploys ten times a week with one bug per deploy. The second team is delivering far more value despite the higher absolute bug count.

This metric also tells you whether your speed metrics are sustainable. If deployment frequency is going up but escaped defects are also climbing, you are cutting corners on quality to hit speed targets. That creates technical debt that will slow you down later. Sustainable speed means high frequency and low defect rates simultaneously.
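Computing the rate is simple division, but doing it per week keeps the comparison honest. A sketch with illustrative weekly figures:

```python
# Hypothetical last four weeks: (production deploys, customer-facing bugs).
weeks = [(10, 1), (12, 2), (8, 1), (11, 1)]

# Rate per deploy, so quality is judged relative to how much shipped.
rates = [bugs / deploys for deploys, bugs in weeks]

for i, rate in enumerate(rates, start=1):
    print(f"week {i}: {rate:.2f} escaped defects per deploy")
```

Watch the rate alongside deployment frequency: both trending well at the same time is the signal that speed is sustainable.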

5. Time to First Value for New Hires

How long does it take a new engineer to ship their first feature to production? This metric tells you whether your codebase and processes are set up for growth. It is a leading indicator of whether you can scale the team without grinding to a halt.

If a new hire takes three months to ship anything, you have one of three problems. Either the codebase is so complex that it takes that long to understand it (technical debt problem). Or your onboarding process is non-existent (documentation problem). Or your deployment process is so fragile that new hires are not trusted to touch it (process problem).

Elite teams get new engineers shipping code in their first week. Good teams hit it within two to three weeks. If it is taking more than a month, something is wrong. And if you are planning to scale your engineering team, this problem compounds. Hiring faster does not solve it. Fixing the underlying onboarding and codebase issues does.

This is also a retention signal. Engineers who do not ship anything meaningful in their first few months usually leave. They joined to build things, and if the environment prevents that, they will find somewhere that does not.

How to Set Targets Without Micromanaging

Measuring is the easy part. Setting targets is where most non-technical founders go wrong. The temptation is to treat engineering KPIs like sales quotas: set aggressive targets, tie them to compensation, and hold people accountable. This approach destroys engineering teams.

Engineering is a creative problem-solving activity, not a transactional one. When you tie KPIs directly to individual performance or compensation, you create perverse incentives. Deployment frequency becomes a target? Engineers start deploying trivial changes to hit the number. Escaped defects become a target? Engineers stop taking risks and only work on safe, boring features. Cycle time becomes a target? Quality suffers because the goal is shipping fast, not shipping right.

This is Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. The solution is to use KPIs as conversation starters, not scorecards.

Start by measuring, not targeting

Track your metrics for at least a month before setting any targets. Understand your baseline. How often are you actually deploying today? What is the current lead time? Without baseline data, you are setting targets based on guesses rather than reality. Most teams are surprised when they actually measure. Things they thought were fast turn out to be slow. Things they thought were broken turn out to be fine.

Set team-level targets, not individual ones

Engineering is collaborative. Individuals work on different parts of the system with different complexity and risk profiles. Judging individuals by deployment frequency or cycle time creates competition rather than collaboration. Judge the team as a whole. Are we shipping faster this quarter than last quarter? Is the trend moving in the right direction? That keeps everyone aligned on the same goal.

Aim for improvement, not perfection

If you are deploying weekly now, aim for twice a week in six months. If your lead time is three days, aim for two days. Incremental improvement is sustainable. Asking for a ten times improvement overnight is not. Most teams can improve by 20 to 30 percent over six months without heroic effort. Push for continuous improvement rather than unrealistic leaps.

Involve engineers in setting targets

The people doing the work know what is realistic better than you do. If you impose targets from above, you get compliance or gaming. If engineers set their own targets, you get ownership. Ask them: "Where do you think we can realistically get to in six months?" Most engineers will set ambitious but achievable targets if they trust the process is about improvement, not punishment.

Review trends, not snapshots

Any individual week might be an outlier. A critical bug consumes the sprint. A key engineer is on holiday. A customer emergency derails plans. What matters is the trend over quarters, not whether this particular week hit the target. Look at rolling averages. Are things generally getting better, staying flat, or getting worse? That tells you far more than whether you hit an arbitrary target in a specific sprint.

The Dashboard Non-Technical Founders Should Ask For

You do not need expensive tooling to track engineering KPIs. Most teams overcomplicate this. They buy dashboards that track 50 metrics, none of which anyone looks at. Start simple. A weekly update covering these five questions is enough:

1. How many times did we deploy to production this week? Trend over the last four weeks.
2. What was the average lead time from commit to production? Sample five recent deploys.
3. What shipped this week and how long did it take? List of completed features with their cycle times.
4. How many customer-facing bugs were reported? Escaped defects per deploy, trending.
5. Any new hires? When did they ship their first feature? Track this for the last three hires.

That is it. One page, updated weekly, reviewed in a 15-minute conversation. No complex charts. No vanity metrics. Just the five numbers that tell you whether engineering is getting faster, more reliable, and more scalable.

If you want to automate this later, tools like GitHub Actions, GitLab CI, or your issue tracker can generate most of these metrics automatically. But do not start there. Start with manual tracking in a spreadsheet. Build the discipline of looking at the numbers weekly before you invest in automation. Most teams never get past this stage and do not need to.
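If a spreadsheet feels too loose, the same one-page weekly update fits in a plain CSV file. A minimal sketch; the column names and figures are illustrative, not a standard schema:

```python
import csv
import io

# One row per week, matching the five questions above.
FIELDS = ["week", "deploys", "median_lead_time_hours",
          "features_shipped", "escaped_defects", "new_hire_first_ship"]

rows = [
    {"week": "2024-W10", "deploys": 4, "median_lead_time_hours": 26,
     "features_shipped": 2, "escaped_defects": 1, "new_hire_first_ship": ""},
    {"week": "2024-W11", "deploys": 6, "median_lead_time_hours": 18,
     "features_shipped": 3, "escaped_defects": 0, "new_hire_first_ship": "day 6"},
]

# Write to an in-memory buffer here; in practice, append to a shared file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

One row a week is enough structure to spot trends without building anything.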

When KPIs Become Counterproductive: Goodhart's Law for Founders

Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. This is the central tension in performance management. You want metrics to drive behaviour, but the moment people optimise for the metric rather than the outcome, the metric loses its value.

Engineering KPIs are especially vulnerable to this because engineers are very good at optimising systems. Tell them deployment frequency is the target, and they will find ways to hit that target whether or not it improves business outcomes. Deploy configuration changes. Deploy comment updates. Deploy anything that counts as a deploy. The number goes up, but the actual speed of delivering customer value does not.

Here are the warning signs that your KPIs have become counterproductive:

Engineers are gaming the metrics

If deployment frequency is a target and engineers start deploying trivial changes, the metric is being gamed. If cycle time is a target and features get scoped down to fit the target rather than to solve customer problems, the metric is being gamed. If escaped defects are a target and engineers stop shipping risky features, the metric is creating risk aversion rather than quality.

The metrics are green but business outcomes are not improving

If deployment frequency is up but customer satisfaction is down, something is wrong. If lead time is shrinking but revenue is flat, you are optimising the wrong thing. KPIs are means, not ends. The end is business outcomes: revenue, retention, customer satisfaction, market share. If the KPIs improve but the business does not, the KPIs are measuring the wrong things.

Engineers are stressed about hitting numbers rather than solving problems

If your team is anxious about whether they will hit the deployment target this week rather than excited about shipping a great feature, the incentives are wrong. Engineering should be motivated by impact, not by arbitrary numerical targets. When KPIs create stress rather than clarity, they are doing more harm than good.

The fix is to use KPIs as diagnostic tools, not performance scorecards. When deployment frequency drops, ask why. Is there a bottleneck in the pipeline? A lack of clarity on priorities? A technical issue slowing things down? Use the metric to start the conversation, not to end it.

At Risika, we deliberately disconnected KPIs from individual performance reviews. The metrics were team-level health indicators, not individual scorecards. That created space for honest conversations about what was working and what was not without engineers feeling judged. The result was genuine improvement rather than metric manipulation.

Getting Started: What to Do This Week

If you are a non-technical founder who has just realised you have no idea whether your engineering team is effective, here is what to do this week:

Step 1: Ask for baseline data

Ask your technical lead for the five metrics above, measured over the last month. Do not set targets yet. Just understand where you are today. Most teams will need to start tracking these manually because they have never measured them before. That is fine. Start simple.

Step 2: Set up a weekly review

Block 15 minutes every week to review the metrics with your engineering lead. Not a formal meeting. Just a quick check-in. Are things trending in the right direction? What changed this week? What blockers appeared? Keep it conversational, not confrontational.

Step 3: Identify the biggest bottleneck

After a month of data, ask: "If we could improve one of these metrics by 30 percent in the next quarter, which would have the biggest business impact?" That becomes your focus. Do not try to improve everything at once. Pick the highest-leverage improvement and focus there.

Step 4: Involve the team

Share the metrics with the whole engineering team. Explain why they matter. Ask for ideas on how to improve them. Engineers are problem solvers. If you give them a clear problem (our lead time is too long) and autonomy to fix it, most will. If you impose solutions from above, they will resist.

This is not complex. But it requires discipline. Weekly tracking. Honest conversations. Focusing on trends rather than individual bad weeks. Most non-technical founders skip this because it feels like extra work. But without it, you are flying blind. You have no idea whether your engineering team is effective until something breaks badly enough that customers complain.

Better to have the visibility now, when you can still course-correct, than to discover six months before your Series A that engineering has been stuck in second gear the whole time and nobody noticed.

Need help setting up engineering KPIs?

A Fractional CPTO can help you design the right metrics for your stage, set up tracking, and coach your team on using KPIs effectively without creating perverse incentives.

Frequently Asked Questions

What KPIs should I use to measure my engineering team's performance?

The five KPIs that actually matter are deployment frequency (how often you ship), lead time for changes (how long from commit to production), cycle time (how long features take from start to finish), escaped defects (bugs that reach customers), and time to first value for new hires. Avoid vanity metrics like lines of code, hours worked, or commits per day. These create the wrong incentives and tell you nothing about whether engineering is delivering business value.

How do I set KPI targets without micromanaging my engineers?

Set outcome-based targets, not activity-based ones. Focus on 'deploy at least twice per week' rather than 'write X lines of code per day'. Involve your engineering team in setting targets so they have ownership. Start by measuring current state for a month before setting any targets, then aim for 20 to 30 percent improvement over six months. The goal is to create visibility and alignment, not to create surveillance.

What is deployment frequency and why does it matter?

Deployment frequency measures how often you ship code to production. It matters because frequent deployments mean smaller changes, which means lower risk per deploy, faster customer feedback, and quicker fixes when things break. Teams that deploy daily or multiple times per day can respond to customer needs and market changes far faster than teams that deploy monthly. It is a leading indicator of engineering effectiveness.

When do engineering KPIs become counterproductive?

When they are gamed rather than used for learning. This is Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. If you tie compensation or job security directly to KPIs, engineers will optimise for the metric rather than the business outcome. If deployment frequency becomes a target, engineers will deploy trivial changes to hit the number. Use KPIs as conversation starters, not as scorecards.

Can I track these KPIs without expensive tools?

Yes. Start simple. Track deployments in a spreadsheet or Slack channel. Measure lead time manually by comparing commit timestamps to deploy timestamps for a sample of changes each week. Track escaped defects in your issue tracker. You do not need expensive dashboards to get started. Once you have a few months of baseline data and the discipline to track it weekly, then consider automation. Most teams overcomplicate this and end up tracking nothing.

Mike Tempest

Fractional CPTO

Mike is a Fractional CPTO helping UK startups make better technology decisions. With experience scaling products from zero to millions of users at Risika and RefME, he brings commercial thinking to technical decisions. Book a free day at fcto.uk/free-day.
