You know how to measure sales: revenue, pipeline, conversion rates. You know how to measure marketing: customer acquisition cost (CAC), cost per lead (CPL), MQL-to-SQL conversion. These are clean, simple numbers that tell you whether the function is working.
So you try to apply the same thinking to engineering. Revenue per engineer. Lines of code per day. Features shipped per sprint. Story points completed. Hours logged. And then you wonder why your engineers get defensive when you ask about productivity.
The problem is that typical business KPIs assume a linear relationship between input and output. More sales calls equals more revenue. More ad spend equals more leads. But software does not work that way. The relationship between engineering effort and business value is non-linear, delayed, and context-dependent.
A feature that took three days to build might generate ten times the revenue of one that took three weeks. An engineer who writes 100 lines of code might deliver more value than one who writes 1,000, if those 100 lines delete unnecessary complexity. Story points measure estimation accuracy, not delivered value. Hours logged measure presence, not progress.
At Risika, we learned this the hard way. Early attempts to track velocity and story points created perverse incentives. Engineers started inflating estimates to hit targets. Features got scoped to fit the sprint rather than to solve the customer's problem. The metrics were green, but the business outcomes were not improving.
The shift came when we stopped trying to measure activity and started measuring outcomes. Not how busy engineering was, but whether engineering was delivering the business results we needed. That required different metrics entirely.