“That job is more than 30 days old. What’s the story there?” asked the colonel, his eyes peering over his reading glasses.
I have no doubt that my face contorted the way it tends to when you hear something that sparks confusion and disbelief at the same time. Confusion, because he knew full well why the job was more than 30 days old: a shortage of specialized manpower due to summer rotations and a gap in repair parts availability driven by a lapsed production contract. Disbelief, because we’d had this same discussion days earlier. But this time he was throwing me under the bus in front of the one-star Assistant Division Commander for Support chairing the division maintenance meeting.
Better to subject a subordinate to a public bloodletting than explain why a particular metric had not been achieved.
When Metrics Go Wrong
Metrics by themselves aren’t a bad thing. In goal-oriented organizations, they provide measures with which to gauge progress toward achieving those goals. Briefing the status of maintenance “jobs” over 30 days old, for example, ensured that leaders remained focused on equipment readiness, and that logistics expertise could be leveraged when systemic issues arose. However, when those leaders failed to maintain a proper focus on readiness or neglected to address known systemic issues – such as critical personnel shortfalls – it wasn’t unusual for them to simply lie or clear the way for the nearest bus to run down a subordinate in a crowded conference room.
“You really took one for the team today,” he later remarked as we left the conference center. Yeah, let’s just ignore the tire tracks on my back.
Choose the right metrics and use them correctly, and success tends to follow. The right metrics lead to informed assessments that not only help to achieve success, but also reveal patterns that can foreshadow systemic problems. When metrics go wrong, however, the results are often brutal. You don’t just fail to achieve designated goals; you often end up so far afield that you can’t remember why you set those goals in the first place. Generally, there are three ways metrics go wrong: they are misused, miscalculated, or misunderstood.
In one of my graduate classes, students complete a case analysis in which misused metrics are at play. They are confronted with a real-world situation where employees at a factory are rewarded with time off for accumulating a certain number of accident-free days. The metric created a perverse incentive: rather than improving safety, it led to a reluctance to report on-the-job injuries. The misuse was only revealed after an employee who had been struck by a forklift tried to “walk off” a severely broken leg.
Miscalculated metrics can be just as bad. In the example of the 30-day-old maintenance job, the colonel’s staff had miscalculated the metric and insisted that I was wrong when I initially raised concerns. It wasn’t until they discovered their mistake that panic began to set in. Faced with an avoidable human error, the colonel had a choice to make: accept the blame for his staff’s mistake or let someone else take the fall. We all know how that turned out.
Finally, at the intersection of people with stubby pencils and assessment, you find misunderstood metrics. As Dan Miklovic explained in a 2014 article, misunderstood metrics manifest themselves in three principal ways:
- Too many metrics. People who don’t understand how assessments work often assign so many metrics to be tracked that the measuring detracts from the doing. Ultimately, it’s impossible to see the forest for the trees.
- Arbitrary metrics. For assessments to work as intended, the metrics must relate to the processes being measured and provide meaningful insight into performance. If a metric doesn’t pass the “so what” test, then it’s likely arbitrary and, therefore, meaningless.
- Static metrics. The world isn’t static, so metrics shouldn’t be, either. They need to be as dynamic as the world around them, adjusted as knowledge and understanding of the larger system grow.
Obsessing Over Metrics
Recently, Lt. Gen. Jody Daniels, the Chief of the Army Reserve, wrote about the dangerous obsession with metrics among her subordinate leaders. “Company commanders are reporting that they spend one or two nights a week briefing metrics to a higher headquarters,” she wrote. This obsession, she continued, is driven by a couple of factors. One – and this is a common problem in a number of professions – “metrics are the easy button.” They provide a quantitative means to gauge performance. Two, our information systems allow senior staffs to build dashboards with which to track those metrics. Neither offers any sense of context, so in an attempt to gain that context, senior staffs and leaders will levy a barrage of questions at hapless junior leaders.
Because senior staffs are focused on the metrics instead of context, junior leaders are left measuring instead of doing. The time they should spend on achieving the readiness desired by higher headquarters is lost in an obsessive-compulsive quest to measure everything. Instead of enabling subordinate leaders, senior staffs are impeding progress. “Leaders need to stop monitoring metrics every week,” Daniels wrote. “Instead, they need to… [understand] the readiness of their unit.”
This obsession with metrics is nothing new. In their 2015 monograph, “Lying to Ourselves,” Leonard Wong and Stephen Gerras wrote at length about how quickly that obsession led to dishonesty among military leaders. If you know that failing to achieve a specified metric might lead to your public humiliation, then you might be more apt to lie about it. One officer interviewed for the monograph provided a telling anecdote:
We needed to get [sexual harassment] training done and reported to higher headquarters, so we called the platoons and told them to gather the boys around the radio and we said, ‘Don’t touch girls.’ That was our quarterly training.
The obsession with metrics ultimately leads to what Wong and Gerras call ethical fading, where decisions are divorced from their inherent moral implications. Ethical fading allows people to “transform morally wrong behavior into socially acceptable conduct by dimming the glare and guilt of the ethical spotlight.” The signs of ethical fading are all too familiar in the language that disguises it – euphemisms such as “just checking the box” or “telling them what they want to hear.”
This is the point where good leaders look at themselves in the mirror and wonder aloud, “Am I doing that?” If you’re asking the question, then it’s a fair bet that you’re not. And if you’re not asking yourself the question, then you’re probably already well on your way to throwing someone under the bus.