Agile Metrics You’re Not Supposed to See

Now that I’ve ticked off half the agile community, let’s be fair. I should offer something back. You guys can trash this, copy it, or ignore it, whatever works for you. Just don’t say I never gave you anything.

Let me count the ways metrics have been presented to agile teams.

  • Agile Metrics don’t exist – Metrics are for waterfall teams
  • Nothing to see here, please move along. – Metrics for agile teams are just like metrics for regular teams
  • They’re like regular metrics, only cuter – Look! We use colored pens!
  • What we really need is an agile checklist – We need to know if people are doing agile the way they’re supposed to be doing agile. If so, there won’t be other problems
  • It’s all in there – Existing agile artifacts give you everything you need

Let’s cut the crap. The only metrics I really care about are these: Is the team worth the money they are getting paid? Do things get delivered in a way that the customer is ecstatic about? Is the team happy and productive?

I can make the obvious statement that if you’re worried about something, you should track it — at least until you’re not worried about it any more. But I’m not going to do that.

Or I can give you another platitude: you can’t manage what you don’t measure, you can’t measure what you don’t define, and you can’t define something without a common vocabulary and grammar.

But I won’t do that either.

Instead of defects-per-hogshead, or man-hours-per-story-point, let’s get down to brass tacks. Let’s make it personal. What do I use to measure teams? Certainly when working with a dozen teams or more I have to have some kind of insight into what’s going on. Something I can share with others. Some way of telling if I’m doing a good job, what the real problems are.

Well, what is it?

I’m about to show you two metrics. One is mostly useless. The second is terribly vague and confusing. Put together, they do a pretty good job of nailing down how to keep an eye on the agility of several teams at once. And most folks don’t want to see them.

A spider graph of agile practices the team is using

Stuff you’re doing

This is a spider graph of stuff you’re doing — TDD, co-location, pair-programming. It’s all in there. Sure, the titles read like verbs “collaborating”, but in reality it’s a checklist of stuff you’re doing. The assumption is that if you do all the “collaborating” things, you must be collaborating, right?


a MAT graph of problems the team is facing

Stuff you’re worried about

I invented this. It’s called the MAT, or Markham Assessment Tool. It’s a graph of the things you are worried about. It’s based on four principles:

  • You can’t fix something that you don’t have a common language for
  • There are only so many goals in technology development: learn what needs doing, test what you did, etc. These over-arching goals are present no matter what the project
  • There are only so many obstacles to reaching your goals: management doesn’t support you, you don’t know how, you don’t have enough time, etc.
  • We combine a common set of goals with a common set of obstacles, measure that, and we’ve got something useful. A DSL for retrospectives, if you like
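Under those four principles, a MAT boils down to a grid: goals as rows, obstacles as columns, each cell flagged red when the team is worried about that obstacle for that goal. Here is a minimal sketch of that idea; the goal and obstacle names, the `stuck_goals` helper, and the threshold are all placeholders of my own, not part of the actual MAT.

```python
# Illustrative goal and obstacle names only -- not Markham's actual lists.
GOALS = ["learn what needs doing", "test what you did"]
OBSTACLES = ["no management support", "don't know how", "not enough time"]

def stuck_goals(mat, threshold=2):
    """Return goals whose row has `threshold` or more red (flagged) cells.

    `mat` maps each goal to the set of obstacles the team flagged for it.
    A row full of red cells suggests the team can't fix the problem on
    its own: the organization itself is in the way.
    """
    return [goal for goal, reds in mat.items() if len(reds) >= threshold]

# Example assessment: one goal is blocked on several fronts at once.
assessment = {
    "learn what needs doing": {"not enough time"},
    "test what you did": {"no management support", "don't know how",
                          "not enough time"},
}

print(stuck_goals(assessment))  # ['test what you did']
```

The payoff of the grid shape is exactly this kind of rollup: one row, one goal, and a quick count of how boxed-in the team is on it.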

I used to really hate checklists and graphs like the first graph. I found them mostly useless. In fact, if anything, they are counter-productive. The first time you show a “what stuff are you doing graph” to a manager, he’s going to ask “well why aren’t we doing all of this agile stuff, then? Let’s make that graph sing, guys!”


As I’ve said before, you can do a bunch of best practices and still suck — no matter how well you do them. Likewise, you can do just fine with none of these. Or just a few. It’s not a yes/no question.

So what’s the point of the first graph? The answer lies in the second graph.

The second graph tells you stuff the team is worried about. Do we have enough resources to test like we want to? Does management support our efforts to co-locate? Are we having honest discussions about what’s wrong and working on fixing it?

I used to love the second graph. After all, I invented the dang thing. I could walk into any team and within a few minutes be able to determine if they might need training, or working with middle-level management, or just some help understanding how the tools fit together. 30 days of interviews couldn’t tell you that much, and I could give the MAT in about 20 minutes. It was a great kick-starter to helping teams. Plus the information neatly rolled up into larger organizations, helping them track common problems and needs across many teams at once.

But it didn’t work.

Why didn’t it work? Because teams are always having problems, that’s why. Even good teams — and I might add especially good teams — are always working on a list of problems that are intractable. (The difference between good teams and bad teams is that good teams are always making excuses for bending the rules. Bad teams are always making excuses for not getting any work done). So it didn’t really tell you if a team was in trouble.

But there was a bigger problem with the MAT. A really big problem.

People didn’t want to see it. People didn’t want to know that the ten teams they are funding are all complaining about not knowing the tools. After all, we just spent ten million dollars on tools! They don’t want to hear that managers may say they let teams make decisions, but really they’re in there every day mucking around with stuff. People don’t want to hear that after we brought in two new testers the team is still worried sick about testing.

Worse still, after running the MAT a few dozen times on various teams, it became clear to me that when a team is facing a bunch of obstacles for a particular goal — when there are a lot of red cells in any one row — they may have no ability at all to make things better. They’re stuck. The organization itself has conspired to prevent them from succeeding. You can train your ass off, and do all kinds of pin-the-tail-on-the-storyboard games, but they’re not going anywhere. Think anybody wants to hear that?

Now, the cruelest blow.

The consultants didn’t want to hear the results of a MAT either. You think if you’ve been teaching TDD for three months and all of your students think testing sucks you’re going to want to know that? You think if you’ve just convinced management to send 100 folks to CSM class and the numbers show that it’s not a concern of the team that you’re going to be happy? Like to hear how much that off-shoring complex code delivery monstrosity is costing you?

Didn’t think so.

When you ask people about this they’ll say of course, we definitely want to know where there are problems with the existing systems. [Insert long speech about bunnies and the goodness of everything that the little folks do here. If you like, you can add some philosophy and political theory too.]

But not really. If you really press them, they’ll tell you that teams are always complaining. “There are a few of them in every bunch,” some will say.

The consultants are even more hilarious, or depressing, depending on how you look at it. Many times I’ve said something like “See, it looks like the team is worried about project management skills. They’re lacking knowledge. Maybe we should help them with training,” only to hear something like “You see, the team doesn’t really know what it needs or wants. We’re here as consultants to ‘help’ them by telling them what to do.”

Piece of cow crap.

Yes, teams are often well unaware of how much better they can do. And this, after all, is the job of the outsider — helping folks see and do things they wouldn’t otherwise try. Outside consultants have a key role. As long as their heads are screwed on straight.

But you can’t help somebody fix something that they don’t see as a problem. This sounds totally silly and easy, but I bet you a majority of agile teachers and outside consultants in general secretly do not believe this. It’s a tragedy.

Making this worse — much, much worse — is that the “I don’t want to see that” attitude is pervasive. It’s really everybody outside the team. The team is the true driver of value in an organization. They’re the guys you pay to make things happen. But everybody else? As well-meaning as most of these folks are (and I am one of them) and as much as they want to help, there’s really not much incentive for direct, honest communication about the realities of whatever we’ve been foisting on them. If I’m here on an H-1B visa and you ask me how that new whizbang server system you bought is doing, I’m probably going to tell you it’s doing fine. If I’m the guy who’s been here ten years and seen seven different management fads, I’m probably not going to tell you your latest fad sucks.

So we get no traction. And instead of finding out why we have no traction, what do we do? We start cheerleading. We cajole. We tie incentives to our new ideas. We train, we browbeat, we threaten, we get angry. Anything but what really needs doing.

So Mr. Smart-Ass, what’s your answer?

As for me? I look at what the team is doing (first graph), then I look at what the team is worried about (second graph), then I look at what matters to the check-writers — the project burn-down (or some other indicator of value/time performance).

This gets me in a good place to start a conversation. I take that conversation and try to learn as much as I can. I’m the idiot here, after all. I’m just the guy with hundreds of little tips and tricks about stuff, most of which are inapplicable. These are the people who know what the heck is going on. Together we figure out what to fix.

Over time, I look at three things.

Is the team changing their practices? Not just “doing more agile” but really trying different things? The enemies of performance in technology teams are poor communication and getting in a rut. So are they taking away stuff that didn’t work? Trying out new things?

Are the things the team is worried about changing? If you were worried about access to the user at the beginning of the project, did somebody do something to fix that? Changing up practices and worrying about different things tells me the team is communicating, which takes care of the other big problem.

Finally, is the team doing a good job for the company? If so, forget the other two things. If people love them and they’re rocking on into the free world, then leave them the hell alone. Don’t fix something that isn’t broken.

Continue your journey with this book on being a ScrumMaster.

If you've read this far and you're interested in Agile, you should take my No-frills Agile Tune-up Email Course, and follow me on Twitter.

5 thoughts on “Agile Metrics You’re Not Supposed to See”

  1. Bob MacNeal

    I’m with you on this.

    Agile Metrics are the primary food source for controllers, but empty calories for doers.

    Most agile teams measure speed under the delusion of measuring velocity, forgetting that velocity vectors have a direction component. Direction frequently makes or breaks a software project.

    One of my rants vis-a-vis measuring speed (e.g., typical burn down chart) is:

    It doesn’t matter how fast we deliver if we deliver junk.

    Thought provoking post. Thanks!

  2. Dave Nicolette

    I really like this post. It rings true for me. The only problem I have with your approach is that you seem to have a bug up your ass about “agile.” The problems you describe are endemic in the IT industry. Have been for as long as I can remember. Got nothing to do with “agile” as such. Otherwise, right on target.

  3. DanielBMarkham

    Thanks for the comment, Dave.
    Sorry about the bug — this is a continuation from a previous post about agile adoption.
    You are correct that this is prevalent in many situations, as I’ve used this technique on everything from Six Sigma to Real-Estate.
    For some reason I had “an agile bug up my ass” for the last week or so. I’m sure I’ll move on to something else in time. Thanks for stopping by.

