Thursday, April 25, 2013

http://www.rallydev.com/community/agile/using-economics-prioritize-your-backlog?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+agilecommons%2Fcommonsblog+%28Agile+Blog%29

How do you prioritize your features?  Is it a gut-feel kind of thing?  Is it based on who's yelling the loudest?  Is it based on what drives your next big sale?  Do you do it collaboratively or alone?
In his book Principles of Product Development Flow, Don Reinertsen suggests using calculated economic models to decide what work to do first.  Specifically, he advocates an approach called Weighted Shortest-Job-First, or WSJF.  All other things being equal, the shortest job will deliver value soonest, so you should do that one first.  
But all other things are not equal - some projects reduce risk more than others, some enable other opportunities, and some are more important to your customers.   So you weight the scores - roughly, value over job size.
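The core ranking rule can be sketched in a few lines. This is just an illustration of the idea, not Reinertsen's full model - the items, values, and sizes below are invented:

```python
# A minimal sketch of weighted-shortest-job-first: score each item as
# value divided by job size, then work highest score first.
# All names and numbers here are made up for illustration.
jobs = [
    {"name": "big strategic feature", "value": 90, "size": 12},
    {"name": "small usability fix",   "value": 30, "size": 1},
    {"name": "medium integration",    "value": 60, "size": 6},
]

for job in jobs:
    job["wsjf"] = job["value"] / job["size"]

# Highest WSJF score first: the small-but-valuable item jumps the queue.
ranked = sorted(jobs, key=lambda j: j["wsjf"], reverse=True)
print([j["name"] for j in ranked])
# → ['small usability fix', 'medium integration', 'big strategic feature']
```

Note that the big strategic feature lands last even though it has the highest raw value - that's the whole point of the weighting.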
This approach is gaining popularity in part because the SAFe framework suggests you use it.  And it sounds great on paper.  But how does it work in the real world?
At Rally, we recently held a roadmap planning meeting with one of our product lines, and we tried a collaborative game to incorporate this economic technique into our planning session.  
We started by estimating relative job size.  Our group of 14 tech leads, product owners, and other leaders started with a bunch of stickies on a whiteboard representing value we wanted to deliver.  I asked the team to sort them by size - smallest items at the top of the board, largest at the bottom.  As they moved the items, they discussed each move.  People took turns and asked clarifying questions.
"Why do you think that one is so huge?" asked Greg, a product marketing director.  
"Because I have no idea what it means! It could be anything!" replied Ryan, a dev manager.   Together, they were able to clarify some details and get it sized, and we processed about 40 items in about 20 minutes.
As the movements slowed down, I stepped in and forced the stickies into 5 clusters.  I asked the team to correct my clustering if I had made mistakes, and they adjusted a few stickies.
We then translated them into rough cost.
I then asked the group about the smallest cluster.  "Do we think each one of these on its own could be completed by a team in significantly less than 3 months?"  They said yes.  "How about the next group?  Is that still less than 3 months?"  Yes.  "How about this next group?  Is it about 3 months, or is it more?"  I did some simple math to figure a rough loaded cost for a team for a month, and used that to put rough dollar amounts on each size - $100k, $150k, $250k, $750k, $1M+.
A couple of things about this:
  1. You don't have to be very accurate about this.  You just need a rough sense of relative size.
  2. The dollar cost went up steeply for the larger items.  I wanted a dollar amount on the bigger items that would prompt fear, and then conversation about how it could be broken down.  Beyond 3 months, you have no idea how big these items really are.  But, if they're valuable, you can break them down more.
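The rough math might look like this.  The $50k monthly loaded cost and the per-cluster month counts below are assumptions I've chosen to reproduce the dollar figures above, not numbers from the session:

```python
# Rough cost math: map each size cluster to an assumed duration in
# team-months, then multiply by an assumed loaded cost per team-month.
# Both sets of numbers are illustrative guesses, not session figures.
LOADED_COST_PER_TEAM_MONTH = 50_000  # assumption

# Durations jump steeply for the big clusters on purpose: beyond
# ~3 months you really don't know how big the work is.
cluster_months = {"XS": 2, "S": 3, "M": 5, "L": 15, "XL": 20}

cluster_cost = {
    label: months * LOADED_COST_PER_TEAM_MONTH
    for label, months in cluster_months.items()
}
# → {'XS': 100000, 'S': 150000, 'M': 250000, 'L': 750000, 'XL': 1000000}
```

The deliberately scary jump from $250k to $750k is what prompts the "can we break this down?" conversation.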
Value Scoring
Once we had our jobs roughly sized, we used a Google spreadsheet to value score them.  To strictly follow Reinertsen's approach, you'd actually calculate relative scores for user value, time value, and risk reduction/opportunity enablement value for each item.  But I had 14 people in the room and I figured across the group I could get similar results a quicker way.
I went with Johanna Rothman's suggestion to let people distribute value points across all the items any way they wanted, and then explain their rationale.  
Here's how it worked:
It's not a 'buy-a-feature' activity.  Rather, each person got 10,000 points (enough to feel rich) to distribute however they wanted across all the items.  After 5 minutes, each person talked through their rationale.
This was the important part.
If Tom and Alan are using completely different rationales to prioritize, and then we just use a formula to rank our items, and that formula happens to put Alan's favorites at the top of the list, Tom doesn't feel heard.  The reality is that each person has a very good reason for the scores they offered.  The goal of this meeting is to have a really rich conversation about value.  I want to go beyond Reinertsen's goal of getting our priorities right.
I want the whole team bought in to our decisions.  The conversation about the rationale helps us get there.
Then we sorted the list, highest value score first.  This was interesting - we saw a lot of obviously important items bubble to the top.  Some of them were very large.  We talked about how people felt about the results - which items they thought were missing or should have ranked higher.
The magical calculation
Then Michael, an internal coach, spoke up and suggested we try a weighted-shortest-job-first score.  To do this, we divided each item's total value score by its rough cost.  A number of items that were small but valuable jumped higher up in our list.  This led to another valuable conversation.
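Under the hood, the spreadsheet step is just a sum, a division, and a sort.  A minimal sketch - the item names, voters, point totals, and costs are all invented for illustration:

```python
# Sum everyone's value points per item, divide by the item's rough
# dollar cost, and rank by the resulting value-per-dollar score.
# All names and numbers here are hypothetical.
value_points = {
    "sso integration":      {"tom": 4000, "alan": 1000, "greg": 2000},
    "big platform rewrite": {"tom": 3000, "alan": 6000, "greg": 5000},
}
cost = {"sso integration": 100_000, "big platform rewrite": 1_000_000}

scores = {
    item: sum(votes.values()) / cost[item]
    for item, votes in value_points.items()
}

# Highest value-per-dollar first: the small item leapfrogs the big one,
# even though the big one collected twice as many raw points.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
# → ['sso integration', 'big platform rewrite']
```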
So we're done, right?
Does the WSJF scoring solve all prioritization problems?  Do you work on items in exactly that order?  Not exactly.  We did another activity to lay out our work into our roadmap, and this led to further conversations about the capabilities of different teams, dependencies with other groups, and the like.  It's not a perfect technique, but it was an incredibly valuable input for us.
For more on managing a portfolio, tracking and prioritizing work according to its value, and effectively aligning business strategy with development work, join our Portfolio webinar series.
