Comparing Opportunities in an Agile Portfolio

Agile approaches such as Scrum are excellent for successfully delivering great complex products, but how do we know which products to develop and when?

Take a look at the Agile Planning Onion, which I first saw many years ago in Mike Cohn’s Agile Estimating and Planning book:

In this article I want to explore how we can use agile techniques to help us make the comparisons required to make good portfolio decisions about what projects to start, stop or even pause regardless of what agile approach we prefer.

Agile Portfolio Management

There is a lot more to agile portfolio management than comparing projects. If you'd like a more in-depth view on the topic, I'd recommend reading “Agile Portfolio Management” by Jochen Krebs as a good starting point - however, in this article I'll focus primarily on comparing projects, on the assumption that an experienced team would be doing more than just comparing.

What Are We Comparing?

Whether we call what we’re developing projects, products, opportunities, features, epics or just big user stories really doesn’t matter to me. Here, I’ll refer to something we’ve not started as an opportunity and something that is already in progress as an active project.

We can compare metrics such as cost, benefit, duration, current chance of success, strategic alignment, project type and active RAG (Red, Amber, Green) status to help us make good decisions.

Who Is Doing the Comparing?

All the benefits associated with agile teams, such as self-organisation and continual improvement, can be gained by creating small and stable teams to analyse and compare our opportunities and active projects. We want these people to become really good at this kind of work; we might even want to call them a discovery team.

Hint: Jeff Patton wrote a great book called User Story Mapping, which includes really good practical advice, tools and techniques for building shared understanding about development opportunities.

How Are We Comparing?

The Discovery Team can use short, two-week or even one-week iterations to gather just enough information to be able to perform the quick comparisons that we need. They will rely on multiple rich interactions with people from around the organisation and maybe even beyond - this can include customers, users and other stakeholders - in order to build a good shared understanding of the opportunity.

Discovery Iteration Reviews

As part of the Discovery Iteration process, they can hold iteration reviews where stakeholders can be shown what progress has been made towards building that shared understanding. These reviews give the team the opportunity to do three things:

- Trash Opportunities – we have learned enough to know that we don’t want to carry on with the opportunity. Trashing things early that would otherwise burn resource for little or no benefit should be seen as a fantastic result.
- Present Estimates – we have built enough shared understanding to relatively estimate the opportunity’s metrics.
- Iterate Again – we haven’t built enough shared understanding to allow us to trash or estimate, so we iterate again.

It’s All Relative

I’m one of those people who finds it useful to relatively estimate user stories using story points.  It allows me to quickly compare one story against another in terms of overall size, which is exactly what we’re looking to do with each of our opportunity metrics.

And just like estimating user stories, we are not looking for precision; we are looking for a useful level of information such as “Opportunity A costs about twice as much as Opportunity B”.

We can play Planning Poker to generate fierce debate and relatively estimate each metric against other opportunities or projects that the Discovery Team understands really well, which is one of the advantages of a small, stable team.
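
As a minimal sketch of what this could look like in practice (in Python, with entirely hypothetical opportunity names and point values), the relative estimates agreed during Planning Poker might be captured and compared like this:

```python
# Relative cost estimates in points, agreed during Planning Poker.
# Opportunity names and point values are purely illustrative.
relative_cost = {
    "Opportunity A": 8,
    "Opportunity B": 4,
    "Reference project": 5,  # a past project the Discovery Team knows well
}

def compare(metric: dict, first: str, second: str) -> str:
    """Express one item's relative estimate as a rough multiple of another's."""
    ratio = metric[first] / metric[second]
    return f"{first} is roughly {ratio:.1f}x {second} on this metric"

print(compare(relative_cost, "Opportunity A", "Opportunity B"))
# -> Opportunity A is roughly 2.0x Opportunity B on this metric
```

The precision here is deliberately rough; the ratio only needs to be good enough to support a conversation, not a business case.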

Visualising the Comparisons

We should now have some really valuable information that can be used to make good portfolio decisions. It’s important to document this information to avoid it being lost or forgotten, so we might want to create a table like the one below.
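
To make this more concrete, here is a minimal sketch of one way such a record could be captured (again in Python, with hypothetical item names and illustrative point values rather than actual estimates):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PortfolioItem:
    """Relatively estimated metrics for one opportunity or active project."""
    name: str
    work_type: str             # "optional" or "mandatory"
    state: str                 # "opportunity" or "active project"
    cost: int                  # relative points
    benefit: int               # relative points
    duration: int              # relative points
    chance_of_success: int     # relative points
    strategy: str              # strategic theme it aligns to
    rag: Optional[str] = None  # RAG status, only set for active projects

# Illustrative entries only.
portfolio = [
    PortfolioItem("A", "optional", "active project", cost=3, benefit=8,
                  duration=2, chance_of_success=8, strategy="Growth",
                  rag="Green"),
    PortfolioItem("E", "optional", "opportunity", cost=2, benefit=13,
                  duration=2, chance_of_success=9, strategy="Retention"),
]

for item in portfolio:
    print(f"{item.name}: cost {item.cost}, benefit {item.benefit}, "
          f"duration {item.duration}, chance {item.chance_of_success}, "
          f"RAG {item.rag or 'n/a'}")
```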

Doing this is practical, but it’s not very visual. Krebs suggests the alternative of using a simple quadrant diagram, which I’ve adapted a little to include a few extra dimensions and to accommodate our relative estimates for each metric. So we can plot our data like so:

Here’s how to use the diagram:

- The shape identifies the type of opportunity or project. For our example, a circle represents something our organisation is choosing to do and a triangle represents a mandatory piece of work required to comply with a regulatory body.
- The size of the outer shape is the relative benefits that our organisation hopes to harvest by developing the solution.
- The size of the inner shape is the relative cost of developing the solution. If there is only one shape, the cost of the development is equal to or higher than the expected benefits.
- The letter in the centre identifies the opportunity or project and can be colour coded to represent a RAG status for active projects.
- A solid line is an active project and a dashed line is an opportunity.
- The colour of the shape represents the strategy that the opportunity or project is aligned to.
- All metrics are relatively represented by either the size of the shapes or the position along either axis.
- The above diagram is for demonstration purposes only and is not exactly to scale.
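
As a rough sketch of how such a quadrant chart could be drawn, the snippet below uses matplotlib and assumes, purely for illustration, relative duration on the horizontal axis and relative chance of success on the vertical axis; the item names and values are made up rather than taken from the diagram above:

```python
import matplotlib.pyplot as plt

# Each entry: (duration, chance of success, benefit, cost, marker, colour, line style).
# All values are relative points and are invented for illustration.
# Circles are optional work, triangles mandatory; solid outlines are active
# projects, dashed outlines are opportunities; colour stands in for strategy.
items = {
    "A": (2, 8, 8, 3, "o", "tab:blue", "solid"),
    "E": (2, 9, 13, 2, "o", "tab:green", "dashed"),
    "D": (3, 3, 5, 3, "^", "tab:orange", "solid"),
}

fig, ax = plt.subplots()
for name, (duration, chance, benefit, cost, marker, colour, style) in items.items():
    # Outer shape sized by relative benefit, inner shape by relative cost.
    ax.scatter(duration, chance, s=benefit * 120, marker=marker,
               facecolors="none", edgecolors=colour, linestyles=style)
    ax.scatter(duration, chance, s=cost * 120, marker=marker,
               facecolors="none", edgecolors=colour, linestyles=style)
    ax.annotate(name, (duration, chance), ha="center", va="center")

# Quadrant lines at the midpoints of the relative scales.
ax.axhline(5, color="grey", linewidth=0.5)
ax.axvline(5, color="grey", linewidth=0.5)
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_xlabel("Relative duration")
ax.set_ylabel("Relative chance of success")
plt.show()
```

Whatever tooling you use, the point is simply to get all of the relative estimates onto one picture so they can be compared at a glance.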

Drawing Conclusions

Using this diagram we can see, at a glance, that we have four active projects and four opportunities, made up of a combination of optional and mandatory work. The portfolio seems fairly well balanced across our strategic alignment and also across the four quadrants. What other useful conclusions can we draw?

- A is a relatively inexpensive piece of work that has a good chance of successfully delivering positive benefits, which is supported by its green RAG status.
- B is a hugely expensive, long-running piece of work with a very low chance of success, which is probably reflected in its red RAG status. Perhaps we should be looking at stopping it.
- C is a relatively inexpensive piece of work, but its single shape indicates that we don’t expect to harvest any significant benefit from doing it, and it is likely to run for a relatively long duration. Its RAG status is amber despite its high chance of success. We should definitely take a closer look at this.
- D is a fairly inexpensive, short, mandatory project that will be of benefit. However, its RAG status is red; perhaps this is linked to its low chance of success. We have no choice but to deliver this project, so perhaps we should investigate what can be done to increase the chance of success.
- E is inexpensive and short with a great chance of successfully delivering huge benefits. We should look to get this started soon, potentially by freeing up resources from Project B.
- F costs a bit more, which is reflected in its long duration, but it does have a decent chance of successfully delivering positive benefits. It’s worth a closer look but it may need to wait.
- G won’t cost that much and will yield modest benefits. Its lower chance of success is a concern, but at that cost it might be worth taking the risk.
- H is another mandatory opportunity, which is more expensive and has a very low chance of yielding the estimated benefits. However, it needs to be done, so perhaps we should look into why the chance of success is so low and try to address those concerns before starting it.

The combination of the relatively estimated metrics and the diagram gives us useful visual clues that we can use to aid our portfolio conversations and decision-making process. You can change, add or remove any of the metrics I’ve used as you feel appropriate for your own situation.

Please share your thoughts using the comments section below.