Comparing Opportunities in an Agile Portfolio

Agile approaches such as Scrum are excellent for delivering great, complex products, but how do we know which products to develop, and when?

Consider the Agile Planning Onion, which I first saw many years ago in Mike Cohn’s Agile Estimating and Planning book.

In this article, I want to explore how we can use agile techniques to make the comparisons required for good portfolio decisions about which projects to start, stop or even pause, regardless of which agile approach we prefer.

Agile Portfolio Management

There is a lot more to agile portfolio management than comparing projects.  If you'd like a more in-depth view of the topic, I'd recommend “Agile Portfolio Management” by Jochen Krebs as a good starting point. In this article, however, I'll focus primarily on comparing projects, with the assumption that an experienced team would be doing more than just comparing.

What Are We Comparing?

Whether we call what we’re developing projects, products, opportunities, features, epics or just big user stories really doesn’t matter to me. Here, I’ll refer to something we’ve not yet started as an opportunity and something that is already in progress as an active project.

We can compare metrics such as cost, benefit, duration, current chance of success, strategic alignment, project type and active RAG (Red, Amber, Green) status to help us make good decisions.

Who Is Doing the Comparing?

All the benefits associated with agile teams, such as self-organisation and continual improvement, can be gained by creating small and stable teams to analyse and compare our opportunities and active projects. We want these people to become really good at this kind of work; we might even want to call them a discovery team.

Hint: Jeff Patton wrote a great book called User Story Mapping, which includes really good practical advice, tools and techniques for building shared understanding about development opportunities.

How Are We Comparing?

The Discovery Team can use short iterations of one or two weeks to gather just enough information to perform the quick comparisons we need. They will rely on multiple rich interactions with people from around the organisation and maybe even beyond it, including customers, users and other stakeholders, in order to build a good shared understanding of each opportunity.

Discovery Iteration Reviews

As part of the Discovery Iteration process they can hold iteration reviews where stakeholders can be shown what progress has been made towards building that shared understanding. These reviews give the team the opportunity to do three things:

- Trash Opportunities – we have learned enough to know that we don’t want to carry on with the opportunity. Trashing things early that would otherwise burn resource for little or no benefit should be seen as a fantastic result.
- Present Estimates – we have built enough shared understanding to relatively estimate the opportunity’s metrics.
- Iterate Again – we haven’t built enough shared understanding to allow us to trash or estimate, so we iterate again.

It’s All Relative

I’m one of those people who finds it useful to relatively estimate user stories using story points.  It allows me to quickly compare one story against another in terms of overall size, which is exactly what we’re looking to do with each of our opportunity metrics.

And just like estimating user stories, we are not looking for precision; we are looking for a useful level of information, such as “Opportunity A costs about twice as much as Opportunity B”.

We can play Planning Poker to generate fierce debate and relatively estimate each metric against other opportunities or projects that The Discovery Team already understands really well – which is one of the advantages of a small, stable team.
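As a small illustration of why a coarse scale supports relative estimation, here is a sketch using a common Planning Poker deck (the deck values are widely used, but the snippet itself is purely illustrative):

```python
# A common Planning Poker deck: the gaps widen deliberately, because
# bigger things can only be compared roughly, never precisely.
POKER_DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(raw_estimate):
    """Snap a raw guess onto the nearest card, discarding false precision."""
    return min(POKER_DECK, key=lambda card: abs(card - raw_estimate))

print(nearest_card(6), nearest_card(35))  # 5 40
```

Snapping a debated value of 6 to the 5 card, or 35 to the 40 card, keeps the team arguing about relative size rather than meaningless decimal points.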

Visualising the Comparisons

We should now have some really valuable information that can be used to make good portfolio decisions. It’s important to document this information to avoid it being lost or forgotten, so we might want to capture it in a simple table of opportunities against their estimated metrics.

Doing this is practical, but it’s not very visual. Krebs suggests the alternative of a simple quadrant diagram, which I’ve adapted a little to include a few extra dimensions and to accommodate our relative estimates for each metric, and onto which we can plot our data.

Here’s how to use the diagram:

- The shape identifies the type of opportunity or project. In our example, a circle represents something our organisation is choosing to do and a triangle represents a mandatory piece of work required to comply with a regulatory body.
- The size of the outer shape is the relative benefit that our organisation hopes to harvest by developing the solution.
- The size of the inner shape is the relative cost of developing the solution. If there is only one shape, the cost of the development is equal to or higher than the expected benefits.
- The letter in the centre identifies the opportunity or project and can be colour coded to represent a RAG status for active projects.
- A solid line is an active project and a dashed line is an opportunity.
- The colour of the shape represents the strategy that the opportunity or project is aligned to.
- All metrics are represented relatively, by either the size of the shapes or the position along either axis. The diagram is for demonstration purposes only and is not exactly to scale.

Drawing Conclusions

Using this diagram, we can see at a glance that we have four active projects and four opportunities, in a combination of optional and mandatory work. The portfolio seems fairly well balanced across our strategic alignments and also across the four quadrants. What other useful conclusions can we draw?

- A is a relatively inexpensive piece of work that has a good chance of successfully delivering positive benefits, which is supported by its green RAG status.
- B is a hugely expensive, long-running piece of work with a very low chance of success, which is probably reflected in its red RAG status. Perhaps we should be looking at stopping it.
- C is a relatively inexpensive piece of work, but its single shape indicates that we don’t expect to harvest any significant benefit from doing it, and it is likely to run for a relatively long duration. Its RAG status is amber despite its high chance of success. We should definitely take a closer look at this.
- D is a fairly inexpensive, short, mandatory project that will be of benefit. However, its RAG status is red; perhaps this is linked to its low chance of success. We have no choice but to deliver this project, so perhaps we should investigate what can be done to increase its chance of success.
- E is inexpensive and short, with a great chance of successfully delivering huge benefits. We should look to get this started soon, potentially by freeing up resources from Project B.
- F costs a bit more, which is reflected in its long duration, but it has a decent chance of delivering positive benefits. It’s worth a closer look, but it may need to wait.
- G won’t cost that much and will yield modest benefits. Its lower chance of success is a concern, but at that cost it might be worth taking the risk.
- H is another mandatory opportunity, which is more expensive and has a very low chance of yielding the estimated benefits. However, it needs to be done, so perhaps we should look into why the chance of success is so low and try to address those concerns before starting it.

The combination of the relatively estimated metrics and the diagram gives us useful visual clues that we can use to aid our portfolio conversations and decision-making process.  You can change, add or remove any of the metrics I’ve used as appropriate for your own situation.
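To make the comparison concrete, here is a minimal sketch of how relatively estimated metrics might be captured and compared in code. All names, numbers and the ranking rule are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    cost: int        # relative estimates (story-point style), not absolute values
    benefit: int
    duration: int
    chance_of_success: int
    mandatory: bool = False

# Hypothetical portfolio: B is huge and risky, E is cheap with huge benefits,
# H is mandatory regulatory work.
portfolio = [
    Opportunity("B", cost=40, benefit=20, duration=21, chance_of_success=1),
    Opportunity("E", cost=3, benefit=34, duration=3, chance_of_success=13),
    Opportunity("H", cost=13, benefit=8, duration=8, chance_of_success=2, mandatory=True),
]

def ranked_optional(opps):
    """Rank optional work by relative benefit-to-cost; mandatory work is done regardless."""
    optional = [o for o in opps if not o.mandatory]
    return sorted(optional, key=lambda o: o.benefit / o.cost, reverse=True)

print([o.name for o in ranked_optional(portfolio)])  # ['E', 'B']
```

A single ratio like this is of course far too crude on its own; in practice the diagram and the conversation around it carry the other metrics, such as duration, chance of success and strategic alignment.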

Please share your thoughts using the comments section below.

Common Pitfalls of Agile Teams

Agile, and Scrum in particular, has become mainstream for most development teams. Despite so much knowledge in the field and an ever-increasing number of coaches, many teams still struggle with common pitfalls that can hinder their productivity.

The Development Team is Responsible

When talking about teams, it is not only the development portion of it that’s important but everyone else as well, including the product owner and Scrum Master.

The whole team is responsible for delivering an increment of potentially shippable product at the end of each sprint: The added value is identified and defined by the product owner before being materialized by the development team, with the guidance and support of the Scrum Master.

It is a shared responsibility, and each member of the team is responsible for the overall goal.


Misusing Retrospectives

Some teams might misuse retrospectives by transforming them into a blaming or complaining (and sometimes justification) event. A retrospective is the team’s opportunity to take a break and reflect on the events and outcomes of the finishing sprint. It is an opportunity to learn, inspect and adapt. Teams shouldn’t skip retrospectives, and should try to identify as many potential lessons as possible.

Agile is Fast

The name “agile” might falsely give the impression that agility means speed. Although most agile teams deliver faster than traditional teams, this is not what agile really means: Agile means responding quickly to change. There might be short sprints with shippable product increments at the end of each, as in Scrum, or more continuous prioritization and releases, as in Kanban.

The essence is to get earlier feedback and rapidly respond to customer requirements and business change. Teams shouldn’t be tempted to sacrifice quality or improvements to achieve the illusion of being fast. If they fall into this trap, they will actually get slower by incurring technical debt. So, it’s crucial that teams understand what agility really means and take care to focus on aspects other than speed.


What Counts as a Bug?

One of the most commonly argued questions in the software industry is: “What can be considered a bug?” Some teams might find it easier to close a failing user story and create a bug as a follow-up. Such a team is not one whose members understand agility.

The team should focus on delivery and ensure that each user story that is closed could potentially be delivered to the customer the next day. A bug is only reported when something is not working as expected after delivery; closing a user story after reporting a bug for it within the sprint is more about showing and reporting progress than building and delivering products.

The Big Picture

Teams are not working on user stories in sprints. Rather, teams use an incremental approach to deliver functionalities. This means that they need to see the big picture of what they are working on and how to maximize the value delivered to their stakeholders.

Again, it is necessary to highlight that the product owner is part of the team and that it is his or her duty to draw this picture, make it visible to the rest of the team and get their feedback on it. The big picture could be named a theme, an epic or a feature—call it what you wish, but please make sure that it is present.

These are some of the common misunderstandings and pitfalls agile teams could fall into. Did you face any of them? Have you experienced others? Feel free to share in the comments section.




The Product Roadmap as a Driver for Innovation and Engagement

A product roadmap links organization strategy to tactical actions. It shows us where we are going, why we are going there and what questions we are trying to answer, all within a specified timeframe. According to Mike Cohn, "a product is something (physical or not) created through a process that provides benefits to a market.” In this post, I’ll give you a quick and pragmatic view of how to build a product roadmap and what to avoid in the process.

Translate Business Goals Into Concrete Actions

You can use the Business Model Canvas, which captures your organization’s market and business model assumptions, to state and validate your ideas. With this tool in hand, you will have answers to some of the most important questions that drive the creation of the product roadmap:

- Who are the personas, markets and market segments that you are aiming to target?
- What is the proposed value, and what is the solution in terms of capabilities to deliver?
- What are the business value and key metrics we are going to address?
- When should we expect to deliver to our customers? How often?

Focus on Features and Business Value, Not Timelines

A product roadmap is not a project chart or Gantt chart, so there is no need to fill it with deliverables and tasks representing a detailed schedule and exact dates. That level of detail stifles creative thinking and can easily lead to a “do what I told you to do and follow the plan” mindset. As product owners, we know that planning everything in advance is not feasible, because our work depends on constant feedback, learning and adaptation. In other words, KISS (keep it simple, stupid!).

Get All Involved and Buy-In

A product roadmap cannot be built by just one person, not even the product owner. The inputs should come from engineering, marketing, operations, security, risk, sales and so on. All are invited to contribute and generate ideas. Without buy-in, it would be extremely challenging to execute and deliver, especially in medium to large companies.

A roadmap breaks strategic opportunities down, distilling a problem or purpose to a tactical level where it can finally be materialized. It acts as glue between strategy and tactics, clearly communicates what is being delivered and is updated often. Essentially, it is a live product communication to all stakeholders.

A Release Plan is Tactical, While a Roadmap is Strategic

A product roadmap is not a release plan: The former is linked to the strategy (longer term), while the latter is linked to the tactics (shorter term). A release plan can contain several releases, with each one divided into a few sprints, covering the next 3 to 6 months. A product roadmap, on the other hand, generally covers one year. A release plan consists of user stories and product backlog items, while the roadmap incorporates high-level features, goals and product capabilities. 

Imagine With a Different Perspective

What’s the real definition of innovation? A new idea, method or equipment--something fresh. But, is that all? Is it just about creativity and invention? The answer, in short, is no. In fact, it can also introduce something new to an existing product or method. Innovation creates value for people.

For instance, the first computer was an invention, the first laptop arguably either an invention or an innovation, and the first touchscreen computer an innovation. In technological terms, the real value of applying innovation is solving a problem for a group of people. If you want to attract a large number of people to your product, you will need an open mindset that embraces multiple different cultures.

So, what does this mean? A good product roadmap is one that gathers the best and most innovative ideas that fit the company strategy. For inspiration, consider how Slack invites people to collaborate on its product development platform.

Never Set in Stone

A product roadmap in agile is often revisited and updated, not only because it relies on user inputs and experimentation, but also because it can change as the market evolves. To start, you can open the product roadmap in three simple columns to gather insights from a group of people from different areas of expertise.

Pragmatic Roadmap Adds Benefits

Product roadmaps come with plenty of advantages. They:

- Engage people to collaborate and think creatively and differently.
- Connect to stakeholders and influence them to make the product successful.
- Align the expectations of shareholders, sponsors and high-level executives.
- Cultivate a melting pot of cultures, races, genders and people for a combined and common value of experience.
- Communicate and set a purpose for the upcoming months.
- Help to plan the team size required for the product, and assist coordination with other teams that might be dependencies.
- Support budget planning and forecasting, along with ROI, if possible.
- Drive prioritization efforts.

Do you agree with my assessment of the value of product roadmaps, or have any tips for ensuring their success? Let me know in the comments section below.

#NoEstimates: The Division of Scrum

Have you encountered the #noestimates hashtag yet? If so, you likely have strong feelings on the topic, whichever way you lean--it’s just about the most divisive topic you can find in the Scrum world these days.

Devotees on either side will be happy to tell you why they’re right and the other side is wrong: State your mild opinion on the topic in the wrong crowd and you could find yourself feeling like a Democrat at a Republican convention, or vice versa.

At its core, the argument is over the benefit of spending a team’s time on estimating stories. Those who support estimates will tell you it’s a valuable tool that provides businesses with fact-based projections on when a project might be complete. Opponents will say that the benefit of estimating--namely, being able to predict when a project might complete its work--is not worth the time devoted to creating that projection and that a team is better off reclaiming that time to develop software.

It's important to recognize that, when it comes to the issue of estimates, the Scrum framework is Switzerland. It takes no official position on estimating and it’s considered an optional practice, though one widely used by Scrum teams. If you take a Certified Scrum Master course, you are likely to find a module devoted to the topic and time spent practicing Planning Poker. The entire concept of User Stories comes from the XP framework, not Scrum.

If you trace the history of the #noestimates hashtag, you will undoubtedly come across the name of Mr. Woody Zuill. Woody came through my town last year and spoke at our local Scrum Alliance sponsored meetup group to discuss the topic.

Having considered myself to be smack dab in the Mike Cohn camp on estimating at the time, I was skeptical of this whole #noestimates concept. My experience when researching it involved mostly argumentative individuals intent on proving those of us who were still estimating to be relics of a time gone by. My expectation then was that Woody would take a similar stance and spend his time trashing the estimating crowd.

As it turned out, I could not have been more wrong.

Woody Zuill is one of the kindest, most humble and intelligent speakers I’ve ever heard. His demeanor will instantly disarm even the harshest critic; the impression he gives is of someone who has vast experience in software development and has experimented with numerous ways in which to optimize the agile process with his teams. Plus, all his illustrations are drawn by his daughter. How can you not respect that?

After hearing Woody out on the topic, I realized that his stance on the issue was incredibly different than I’d previously assumed. Woody teaches that, to him, #noestimates means using estimates only when there is clear value in doing so. He suggests that many teams spend far too much time estimating because that is what they have always done or have been taught; they haven’t stopped to ask if they are really getting a benefit from the time they spend estimating.

I found myself in complete agreement with him. In fact, I would go even further and say that any practice we find ourselves repeating with our teams should be examined from time to time to make sure we are getting the intended value out of it. How often do we have teams blindly recite the three questions for the Daily Scrum without making sure they understand why we are asking those questions, and what the point of the Daily Scrum is in the first place?

Since being exposed to #noestimates, I have found that when I am coaching a team I now make a regular practice of stating at the outset of every meeting our reason for attending that meeting and what we hope to get out of it. If my teams can better self-organize at their Daily Scrums by abandoning the standard three questions in favor of some other communication to organize their daily work, more power to them! After all, that is the point.

I would invite you then to attempt to see Scrum’s practices with new eyes. We all know what to do in our ceremonies but do we all know why we do them? Could we accomplish our goals for these meetings more fully by experimenting with alternatives? If our goal is to empower our teams to become the highest performing incarnation of themselves as possible, why wouldn’t we seek out new and unique ways to encourage this?

While I might never call myself a #noestimates supporter, I do wholeheartedly endorse the spirit behind the movement as Woody presented it. Examine why we do what we do and ask yourself if there might be a better way.

What do you think about the #noestimates movement? Do you find value in estimating with your teams? What practices have you taken a fresh look at in an attempt to enhance the benefit to your teams? Let me know in the comments section below.

Scrum Masters: Don’t Tell Development Teams How to Do Tech Work

I recently found a type of paper slip in my post office box that I had not seen before. Thinking it was for a package delivery, I handed it to the lady at the counter, who grimaced.

She and another employee explained with some frustration that it was used as a second-notice slip, but that sometimes the system printed them out even after the package had been handed over (the other employee did, in fact, return empty-handed after looking for whatever package I was supposed to receive).

What surprised me the most was their assertion that this process must have been thought up by someone who didn’t do the actual work of customer service.

The Scrum Guide states that “no one (not even the Scrum Master) tells the development team how to turn product backlog into increments of potentially releasable functionality” (emphasis added). There is a good reason for this: The developers are the ones closest to the code, and are usually in a better position than the Scrum Master to understand what low-level decisions should be made and why.

A Scrum Master might come from a technical background in the same area that the team is working in, and might in fact be more experienced than the developers, but that does not give them the right to guide the development work. The Scrum Master’s job is to guide the Scrum process, not the implementation of features.

This can be a challenge when the Scrum Master is in a hybrid role that requires being at least somewhat hands-on technically. Scrum Masters who are asked to serve as team leads must find a balance between letting the team be self-organizing and actively taking part in producing quality code.

Your best allies in that situation are technical leads or other senior developers who can give “teeth” to any rules you propose, as well as let you know which of the rules that you’re proposing make no sense. Getting them on your side will improve your technical credibility with the rest of the development team, whereas trying to enforce technical guidelines if you’re not involved in doing the actual work can earn you the development team’s disrespect and even disdain.

For example, I stopped being a full-time developer a couple of years ago when I moved into my current project manager/Scrum Master role. During that time, we received a ticket in the form of an unhelpful generic error message, and I somewhat forcefully told the developer at the time that we should not be using generic error messages.

But, it wasn’t until I started getting back into code that I realized just how out-of-touch I must have sounded. Ideally, yes, all your error messages should be clear, helpful and actionable. Realistically, however, the sheer number of possible failure points sometimes makes this impractical.

Forcing developers to work at the level of detail that you think they should can easily turn into a distraction from the more important tasks they should be doing. Just like whoever came up with the idea for that post office paper slip probably had good intentions, but might not have understood how it would impact the employees’ work.

There is a caveat to this. The Scrum Guide also says that the Scrum Master is allowed to participate in “executing the work of the sprint backlog.” Doesn’t that mean they are allowed to lead technically as well? My interpretation of that is that the Scrum Master can lead technically as a developer, and that their leadership role is limited to their technical merits, just like it would be for any other developer on the team.

Their technical leadership does not come from the fact that they are wearing the Scrum Master hat. If they are working in a project manager-type role and just feel like calling the team out for doing X or Y because they think they know the best way to do it, then they should channel their concerns through other developers instead of trying to get the team to accept their technical edicts directly.

As a Scrum Master, even one in a hybrid lead role, try not to dictate how software should be built. Do communicate the “who, what, when, where and why” so the team doesn’t feel like they are developing in a vacuum. But, leave the “how” to the subject matter experts, which, in this context, usually does not include you. This will help you build a strong, mutually respectful relationship with the development team, and allow all of you to do your best work without encroaching on one another’s responsibilities.

Have you had any positive or negative experiences in trying to be a technical coach while also being a Scrum Master at the same time? I’d love to read about them in the comments section.

To Ask or Not to Ask

Starting a New Job

Many years ago, when I started a new job, I was excited because, well, I am not exactly known for being tolerant of boredom. New job, new domain and some new tech: heaven for a curious person such as myself.

My induction task looked fairly innocuous: Refactor some part of the code and add a few features that customers had been asking for. Doable, you’d think. Well, yes, at first glance.

The code base was larger than any I had seen before. Someone who was no longer with the company had done a refactor earlier, but the need for further work was obvious. The code was still unnecessarily complicated, and getting to grips with it wasn’t easy.

I went about that task in the most ineffective manner possible: I tried to figure it all out by myself. While I did ask questions, I asked nowhere near enough.

Guiding New Hires

A couple of years later, by then well-versed in the code base and the functionality of the application, I volunteered to help new hires get up to speed. It was fun, and I liked it a lot. My heart always jumps a little when I see the lights go on behind someone’s eyes.

The arrangement was simple: We’d discuss what the part of application they were working on was intended to do, any problems that existed with it and any features that needed to be added. Then, I’d show them around the relevant code and give them a few “search terms” to find other related code. After that, I was available for any questions they might have.

Though not ideal, it worked well--at least, it did at first. It fell apart when a new hire didn’t ask enough questions. We had some of those, and the consequences weren’t pretty: oversights, bugs and inefficiencies in places where performance was crucial. Many were caught at review time. More were caught by the QA or acceptance testers. Some were not caught until they reached customers. All of them required extra work to rectify what had fallen between the cracks.


What stopped me from asking questions? What allowed some new hires to ask questions, some of them ad nauseam? What stopped other new hires asking enough questions?

There are plenty of reasons why people do not ask questions. Over-confidence is one. Not wanting to bother busy people is another. Fear, as the opposite of trust, is what interests me.

Though I didn’t think of it at the time, in hindsight, fear – or a lack of trust – was the main reason for my lack of questions. While I haven’t explicitly asked any of the new hires about this issue, I do remember many conversations we had around the topic, including some very illuminating responses that all point to that same reason.

So, what was driving that lack of trust?

We had just started our jobs and were still feeling out our new surroundings. Every interaction we had would have influenced our trust levels, as would every interaction we observed. Still, trust doesn’t erode that quickly.

Whenever you go into a new situation with new people, you are thrown back onto yourself. So, unless you actively discourage questions, how many questions people ask in the first few weeks is driven by their levels of self-trust.

I do mean self-trust and not self-confidence. My self-confidence has always been pretty high. I know what I can and cannot do and I tend to think I can do almost anything given a bit of time and practice.

My self-trust, however, has until recently been extremely low, if not non-existent.

When you look at BRAVING (Brené Brown’s acronym for the elements of trust), it’s the N – non-judgment – that’s the kicker in these situations. The more judgmental your self-talk is, the less likely you are to open yourself up to judgments from other people. Asking questions becomes fraught with danger, the core of which is: “They will think I’m incompetent.”


When you start a new job, or join a group of people:

- Realize that the judgments you fear from others are actually your own judgments about yourself, formed as a result of your self-talk.
- Practice non-judgment (towards yourself) and ask the questions that enter your head. Of course, you should try yourself first, but don’t remain stuck by not asking.
- Show what you’ve tried when asking your question. It will go a long way toward other people gladly helping you out when they see that you don’t just cry for help at the first obstacle.
- Realize that asking questions shows your reliability. This may sound strange, but it shows that you are aware of your competencies as well as your limitations. People can rely on you to ask for help when you need it.

When someone new starts in your team:

- Realize there are several reasons for not asking questions: over-confidence, not wanting to bother busy people or lack of self-trust, to name a few.
- Realize that no matter how often you assure people that it is okay to ask questions, it does not guarantee they will do so.
- Respond to questions with patience and non-judgment. Judgment and lack of generosity are the quickest ways to kill trust. Showing impatience or getting irritated by questions is just as much a judgment as an explicit statement.
- Proactively engage with new hires. Show interest and ask open questions daily, if not multiple times a day. Realize that “how are you” and “how are you getting on” may start with “how,” but are actually closed questions because they only allow for a very short answer: “Good.”

Have you or a colleague ever struggled with a fear of asking questions? Let me know in the comments below.

Be brave and braving!

How IT Service Teams Can Be Agile

Agile software development approaches were originally created to address software development challenges. As enterprises that don’t sell software as their main product adopt agile, they find the need to adjust their approach in order to apply agile across their IT organization and entire enterprise. 

In previous posts, I explored how you can use agile to work with COTS and business intelligence, both of which are common activities that are different from (but not entirely unlike) software development activities frequently found in IT organizations. In this post, I’d like to explore how you can perform another activity, IT service, in an agile fashion.

Characteristics of IT Service Work

IT service work includes areas such as service desk, release management and configuration management that are related to providing information technology services in an organization. There are several other areas that could be included in this description, and I could spark several semantic arguments about what is included in the IT service work category.

To keep things simple, what I’m talking about in this post are those teams in an IT organization that tend to have work with the following characteristics (as opposed to teams that work on software development projects):

- Work items usually take the form of a request or a ticket.
- These items show up at unpredictable times.
- Some items are extremely time-sensitive, especially if they deal with people not being able to get their work done, while other items represent changes that are not quite as time-sensitive.
- Each item is independent of every other item. This means that as soon as you finish the requested action, you can deliver the results to the requester without waiting to finish other, unrelated items.
- Items generally have less uncertainty surrounding them than items on a software development backlog might have. The only exception is when an item represents a bug that requires investigation to identify the root cause.

Work Flows Through an IT Service Team

You can divide agile frameworks into two groups: those based on timeboxed iterations, such as Scrum, and those based on flow, such as Kanban.

Each type is better suited to different situations. Your context helps you determine which type works better as the foundation for your team’s methodology.

Flow frameworks are better suited to IT service work than timeboxed iterations. There’s a variety of reasons for that:

- Flow approaches provide greater flexibility to deal with priorities that change frequently. You can change your priority every time you start working on a new item.
- Flow approaches are better suited for work items that show up randomly, are independent of each other and are generally not part of a bigger whole.
- Flow approaches are better suited for work items that you can deliver as soon as they are done rather than for those that need to be grouped together with other changes.

How Work Flows in an IT Service Team

To describe how to apply a flow approach to an IT service team, let's look at a team that maintains a not-for-profit professional organization’s website, including membership administration, event registration and community groups.

Work flows into this team when someone submits a question, request or complaint via a “contact us” form on the organization’s website. The team can’t predict when these questions will come in, and the items vary in their urgency. The team also has a few ongoing efforts to introduce changes to the website driven by new programs or events.
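
To make the walk-through easier to follow, here is a minimal sketch of how one of these work items might be modelled. The field names, work types and urgency scale are illustrative assumptions, not anything the team actually uses:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkItem:
    """One request, question or complaint flowing across the team's board."""
    title: str
    work_type: str               # e.g. "membership", "events", "content"
    urgency: int                 # 1 = most urgent
    section: str = "new"         # board section the card currently sits in
    started: Optional[date] = None
    completed: Optional[date] = None
    blocked_days: int = 0

# A freshly submitted request lands in the "new" section with no dates yet.
item = WorkItem("Member cannot log in", work_type="membership", urgency=1)
print(item.section)  # -> new
```

Recording start and completed dates on each item is what later makes simple lead-time and throughput metrics possible.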

The team has five members, each with their own area of responsibility. A few of the members have the ability to back up the others when necessary or deal with the more involved support situations.

The team is distributed, so they use Trello to keep track of the items they must work on and the items that are in progress. Because the items do not all follow a consistent workflow, they keep their board straightforward by dividing items into the following sections:




New

Newly identified items are placed here when either the team identifies work they need to do or someone submits a request via a form on the website.

On Deck

Items that have been triaged and have sufficient information to proceed.

In Process

Items that a team member has picked up to work on.

Waiting for Verification

An item where a team member has completed the work and has requested verification that the work was completed appropriately.


Verified

The requestor or another team member has verified that the work was done satisfactorily.


Done

The change was implemented on the website, or the request was delivered to the requestor.

When someone submits a request via a form on the organization’s website, the form sends an email notification, which Trello converts to a work item in the “new” section.

The team members take turns triaging the items in the “new” column. As part of triage, they determine the urgency of each item and classify the type of work it is (i.e., does it relate to membership, event registration or website content?). The team member doing the triage sets the appropriate label to indicate the type of work, then moves the item into the “on deck” section, placing it in order based on its priority.
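
As a sketch, the triage step might look something like this; the dictionary fields and the urgency scale (1 = most urgent) are assumptions for illustration:

```python
# Illustrative sketch of triage: label the item, move it to "on deck",
# and keep that queue ordered by urgency (1 = most urgent). Python's
# sort is stable, so equally urgent items keep their arrival order.
def triage(item, on_deck):
    item["section"] = "on deck"
    on_deck.append(item)
    on_deck.sort(key=lambda i: i["urgency"])

on_deck = [{"title": "Update event page", "type": "events",
            "urgency": 3, "section": "on deck"}]
new_item = {"title": "Member cannot log in", "type": "membership",
            "urgency": 1, "section": "new"}
triage(new_item, on_deck)
print([i["title"] for i in on_deck])
# -> ['Member cannot log in', 'Update event page']
```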

When a team member moves an item they were working on from “in process” to “waiting for verification,” they check the “on deck” section for a new work item to start on. The team has agreed on a set of rules, based on priority and the type of work each item represents, that everyone uses to select the next item.
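
One possible pull rule could be sketched like this; the specialty-matching logic is a hypothetical example, since the real rules are whatever the team agrees on:

```python
# Hypothetical pull rule: take the highest-priority item in "on deck"
# that matches your specialty, falling back to the front of the queue
# (which is already priority-ordered) if nothing matches.
def next_item(on_deck, specialty):
    for item in on_deck:
        if item["type"] == specialty:
            return item
    return on_deck[0] if on_deck else None

queue = [{"title": "Fix broken event link", "type": "events"},
         {"title": "Reset member password", "type": "membership"}]
print(next_item(queue, "membership")["title"])  # -> Reset member password
print(next_item(queue, "content")["title"])     # -> Fix broken event link
```

Keeping the rule explicit like this means anyone on the team can pull work without waiting for someone to assign it.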

Experiencing the Best of Both Approaches

Even though the team uses a flow approach, they’ve adopted some practices used by teams working in an iterative fashion, mainly because they found value in adopting those practices, not because anyone required them to.

A regular planning cadence

Once every other week, the team gets together to look at the “on deck” section to determine if the items are in the proper order, whether there are any new initiatives that they need to account for and whether there are any items in “on deck” that could be removed altogether from their board. 

This discussion is like a sprint planning meeting with the exception that the resulting queue - in this case the “on deck” section - is not set in stone. Essentially, this discussion serves to reorder the “on deck” items based on the team’s current understanding of the priorities.

Daily stand-ups at the board

The team gets together once per day to briefly discuss what everyone is doing that day and to determine if there are any in process items that have hit a roadblock. During these stand-ups, the team members discuss:

- What the team needs to do to get things moved across the board.
- Based on what is known right now, what item the team should start working on next whenever there is room.

Retrospectives around the board

Once every other week, the team also gets together to discuss how to improve their process. They hold this retrospective while looking at the board, because it provides insight into how the team is doing. The questions the team asks during these retrospectives include:

- Is there any hidden work that’s not represented on the board?
- Do we need to add a queue section?
- Are there any impediments? How can we remove them?
- Are we tracking things at the right level?

The team arrived at their current process through these retrospectives. They certainly didn’t start their flow approach with the process described here. Rather, it evolved as the team gained experience and held retrospectives to gradually improve.

Measure and learn

To aid their retrospectives, the team tracks a small set of metrics that also helps them identify obstacles in their process. These metrics include:

- Throughput: the number of items completed this week
- Lead time for each item (completed date minus start date)
- Average lead time for this week
- Items completed with > 0 blocked days
- Total blocked days
- A list of places where items were blocked

How Can You Get Started?

Would you like to begin using a flow approach in your IT service team? The best way to get started is to map your current process using a board similar to what I described previously.  Then, place all the work your team is currently working on in the appropriate section and start working together as a team to gradually improve your process using stand-ups, metrics and retrospectives.
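
The metrics the team tracks are straightforward to compute from the dates recorded on each card. A minimal sketch, assuming each completed item records a start date, a completed date and a count of days it spent blocked:

```python
from datetime import date

# Sketch of the weekly metric set: throughput, lead times and blocked days.
def weekly_metrics(completed_items):
    lead_times = [(i["completed"] - i["started"]).days for i in completed_items]
    return {
        "throughput": len(completed_items),
        "average_lead_time": sum(lead_times) / len(lead_times) if lead_times else 0,
        "items_blocked": sum(1 for i in completed_items if i["blocked_days"] > 0),
        "total_blocked_days": sum(i["blocked_days"] for i in completed_items),
    }

week = [{"started": date(2019, 3, 4), "completed": date(2019, 3, 6), "blocked_days": 0},
        {"started": date(2019, 3, 4), "completed": date(2019, 3, 8), "blocked_days": 1}]
print(weekly_metrics(week))
# -> {'throughput': 2, 'average_lead_time': 3.0, 'items_blocked': 1, 'total_blocked_days': 1}
```

A few lines like this, run weekly, are usually enough to spot where items stall without investing in heavyweight tooling.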

Do you have any experience applying flow techniques in an IT service team? Share your experiences in the comments below.

What the V Model Can Teach Agilists

Last month, a client and I were sitting in a nondescript conference room in downtown New York.

My client was relatively new to software development, and I was explaining that, in traditional software development life cycles (SDLCs), it is absolutely critical to get requirements right. Why? Because if requirement errors and omissions are not caught until very late in the process, they can cost a lot of money and rework down the road.

To illustrate my point, I showed him a typical V Model illustration (focusing on requirements elements):

The V Model is certainly not new to the world of software development. It can be viewed as a simple evolution of a typical waterfall model that has the testing and verification phases bent up to form the shape of the letter V. When this is done, there is a correspondence between the phases on the left (that elaborate the product) and the phases on the right (that validate those earlier phases).

The conversation ended with my client saying, “All of this is good for you to know, but if you’re working in an agile environment, you won’t have to worry about it.” But, on the long train ride back to Washington, it occurred to me that this isn’t exactly the case.

Indeed, the V Model has a few things to show us agilists.

It Highlights the Importance of Testing

The most obvious characteristic of the V Model is that it strongly favors testing and validation of all the work done earlier in the project. Yes, all the work, whether you are talking about unit tests validating a module’s code or project acceptance validating that the project delivered the business requirements.

But what is our real day-to-day consideration of (non-unit) testing? At most, it will usually be a quick “let’s run it by a couple of (actual) users.” More often, we have testers (or even just the product owner) poking around at it and making sure it works. And that makes sense, because our next iteration is coming out in two weeks anyway, so we can effectively just have our users test the product in production. Right?

No! Not right! Remember that we are not just tasked with delivering software frequently, but also with “deliver[ing] working software frequently.” Too often, we can take an expedient view of getting the product out the door, and this can have a massive impact on our customers’ impressions of the product and indeed on our businesses.

It Encourages Us to Question Test Coverage

Here’s the deal: There are two different definitions of “working,” and we need to hit them both.

What do we usually consider it to mean? First, that it’s bug-free, or at least has a low defect count. I’m certainly not going to challenge our traditional concept of quality, but this definition is less important than the second criterion, namely:

Does the product do what the customers want it to do?

Most of our agile teams don’t have any process in place to verify that the next release will pass this test. And that’s why we are frequently surprised when a particular version isn’t well received. We met our definition of done and acceptance criteria, after all.

Make no mistake: rapid iteration is not a solution to mediocre quality.

It Shows Us Why We Do Agile

Have I talked enough about quality? Good! Let’s shift gears to focus on the V Model as the most effective marketing tool for agile development ever created.

The next time a customer or CTO asks you why they should switch to agile methods, pull out a copy of the trusty V Model, and you can’t go wrong. Simply follow this three-step process:

1. Point out the lateral distance between “business requirements” and “project acceptance.” Tell them that represents time. Not project time, but real-world time in which their customers’ needs are changing and their competitors are working to beat them.
2. Mention that, unfortunately, the project’s scope will be frozen during this period, unless they want to write a new statement of work (SOW).
3. Point out the distance between “functional requirements” and “user acceptance testing.” Tell them this represents the time gap between their providing requirements and their seeing the first version of the system. Business-technology collaboration isn’t written into waterfall processes during this time (although it does kind of show up in non-agile iterative methods).

As you can see, the V Model has a lot to teach us – or at least to remind us. Good product development practices can often be found in the last place we’d expect to find them, and it’s worth taking a fresh look from time to time.

How about you? Do you see any other leverage points for the V Model in agile? Let me know in the comments below.