Measuring the Progress of Agile

The eleventh State of Agile survey has just been published by VersionOne. These reports are invaluable in helping agile practitioners understand where their practices, problems and challenges fit within the wider world.

As with all VersionOne’s previous reports, the eleventh survey paints a picture of the onward march of agile. Its progress is rarely in a straight line, however, and this latest survey has revealed a very interesting contradiction.

Increased Focus on Business Value?

When respondents were asked how the success of agile initiatives was measured, the second-ranked answer after “on-time delivery” was “business value.” In the previous report, “business value” was the fourth-ranked answer. This time, some 46 percent of respondents chose it ahead of “customer/user satisfaction” (44 percent), “product quality” (42 percent) and “product scope” (40 percent). No other answer was given by more than a quarter of respondents.

When, in the same survey, participants were asked how the success of agile projects (as opposed to initiatives) is being measured, the percentage who answered “business value” fell by half. The implication would seem to be that the day-to-day metrics collected about the work of agile teams are not primarily focused on business value. In fact, eleven other measurements scored higher than business value. Those measurements were (in ranking order):

- Velocity (67 percent)
- Sprint burndown (51 percent)
- Release burndown (38 percent)
- Planned vs. actual stories per iteration (37 percent)
- Burn-up chart (34 percent)
- Work in Progress (32 percent)
- Defects into production (30 percent)
- Customer/user satisfaction (28 percent)
- Planned vs. actual release dates (26 percent)
- Cycle time (23 percent)
- Defects over time (23 percent)

Old Muscle Memory

The list of day-to-day metrics being collected shows us the grip that the old muscle memory of waterfall still has on the agile community. The mantra of traditional project management is “Plan the Work, Work the Plan.” It assumes predictability and, consequently, the metrics that are believed to be important are those which show whether there is deviation from the masterplan. Any discrepancies are considered likely to be due to a lack of efficiency.

Velocity, for example, which is right at the top of the list above, tells us nothing about progress or success. Rather, it is a metric which is useful to the development team because it allows them to judge how much work they can pull into a sprint.

Nobody else -- with the possible exception of the product owner, who can use target velocity to estimate release dates -- should be interested in velocity. When managers try to drive up a team’s velocity, it almost always causes the defect rate to spike and the delivery of value to the customer to slow down.
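To make that team-facing use concrete, here is a minimal, hypothetical sketch (the story-point figures are invented) of how a team might use its recent velocity as a sanity check on how much work to pull into the next sprint:

```python
# Hypothetical sprint history: story points completed in each recent sprint.
recent_velocity = [21, 18, 24, 19, 22]

# A simple yardstick: the rolling average of the last few sprints.
average_velocity = sum(recent_velocity) / len(recent_velocity)

# Candidate stories the team is considering for the next sprint (points each).
candidate_stories = [8, 5, 5, 3, 2, 1]

# A crude greedy fill -- the team's judgment, not this arithmetic, makes the final call.
planned = []
for points in candidate_stories:
    if sum(planned) + points <= average_velocity:
        planned.append(points)

print(f"Average velocity: {average_velocity:.1f} points")
print(f"Pulling in {len(planned)} stories totaling {sum(planned)} points")
```

Note that nothing in this sketch says anything about the value delivered; it is purely a planning aid for the team.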

The next four measurements in the list, and that of “planned vs. actual release dates” which comes further down, are all about whether the team is working to plan. Again, these can be very useful to the team itself so that it can decide whether its own plan needs adjustment to achieve a sprint goal or a release goal. Used by anyone else, they just offer opportunities for micromanagement. 

“Work in progress” and “cycle time” are useful for measuring the smoothness (or lack thereof) of the development and delivery pipeline, while the “defects into production” and “defects over time” can tell us something about the quality of the product.

Progress is in the Product

In short, these can all be useful measurements, but apart from measuring customer/user satisfaction, they are at best secondary when it comes to measuring progress. The value -- and therefore the success of the project -- is in the product and nowhere else. The most crucial factors to measure are the product’s delivery and its impact on the world.

If management, stakeholders or anyone else outside the agile team wants to know the progress being made then all they need do is show up to the sprint review where (in Scrum, at least) they will get to see the latest increment and can suggest what might be done next. Everything else should be left to the team itself.

Please feel free to use the comments section below to tell me if you are surprised (or unsurprised) by the survey’s findings or if there are any additional key factors you use to measure your team’s progress.

 

The Fear and Vulnerability Retrospective

What makes your teammates lose sleep at night?

The best way to find out is simple, but not always easy: ask them!

In this post, I’ll be presenting a technique -- suitable for a retrospective or a liftoff -- that opens up channels of communication about the topics of fear and vulnerability.

Set the Stage 

When I run this exercise, I begin with some conceptual groundwork.

I first draw a diagram of three concentric circles and explain that each one of us steps in and out of those zones as we interact as a team:

- The Comfort Zone - Ah, an easy and nice place to be. Nothing here worth losing sleep about, and little to no anxiety.
- The Stretch Zone - AKA No Pain, No Gain. This is a place of risk, and it can be downright uncomfortable for many. However, this zone is where growth usually happens.
- The Panic Zone - The land of freeze, flee, or fight. It’s best to not go here, since in this zone we actually tend to get dumber. Studies have shown that when we are in a stressed or threatened state (i.e., when the amygdala is on high alert), there is literally less oxygen and glucose for the neocortex’s cognitive functions.

I then explain that moving outside of one’s comfort zone and into the stretch zone, where perceptions expand and transformations take place, can be scary. Fear is often about risk; more specifically, the risk of losing something. Maybe we get defensive or aggressive when faced with that risk. Maybe our stomachs get queasy. We all have our own personal early warning systems.

I share a quote from Abraham Maslow: “In any given moment, we have two options: to step forward into growth or to step back into safety.”

Choose a similar mantra or quote that makes sense in your team’s context.

I then explain that in order to continuously improve as a team, each and every one of us needs to get out of our comfort zone, and that doing so requires at least two things: trust and honesty.

Trust at Multiple Levels

- Personal: You trust yourself to have the courage and bravery to persevere and recover from any failure.
- Personal + Relational: Trust in yourself to recognize your panic zone and be heard by your teammates if you say, “too far!”
- Relational: Trust in your teammates, coach and leader that they will pull, push or prod you along and support you when you need it.

Honesty

- To admit your fears and your sense of vulnerability.
- To bring your full, authentic self without pretense.
- To shine a light on your flaws (as the great Zen master Shunryu Suzuki once said, “Everything is perfect and there is always room for improvement.”)

Four Windows into Fear

Next, I draw a four-quadrant grid on the whiteboard and discuss four viewpoints:

- Environmental: Everything external to the team’s systems, including other value streams, handoffs, rules, corporate culture and policies
- Organizational: Refers to the team structure, methods, metrics, processes, decision-making patterns, leadership and team micro-culture
- Relational: This is about “We” - a shared vision and interpersonal dynamics among peers
- Personal: This is about “Me” - the psychological, the inner world

I explain that during this exercise the team is going to explore the ideas of trust, honesty, fear, and vulnerability in each of those four contexts.

The Really Scary Part

As I hand out Post-it notes and pens, I ask, “What are the things you fear? Where is trust at risk? Write them down...one per note.”

I then give the team 7-10 minutes to silently brainstorm. When that timebox has expired or the energy in the room has diminished, I have the team post their notes on the whiteboard in one of the four quadrants (many fears turn out to overlap quadrants, so they end up on the boundaries).

During the process of posting, I encourage folks to affinity-group their notes.

We quickly review the board. I then hand out self-adhesive dots (5±2 depending on the number of affinity groups) and have individuals indicate their top concerns via dot voting.

What emerges time and time again follows a similar pattern: almost no Post-it grouping ends up with just a single dot, and all of the fears fall into one of the following baker’s dozen:

- Fear of being held accountable where I'm not really responsible or in real control
- Fear of failure
- Fear of being underappreciated
- Fear of loss of job
- Fear of loss of respect
- Fear of letting others on the team down
- Fear of stagnation and lack of personal growth
- Fear of conflict
- Fear of separation and being an outsider
- Fear of being wrong
- Fear of what others think of me
- Fear of embarrassment, humiliation and ridicule
- Fear of loss of identity and individualism (i.e., resistance is futile)

What Can We Do About Our Fears?

I then facilitate group idea generation while emphasizing that nothing is off the table. What can the team do about their fears? Example actions that have come up include:

- Take responsibility for yourself. Acknowledge that you are becoming defensive or fearful. Try telling the person you are with that you’re starting to notice your own defensiveness and fear.
- Slow Down. Take space, stay quiet for at least a 10-count, take two deep breaths and check or change your posture.
- Confront your negative self-talk. See if you can switch from red-zone to green-zone self-talk. (Read more about attitudes and intentions.)
- Check your assumptions. Everyone must make many assumptions on a daily basis in order to get by. There is nothing wrong with making assumptions, and it would be impossible to live a normal life without making them. The biggest problem with assumptions is the rigidity with which we hold them.
- Explore your fear with conscious awareness, trying to understand the root causes, and ask yourself, “What am I trying to override?” Try to look in all directions.
- Start over. When you realize that you have become defensive or fearful, acknowledge that, take some action to reduce your defensiveness/fear and then start over. (Read more on uncovering your defensive patterns.)

Look to the Future

No retrospective is worth its time unless it results in a plan for some things to change. So, I ask each participant to come up with a self-action plan. I provide some thought starters that correspond to the four windows:

Personal
- What I’m expected to do is…
- I want to learn more about…
- I should ask for help when…

Relational
- One thing we can both stop/start doing is…
- We are both motivated by…
- What we don’t dare to do yet is…

Organizational
- What I expect from my team leadership is…
- The reason the team is at risk is…
- What the team needs most is…

Environmental
- I’m proud of our team/business unit/company when…
- What our stakeholders can expect from us is…
- The reason our value stream can be blocked is…

To wrap up, I ask if anyone would like to share their plans with the group. (This is where a facilitator’s ability to withstand awkward silence comes in handy!)

I’ll share with you a facilitator secret. If I suspect a team I’m working with doesn’t yet have a solid foundation of trust, and as a result the majority is hesitant to be vulnerable in front of everyone, I will have a “plant” (someone I’ve spoken with beforehand) volunteer to speak up first. Sometimes all it takes to open the gates is for a single individual to step up and do something that feels unsafe and uncomfortable.

Build Upon the Foundation

A few days after the exercise, I will share Brené Brown’s wonderful thoughts on vulnerability as “the birthplace of innovation, creativity and change” via her TED Talks and YouTube videos. Brown’s ideas will reinforce many of the concepts that likely bubbled up during the retrospective.

On any team, vulnerability-based trust is not a given, nor does it magically appear after a single retrospective. It takes time and many instances of individuals admitting their fears and mistakes, and taking risks that move them outside of their comfort zones.

I’ll share with you one more technique that I’ve found helpful in accelerating development of deep levels of trust on a team: Personal Maps – Getting to Know the Whole Human Being.

Take either exercise for a spin with your team, and feel free to share your experiences in the comments. What besides fear could be holding you back?

Agile Strategies to Explain Doing Less Work

It might seem counterintuitive to convince people to do less work, but reducing work is actually one of the most effective ways to deliver the most value to your customers as quickly as possible. In fact, the authors of “The Agile Manifesto” suggested minimizing work in their Twelve Principles of Agile Software, one of which states, “Simplicity--the art of maximizing the amount of work not done--is essential.”

However, it can also be difficult to convince someone that they shouldn’t take on work that they see as necessary to provide a specific value. I previously wrote about how to convince people to de-prioritize work, but there is some work that just shouldn’t be done at all.

To help navigate this discussion, I’ve created a quick overview of common scenarios where pursuing work can often be a bad idea and how you can explain to someone that the work they suggest should not be pursued. The conversation can become uncomfortable at times, but remember that your organization stands to better itself by following your agile-minded leadership. The following are just a few examples of commonly requested work that should almost always be avoided. 

Work That Adds the Wrong Value

Sometimes, adding value isn’t a good thing. For example, it might not be sustainable if it costs a lot to maintain over time. It may also force your team to deviate from your core roadmap that delivers more value to your customers. There are many other reasons why work can’t be justified simply by “it adds value,” but a lot of people naturally see value as a good thing by default.

For example, someone might suggest that Toyota should be selling Prius models that include a turbocharger. After all, Prius drivers might want to go faster considering the acceleration and speed limitations of their car.

However, the smart folks at Toyota probably realized that the parts, labor and maintenance costs would far outweigh any benefit, and that a turbocharger would undercut the fuel efficiency and eco-friendly image that drive people to buy a Prius in the first place. In this case, the added value of the turbocharger isn’t worth the cost and isn’t really anything the target customer asked for or wants.

In cases like this, ask the person requesting the work what costs are associated with it, and collaborate with them to identify the ongoing costs needed to maintain that work. Also determine whether the work is suitable for your target user base and whether it makes sense in the context of your overall product roadmap.
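If it helps to make that conversation concrete, a rough back-of-the-envelope comparison like the sketch below (all figures and names are invented for illustration) shows how ongoing costs can eat up the value a feature supposedly adds:

```python
# Hypothetical figures for a requested feature, in whatever currency you like.
estimated_value_per_year = 40_000   # extra revenue or savings the feature might add
build_cost = 60_000                 # one-time cost to design, build and ship it
maintenance_cost_per_year = 25_000  # support, upkeep and added complexity
years = 3                           # the horizon you care about

total_value = estimated_value_per_year * years
total_cost = build_cost + maintenance_cost_per_year * years

print(f"Value over {years} years: {total_value:,}")
print(f"Cost over {years} years:  {total_cost:,}")
print("Worth doing?", "maybe" if total_value > total_cost else "probably not")
```

Even a crude model like this shifts the discussion from “it adds value” to “is the value worth what it costs over time.”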

Work That Solves Problems That Don’t Exist for the User

When people request work that solves a problem that doesn’t exist for the user, they usually aren’t thinking of the problem as nonexistent. Unfortunately, this is more common than anyone would like to admit, since people often act based on their own personal situation or their past experiences rather than user research.

For example, a team member in an early-stage startup that manufactures bicycles might suggest that the bicycle should automatically carry its rider up the stairs in offices and other buildings in order to comply with disability laws.

While improving accessibility for all individuals is a noble endeavor and should be at least considered in any product, it might not be economically feasible for the startup to pursue disability law compliance during its early stages, and it likely isn’t even required of bicycle manufacturers.

In this case, there is no legal requirement in place, and the typical user of the bicycle has no expectation that it can automatically climb stairs. This is a problem that simply doesn’t exist for users of this product, especially since there are other products that can help disabled individuals ascend stairs in a more efficient and effective way.

In addition, it may be more economical for a third-party manufacturer to create custom enhancements or focus specifically on Adaptive Cycling users, rather than each bicycle manufacturer taking on the problem independently. Adaptive bicycles could also make up a future product line as the startup matures. 

In cases like this, ask your colleague for documentation that the problem exists, which in this scenario might be legal compliance requirements or regulations. Solving problems that do not exist can be very costly both in terms of the work done and the work that could have been done instead. 

Work That Creates Duplicate Efforts

Duplicate efforts can become a significant waste in workflow if not properly identified and avoided. This applies to all areas of creating, supporting and selling the product, not only the product development process.

For example, you may have a backend interface that allows your internal team members to generate reports, as well as a separate customer-facing reporting interface that allows customers to generate reports. In that case, you would have two completely separate reporting engines that have overlap in the data they’re trying to access and the reports they’re trying to generate. 

Instead, you may want to reduce duplicate efforts by moving all reporting to the customer-facing interface, but restricting access to internal-only reports for your team members exclusively. This way, you can enjoy the benefits of focusing all of your reporting-related time and effort on building and optimizing one reporting feature rather than two separate ones. 
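As a sketch of that consolidation idea (the report names, roles and the generate_report function are hypothetical, not a prescription), a single reporting entry point can serve both audiences while keeping internal-only reports restricted:

```python
# One reporting engine, with visibility controlled per report rather than
# per interface. Report names and roles are purely illustrative.
REPORTS = {
    "monthly_usage": {"visibility": "customer"},
    "revenue_by_account": {"visibility": "internal"},
    "support_ticket_trends": {"visibility": "internal"},
}


def generate_report(name: str, requester_role: str) -> str:
    report = REPORTS.get(name)
    if report is None:
        raise KeyError(f"Unknown report: {name}")
    if report["visibility"] == "internal" and requester_role != "staff":
        raise PermissionError(f"'{name}' is restricted to internal staff")
    return f"Rendering '{name}' for a {requester_role} user"


# Customers and internal staff go through the same engine; only access differs.
print(generate_report("monthly_usage", "customer"))
print(generate_report("revenue_by_account", "staff"))
```

The payoff is one reporting engine to build, test and optimize instead of two.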

In cases where people request duplicative work, they often don’t realize there is duplication involved and they may not see the whole picture of your product, internal tools or other aspects. To help them understand, explain where the duplication exists and how eliminating duplication will produce significant advantages. Be sure to rationalize your plan by explaining the cost involved and how the benefit will outweigh that cost, assuming it does. 

The Possibilities are Endless, but Your Resources Aren’t

In many cases, team members stop at the initial stages of analysis, where they see that a specific task will accomplish a specific outcome. However, there are several possible drawbacks that often go unidentified, such as the cost to complete the task, alternative work that could be more valuable, or scaling problems such as ongoing cost and upkeep.

Your agile team is a limited resource with limited manpower and limited time (assuming that building a time machine is still far off on your product backlog). Before agreeing to send work to them, consider the areas mentioned above and any other reasons why simplifying the workload per agile best practices will help ensure the most value is delivered to the customers or users as quickly as possible.

These are just some of the common examples that may come up from time to time, so feel free to share any that you regularly encounter in the comments section below.

Behavior-Driven Development for Product Owners

A product owner should always be looking for opportunities to bring value to the customer while simultaneously boosting their team’s efficiency. Acceptance criteria are a vitally important part of the user story, and yet they are sometimes ignored, incomprehensible or overly detailed. The key question to ask is: is the team reading, designing and implementing the code according to the acceptance criteria and the product owner’s expectations?

If the answer to that question is “no,” I wouldn’t assume it’s the team’s fault, since there are often better ways for the product owner to drive, clarify and define the acceptance criteria. One of those ways is the 3 “C”s Theory by Ron Jeffries. Another is behavior-driven development, which I’ll be exploring in this post.

The Benefits of BDD

The 3 “C”s Theory is an approach that avoids spelling out the acceptance criteria in too many sentences or words. In a nutshell, the first “C” refers to the card on which the user story is written. The second “C” refers to the conversation that takes place when the team begins to collaborate with the product owner and understand how the deliverable will work. Finally, after the user story is implemented and tested, the team passes through the third “C,” which refers to the confirmation of the final result of the user story.

Another method that product owners can use to work with the team to define the acceptance criteria is known as behavior-driven development (BDD). BDD builds upon test-driven development (TDD) by going beyond the development team. In other words, BDD is a technique used to write the acceptance criteria in a way that anyone can read and comprehend. In addition to facilitating communication between a team, their company and its technical stakeholders, this approach has several other benefits:

- Allows the product owner to clearly define what is expected from the user story itself.
- Creates fewer communication gaps.
- Opens communication between the business and the development team by having multiple hands on the deliverable of each user story.
- Builds a strong sense of collaboration between the product owner and the development team by removing ambiguous language.
- Increases agility and efficiency by turning the outcome of BDD into automated test cases which can be integrated into the continuous delivery pipeline and tooling.
- Produces more executable documentation and less waste.

BDD was designed by Dan North, a co-author of “The RSpec Book: Behavior-Driven Development with RSpec, Cucumber, and Friends,” as an evolution of TDD.

Applying Behavior-Driven Development

In order to apply BDD, I would recommend following this five-step flow:

1. Identify and define the user stories.
2. Refine the acceptance criteria.
3. Implement the code using TDD.
4. Demonstrate the user story.
5. Automate the acceptance tests through the continuous integration pipeline.

I will be focusing on the acceptance criteria where BDD is initially applied, but the five-step flow can give you an idea of how it can be applied to the entire life cycle.

Define User Stories

The user stories are first created by the product owner and then refined in either a workshop or a product backlog grooming session before sprint planning. In both cases, a cross-functional team, the product owner and the technical stakeholders are required.

Some preliminary questions should be asked in order to form usage examples or business scenarios that help the team better understand what has been discussed.

Suppose that a potential buyer goes to an e-commerce website and, after searching for a product, he or she receives a message that the product is currently unavailable. How can the  website retain that customer? To find out, let’s go ahead and write a user story, then ask a few questions that might come up during the subsequent meeting.

User story:

“As a customer, I want to be notified via email whenever the product is available again.”

Some questions that might be raised are:

- Does the user have to own a site account in order to be notified?
- If so, what happens when the user is not logged in?
- What happens if the user already created an alert for the same product?

After answering those questions, the team will have a better understanding of what the product owner is expecting from the user story. Next, the team can collaborate with the product owner to write the acceptance criteria with as little ambiguity as possible. For example:

- User has to own a site account; otherwise, redirect the user to the registration page.
- User has to be signed in; otherwise, redirect the user to the login page.
- Users who are registered and signed in must confirm their email address.

Refine the Acceptance Criteria

The next step is organizing the acceptance criteria in the format required by the BDD framework. There are several BDD frameworks, including FIT, FitNesse, Cucumber, Concordion, Robot Framework, RSpec, Jnario and more. For our example, I’ll be using Cucumber:
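As a sketch of what that might look like (the feature wording, file names and the request_alert() helper are illustrative assumptions; the step definitions use behave, a Python tool that understands the same Gherkin syntax that Cucumber does):

```python
# features/product_availability.feature -- a Gherkin sketch of the criteria above:
#
#   Feature: Email alert when an unavailable product is back in stock
#
#     Scenario: Visitor without a site account requests an alert
#       Given a visitor who does not own a site account
#       When they request an email alert for an unavailable product
#       Then they are redirected to the "registration" page
#
#     Scenario: Account holder who is not signed in requests an alert
#       Given a customer who owns a site account but is not signed in
#       When they request an email alert for an unavailable product
#       Then they are redirected to the "login" page
#
#     Scenario: Signed-in customer requests an alert
#       Given a customer who owns a site account and is signed in
#       When they request an email alert for an unavailable product
#       Then they are asked to confirm their email address

# features/steps/alert_steps.py -- behave step definitions for the scenarios above.
from behave import given, when, then


def request_alert(has_account, signed_in):
    """Stand-in for the real application logic under test (an assumption)."""
    if not has_account:
        return {"redirect": "registration"}
    if not signed_in:
        return {"redirect": "login"}
    return {"action": "confirm email"}


@given("a visitor who does not own a site account")
def step_visitor(context):
    context.has_account, context.signed_in = False, False


@given("a customer who owns a site account but is not signed in")
def step_not_signed_in(context):
    context.has_account, context.signed_in = True, False


@given("a customer who owns a site account and is signed in")
def step_signed_in(context):
    context.has_account, context.signed_in = True, True


@when("they request an email alert for an unavailable product")
def step_request(context):
    context.result = request_alert(context.has_account, context.signed_in)


@then('they are redirected to the "{page}" page')
def step_redirected(context, page):
    assert context.result == {"redirect": page}


@then("they are asked to confirm their email address")
def step_confirm_email(context):
    assert context.result == {"action": "confirm email"}
```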

In this case, I used Gherkin, which is the language that Cucumber understands. It is a business-readable, domain-specific language that lets us describe software behavior without detailing how that behavior is implemented. Gherkin is primarily used for two purposes: documentation and automated tests. The product owner should write the acceptance criteria in Gherkin for each user story.

In summary, every scenario is built from the keywords Given, When, Then, And and But. Another useful keyword is Examples, which lets you outline rows of different values for each attribute, and you can also use Data Tables to pass a list of values. Learn more from the reference guide for Gherkin.

Implement the Code Using TDD

Next, the development team comes into the picture to start implementing the code while applying TDD in order to pass all acceptance tests.
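Here is a minimal, self-contained sketch of that rhythm, reusing the hypothetical request_alert() behavior from the Gherkin example above; in practice the test comes first and the implementation follows:

```python
# A sketch of the TDD cycle for the alert behavior; all names are assumptions.
# Red: write a failing test for behavior that does not exist yet.
# Green: write the simplest request_alert() that makes the test pass.
# Refactor: clean up while keeping the tests green.
import unittest


def request_alert(has_account: bool, signed_in: bool) -> dict:
    """Simplest implementation that satisfies the acceptance criteria so far."""
    if not has_account:
        return {"redirect": "registration"}
    if not signed_in:
        return {"redirect": "login"}
    return {"action": "confirm email"}


class RequestAlertTest(unittest.TestCase):
    def test_visitor_without_account_is_redirected_to_registration(self):
        self.assertEqual(request_alert(False, False), {"redirect": "registration"})

    def test_account_holder_not_signed_in_is_redirected_to_login(self):
        self.assertEqual(request_alert(True, False), {"redirect": "login"})

    def test_signed_in_customer_is_asked_to_confirm_email(self):
        self.assertEqual(request_alert(True, True), {"action": "confirm email"})


if __name__ == "__main__":
    unittest.main()
```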

Demonstrate the Acceptance Test Results

Once all the tests pass, the user story is validated by the product owner, often in a sprint review or showcase.

This phase shouldn’t be a major concern for the team, because the development team will have followed the acceptance criteria written in BDD style by the product owner.

Automate the Acceptance Tests Through the Continuous Integration Pipeline

Finally, it’s time to include the acceptance tests in the continuous integration tool and make them run automatically for each commit and build. That’s where the fun starts, and there is no need to validate the test cases manually.

BDD in Summary

Behavior-driven development is a very pragmatic approach which offers a number of advantages, the most significant of which is the ability to share a clear understanding of each new user story and what will be delivered.

When used properly, it creates less rework and waste, more agility, a higher level of collaboration and common understanding, and more productive teams. In fact, the product owner can use BDD to produce executable documentation that is reused throughout the development code base, bringing more value to the team and the delivery pipeline.

If something is not clear or you would like to learn more about BDD, please feel free to ask a question or share your thoughts in the comments section below.

Are We Done Yet?

The definition of done (DoD) is one of the most important and least-understood elements of the Scrum Framework. It is specifically called out in “The Scrum Guide” in what is probably its biggest section, and yet, I’ve seen so-called ‘definitions’ of Scrum that fail to mention it at all.

In this post, we’ll be talking about why, exactly, the DoD is so important. 

DoD Explained

So, what is the definition of done? Fundamentally, it is the Scrum team’s agreement about the standard of quality that it will apply across the product. This concept is closely related to that of the Potentially Shippable Increment that must be created at the end of each and every sprint. The two words in that phrase that the DoD concerns are “potentially” and “increment." 

While all agile approaches – Scrum included – aspire to “deliver early and deliver often,” this does not mean that a product must be handed over to the customer at the end of every sprint. Whether enough useful value has been accumulated to warrant a product’s release is a business decision, and one that is the product owner’s responsibility to make.

If, however, the product is not of releasable quality, then the product owner is effectively relieved of that responsibility. So, Scrum requires that the latest increment -- whether it is going to be released or not -- is of sufficient quality that it could be handed over.

What the product owner and the development team are agreeing on when they establish the DoD is the quality bar that will determine what can be shown in the sprint review. You may have heard phrases like “done, done” and even “done, done, done” from some agile practitioners, but in the world of Scrum there is only “done” and “not done.”  

An increment – and that’s the fundamental level at which the DoD works – is done only if it meets the definition of done and can be demonstrated in the sprint review. If it doesn’t meet that standard, then it cannot be shown to the stakeholders.

At the very least, the DoD should mean that the increment has passed all its tests and is fully integrated with the previous increments. In this way, what is being shown is a quality-assured, small and skinny version of the product.

Governance

Clearly, the DoD has implications for governance as well as quality. “The Scrum Guide” says that if there are corporate standards in place then they form the default DoD. My interpretation of this is that if, say, there is a company standard for code quality, then that standard should be incorporated into the development team’s practices as well.

Agilists never trade quality for speed, and so we should never lower that standard. In some cases, however, existing corporate standards might get in the way of efficient development. Organizations that use PRINCE2, for example, will often have a phase gate-based governance approach.

Such an approach might work well in a product development that has a high level of predictability, but where there are many unknowns (as in most software development) an approach that is instead based on feedback and responsiveness is needed.

Because of its reliance on predictability, phase gate governance can kill an agile product development. So, in organizations that use phase gate governance, the Scrum team will need to have a conversation with the wider organization to find better ways of giving stakeholders confidence.

Workflow

A DoD which clarifies what is needed for the sprint review and which is enacted by the Scrum team accordingly is the perfect starting point for feedback-based governance. Since there is no one-size-fits-all DoD, what the DoD includes is always going to be situational. But, if we accept that the increment must be fully tested and integrated, then several practices naturally suggest themselves.

First, each product backlog item (PBI) that makes up the increment will need to have passed both its acceptance and unit tests. Acceptance tests are specific to each PBI, of course, but the DoD will presumably state the policy that all items must pass their acceptance tests to be accepted.

The entire increment will also need to pass integration testing and, as it is unlikely that all the PBIs will be finished at the same time, each of those will need to be integrated incrementally. Therefore, regression and integration testing are strongly suggested at the PBI level.

In other words, there are implications about workflow embedded in the DoD.

Team-Building 

There is yet another aspect to the importance of DoD: the team.

A group of individuals is just that, and can only become a genuine team when it rallies around a common goal. But how can a team meet a goal if they don’t know when their work is finished? 

I once interviewed two programmers on the same team and asked them how they dealt with quality. One of them opened up his IDE and showed me the unit and acceptance tests he ran on his code. The other told me it was the QA department’s job to deal with quality – not his. As you can imagine, this product (and the group building it) did not fare well in the end.

Those were two people with the same functional background -- imagine having a new development team, with all the different skill sets needed to create the product, and no clear DoD. The result certainly isn’t pretty.

Revisiting the DoD

Scrum teams, for various reasons, may not be able to take product increments to a potentially releasable level every sprint. This might be due to the team’s level of performance (if they are a new team, for example), or because the production environment could not be fully replicated in the team’s development environment. 

In this situation, it is important that the DoD clearly indicates where the team’s responsibility ends. Any work that would still be needed to make an increment releasable would be categorized as “undone work” and would need to be listed in the DoD. The DoD would then be retrospected regularly to see what could be moved from the “undone” category into the “done” category as the team takes quality assurance more and more into its own routines.

“Undone” work should not be confused with unfinished work. Work required by the DoD which is unfinished means the item concerned is not “done” and thus cannot go into the review. An item can still be “done” if there is “undone” work.

To understand this, we can think of there being two quality bars: one for the sprint review and one for actual release. The gap between them is the undone work. In committing to any sprint goal, the team is implicitly committing to the DoD as well.

However, it is not committing to do the work listed as undone in every sprint -- that work will be done just prior to an actual release. The development team’s job is simply to get the increment to “done,” and the product owner then decides whether the PBIs that are part of it can be accepted as complete.

So what happens if an item shown in the sprint review is rejected as not fit for purpose by the customer? The answer is that, since it passed the Scrum team’s standards, it remains “done.” If the product owner believes the requirement still has value, he or she will put it into the product backlog as a new item and it will be prioritized accordingly. But, of course, the team should still reflect on what has happened, and may well strengthen the DoD to reduce the chances of “done” items being rejected in the future.

Importance

So far, we’ve seen that the definition of done is important for: 

- Product-wide quality standards
- Governance
- Workflow and engineering practices
- Team-building

We’ve also seen that Scrum teams need to revisit their current definition of done on a regular basis to strengthen their assurance of the product’s quality.

If you can think of any other things that the DoD is important for, feel free to let me know in the comments section below. And with that, we’re done.

The Scrum Task Board and the Self-Managing Team

In the early days of Scrum, the quickest way to locate a Scrum team’s work area was to look for the task board, which was usually mounted on a nearby wall. Work was managed using index cards, sharpies and spreadsheets, and the task board served as a tool for tracking work as well as an information radiator.

Anybody walking by could simply look at the task board and see the team’s progress at that point in time without having to ask a single question.

However, what inevitably happens in nearly every field is that new technology and tools are developed over time with the intention of “making it easier” to manage work, and the world of agile is no different. Some tools were built from the ground up to manage agile project work, while others were developed as add-ons to existing tools.

When an agile project is just beginning, it seems like the first question asked is always “What agile tool are we going to use?” Let’s face it, we in the IT industry love our tools, and I am no exception.

However, the technology we perceive as progress can sometimes have unintended consequences. Take, for instance, society’s extensive use of social media, texting, and other technological forms of communication. They were originally created to save time and effort, but we are only now discovering that these tools can lead to a sense of social isolation in certain segments of the population.

High-Tech Tools: More Harm Than Help?

So, what does this have to do with Scrum teams? A Scrum team’s success is all about collaboration, which in turn is all about co-location and face-to-face communication. While technology can certainly enhance a distributed Scrum team’s collaboration, it also has the potential to hinder a co-located team: if the team relies too heavily on technology, it can start to act as an inadequate substitute for face-to-face communication and collaboration.

For example, I was working with two Scrum teams over the course of many sprints and, while all their information was readily available in a high-tech agile tool, I rarely saw it displayed on anyone’s screen. I also noticed that their stand-ups were functioning as more of a status report than an opportunity for the team to share information and level-set the team’s progress in the sprint. 

Although the team reported a high level of confidence in completing stories at the mid-sprint check, I could see from the story point burn-down chart that they were scrambling to complete stories in the later stages of the sprint. I knew that all the team members were solid professionals, so their work ethic clearly wasn’t the problem.

Eventually, I realized that, while they may have been focused as individuals, they weren’t focused as a team. I also realized that the unintended consequence of technology was that the team’s most crucial information was buried in a tool that no one bothered to access.

A Low-Tech Solution

Since I didn’t have two 70-inch monitors to put in the team rooms, I decided to go old-school. So, the next day I came in with painter’s tape and put a task board on the wall. I then printed out the stories and tasks from our agile tool and recreated the task board to reflect the status of the sprint.

I told the team that, during the sprint stand-up, each team member would go to the task board to address the team. I also told them to focus on the team and ignore anybody else in the room, and that each time they spoke about a specific piece of work they would need to move the corresponding tasks on the board to the appropriate columns as well.

It took some time for them to get comfortable with doing the stand-up in this way, but the result was that the task board started to provide them with the focus they needed as a team. It had a constant presence, easily showed the team’s progress and gave each team member the satisfaction of physically moving their work across the board from the “to-do” column to the “done” column. 

During the mid-sprint checks, the accuracy of the team’s confidence level vote increased dramatically. And, when a mid-sprint check indicated that the team might have a problem, they used the task board to determine how to resolve the problem and re-allocate resources accordingly. For these teams, as well as many others, the task board quickly became their primary tool for self-managing.

The Value of Planning

I always tell my teams that the most important aspect of sprint planning is not the plan itself but the fact that they engaged in the act of planning in the first place. This is because the act of planning gives the team a shared understanding of what must be accomplished.

And, given that things rarely go according to plan, we must constantly re-plan “in light of what we know now,” and every team member should be fully aware of the changes in the revised plan. With the help of a humble task board, teams can easily collaborate, re-plan and focus for the duration of a sprint, and that’s the sign of a truly effective agile tool.

 

You Need a Scrum Master

“We can’t afford a Scrum Master.”

“What does a Scrum Master do other than schedule meetings?”

“Any developer can call themselves a Scrum Master and send meeting invitations!”

“We don’t need a Scrum Master, we do things differently!”

These are some of the statements often heard from management when asked whether they need a Scrum Master on their teams. These types of responses could reflect some of the misconceptions about the value of the Scrum Master role, and might also reflect a previous, unpleasant experience with an incompetent one.

Although this question has been asked several times in various agile blogs and has been answered with a clear “Yes” in almost all of them, it is still being asked every now and then, especially by smaller companies and startups. Some other arguments on this topic can be found here and here.

In this post, I would like to share two situations faced by friends of mine in two different companies which both decided against having a dedicated Scrum Master on their team.

The Team Lead/Scrum Master Knows All 

In a previous post, I shared my concerns about mixing roles in a Scrum team, and sharing the Scrum Master role among developers was one of them.

In this real-life scenario, a developer (or the tech lead, to be more specific) was given the title of Scrum Master since he was “certified.” But if we look at the team’s internal process, we will soon discover why such a team needed a dedicated Scrum Master.

This team didn’t have sprint review meetings, the “daily” Scrum was done three times a week, there were no retrospectives or build automation, and the tech lead was the only one doing code reviews for the whole team. It’s easy to imagine the massive queue of tasks that were waiting for review and deployment. Clearing the technical debt would have taken months, and the product owner was anxious to get features into production.

This team sorely needed a dedicated Scrum Master to coach the team on the value of core agile practices such as pair programming, automated tests and automated builds. Essentially, the team needed to understand the importance of collaborative code ownership as well as the power of frequent deliveries and early feedback.

We Don’t Need a Scrum Master

This next team was a small one, but the takeaway is similar to that of the first example. This team didn’t have a Scrum Master at all; instead it had a product owner, a distributed development team and a team lead. Like the previous example, the team lead was the only person who reviewed the code, which quickly created a growing queue of tasks awaiting review. Another challenge was that some members of the development team were part-time employees, while others were dividing their attention between two projects.

Some might wonder what a Scrum Master could possibly do for such a team. In short, a Scrum Master could act not only as the glue holding the team together but also as the grease that helps it get through challenges more smoothly. He or she could help the team organize itself properly, improve communication, share knowledge and strengthen collective code ownership.

In both cases, the lack of a Scrum Master increased and complicated the difficulties the teams faced. Without adequate understanding from management of the importance of such a role, teams like these will continue struggling to find their way and provide high-quality products.

Do you have other examples of a lack of a competent Scrum Master making a team’s life more arduous than necessary? If so, please feel free to share them in the comments section below.

Leadership & Effective Decision Making on High Performance Teams

Trust, Vision and Ownership

In a previous post, I identified "3 Necessary Conditions for 'Going Agile'": Trust, ownership and vision. There, I used up my quota of blog space by focusing on trust and its impact on a high-performance team.

Here, I will continue that voyage by exploring ownership.

High Performance Defined

To get started, here’s a definition of a high-performance team from Wikipedia:

A high-performance team (HPT) can be defined as a group of people with specific roles and complementary talents and skills, aligned with and committed to a common purpose, who consistently show high levels of collaboration and innovation, that produce superior results. 

Common characteristics of HPTs include:

- Participative leadership
- Open and clear communication
- Clear goals, values and principles
- Mutual trust
- Managed conflict
- Flexibly defined roles and responsibilities
- Coordinative relationship
- Positive atmosphere
- Continuous improvement

Nice, right? Who wouldn’t want to have more of that in their organization?

Delegation and Decision Making

With the characteristics described above, ownership of and attention to results are distributed as equally as possible. To accomplish this, decision making is taken up by various team members so that “the call” is no longer the domain of any single individual, especially not “the boss,” because command and control is simply too slow.

Distribution of decision making, aka delegation, is a great deal more than “Either you do it or I do it.” To avoid that mindset, I coach my teams to experiment with a matrix that provides four types of delegation:

- Type 1: Decisions that only the leader can make/take. These can be made with or without consultation/discussion/buy-in with/from the team, though consultation is of course preferred.
- Type 2: Decisions that individuals/the team can and should make/take themselves, but need to first run by a leader. In the event of a “tie” or a disagreement on how to proceed, the individual/team calls it. If the individual/team is not willing to take accountability, then it reverts back to a Type 1; there is no “Type 1.5.”
- Type 3: Decisions that individuals/the team can make/take themselves, and inform the leader of after the fact. These decisions can be made either “contemporaneously” or “routinely.”
- Type 4: Decisions that individuals and the team can make/take themselves. In other words, the “Just do it” method.

Technically, there’s a fifth type as well:

Type 5: Wait until you are told what to do by the leader.

We don’t want Type 5 to be an option. Why? For a boatload of reasons, including these: No one person can know as much as a talented team, and being told what to do demotivates talented individuals and removes accountability.

Where is the Team Today?

To begin the distribution journey, start by developing a picture of your team’s present conditions. Grab the team and the leader, a pad of Post-its and some pens. Crowdsource a divergent list of all the decisions that need to be made over the course of the coming month. Have an unfiltered brain dump and fill the whiteboard.

Next, affinity map the individual decisions to converge on topics and themes. Using those groups, build a matrix, placing topics/themes/details in rows and each of the four delegation levels in columns.

Then, like planning poker, have each team member pick a number from 1-4 to indicate what type of decision they think should be applied to the topic at hand today. The range of votes may or may not surprise the participants.

Then, discuss the results and see if both the team and the leader can agree on “where things are.”

Newly formed teams typically start off heavily skewed, with lots of Type 1’s and maybe a few 2’s, 3’s or 4’s, while more experienced teams tend to have a more even distribution of all types.
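For teams that capture the votes digitally, even a tiny summary script like the sketch below (the topics and votes are invented for illustration) makes the spread of opinions visible before the conversation starts:

```python
from collections import Counter
from statistics import median

# Hypothetical delegation-poker votes: one number (1-4) per team member per topic.
votes = {
    "Choosing the CI tooling": [3, 4, 4, 3, 4],
    "Hiring a new team member": [1, 1, 2, 1, 2],
    "Changing the sprint length": [2, 3, 2, 2, 4],
}

for topic, cast in votes.items():
    tally = Counter(sorted(cast))
    print(f"{topic}: votes={dict(tally)}, median=Type {median(cast)}, "
          f"range={min(cast)}-{max(cast)}")
```

A wide range on a single topic is exactly the gap worth talking through before the team and the leader agree on a type.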

Envision the Future

Once both the team and leader have a picture of where they are today, rinse and repeat the poker rounds for where folks want to be in the not-too-distant future, like the next sprint or the end of the next quarter.

As a team matures and becomes more gelled and performant (retrospectives help on this path), things should move as far to the right of the matrix as possible. Loads of Type 4’s and 3’s, a few 2’s and almost zero 1’s is ideal.

Why? Because moving decision-making deep into the team and away from “the boss” reduces bottlenecks and wait times, reduces waste and multiplies by orders of magnitude the speed at which a team can move, experiment and learn.

Put a date on the calendar to do this exercise again, perhaps in three months. A team and its leaders should be careful not to build the future state matrix once and then never revisit it out of a misplaced need for consistency or a fear of stretching out of their comfort zone. Instead, remember to regularly inspect and adapt. 

Group Decision-Making Models

As decisions move into the realm of the team, a common issue that comes up is how exactly they should go about making them without resorting to asking the leader: “What should we do about this?” (by the way, if you are a team leader and you hear that question too often, try simply answering with: “What would you do if you were me?” More on this approach here.)

There are many models that can help a team choose options when it comes to team decision types 2, 3, and 4 (consult first, inform after or JFDI, respectively). None of these models are perfect, and each has its own strengths and weaknesses:

- Consensus – A collaborative approach. Typically requires that the majority approves, but also that the minority agrees to go along. In other words, if the minority opposes the idea, consensus requires that it be modified to remove objectionable features (see: Fist to Five). Since everyone gets a voice, consensus is both useful and appealing, although it is also potentially dangerous. It can quickly devolve into a battle for who is willing to argue their point the longest, or a watered-down compromise.
- Advice – A simple form of decision making where any individual can make the call. But, before doing so, they must seek advice from all affected parties as well as those with expertise on the matter. The individual, however, is under no obligation to integrate every piece of advice they receive.
- Random – The group leaves the choice to chance. Put the options in a hat and pull out the winner, roll the dice or flip a coin.
- Unanimity (or “12 angry peers”) – The group discusses the issue until an agreement is reached by all those involved in the situation.
- I Can Live with It – Anyone might have well-reasoned objections to a given decision. However, they agree to work toward the goals of the decision anyway.
- Solidarity – Unwavering commitment wherein individual will is suppressed for “the good of the group.”
- Rock Paper Scissors – See “Random.” :)

Experiment

Different decisions call for different decision making models and types of delegation. Some teams may not be ready for all of the types covered above, and not all types are the best fit for any given team.

For best results, try to come to an agreement on the following:

- A shared nomenclature (Type 1, 2, 3 or 4, or any other that fits your culture)
- Which one will be used for the decision at hand today
- Which one the team wants to use tomorrow

See what works, see what doesn’t and continuously improve. Not only will the team experience a significant increase in productivity, but there will be a greater degree of trust and innovation as well. Once again, that is the promise of truly embracing agile.