I was at a conference a number of years ago when I saw someone walking around in a t-shirt that caught my eye. It had one very simple quote on it that has stuck with me ever since:
Of course, my mind immediately went to retrospectives! So many people don’t bring data to their retrospectives. What happens in a retrospective without data?
A lot of the time, people end up presenting their perception of the facts as the absolute truth, because it feels real to them! This is not deceptive behavior. Everyone experiences the world differently and their version of the facts feels real to them. But without a common set of data, it is hard to find out what is objectively true vs. what is subjective reality.
That’s why stopping to Gather Data is important. It gives The Team an opportunity to establish a shared understanding of what happened, which enables more meaningful conversation about how to improve going forward.
So what data should you bring to your next retrospective? And how might you use it effectively?
Let’s start with a story. Imagine your team has just sat down for its latest retrospective…
“Alright everyone,” Sophia, your Scrum Master, says. “Let’s take the next few minutes to silently write down the top 2-3 problems you think the team faced during the last two-week sprint. I’ll set a timer and let you know when time’s up.”
Everyone on the team thinks to themselves about the latest iteration. Bob starts writing things down immediately. So does Sally. A few others sit and stare into space, wondering “what problems did we face, anyway? I’m too bogged down in my work to remember what happened a few days ago let alone last week.”
After the timer dings, Sophia says, “Ok, folks. What have we got?”
Bob, the team’s biggest extrovert, is the first to speak, as always. “The reason we didn’t finish all of the items in our sprint backlog is that we have so many bugs to deal with. It’s hard to deliver on our sprint goal when bugs keep us bogged down.”
Sally responds, “Maybe, Bob. But to me the main issue isn’t bugs, though those are annoying. It’s that we always underestimate how long it will take to deliver on our sprint backlog items, and end up over committing.”
Samuel is the next to speak. “I actually disagree with both Bob and Sally. Yes, we have bugs. And yes, we seem to underestimate how long it will take to accomplish our goals. But the biggest problem is the interruptions. Our boss always seems to add unrelated tasks mid-sprint and that’s distracting us from getting our work done.”
If you were Sophia, the team’s Scrum Master, what would you do next? How would you know what the team should discuss first? Think for a moment before scrolling down.
Hi again! 😊 So, what will your next step be?
When I’ve presented this scenario to people in the past, the most common response I get is: “I’d use dot voting to prioritize the discussion.” Which is great! Dot voting uses the collective intelligence of the team to figure out what’s most important.
But what if the collective intelligence of the team is wrong? Is there a better way?
As you might have guessed given the title of this Chapter, one thing Sophia could have done is gather and present some relevant data before asking the team to analyze its impediments. This aligns closely with The 5 Phases of an Effective Retrospective, in which Gathering Data comes one step before Generating Insights.
Gathering data creates a shared picture of what happened. Without a common picture, individuals tend to verify their own opinions and beliefs. Gathering data expands everyone’s perspective. (Diana Larsen and Esther Derby, Agile Retrospectives: Making Good Teams Great)
In this particular case, imagine if The Team had three poster boards hung around the room before the retrospective even started:
After kicking off the retro by Setting The Stage, Sophia could have next asked The Team to turn its attention to the data hanging around the room. Importantly, this would have happened before The Team was asked to share what they felt was most important to discuss.
Perhaps after looking at the bug data, Bob would have realized that bugs aren’t actually a big issue for the whole team, but instead something important mainly to Bob because he’s the one who always volunteers to fix them.
Or perhaps Sophia would have realized that The Team’s estimates are more on-target than she imagined. It’s just that there’s always a mad rush at the end of the sprint to get everything done, and she’s always the one to pick up the load.
But no matter what The Team finds, by analyzing the data together, they will have built a shared understanding of the facts. And this will enable them to have a more productive conversation during the rest of the retrospective.
In her course Powerful Retrospectives, Esther Derby shares that there are two categories of data that you can utilize in your retrospective. The first is Objective, or Hard, Data. The second is Subjective, or Soft, Data.
Let’s focus first on Objective Data.
Objective Data, sometimes referred to as “hard data,” is any information that can be measured and verified.
There is nearly a limitless amount of Objective Data you can bring to your retrospective, but let’s dive into a few that I’ve found to be particularly helpful.
If you’re using Scrum, you almost certainly have a Sprint Burn-Down Chart readily available. If it’s on a physical sheet of poster board, hang it around your conference room before the retrospective starts. If it’s in a tool like Jira, draw it by hand or print it out.
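If your tool won’t draw the chart for you, the underlying data is easy to derive yourself. Here’s a minimal sketch (the function name and the sample numbers are hypothetical) that computes the remaining-work series a Burn-Down Chart plots:

```python
def burn_down(total_points, completions, sprint_days):
    """Remaining story points at the end of each sprint day.

    total_points: points committed at sprint start.
    completions: dict mapping day number -> points completed that day.
    sprint_days: number of working days in the sprint.
    """
    remaining = total_points
    series = []
    for day in range(1, sprint_days + 1):
        remaining -= completions.get(day, 0)
        series.append(remaining)
    return series

# Hypothetical two-week sprint (10 working days) with 30 committed points.
print(burn_down(30, {2: 5, 4: 8, 7: 3, 9: 10, 10: 4}, 10))
# -> [30, 25, 25, 17, 17, 17, 14, 14, 4, 0]
```

Plot that series against the sprint days and you have your burn-down; the long flat stretches and the steep drop at the end are exactly the shapes the scenarios below discuss.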
How will your Burn-Down Chart help? Let’s dive into a few examples to find out.
Scenario 1: Burndown showing we were ahead of schedule and then fell behind
Remember, in a Burn-Down Chart, the y-axis represents the amount of work left and the x-axis represents time.
In this scenario, The Team started out fast and then fell behind, before catching up. To understand why, here are some questions you might ask:
Scenario 2: Estimated Time Remaining Increased
In this scenario, the estimated amount of work remaining in the sprint went up in the middle of the sprint. Why did that happen? Here are some questions you might ask:
Scenario 3: A slow start
In this scenario, The Team started the sprint slowly. Not much work was getting completed. And then there was a mad rush to finish all the work before the end of the sprint. To understand why, you could ask:
Another piece of Objective Data that is particularly helpful is Cycle Time. If you’re in the Lean or Kanban world, you likely already know why Cycle Time is such a powerful piece of data to have. If you’re following Scrum, you might be wondering, what is Cycle Time? It’s actually quite simple.
Cycle Time is the total time it takes for a task to be completed
For example, if you start work on a user story on Monday at 9am and finish it on Wednesday at 9am, the story’s Cycle Time was 2 days. (Complex math, I know.)
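As a sketch, the arithmetic looks like this in Python (the function name is mine; the dates stand in for the Monday-to-Wednesday story above):

```python
from datetime import datetime

def cycle_time_days(started, finished):
    """Cycle Time: elapsed time from start of work to completion, in days."""
    return (finished - started).total_seconds() / 86400  # 86400 seconds per day

# The story from the text: started Monday 9am, finished Wednesday 9am.
start = datetime(2024, 1, 1, 9, 0)  # a Monday
end = datetime(2024, 1, 3, 9, 0)    # the Wednesday after
print(cycle_time_days(start, end))  # -> 2.0
```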
You can calculate Cycle Time for anything you work on — user stories, bugs, tickets, tasks, etc. Once you have the data, you can plot it like this:
Here’s how to read this chart. On the y-axis you see time. This represents the number of days it took for a work item to be completed. On the x-axis you see day of the week. This represents the day that the work item finished.
So for example, the dot above Monday represents the fact that The Team completed a work item on Monday and it took them about 3 days to complete. Hence, that work item’s Cycle Time was 3 days.
Now that you understand how to read the Cycle Time plot, take a look at it again. Before scrolling down to get my take, what jumps out at you?
When I look at this chart, two things jump out at me pretty quickly.
If I were in this team’s retrospective, I’d want to dig in. Why was there an outlier? What work item was this? What extenuating circumstances were there that caused this work item to take so long? Is there anything we can do to prevent this from happening again?
Another Cycle Time Example
Let’s examine another Cycle Time plot.
What jumps out at you with this plot? I immediately see two things:
Both of these topics would be great discussion points in this team’s retrospective.
Let’s look at another type of Objective Data that I’ve found to be particularly helpful.
Imagine you have four steps to your development process. First, a task is selected for development (perhaps in Sprint Planning). Then, some amount of time passes before development actually begins.
Once the developers believe the task is done, it moves to QA. After QA is complete, the developers put in a Pull Request for final technical review.
Visually, here is what the process looks like:
1. Selected For Development =>
2. In Progress =>
3. In QA =>
4. Pull Request Review
Imagine now that you’ve noticed your development process has slowed down. In other words, the Cycle Time is increasing. Wouldn’t it be useful to know why? To know where the bottlenecks in your process are?
That’s where the Average Time In Status plot can come in handy. Here’s what it looks like:
Here's how to interpret this chart.
On the x-axis is time (in this case over the course of an entire year) and on the y-axis is the number of days.
Each line represents one of the steps in this team’s process. Blue for “selected for development”, green for “in progress”, and so on. And the value of each line shows the average number of days a work item spent in that step of the process.
So for example, in January, the average work item spent just under 10 days in development and just under 5 days in QA.
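If your tracker can export how long each work item sat in each status, the averages behind this chart are straightforward to compute. A minimal sketch, assuming a hypothetical changelog format of (item, status, days) records:

```python
from collections import defaultdict

def average_time_in_status(transitions):
    """Average number of days work items spend in each process step.

    transitions: list of (item_id, status, days_in_status) records,
    e.g. exported from your tracker's changelog.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for _item, status, days in transitions:
        totals[status] += days
        counts[status] += 1
    return {status: totals[status] / counts[status] for status in totals}

# Hypothetical January changelog for two work items.
log = [
    ("A-1", "In Progress", 8), ("A-1", "In QA", 4),
    ("A-2", "In Progress", 11), ("A-2", "In QA", 5),
]
print(average_time_in_status(log))
# -> {'In Progress': 9.5, 'In QA': 4.5}
```

Run this per month and you have one point per line on the Average Time In Status plot.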
Now that you understand how to read the graph, take a look again at the graph. What jumps out at you?
I immediately find two things of interest:
In the previous three examples of Objective Data, we’ve looked at technical and engineering information specific to your team. But if the purpose of building software is to deliver value to your customers, then it makes sense to also inspect business data in your retrospectives!
Is all the work we are doing as a team having an impact? Are our customers happier as a result of the work we are doing? Did we cause revenue to go up with a new product feature or enhancement?
What business data should you look at? A good place to start is with the metrics most closely associated with your company’s top 3 measurable business goals for the year. If you don’t know them, ask your manager!
Example 1: Net Promoter Score (NPS)
For example, you might learn that the business is focused this year on increasing your Net Promoter Score (NPS). Whether you know the name NPS or not, you’ve almost certainly seen the single question that all NPS surveys ask, “How likely are you to recommend this product to a friend?”.
By looking at NPS data across time, you can tell whether the product features you are adding are resulting in increased happiness among your customers. If not, why? Perhaps you are prioritizing the wrong user stories, or perhaps your Product Owner isn’t talking frequently enough with your customers to know what they actually value. Without looking at NPS, you’d never know.
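If you ever need to compute NPS yourself from raw survey responses, the standard formula is the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A quick sketch with made-up responses:

```python
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of ten survey responses.
print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10, 9, 5]))  # -> 20
```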
Example 2: Churn Rate
Here’s another example. Suppose your company sells a subscription product (like a cellphone plan or a food subscription box). Your leadership is focused this year on keeping customers for longer and they use a metric called “churn rate” to see how many customers cancel every month.
When they give you churn rate data, you see that over the past year, churn rate has actually been increasing! Then you look at the features you have delivered this year, and realize that most of them have been focused on making it easier for new customers to get value from the product, rather than on keeping existing customers happy. And the one time you added a feature focused on decreasing churn, it didn’t have any impact at all!
By connecting the dots between the business and the engineering team, you’ve discovered something really valuable. It’s unlikely you would have discovered this otherwise.
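For reference, churn rate is typically computed as the share of customers at the start of a period who cancel during it. A sketch with invented numbers, showing the kind of month-over-month trend that would trigger this conversation:

```python
def monthly_churn_rate(customers_at_start, cancellations):
    """Percentage of customers at the start of the month who cancel."""
    return 100 * cancellations / customers_at_start

# Hypothetical subscriber counts and cancellations for three months.
months = {"Jan": (2000, 60), "Feb": (2000, 80), "Mar": (2000, 100)}
trend = {m: monthly_churn_rate(n, c) for m, (n, c) in months.items()}
print(trend)
# -> {'Jan': 3.0, 'Feb': 4.0, 'Mar': 5.0}  <- rising churn is the signal
```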
You can see how using Objective Data can help your team focus on what’s most important to discuss. The difficulty with Objective Data is that there are many different types of data you can collect and analyze. Here are some additional pieces of Objective Data you can consider using in your retrospectives:
Velocity across sprints
As you likely know, your team’s velocity is the number of completed story points over an iteration. If you track this across time, you will be able to analyze your team’s trend. Your goal should not be to increase velocity every sprint. Instead, if your velocity is increasing, ask why. If your velocity is decreasing, ask why.
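One lightweight way to spot those “ask why” moments is to compare the latest sprint against a rolling average. A sketch (the helper name and sample velocities are hypothetical):

```python
from statistics import mean

def velocity_report(points_per_sprint, window=3):
    """Latest sprint velocity plus a rolling average for spotting trends."""
    recent = points_per_sprint[-window:]
    return {"latest": points_per_sprint[-1],
            "rolling_average": round(mean(recent), 1)}

# Hypothetical story points completed over the last five sprints.
print(velocity_report([18, 21, 23, 25, 31]))
# -> {'latest': 31, 'rolling_average': 26.3}
```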
Amount of time spent in meetings
Meetings aren’t inherently good or bad. Some meetings add value and others don’t. But it is certainly true that the more time spent in meetings, the less time you have for other work. If you see the amount of time you are spending in meetings is increasing over time, ask why. Maybe this extra time in meetings is needed, and maybe it’s not. If you see the amount decreasing over time, ask why. This might not be a good thing! (Or maybe it is.)
Number of new support requests over time
Do support requests frequently interrupt the team? Track the number of new support requests over time. If it’s going up, maybe it’s a sign of bugs in the system. Or maybe it’s a process problem — some of these support requests could have been handled without ever reaching the development team.
Percent of time spent on bugs, features, ad-hoc requests, etc
Building software requires long periods of heads-down time. Some people call this focus time. Others refer to it as being “in the flow”. If your days are full of interruptions, your team’s productivity will likely decrease. The problem is that asking your team to track its time is onerous! Joe Wright, a software development coach, suggests using Legos to track the team’s time. Here’s a picture from Twitter of what that might look like in practice:
There are many more types of Objective Data you might consider analyzing. Think about what is relevant to your team. Bring whatever that is to your next retrospective, and see what happens.
Subjective data is sometimes referred to as “soft data”. Analytical people sometimes scoff at Subjective Data (“just give me the facts, who cares about how we feel”), but people are emotional beings and sometimes it’s impossible to fully understand what happened using Objective Data alone.
Subjective Data includes personal opinions, feelings, and emotions on the team. Whereas Objective Data presents the facts, Subjective Data can reveal what your team thinks is important about the facts.
Like with Objective Data, there is a nearly limitless amount of Subjective Data you can bring to your retrospectives. Here are a few specific examples of Subjective Data I’ve found to be most helpful.
Throughout your iteration, your team members will experience tons of different emotions. Sometimes they will be happy. Sometimes they will be motivated. Other times, some people on your team might feel annoyed or frustrated. And so on.
It’s important to recognize that the emotional state of your team will likely have a big impact on its productivity.
Here’s an example of how that might play out. Suppose your team had a particularly bad sprint and was unable to deliver on the Sprint Backlog.
As you Gather Data, you ask your team to take a look at the Sprint Burn-Down Chart (which, remember, is Objective Data):
What happened? Why was the sprint a failure? If The Team was looking solely at Objective Data, it might then look at the accuracy of its estimates or analyze the Git commit log. Which is great! Do that!
But what if you asked your team to map out how it felt during the same period of time?
Here’s how that works. Simply ask everyone to put a dot underneath the Burn-Down Chart representing how happy or sad they felt at various points during the sprint.
You’ll notice that at the beginning of the sprint, The Team was happy! Things felt great, even though The Team was behind according to the Burn-Down Chart.
And then … something happened. All of a sudden the entire Team felt bad. Why?
Maybe it was something internal: perhaps The Team realized it had underestimated the complexity of a user story and got frustrated because it realized it would never be able to deliver on time.
Maybe it was something external: The Team learned that their request to collectively attend a conference was denied yet again by senior management.
But whatever happened is worth discussing, and without mapping The Team’s emotions, it’s likely you’d never have that conversation.
In fact, according to Diana Larsen and Esther Derby in their book Agile Retrospectives: Making Good Teams Great:
Creating a structured way for people to talk about feelings makes it more comfortable to raise topics that have an emotional charge. When people avoid emotional content, it doesn’t go away; it goes underground and saps energy and motivation.
Keep in mind that you can map more than just the team’s happiness across time. You could measure engagement, empowerment, satisfaction, autonomy, or any other emotion you want to consider.
This popular retrospective technique helps highlight your team’s emotions. To run Mad Sad Glad, simply set up three poster boards around the room titled Mad, Sad, and Glad. Ask everyone to privately write on sticky notes what they felt Mad about, what they felt Sad about, and what they felt Glad about. Once everyone is done brainstorming, have everyone place their sticky notes up on the board.
Then, ask your team questions like:
4Ls, originally created by Mary Gorman and Ellen Gottesdiener, is similar to Mad Sad Glad in that it asks your team to think through how it felt and write down responses on sticky notes. 4Ls asks your team:
After The Team is finished brainstorming, you can optionally split into breakout groups of 2-4 people to discuss the results, before reporting back to the entire team.
Sometimes you’ll run into situations in which certain members of your team push back on the use of Subjective Data. “Let’s focus on the hard facts instead of all this mushy feelings stuff,” they might say.
If that’s the situation you find yourself in (and even if not), you can use Team Radar to create Subjective Data that is more quantifiable.
Team Radar is a technique that uses individual numerical ratings to provide a sense of how the overall team is doing on various aspects of its work. For example, you might run a Team Radar based on the 5 Scrum Values of commitment, courage, focus, openness, and respect.
You’d ask everyone to think about how well they think The Team is doing on each of these five values, and then write down a rating from 1 (“poor”) to 5 (“excellent”) for each one.
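Tallying the ratings is simple: average each value’s scores across the team. A sketch with invented ratings from a hypothetical five-person team; the averages are what you’d plot on the radar:

```python
from statistics import mean

# Hypothetical 1-5 ratings from a five-person team, one list per Scrum value.
ratings = {
    "commitment": [2, 2, 3, 2, 1],
    "courage":    [5, 4, 5, 5, 4],
    "focus":      [3, 4, 3, 3, 4],
    "openness":   [4, 3, 4, 4, 3],
    "respect":    [4, 4, 5, 4, 4],
}

# One average per value: the five points of the radar diagram.
radar = {value: mean(scores) for value, scores in ratings.items()}
print(radar)  # e.g. courage rated high, commitment rated low
```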
You can then map out the responses in a radar diagram:
Once you’ve collected this data, ask everyone what they notice. Does anything surprise you? Two things jump out at me:
From this information, you can start diving in deeper on a particular topic. For example, you could spend the rest of the retrospective diving deep on how the team could increase its commitment going forward. Or you could talk about why The Team’s courage is so great and what has to be done to maintain that going forward.
But no matter what you focus on, The Team now has a shared understanding that it didn’t have before.
A lot of business data you’ll have access to won’t be objective, but will still be incredibly useful to bring to your retrospective.
Example: Annual Employee Survey Results
Suppose, for example, that you work at Company Alpha, which recently released the results of its Annual Employee Survey. The survey asked a number of questions around employee engagement, including:
When you took a look at the survey results, you found some great news: across the company, employees seemed to love working there! 🥰
But the survey also broke out responses by division, and it turns out that the division you worked for had the lowest level of employee engagement out of any division in the company.
This would be fantastic data to bring to your next retrospective. Ask the team: why? What are we doing that is causing engagement to be lower than elsewhere in the company? Is there anything under our control that we can change? If not, who should we talk to?
As with Objective Data, the sky is the limit in terms of what Subjective Data you can collect and use in your retrospectives. Use your imagination! And if you need some help, a great place to start is by looking at the various activities for Gathering Data over at Retromat, a website that maintains a list of various retrospective techniques.
Now you’ve seen the power of using data in your retrospective. It all sounds great, right? Data helps your team focus on what’s most important to discuss. And it gives your team a shared understanding of what actually happened. What could go wrong?
It turns out, a whole lot. In her course Powerful Retrospectives, Esther Derby identifies a number of “anti-patterns” to watch out for:
Now that the retro is over, you’ve got to actually improve something based on what you’ve learned! And unfortunately for us, change is hard. So that’s the bad news. What’s the good news? There is a simple three-step process you can follow to dramatically increase the odds your retrospectives will lead to true improvements.