Kaizen: Experiments in Scaling Agile
Agile teams must adapt to thrive, through retrospectives and process experiments. But what about organizations? Scaled agile teams? How can they experiment when the cost of change increases exponentially with the number of people affected?
We practice kaizen, or continuous process improvement, as we build Rally Software (formerly CA Agile Central), so we applied it to answer these questions. Below we share our failures, successes, and lessons learned, in the hope that you're inspired to experiment too!
Why experiment with teams of teams?
On our Release Train we have seven front-end development teams spread across three locations in the United States, all collaborating in a monolithic codebase. We have a strong base of scaled agile practices and ceremonies that are widely adopted and help ensure we build the right product the right way. Teams commit to a set of features each quarter and execute on them consistently.
Despite these practices, we found ourselves faced with several challenges. Specifically, we struggled with how we executed our product vision of revamping key aspects of Rally, like building new backlog pages and boards, crucial tools for any agile team.
Issues we faced included:
- Marketable releases were disjointed with no clear theme
- Teams felt disconnected between their work and customer value
- Long time from idea to market and corresponding low morale
- Lack of progress on the larger product vision
As these issues came up in retrospectives, we decided to run an experiment to address them. Our hypothesis was that if we organized teams cohesively around related groups of features, or “initiatives,” then we would see improvement in the above areas. We would also see an increase in value delivered, due to shorter time to market and more work flowing through the system. At the start of 2016 we began this experiment, and we called these initiative teams of teams “swarms”.
What are swarms?
Two to five teams worked together on a common initiative, acting both as individual teams, and collaborating together as a larger swarm team. Each individual team had a Product Owner, Scrum Master, several Developers, and a Tester. Each swarm had a Product Manager, Architect, UX designer, UX researcher, and Agile Coach. Swarms were given the autonomy to organize ceremonies and collaborate in ways that worked best for them. Some had standups with the entire initiative team twice a week, some moved their desks so all teams were sitting near one another, some had video hangouts open to easily hear and converse across locations, and some even combined planning meetings and other agile ceremonies.
We came together as a release train at our communal planning events, then met once a week to adjust the plan together and stay connected, so we still had regular opportunities to collaborate as a full Release Train.
To measure the success of our hypothesis, we tracked throughput and other data, and surveyed our teams to get their Net Promoter Score and other feedback on swarms. Whether it would succeed or fail, we would learn a lot!
Results round I
We worked in swarms for three quarters. During that time our feature delivery rate remained constant, both by count and by estimate. We then surveyed our release train to see how people felt about swarms.
Our NPS of -85% was horrible! We could see that swarms were hurting our agile culture. In the table below, each score is the net agreement for a statement (the percentage who agreed minus the percentage who disagreed), showing that employees disagreed with many of our hypothesis statements.
| Statement | Score |
|---|---|
| Would you recommend swarms to a peer? (NPS) | -85% |
| Helped us prioritize | -45% |
| Delivered customer value faster | -40% |
| Gave teams autonomy | -40% |
| Gave teams purpose | -25% |
| Helped teams grow | -25% |
| Busted knowledge silos | -20% |
| Gave teams focus | 0% |
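For readers unfamiliar with how an NPS like the -85% above is derived: on a 0-10 "would you recommend" scale, NPS is the percentage of promoters (9-10) minus the percentage of detractors (0-6). Here's a minimal sketch; the response values are hypothetical, not our actual survey data:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but toward neither bucket,
    so an NPS can range from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses to "Would you recommend swarms to a peer?"
responses = [2, 3, 5, 6, 6, 7, 8, 9, 4, 1]
print(nps(responses))  # 1 promoter, 7 detractors out of 10 -> -60
```

A heavily detractor-weighted distribution like this is how a score as low as -85% comes about: nearly everyone surveyed would actively discourage a peer from trying swarms.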
What we heard was that focusing on related features didn’t feel like focus at all. It led us to larger implementation plans that required extensive collaboration and cumbersome refactoring. To keep that many teams working on one initiative, implementation grew to as many as five related features in progress at once. That’s a lot of work in progress! Teams were frustrated as they stepped on each other’s toes and were forced to delete code or spend hours refactoring. Refactors broke the work of multiple teams at a time, creating ill will and stress that was detrimental to our culture. It felt like teams were moving ahead of customer feedback, which led to questions of how the work directly correlated with delivering customer value. Teams started to ask, “What is our focus? Where are we going?”
On the positive side, the chaos sparked new collaboration between roles. With work moving so fast, the entire swarm had to learn to communicate better and more frequently. Swarm meetings evolved and joint stand-ups emerged. We started to see developers discussing how they approached the code, defining best practices together, reviewing code across teams to share knowledge, and catching issues earlier.
The general consensus was that teams felt we did have better focus on delivering customer value, but that we lacked a clear plan to deliver that value. We struggled with collaboration across so many teams. Teams didn’t want to lose the sense of camaraderie gained by working closely with people they would not normally work with, but wanted smaller swarms with the autonomy to prioritize work and define success. Ultimately, the size and prioritization of the work left much to be desired.
So we did what agilists do best – we pivoted
Given the resulting challenges and feedback, we made several optimizations to our setup to help us plan for faster, more focused delivery, while maintaining the positive improvements to collaboration and best practices:
- Sourced swarms from teams in the same office
- Organized work to limit the number of teams in the same section of code
- Ensured all work delivered customer value as quickly as possible
- Fed that focus with smaller slices of work that could be experimented on and released faster
- Shortened our team planning horizon from 3 months to 6 weeks
Results round II
After a quarter of working with these adjustments, we ran another survey. Our NPS rose to -8%, and we saw improvement in every category!
| Statement | Previous score | New score |
|---|---|---|
| Would you recommend swarms to a peer? (NPS) | -85% | -8% |
| Helped us prioritize | -45% | -15% |
| Delivered customer value faster | -40% | 23% |
| Gave teams autonomy | -40% | 30% |
| Gave teams purpose | -25% | 62% |
| Helped teams grow | -25% | 23% |
| Busted knowledge silos | -20% | 0% |
| Gave teams focus | 0% | 38% |
Our NPS is still negative, so we have plenty of room for improvement. Most noticeably, we still scored low on our prioritization decisions, and we will look to improve there next. In the coming quarter, we are exploring changes in communication and earlier collaboration with development teams when determining priorities.
We continue to make new happy mistakes every day, and feel overall we are seeing continued improvements in morale and value delivered.
Now it’s time to run your own experiment and let us know how you kaizen!
If you are interested in how our swarms self-organized, or our planning and other ceremonies, stay tuned for our follow-up posts!
William Kammersell is a Sr. Product Owner at CA Technologies and loves building tools that help Teams realize their full potential using agile methods. He’s previously worn multiple agile hats, including Developer, Scrum Master, Product Manager, and Agile Coach. When he’s not a PI Planning hype man, he loves cooking, geocaching, and fixing whatever the kids have broken around the house.