Agile Development

The Power of Checklists

Surgeons, astronauts, airline pilots and software professionals. What do all these people have in common? Well, for one, all of these professionals are very highly trained – in most cases it takes many years to reach a point where you can practice without supervision.

But even highly trained, experienced professionals can have a bad day, and make the occasional mistake. The problem is, if you’re an astronaut, airline pilot, or surgeon, and you make a mistake, lives can be lost. Software development, perhaps, is somewhat less life-critical, most of the time.

Simple checklists can help reduce human error dramatically. Some reports suggest that surgical checklists introduced by the World Health Organization have helped reduce mortality rates in major surgery by as much as 47%. Neil Armstrong had a checklist printed on the back of his glove to ensure he remembered the important things as he made history as the first person to walk on the moon.

So if checklists can save lives, keep aircraft in the air, and help take people to the moon and back, why not utilize them to keep software projects on track, and help maximize the delivery of value, and minimize the risk of project failure?

Checklists help highly trained professionals focus on, and remember, the stuff that is important, and critical to the success of the endeavor they are working on. Unlike traditional process documentation, checklists are, by definition, lean, light and concise, so they work well with agile development. The point is that they don’t burden a professional with lots of extra things to remember, or try to be prescriptive about how things are done – experienced professionals can generally be trusted to do the job properly, and make the right decisions when circumstances demand it – a checklist simply acts as an “aide-mémoire” so nothing vital is forgotten.

So what does a software project checklist look like? Fortunately, some smart people have already done some work in this area, identifying a core set of checklists that can be applied to any software project, regardless of practices being applied, life-cycle being followed, or the technology or languages being used. They have been particularly effective when used in conjunction with agile approaches such as Scrum. These checklists are available in card form as Alpha State Cards, or as an iOS app.

You can learn more about the checklists by attending this free webinar.

Your feedback is welcomed!

Story Points, Lean Principles and Product Development Flow

A “story point” is a unit of measure commonly used to describe the relative “size” of a user story (an agile requirements artifact) when it is being estimated. In many cases, a Fibonacci-style sequence of 0, 1, 2, 3, 5, 8, 13, 21, ... is used during the estimation process to indicate this relative size. One of the reasons for this is to increase the speed with which an estimate is derived. For example, it’s more difficult to agree on the difference between a 5 and a 6 than it is to agree between a 5 and an 8.
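
As an aside, and purely as an illustration (this is not from the article), a simple team tool might snap a raw, gut-feel relative size onto the nearest value of such a scale; the scale and the tie-breaking rule below are assumptions of the sketch.

    # A minimal sketch, assuming a planning-poker style scale; the rounding rule
    # (snap to the nearest allowed value, ties round up) is an illustrative choice.
    POINT_SCALE = [0, 1, 2, 3, 5, 8, 13, 21]

    def to_story_points(raw_estimate: float) -> int:
        """Snap a raw relative-size guess to the nearest value on the scale."""
        return min(POINT_SCALE, key=lambda p: (abs(p - raw_estimate), -p))

    print(to_story_points(6))    # -> 5  (closer to 5 than to 8)
    print(to_story_points(6.5))  # -> 8  (a tie rounds up to the larger bucket)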

It is very common for agile teams to use story points to describe the “level of effort” needed to complete a user story. That is to say, a user story estimated at 5 points should be expected to take 5 times as long as a user story estimated at 1 point. On the other hand, it is also very common for people to include effort, risk and uncertainty (or complexity) in the definition of a story point; that is, an 8 is also 4 times riskier, and 4 times more uncertain, than a 2.

Mike Cohn has stated that it is a mistake to do this:

“I find too many teams who think that story points should be based on the complexity of the user story or feature rather than the effort to develop it. Such teams often re-label “story points” as “complexity points.” I guess that sounds better. More sophisticated, perhaps. But it's wrong. Story points are not about the complexity of developing a feature; they are about the effort required to develop a feature.”

So what do we do with this?

In Donald Reinertsen’s groundbreaking work “The Principles of Product Development Flow” we get a clear sense of how “batch sizes” affect our product development flow. Without going into much detail, it is fair to say that story points describe the “batch size” of a user story, where batch size is commonly defined as the “quantity of product worked on in one process step”.

Reinertsen describes a number of principles relating batch size to product development flow. For example, Principle B1, “The Batch Size Queueing Principle: Reducing batch size reduces cycle time”. Cycle time is how long a story takes – or, loosely, the level of effort to complete it – so smaller stories mean less effort than large ones. This principle would support the story point as a relative unit of effort, as Cohn suggests.
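
As a rough illustration of the queueing arithmetic behind this principle (my own sketch, not Reinertsen’s worked example), Little’s law says average cycle time equals work in progress divided by throughput, so feeding bigger batches into the same team lengthens the time each item spends in the system.

    # A minimal sketch of Little's law (L = lambda * W), which underpins the
    # batch-size/queueing argument; the throughput figure is an assumption.
    def average_cycle_time(work_in_progress_points: float,
                           throughput_points_per_day: float) -> float:
        """Average time a point of work spends in the system (W = L / lambda)."""
        return work_in_progress_points / throughput_points_per_day

    # Same throughput, bigger batches in progress -> longer cycle time.
    print(average_cycle_time(work_in_progress_points=5, throughput_points_per_day=2.5))   # 2.0 days
    print(average_cycle_time(work_in_progress_points=20, throughput_points_per_day=2.5))  # 8.0 days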

There are other principles that might not be so supportive of this perspective.

For example, looking ahead to the second batch principle, Principle B2, “The Batch Size Variability Principle: Reducing batch size reduces variability in flow”, we begin to see the challenges in treating this as a “relative level of effort comparison” only. If variability increases significantly with the size of the story, should we still “expect” a “5” to be the same as five “1”s? All we can really “expect” is more variability. Increased risk, schedule delays and unknowns are what we should “expect”.

Let’s look at a few more principles.

B3: The Batch Size Feedback Principle: Reducing batch size accelerates feedback.
In our Scrum process, for example, a large batch increases cycle time (B1), which means the product owner doesn’t see the work product as quickly and, in some cases, begins to lose faith or to worry, and increases pressure on, or interruptions of, the team. Fast feedback is a cornerstone of agile and of product development flow.

Large batches also run up against B7: The Psychology Principle of Batch Size: Large batches inherently lower motivation and urgency. We like to see things get done; it makes us happy. As humans it takes us longer to get going on a huge job, whereas a simple one we might just knock out and move on to the next.

Very little good comes from large batch sizes as they relate to product development flow. There are 22 batch-size principles, and none of them is supportive of large batches.

For example, some of them are:
B4: The Batch Size Risk Principle: Reducing batch size reduces risk.
B5: The Batch Size Overhead Principle: Reducing batch size reduces overhead.
B6: The Batch Size Efficiency Principle: Large batches reduce efficiency.
B8: The Batch Size Slippage Principle: Large batches cause exponential cost and schedule growth.

And so on. Large stories are bad. Really bad. So, what can we do about this in our agile software development process? The most important thing is that we cannot “expect” an “8” to take 4 times as long as a “2”. The Principles of Product Development Flow tells us that this is impossible.

We need to understand that, regardless of the size, they’re estimates, and not “exactimates” as The Agile Dad says. We need to accept that our estimate of a larger story is less accurate than our estimate of a smaller one.

We need to understand that, since we cannot commit to an unknown, it is unrealistic – and violates the lean principle of respect for people – to ask people to commit to large stories.

We need to understand how batches affect our queues, our productivity, predictability and flow. We should study and apply product development flow.

We need to understand that we’ll do better in our flow if we have smaller stories, so learning the skills and breaking down stories into smaller sizes will help. We may never allow anything larger than a “5” for example, into a sprint.

With that said, if we have “8”s in our backlog that, for some reason, can’t be broken down into smaller stories, we should compensate in our velocity load estimates for the sprint. For example, if we have an estimated velocity of 20 and all we have are stories estimated at “8”, we might want to schedule only 2 of those stories in the sprint, leaving some capacity margin for safety.
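
A minimal sketch of that kind of safety-margin loading (the 80% load factor is an illustrative assumption, not a figure from the article):

    # Pack stories into a sprint, but stop before the nominal velocity is reached
    # so large, high-variability stories leave a capacity margin for safety.
    def plan_sprint(story_points: list[int], velocity: int, load_factor: float = 0.8) -> list[int]:
        planned, committed = [], 0
        for points in story_points:
            if committed + points <= velocity * load_factor:
                planned.append(points)
                committed += points
        return planned

    # Velocity 20, backlog full of "8"s -> only two are scheduled (16 of 20 points).
    print(plan_sprint([8, 8, 8, 8], velocity=20))  # [8, 8]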

One thing to note, however, is that smaller stories might not be the best solution for geographically disparate teams. According to B17, “The Proximity Principle: Proximity enables small batch sizes”, it might be more effective to have the remote team work on a larger story, which they might then break into smaller stories locally for efficiency. We need to understand the economics of batch hand-off sizes and balance our efforts accordingly through reflection and adaptation.

Managing Non-Functional Requirements in SAFe

Managing non-functional requirements (NFRs) in software development has always been a challenge. These “system capabilities”, such as how fast a page loads, how many concurrent users the system can sustain, or how vulnerable we are to denial-of-service attacks, have traditionally been ascribed to “quadrant four” of Brian Marick’s agile testing quadrants. That is, these are tests that are technology-facing and which critique the product. That said, it has never been clear *why* this is so, as this information can be critical for the business to clearly understand.

In the Scaled Agile Framework (SAFe), NFRs are represented as a symbol bolted to the bottom of the various backlogs in the system. This indicates that they apply to all of the other stories in the backlog. One of the challenges of managing them lies in at least one aspect of our testing strategies: when do we accept them if they represent a "constant" or "persistent constraint" on all the rest of the requirements?

This paper advances an approach to handling NFRs in SAFe which promotes the concept that NFRs are more valuable when considered as first-class objects in our business-facing testing and dialogs. It suggests that the business would be highly interested in knowing, for example, how many concurrent users the system can sustain online. If you're not sure about this, just ask the business people around the healthcare.gov project! One outcome of this approach is that a process emerges which reduces our need to treat NFRs as a special class of requirements at all.

If we expose the NFR’s to the business, in a language and manner that would create shared understanding of them, we could avoid surprises while solving a major challenge.

Please consider the following Gherkin example:

Feature: Online performance

In order to ensure a positive customer experience while on our website

I’d like acceptable performance and reliability

So that the site visitor will not lose interest or valuable time

Scenario: Maximum concurrent signed-in user page response times

  • Given there are 1,000 people logged on
  • When they navigate to random pages on the site
  • Then no response should take longer than 4 seconds

Scenario: Maximum concurrent signed-in user error responses

  • Given there are 1,000 people logged on
  • When they navigate to random pages on the site for 15 minutes
  • Then all pages are viewed without any errors

These are pretty straightforward and easy-to-understand test scenarios. If they were managed like any other feature in the system, their creation, elaboration and implementation would serve as a ‘forcing function’ through which value, in the form of shared understanding between the business and the development team, would be gained. These directly executable specifications could also be automated so that they run against every build of the software. This fast feedback is very important to development flow: if we check in a change, perhaps a configuration parameter or a new library, that breaks an NFR, we know immediately what changed (and where to go look!). Something else that is very valuable (and often overlooked!) is that each build serves as a critical ongoing baseline for comparing performance and other system capabilities.
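
To make the idea of a directly executable specification concrete, here is a minimal sketch of step definitions for the first scenario above, assuming Python’s behave library (the article names Cucumber, SpecFlow and Fit/FitNesse; behave is my substitution). The base URL, page list and thread-pool load model are illustrative assumptions, not a production load-test rig.

    # A minimal sketch of step definitions for the first scenario above (behave).
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    from behave import given, when, then

    BASE_URL = "http://staging.example.com"            # assumed test environment
    PAGES = ["/home", "/search", "/account", "/help"]  # assumed sample pages

    @given("there are {count} people logged on")
    def step_logged_on(context, count):
        context.user_count = int(count.replace(",", ""))   # "1,000" -> 1000

    @when("they navigate to random pages on the site")
    def step_navigate(context):
        def visit(_):
            started = time.monotonic()
            urlopen(BASE_URL + random.choice(PAGES)).read()
            return time.monotonic() - started
        # One request per simulated user; collect the observed response times.
        with ThreadPoolExecutor(max_workers=50) as pool:
            context.response_times = list(pool.map(visit, range(context.user_count)))

    @then("no response should take longer than {limit} seconds")
    def step_check_times(context, limit):
        slowest = max(context.response_times)
        assert slowest <= float(limit), f"slowest page took {slowest:.1f}s"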

Any NFR expressed in this fashion becomes a form of negotiation. It makes visible economic trade-off possibilities that might not otherwise be well understood by the business. For example, if push came to shove, would there still be business value if, under sustained load, some page responses stretched to 5 seconds?

Another benefit of writing the test first is that the increased dialog about *how* we will implement the NFR scenario helps to ensure, almost by definition, that a "testable design" emerges.

This approach to requirements/test management is known as "Behavior Driven Development" (BDD) or "Specification by Example". The question of how and when to implement these stories in the flow sequence remains a challenge, and the remainder of this article addresses it directly, describing one solution in SAFe.

The recommendation is to implement the NFR as an executable requirement, using natural-language tools like Cucumber or SpecFlow (which support Gherkin) or Fit/FitNesse (which uses natural language and tables), as soon as it is accepted as an NFR in an iteration as part of the architectural flow. Create a Feature in the Program backlog that describes implementation of the actual NFR (load, capacity, security, etc.) and treat it like any other feature from that point on. Have the system team discuss, describe and build the architectural runway to drive the construction of the systems that will support the testing of them. Use the stories as acceptance against the architectural runway, if that is appropriate. If you do not implement the actual test itself right away (not recommended), at least wire it up to result in a “Pending” test failure (also not ideal, but I’ll describe that more in a moment). When the scenarios are running in your continuous integration (CI) environment, the story can be accepted. With regard to your CI, keep in mind that some of these tests, with large data sets or long up-time requirements, will take a while to complete, so it is very important to separate them from your fast-failing unit tests.
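
One hedged illustration of that separation (again assuming behave, with a hypothetical @slow tag and RUN_SLOW_TESTS switch that are not from the article) is to skip the tagged, long-running NFR scenarios in the fast CI job and run them in full in a dedicated, slower job:

    # environment.py -- a minimal sketch, assuming behave and a @slow tag on the
    # long-running NFR scenarios; RUN_SLOW_TESTS is an illustrative switch set
    # only in the dedicated (e.g. nightly) CI job.
    import os

    def before_scenario(context, scenario):
        if "slow" in scenario.tags and not os.environ.get("RUN_SLOW_TESTS"):
            scenario.skip("long-running NFR scenario; runs in the slow CI job")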

The next important step is to make these tests visible to the business and to the development team. One way to make them visible to the business, along with your other customer-facing acceptance tests, is to use a tool like Relish, which can publish them along with markup and images, as well as navigation and search.

Another recommendation in this approach is to build a “quality” dashboard using the testing quadrants described earlier. That is, each quadrant would report a pass/fail/pending status that could be used for governance and management of the system: when all quadrants are green, you can release. You can get quite creative with this approach and pull in external data sources, such as Sonar and CAST (code analysis and quality tools), and even integrate quadrant-three (Q3) exploratory testing results, for example. There is work to be done in this area; hopefully someone will write a Jenkins plugin or add this to a process management tool.
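
A minimal sketch of that kind of quadrant roll-up follows; the quadrant labels reflect the testing-quadrants description above, while the example statuses and the release-gate function itself are illustrative assumptions.

    # Aggregate per-quadrant test statuses into a simple release gate:
    # release only when every quadrant reports "pass".
    QUADRANT_STATUS = {
        "Q1 technology-facing, supporting": "pass",     # unit/component tests
        "Q2 business-facing, supporting":   "pass",     # functional and NFR scenarios
        "Q3 business-facing, critiquing":   "pending",  # exploratory testing
        "Q4 technology-facing, critiquing": "pass",     # performance, security
    }

    def ready_to_release(statuses: dict[str, str]) -> bool:
        return all(status == "pass" for status in statuses.values())

    print(ready_to_release(QUADRANT_STATUS))  # False -- Q3 results still pending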

Using this approach, you will always know what the status of your NFRs is, and you get the information you need in a timely fashion, while there is still time to react. This approach helps to eliminate surprises and removes the need for a major (unknown-cost) effort at the end of your development cycle. In the case above, even if these tests had been marked “Pending”, you’d have the knowledge that the status of these NFRs was unknown, which would increase trust and share the responsibility across the entire value stream.

Learn more about the Scaled Agile Framework: download SAFe Foundations.

Learn more about our Scaled Agile Framework Training and Certification Classes.

A Hammerhead Shark versus James Bond in Speedos

I’ve often found that most of the questioning about the worth of agile tends to come from the Project Management community. That’s not a criticism of PMs, but an acknowledgement that it’s probably more difficult for them to see how this agile concept can work.

Traditionally PMs have tended to need their eyes pointing in different directions – one on the day-to-day development activities of the team, the detailed planning and daily progress, and one on the bigger picture, the long-term roadmap and strategic planning side of a project. And, unless you’re a Hammerhead Shark, this is always going to be a tricky feat.

The trouble with agile, or more accurately, the trouble with some people’s interpretation of agile, is that it can be seen as an excuse to focus only on the tactical side of planning, which leaves PMs wondering what happens to all the stuff their other eye is usually pointing at.

So does being agile really mean ignoring the high level strategic side of managing and planning a project? Will the scent of burning Prince2 manuals soon pervade?

Fortunately this is not what agile means at all – in fact Scrum, which we all know and love, is pretty keen to remind us that we should still do release planning, risk management, and all those important things, it just doesn’t presume to tell us how to do them (in much the same way as it doesn’t tell us how to breathe, eat, sleep or do any other number of bodily functions we should still be doing whilst we’re Scrumming). What we are left to figure out for ourselves, as fully capable agile dudes, is how to ensure that we can stay agile for the long haul, which means having a sustainable and scalable approach to agility.

So how does that work? Is it really possible to add the governance, compliance, risk management and high level planning elements of managing a project to an agile approach without losing the agility? (Let’s hope so, for agile’s sake, because it is clearly and undeniably necessary).

Well, yes, of course, it is possible – otherwise agile just wouldn’t work. But it has to be done in a certain way. Let’s face it – you wouldn’t send James Bond out in a full suit of armour, a wetsuit, padded ski suit and a parachute every time he went on a mission. Not only would it be a tad cumbersome, it would also be unnecessary (given that sometimes he gets away with just a small pair of Speedos). What you would do is give him exactly the right amount of kit required for a given situation. The same applies to agile. What’s needed is exactly the right amount of governance, planning and compliance for a given project – no more, no less.

So hang on – what have we got so far? James Bond in Speedos and a hammerhead shark. Which one is the PM? Well in a way it’s both, and neither. Confused? Good. Me too.

And I guess that is the point. A PM’s job is not easy and while they would love to be 007 in speedos (figuratively) – agile, unencumbered, able to work quickly and focus on getting the job done, they still need that hammerhead with one eye on all the ‘other stuff’.

I don’t think we can ever get rid of all that ‘other stuff’. It’s necessary and important. But we can minimise it so that only the right amount of ‘other stuff’ is put in place and we do what NEEDS TO BE DONE, building up from a minimal set (should I mention speedos again?) rather than starting out with everything, including the wetsuit and the parachute. This then, in essence, is the key to disciplined agile.

The PM still needs and will always need to be able to look at both the strategic and tactical side of a project, but with this approach maybe they need be less of a hammerhead. With agile self-organising teams the tactical planning side of a project is very much a team effort and, along with release and sprint burn-downs, daily stand-ups and sprint retrospectives, the tactical management is much less of an overhead.

So maybe, now, a normal shaped head will do, with just two eyes and some kind of innovative mechanism that will turn that head, allowing the PM to focus on the strategic but throw a glance towards the tactical when necessary.

Or maybe I’m just sticking my neck out.

Learn more about lightweight essential governance for agile projects.

Read the article: Agile and SEMAT - Perfect Partners.

Alpha State Cards in Practice: Experiences from an Agile Trainer

Having had the opportunity to incorporate the Alpha State Cards within some recent training classes I have been delighted to see the enthusiastic reaction they receive. The cards are great at bringing what can sometimes be complex and difficult concepts to life and making them of real practical value to people.

In each Agile training class I deliver now, I use the cards to support an exercise that challenges the groups to determine the current status of a sample software development project. In the discussions afterwards, the groups have consistently reported:

  • They were really impressed by how the cards enabled more effective team communication and collaboration
  • The cards demonstrated in both a tangible and visual way that significant progress was being made, as the Alpha State Cards made it quick and easy to cut through the “noise” to the heart of what mattered
  • The whole experience showed how quick it was to come to a useful conclusion and each group found it an enjoyable way to work

When asked how the cards might support other team activities the feedback included the following:

  • The cards offer a great set of objectives to assist with task identification, task planning and  prioritization activities
  • The states on the card provide a basis for simplifying governance objectives and evaluation criteria, as well as making the whole process leaner
  • Revisiting the card abacus during iteration reviews to demonstrate iteration progress from a state perspective
  • They recognize the ability for the cards to advance and retreat along the abacus, to reflect significant change impacting one or more Alphas

I get a lot of satisfaction from using the cards in these courses as they really act to break down the barriers early in the course by helping the attendees to relax and to build their confidence working collaboratively together. They also gain a sense of experiencing something not only new, but highly effective too, and something they can easily apply back on their own projects to add value and improve ways of working.

For me as a trainer and coach, the cards significantly contribute towards an enhanced learning experience, and in so doing, increase the knowledge retention for the key learning points. The bottom line is it’s best to use the cards in a group situation to truly appreciate their value.

The Alpha State Cards, games and further guidance are available here if you want to try them yourself.

There is also now an Alpha State Card LinkedIn group, which is a great place to share ideas and ask questions about using the cards.

Balancing Agility with Governance

It’s not often that I get the opportunity to help facilitate at an agile conference, but yesterday I did just that. I had the pleasure of helping Ian Spence deliver his session at RallyON 2013 in London. The theme of the session was about balancing the goals of agility with the need for governance, compliance and standards.

Most of us by now are familiar with the agile manifesto, and how it states that “while there is value in the items on the right, we value the items on the left more” – i.e. individuals and interactions are valued more than processes and tools, but there is still some value in the latter. The point of the session was that we need to achieve a balance between agility and other things like governance, compliance and standards – things which are very often thought of as conflicting with agile and therefore “the enemy”! This is especially true in large organisations. But people whose job is to implement governance regimes, ensure compliance, and see that standards are followed, are also people – people that agile development teams need to interact with.

Anyway, theory over, it was time to play some games – card games to be precise. Ian introduced Alpha State Cards, a simple tool for understanding project health and progress, by focusing on underlying performance indicators – indicators that are essential to all software endeavors regardless of method, process, life-cycle or practices being followed.

We only played a couple of these games: the first was using the cards to understand the state of an example project, the second to determine the required state of key project indicators before a team would be ready to start sprinting. But it was enough to see that a simple lightweight card-based approach could be a useful addition to one's agile toolkit, and help facilitate conversations between different stakeholders in an entirely method-neutral manner.

Ian then showed us how, using the cards to create checkpoints, a lean and lightweight governance model can be quickly constructed: one that is based on objective outcomes, rather than documentation.

The games, and the cards, are both available here if you want to try them out.

What does it mean for the enterprise to be agile?

Closely allied to establishing the business objectives in adopting agile practices, an understanding of what it means for an enterprise to be agile should be clear to everyone in the enterprise. This post summarizes what it means for an enterprise to be agile from the perspective of the senior executives and stakeholders.

“Agile” is a set of behaviors that help a business achieve its objectives. The most prevalent agile practice, Scrum, defines a set of project management-based behaviors that help practitioners (especially software practitioners) achieve those objectives. However, little is said in Scrum about how to be agile outside of the immediate environment of the Scrum teams. Team agility does not automatically engender enterprise agility.

Where a so-called value chain starts and ends will vary considerably from one enterprise to another, according to factors such as size, business area, degree of specialization, and the vendors and suppliers that form part of the larger value chain (or even ‘ecosystem’). However, this is a bit like a “5 Whys” analysis: you have to recognize where it makes sense to stop. Mostly, a company’s corporate boundary makes a natural place to stop (though ideally the whole external supply chain would be synchronized and agile). However, this may be too great a challenge for many organisations to begin with, so smaller organizational units and business units within the enterprise may have to suffice for the initial vision and implementation.

As a reference point, for a hardware-based product company, the groups that might be considered for inclusion in the scope for enterprise agility could look like: Sales, Marketing, HR, Executive Management, Software Engineering, Hardware Engineering, Product Definition, Product Releasing, Product Testing, Technical Documentation, Project Management, Programme Management, Quality Assurance. Where any of these groups are excluded, there will probably be a detrimental reduction in overall agility.

Here are some of the major characteristics that an agile enterprise will typically exhibit, at the ‘manager’ and ‘senior executive’ levels (some apply more to some groups than others):

  • Commitment through close involvement and engagement with agile teams
  • Removal of organisational impediments and issues
  • Flexibly determining release content and being responsive to change: based on sustainable organisational capacity and economic value (including cost of delay); taking into account (test) results and feedback
  • Be Servant Leaders: inspire, motivate, lead by example: including: allowing teams to self-organise - “Self-organisation does not mean that workers instead of managers engineer an organisation design. It does not mean letting people do whatever they want to do. It means that management commits to guiding the evolution of behaviours that emerge from the interaction of independent agents instead of specifying in advance what effective behaviour is.” – Philip Anderson, The Biology of Business
  • Demonstrating trust, especially in avoiding delving into (and controlling) the detail: but note also that trust is engendered by successful delivery
  • Focusing on throughput of (valuable) work rather than on 100% Resource Utilization
  • Recognizing the differences between repeatable and highly variable knowledge work (avoid purely “widget engineering”)
  • Evolving legacy practices into new ones (e.g. by evaluating and challenging old Ways of Working): powerful corporate forces can be afoot, so this is not easy.

Leffingwell’s Scaled Agile Framework provides a suitable structure for scaling Scrum to enterprise levels and fills in on many of the executive roles and functions required for success in agile at the enterprise level.


The Rudiments of Scalability

What does scalability mean, and when do you have to consider it?

The scaling of software and systems agility is still very much in the early stages of evolution, such that there doesn’t seem to be any clear consensus on exactly what is meant by scaling, or rather, where that starts and ends.

In this post, I’ll use Scrum as my “baseline” and reference agile way of working, since that is where by far the largest part of my experience lies, but you can substitute other agile paradigms.

Scalability in this context refers to the innate ability to apply single team-based agile techniques in successively larger and interdependent organisational units. I often think of this as having two dimensions:

  • Teams working on the same product, project or programme. Typically, one or two teams (and at most three) will work on a product owned by a single Product Owner, and Scrum works entirely natively in that situation. But where several products within a project or programme have dependencies on each other, a degree of coordination is required between them to ensure overall project/programme/system integrity. This can be to minimise waste, for example by ensuring that dependent components are developed just in time for when they are needed (no sooner, and obviously no later). It can also be to satisfy technical dependencies (especially in component-based development teams), part of the very essence of project, product and programme development.
  • Teams working on disparate products but within the same organisation. It may also be highly desirable to be able to use broadly similar agile practices across many (or all) parts of the development organisation. For example, this reduces ‘learning curve’ overheads when teams are formed or rearranged. It also helps contain and manage the cost of operational and tool support in large organisations, an important consideration for any business, even though we should beware of the tail wagging the dog.

There is no question that Scrum can be applied to a large number of teams. In small products, projects and programmes, there are typically a “handful” of teams that need to collaborate, and the original concept of Scrum-of-Scrums works perfectly well. To some extent, this is also further scalable, in both dimensions described above. Things start to get a lot trickier, however, beyond roughly 6-12 teams. This is partly because I have never come across much detail on how Scrum-of-Scrums should work that isn’t bespoke to a particular organisation or scenario. It is an even bigger headache, though, to manage an ever-increasing number of teams that need to ‘cross-coordinate’. Examples of Scrum-of-Scrum-of-Scrums (and beyond) have been tried and made to work (or so I have been led to believe), so this might handle up to 50 teams at its best, but it introduces hierarchies of ‘reporting’ and function that generally have pretty severe disadvantages too.

This returns me to the original question of what we mean by scaling agility. To many organisations, scalability to 50 teams would indeed seem ambitious and round about the upper limit. My experience has been with organisations like Nokia and Symbian, where we were dealing with a phone product development of 200+ teams, at peak. This can truly be called Enterprise Agility. Scrum-of-Scrums begins to look a little lightweight in this context.

Without some method of scaling for the enterprise, it will be pretty well impossible to achieve a coordinated system (or enterprise or system-of-systems etc) that always runs, that delivers value to stakeholders in small increments, that is responsive to change, delivers just in time (only) and minimises the Cost of Delay.

Unsurprisingly, businesses that I have worked with over the years have all placed great importance on the predictability of schedule and the predictability of quality (whether they are using agile methods or not). After all, these characteristics are the basis of survival in an increasingly fierce and competitive marketplace.

In a highly complex product being built using changing/emergent technology and with frequently changing priorities for features, it is still reasonable to set business deadlines based on forecasts of market need, economic value, and quantified Cost of Delay. It isn’t realistic to expect dozens (or hundreds) of teams simply to self-organise; it’s a risky approach almost bound to fail. The need for a way to combine Scrum with higher-level release and portfolio planning becomes vital if we want to preserve the benefits that agile can bring (e.g. small increments of value at product quality, fast feedback, ability to react to change).

A picture summarising the model for scaling agility defined by Dean Leffingwell can be found in many sources, including his blog. This was the basis of the model in use at Nokia. In a later post, I intend to talk about the details of the model, with some modifications and minor enhancements we adopted. For example, the model allows for just enough planning (and at the right level) to define the next release, and beyond that in an increasingly less detailed way.

It quickly becomes clear that any adoption of scaled agile practices requires commitment from more parts of the organisation than small-scale Scrum does. This raises a number of questions for the enterprise aspiring to reap agile benefits. Some of these arise for any ‘size’ of agile adoption but require even clearer answers in an enterprise agile transformation:

  • Will all affected parts of the product development chain adopt agile behaviour? It is so often the case that, at best, senior executives endorse the practice of agile techniques, but do not consider that it is something that will (or even should) affect their own behaviour. It can be regarded by leaders as something ‘they’ do, but obviously we, as the decision-makers, need not be constrained by its patterns. A little knowledge is dangerous (in the wrong hands). Having learnt that agile practices embrace change (changing priorities and product scope), there can be a tendency to assert too many changes on the development organisation and assume there is no associated cost at all. Some change is of course healthy, but too much change (let’s call it ‘churn’) has a high cost and is also indicative of a business in chaotic mode.
  • What are the business objectives in adopting agile practices?
  • What does it mean for the enterprise to be agile?
  • What is the initial willingness of the workforce to become agile?
  • Will it be acceptable for the product architecture to emerge or is there a light (but significant) steering design required?
  • How should large product teams be organised? In sufficiently large and complex products, is it feasible to have feature-based teams, or do component-based teams become a necessity in practice?
  • Does the enterprise have an organisational structure that will support scaled agility? For example, how collaborative are those parts of the organisation dealing with system/product testing, requirements definition, code-line integration (configuration management)?
  • How robust are the underlying Technical Practices (for designing, building and testing)? Scrum itself quickly reveals flaws in how teams actually develop software, but these become magnified exponentially when scaling agile practices.

In subsequent posts, I hope to tackle a number of these issues in more detail.

The Achilles Heel of Agile

According to at least one source, Achilles, the hero of the Greeks in the Trojan War, was said to have been made invulnerable when his mother, Thetis, dipped him in the Styx, the great river of the Underworld, immersing all but the heel by which she held him.  From this we get the expression the "Achilles Heel", which implies that something is not only a source of weakness, but that the cause of the weakness is also the source of great strength.

Agile has an Achilles Heel: the Product Owner.  The involvement of an empowered, knowledgeable, committed and engaged Product Owner is a source of tremendous strength and benefit of the agile approach; in fact, the Product Owner is the single indispensable person on the project, without whom nothing can be done. Like Achilles, the Product Owner seems to need to have powers beyond that of mortals: they must negotiate consensus with all the stakeholders in the business, they need to be almost omniscient in their knowledge of the business domain and the goals of the project, and they must be unfailing in their vision of the solution to be developed.  Oh yes, they also need to be available any time the development team needs them for feedback.  When presented in these terms, the Product Owner seems just as mythical a being as Achilles himself.

The reality, on real projects, is that it is almost impossible for one person to do all these things well.  If the Product Owner is a skilled negotiator of consensus among stakeholders they are not going to be in the team room all the time.  If they have a strong vision for the solution that vision is likely needed elsewhere as well. Visionaries also sometimes have a great "global view" but are sketchy on important details.  If they don't have a compelling vision for the solution they may vacillate or even flip-flop on important decisions.  If they lack important negotiating skills they may tell the team one thing only to have their decisions overruled later by stakeholders who were not involved.

The reality is that there is often no one person who can fill all the roles the Product Owner must fill. In these cases you will have to find different people to play different aspects of the Product Owner role.  The best Product Owner is probably one who can negotiate consensus, make effective and timely decisions based on that consensus, and be available on a predictable basis to work with the team (though not all the time). When she lacks understanding of the details of how something should work, she will enlist the support of the right people and get them engaged as needed. The grand vision is important but as long as it is provided by one of the stakeholders the Product Owner need not be the "big picture visionary".

The most important thing the Product Owner can provide the team is consistency, in participation, and in direction. If she can do that, she need not be some mythical hero. As long as the right people can be engaged at the right time, and as long as the guidance provided by the Product Owner accurately reflects the desires of the stakeholders and does not fluctuate over time, the agile team can make great progress toward delivering a solution that meets the needs of the business.

Warming Up to Agile

Successful agile development introduces a lot of new things, or at least things that are often new to the people starting to adopt an agile approach.  Working with a backlog, engaging a product owner, and utilizing new techniques for planning and estimation often obscure the fact that in order to work in an agile way you have to be highly proficient at what we might call foundational software engineering. Some agile approaches talk about "sprinting", and perhaps this is a good metaphor.  Before you can sprint, you need to be in reasonable shape and you need to warm up a bit in order to avoid injury. Agile software development – delivering working, tested code in short time increments – requires some pretty solid skills in a few specific areas. For the moment I'll focus on a couple of key areas we often find wanting when working with teams just starting their agile journey.

In order to develop in short cycles, let's say 2 weeks, you are going to need to build software quickly - certainly daily, and ideally continuously.  Concretely this means that builds need to be automated and executable anytime someone needs to do a build.  If you have to ask someone to do the build, and if it requires manual effort, you simply won't be able to build effortlessly. You need to be able to run a build without thinking about it, because you will have a lot of other more important things to think about.  If you're thinking about "going agile" and don't yet have an automated build capability, focus on that first.

Once you've got the build automated, you'll need to automate tests as well.  When we say that an agile approach focuses on producing working software, the evidence that it works is provided by tests. If you can't run at least unit tests every time the software is built you will never keep up with the demands on testing.  Realistically speaking, just automating unit tests is not enough; you need to be able to do integration, system and regression testing for every build, at least in theory, but unit testing is a good place to start.  In addition, all tests other than User Acceptance and Usability tests should be automated.  Building up from automated unit tests, you should be able to test everything in the system except the User Interface through programmatically-invoked tests.  If you don't do this, you will never keep ahead of the testing workload and your test coverage will remain weak.  If you don't have automated testing, runnable after every build, you will be able to "play" at being agile but you will never really achieve the kind of quality that you will need. At some point you will suffer from lack of testing and something important will blow up in production and that will be the end of your agile experiment.  As with builds, if you are not automating testing now you really need to get your testing act together before you get very far down the agile path.

While you're working on automating your builds, there is another skill in which you'll want to be proficient: refactoring.  There are a number of good books and web resources about this topic, but knowing is different than doing.  In short, you'll want to be comfortable with making significant changes in the structure of the code from one build to the next, done in concert with changes to the unit tests, so that the build and automated tests don't break from one build to the next.  You won't have the time to step back and do "big design" efforts, so you're going to have to get comfortable with making small changes that, over time, add up to big changes, all while continuing to do other development. This takes time and practice, and once you start sprinting you will either have to have it mastered or get good at it very quickly.

This leads to another foundational skill: source code management (SCM).  First, you need to use a source code control system to keep track of changes so that you can roll back to prior versions in the event you change your mind about an approach.  You need to be able to keep work that you are doing out of the build until everyone is ready for it.  You need to version everything, including unit tests.  Sometimes, when undertaking major refactoring, you may need to branch the code, and then merge it later once the changes have been proven. This is all really basic SCM, but it is essential to being able to achieve reasonable velocity and throughput for the team.  In fact, I've put things a bit out of order, because it's really virtually impossible to automate the build or refactor without solid mastery of SCM.

These are by no means the only things that you'll need to master in order to work in an agile way, but they are the foundations on which the rest of agile development is built. Teams that have not gained proficiency in these foundational skills need to focus on doing so before embarking further down the agile path.  There will be plenty of other new things to learn along the way.
