The Power of Checklists

Surgeons, astronauts, airline pilots and software professionals. What do all these people have in common? Well, for one, all of these professionals are very highly trained – in most cases it takes many years to reach a point where you can practice without supervision.

But even highly trained, experienced professionals can have a bad day, and make the occasional mistake. The problem is, if you’re an astronaut, airline pilot, or surgeon, and you make a mistake, lives can be lost. Software development, perhaps, is somewhat less life-critical, most of the time.

Simple checklists can help reduce human error dramatically. Some reports suggest that surgical checklists introduced by the World Health Organization have helped reduce mortality rates in major surgery by as much as 47%. Neil Armstrong had a checklist printed on the back of his glove, to ensure he remembered the important things as he made history as the first person to walk on the moon.

So if checklists can save lives, keep aircraft in the air, and help take people to the moon and back, why not use them to keep software projects on track, maximize the delivery of value, and minimize the risk of project failure?

Checklists help highly trained professionals focus on, and remember, the stuff that is important and critical to the success of the endeavor they are working on. Unlike traditional process documentation, checklists are, by definition, lean, light and concise, so they work well with agile development. The point is that they don’t burden a professional with lots of extra things to remember, or try to be prescriptive about how things are done – experienced professionals can generally be trusted to do the job properly, and to make the right decisions when circumstances demand it – a checklist simply acts as an “aide-mémoire” so nothing vital is forgotten.

So what does a software project checklist look like? Fortunately, some smart people have already done some work in this area, identifying a core set of checklists that can be applied to any software project, regardless of practices being applied, life-cycle being followed, or the technology or languages being used. They have been particularly effective when used in conjunction with agile approaches such as Scrum. These checklists are available in card form as Alpha State Cards, or as an iOS app.

You can learn more about the checklists by attending this free webinar.

Your feedback is welcomed!

Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank over the last 18 months helping them introduce new methods of working, and IBM’s Collaborative Lifecycle Management tooling to support these methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA’s Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank’s IT culture away from disparate production of Microsoft Word documents and Visio diagrams towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices, the creation and delivery of training material, guidance documentation in text and video form, the founding of a Community of Practice (CoP), advanced training and development programmes for Champions within the CoP, mentoring support for project teams adopting the new methods and tools, and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been to roll out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. The Unified Modelling Language (UML) already offers the deployment diagram as a means to show how software components execute upon middleware and hardware, plus the other elements that are required to deliver a fully working system. Critics argue that the UML deployment diagram offers little more than pictures, lacking rich enough semantics for tool-based validation; furthermore there is insufficient information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous to UML models in that they capture the elements of an infrastructure design, the relationships between elements, and views of the design via diagrams. To achieve this a different modelling language is used, the Topology Modelling Language (TML).

The method which underpins the use of TML requires that several topologies are created when considering deployment architectures for a system, with each topology refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology and its role is two-fold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones) within which Nodes sit that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. This stack may already exist or may still need to be specified and provisioned, but either way this is a level of detail that does not need to be considered while the deployment context is being determined. And to help determine the system context, actors may be included in the topology to maintain focus on scenarios of use.

An example logical topology is shown in the image below:

You can see two locations on the diagram, ‘Internet’ containing a primary actor and ‘Data Centre’ with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML provides a feature whereby technology units may be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction; the benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level with a focus on performance, robustness, throughput and resiliency. Design details such as processor architectures, operating system versions, inter-process messaging solutions and the like should be deferred for now.

An example physical topology is shown in the image below:

In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains two nodes and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around the name.

Because the Physical Topology consists mainly of conceptual units it is not yet a complete design, so one or more Deployment Topologies are created to finalise the design:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level, a single conceptual server may be realised by several concrete servers to represent a load balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real solution, whether that be Windows or Red Hat Linux or whatever.

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank’s IT estate, as well as strict constraints on the relationships which may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs much more quickly and with greater quality than before, resulting in faster progress through internal governance and less re-work. Furthermore, all the bank’s Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

Story Points, Lean Principles and Product Development Flow

A “story point” is a unit of measure commonly used to describe the relative “size” of a user story (an agile requirements artifact) when it is being estimated. In many cases, a Fibonacci sequence of 0, 1, 2, 3, 5, 8, 13, 21, ... is used during the estimation process to indicate this relative size. One of the reasons for this is to increase the speed with which an estimate is reached. For example, it’s more difficult to agree on the difference between a 5 and a 6 than it is to agree between a 5 and an 8.
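To make the widening gaps on the scale concrete, here is a minimal sketch (my illustration, not from the original article) of snapping a raw relative-size estimate to the nearest point on the Fibonacci scale:

FIB_SCALE = [0, 1, 2, 3, 5, 8, 13, 21]

def to_story_points(raw_estimate: float) -> int:
    # Return the Fibonacci point value closest to a raw relative size.
    return min(FIB_SCALE, key=lambda p: abs(p - raw_estimate))

print(to_story_points(4.2))   # -> 5
print(to_story_points(6.0))   # -> 5: there is no "6" on the scale, forcing the 5-or-8 discussion
print(to_story_points(10.0))  # -> 8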

It is very common for agile teams to use story points to describe the “level of effort” to complete a user story. That is to say, a user story estimated at 5 points should be expected to take 5 times as long as a user story estimated at 1 point. On the other hand, it is also very common for people to include effort, risk and uncertainty (or complexity) in the definition of a story point. That is, an 8 is also considered 4 times riskier, and 4 times more uncertain, than a 2.

Mike Cohn has stated that it is a mistake to do this:

“I find too many teams who think that story points should be based on the complexity of the user story or feature rather than the effort to develop it. Such teams often re-label “story points” as “complexity points.” I guess that sounds better. More sophisticated, perhaps. But it's wrong. Story points are not about the complexity of developing a feature; they are about the effort required to develop a feature.”

So what do we do with this?

In Donald Reinertsen’s groundbreaking work “The Principles of Product Development Flow” we get a clear sense of how “batch sizes” affect our product development flow. Without going into much detail, it is fair to say that story points describe the “batch size” of a user story, where batch size is commonly defined as the “quantity of product worked on in one process step”.

Reinertsen describes some of the principles of how batch sizes relate to our product development flow. For example, Principle B1, “The Batch Size Queueing Principle: Reducing batch size reduces cycle time”. Cycle time is how long it takes to complete the story: smaller stories require less effort than large ones. This principle would support the story point as a relative unit of effort, as Cohn suggests.
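As a hedged illustration of B1 (my sketch, not Reinertsen’s model), suppose items are only released when the whole batch containing them is finished. At a fixed processing rate, larger batches push out the average day on which any given item is delivered:

def average_delivery_day(total_items: int, batch_size: int, rate_per_day: float = 1.0) -> float:
    # Average day on which an item is delivered, assuming items are only
    # released when the whole batch containing them is finished.
    days = []
    for item in range(total_items):
        batch_index = item // batch_size  # which batch the item falls into
        days.append((batch_index + 1) * batch_size / rate_per_day)
    return sum(days) / total_items

for batch in (1, 4, 8, 40):
    print(f"batch size {batch:2d}: average delivery on day {average_delivery_day(40, batch):.1f}")
# -> batch size 1: day 20.5; 4: day 22.0; 8: day 24.0; 40: day 40.0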

There are other principles that might not be so supportive of this perspective.

For example, looking forward to the second batch principle, Principle B2, “The Batch Size Variability Principle: Reducing batch size reduces variability in flow”, we begin to see some of the challenges in treating this as a “relative level of effort comparison” only. If we have significantly increased variability in the size of the story, should we still “expect” a “5” to be the same as 5 “1’s”? All we can really “expect” is more variability. Increased risk, schedule delays and unknowns are what we should “expect”.

Let’s look at a few more principles.

B3: The Batch Size Feedback Principle: Reducing batch size accelerates feedback.
In our Scrum processes, for example, B1 (increased cycle time) means that the product owner doesn’t see the work product as quickly and, in some cases, begins to lose faith or worry, and increases pressure on, or interruptions of, the team. Fast feedback is a cornerstone of agile and of product development flow.

Large batches lead to B7, The Psychology Principle of Batch Size: Large batches inherently lower motivation and urgency. We like to see things get done; it makes us happy. As humans we take longer to get going on a huge job, whereas a simple one we might just knock out and move on to the next.

Very little good comes from large batch size as it relates to product development flow. There are 22 batch-size principles, and none of them is supportive of large batches.

For example, some of them are:
B4: The Batch Size Risk Principle: Reducing batch size reduces risk.
B5: The Batch Size Overhead Principle: Reducing batch size reduces overhead.
B6: The Batch Size Efficiency Principle: Large batches reduce efficiency.
B8: The Batch Size Slippage Principle: Large batches cause exponential cost and schedule growth.

And so on. Large stories are bad. Really bad. So, what can we do about this in our agile software development process? The most important thing is that we cannot “expect” an “8” to take 4 times as long as a “2”. The principles of product development flow tell us that this expectation is unrealistic.

We need to understand that, regardless of the size, they’re estimates, and not “exactimates” as The Agile Dad says. We need to accept that our estimate of a larger story is less accurate than our estimate of a smaller one.

We need to understand that, since we cannot commit to an unknown, it is unrealistic, and violates the lean principle of respect for people, to ask a team to commit to large stories.

We need to understand how batches affect our queues, our productivity, predictability and flow. We should study and apply product development flow.

We need to understand that we’ll do better in our flow if we have smaller stories, so learning the skills to break down stories into smaller sizes will help. We may decide, for example, never to allow anything larger than a “5” into a sprint.

With that said, if we have “8”s in our backlog that (for some reason) can’t be broken down into smaller stories, we should compensate in our velocity load estimates for stories in a given sprint. For example, if we have an estimated velocity of 20 and all we have are stories that have been estimated as “8”, we might want to schedule only 2 of these stories in the sprint, leaving us some capacity margin for safety, as in the sketch below.
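Here is a minimal sketch of that capacity-margin idea (illustrative only; the margin of 4 points comes from the example above, it is not a rule):

def fill_sprint(velocity: int, story_points: list, margin: int = 4) -> list:
    # Greedily select stories without exceeding velocity minus a safety margin.
    budget = velocity - margin
    selected = []
    for points in story_points:
        if sum(selected) + points <= budget:
            selected.append(points)
    return selected

backlog = [8, 8, 8, 8]          # stories that, for some reason, cannot be split further
print(fill_sprint(20, backlog)) # -> [8, 8]: two stories scheduled, 4 points of margin left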

One thing to note, however, is that smaller stories might not be the best solution for geographically disparate teams. According to B17, The Proximity Principle: Proximity enables small batch sizes, it might be more effective to have a remote team work on a larger story, which they might then break into smaller stories locally for efficiency. We need to understand the economics of batch handoff sizes and balance our efforts accordingly through reflection and adaptation.

Managing Non-Functional Requirements in SAFe

Managing non-functional requirements (NFRs) in software development has always been a challenge. These “system capabilities”, such as ‘how fast a page loads’, ‘how many concurrent users the system can sustain’ or ‘how vulnerable to denial-of-service attacks can we be’, have traditionally been ascribed to “quadrant four of the agile testing quadrants” of Brian Marick. That is, these are tests that are technology facing and which critique the product. That said, it has never been clear *why* this is so, as this information can be critical for the business to clearly understand.

In the Scaled Agile Framework (SAFe), NFRs are represented as a symbol bolted to the bottom of the various backlogs in the system. This indicates that they apply to all of the other stories in the backlog. One of the challenges of managing them lies in at least one aspect of our testing strategies: when do we accept them if they represent a “constant” or “persistent constraint” on all the rest of the requirements?

This paper advances an approach to handling NFRs in SAFe which promotes the concept that NFRs are more valuable when considered as first-class objects in our business-facing testing and dialogs. It suggests that the business would be highly interested in knowing, for example, how many concurrent users the system can sustain on-line. If you’re not sure about this, just ask the business people around the healthcare.gov project! One outcome of this approach is that we see a process emerge that reduces our need to treat them as a special class of requirements at all.

If we expose the NFR’s to the business, in a language and manner that would create shared understanding of them, we could avoid surprises while solving a major challenge.

Please consider the following Gherkin example:

Feature: Online performance

In order to ensure a positive customer experience while on our website

I’d like acceptable performance and reliability

So that the site visitor will not lose interest or valuable time

Scenario: Maximum concurrent signed-in user page response times

  • Given there are 1,000 people logged on
  • When they navigate to random pages on the site
  • Then no response should take longer than 4 seconds

Scenario: Maximum concurrent signed-in user error responses

  • Given there are 1,000 people logged on
  • When they navigate to random pages on the site for 15 minutes
  • Then all pages are viewed without any errors

These are pretty straightforward and easy to understand test scenarios. If they were managed like any other feature in the system, the creation, elaboration and implementation of them would serve as a ‘forcing function’ through which derived value, in the form of shared understanding between the business and the development team, would be gained. As well, these directly executable specifications could be automated such that they run against every build of the software. This fast feedback is very important to development flow. If we check in a change, perhaps a configuration parameter or a new library, that broke any NFR, we’d know immediately what changed (and where to go look!). Something that is also very valuable (and often overlooked!) is that each build serves as a critical on-going baseline for comparison of performance and other system capabilities.
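As a hedged sketch of what that automation might look like, here are step definitions for the first scenario written against the Python behave library (a tool choice of mine; the article itself names Cucumber and SpecFlow). The load-generation helper is hypothetical, standing in for whatever tool the team actually uses:

from behave import given, when, then

def run_load_test(users, behaviour):
    # Hypothetical stand-in for a real load-generation tool.
    raise NotImplementedError("wire this up to your load-testing tool")

@given("there are {count} people logged on")
def step_given_users(context, count):
    context.user_count = int(count.replace(",", ""))  # "1,000" -> 1000

@when("they navigate to random pages on the site")
def step_when_navigate(context):
    # Drive the (hypothetical) load generator and record response times.
    context.response_times = run_load_test(users=context.user_count,
                                           behaviour="random_pages")

@then("no response should take longer than {limit:d} seconds")
def step_then_check(context, limit):
    worst = max(context.response_times)
    assert worst <= limit, f"slowest response was {worst:.1f}s (limit {limit}s)"

Placed in a behave steps directory alongside the feature file, these turn the Gherkin above into an executable specification.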

Any NFR expressed in this fashion becomes a form of negotiation. It makes visible economic trade-off possibilities that might not otherwise be well understood by the business. For example, if push came to shove, would there still be business value if, under sustained load, page response times sometimes stretched to 5 seconds?

Another benefit of writing the test first is that it increases the dialog about *how* we will implement the NFR scenario, which helps to ensure, by definition, that a “testable design” emerges.

This approach to requirements/test management is known as "Behavior Driven Development" (BDD) and "Specification By Example". The question of how and when to implement these stories in the flow sequence remains a challenge and the remainder of this article addresses this challenge directly. I’ll address one solution in SAFe.

The recommendation is to implement the NFR as an executable requirement, using natural-language tools like Cucumber, SpecFlow (which supports Gherkin) or Fit/FitNesse (which uses natural language and tables), as soon as it is accepted as an NFR in an iteration as part of the architectural flow. Create a Feature in the Program backlog that describes implementation of the actual NFR (load, capacity, security etc.) and treat it like any other feature from that point. Have the system team discuss, describe and build the architectural runway to drive the construction of the systems that will support the testing of them. Use the stories as acceptance against the architectural runway, if that is appropriate. If you do not implement the actual test itself right away (not recommended), at least wire it up to result in a “Pending” test failure (also not really recommended, but I’ll describe that more in a moment). When the Scenarios are running in your continuous integration (CI) environment, the story can be accepted. With regards to your CI, keep in mind that some of these tests, with large data sets or with long up-time requirements, will take a while to complete, so it is very important to separate them from your fast-failing unit tests.
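A hedged sketch of those last two points, again assuming behave (the tag name and command lines are illustrative, not prescribed by SAFe):

from behave import then

@then("all pages are viewed without any errors")
def step_pending(context):
    # Keep an unimplemented NFR check visibly failing as "pending"
    # rather than silently absent from the suite.
    assert False, "PENDING: error-rate check not yet implemented"

# Tag long-running NFR features in the .feature file (e.g. @long_running)
# and split CI stages along that tag, for example:
#   behave --tags=-long_running    (fast pipeline: skip the slow NFR suites)
#   behave --tags=long_running     (nightly/performance pipeline: run them)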

The next important step is to make these tests visible to the business and to the development team. For the business, one way to make them visible, along with your other customer-facing acceptance tests, is to use a tool like Relish that can publish them, along with markup and images as well as navigation and search.

Another recommendation in this approach would be to build a “quality” dashboard using the testing quadrants as described earlier. That is, each quadrant would report a pass/fail/pending status that could be used for governance and management of the system. When all quadrants are green, you can release. You can get quite creative with this approach and use external data sources, such as Sonar and Cast (coverage and code quality tools, respectively) and even integrate with Q3 exploratory testing results, for example. There is work to be done in this area. Hopefully someone will write a Jenkins plugin or add this to a process management tool.
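Here is a minimal sketch of that dashboard idea (the quadrant names and statuses are illustrative placeholders, not output from any real tool):

QUADRANTS = {
    "Q1 unit/component tests": "pass",
    "Q2 functional acceptance tests": "pass",
    "Q3 exploratory testing": "pending",
    "Q4 NFR tests (load, security, ...)": "fail",
}

def releasable(quadrants):
    # Release gate: every quadrant must be green (i.e. "pass").
    return all(status == "pass" for status in quadrants.values())

for name, status in QUADRANTS.items():
    print(f"{name:36s} {status}")
print("releasable:", releasable(QUADRANTS))  # -> False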

Using this approach you will always know what the status of your NFRs is, and you will get the information you need in a timely fashion, when there is still time to react. This approach would help to eliminate surprises and remove the need for a major (unknown-cost) effort at the end of your development cycle. In the case above, even if these tests had been marked “Pending” you’d have the knowledge that the status of these NFRs was unknown, which would increase trust and share the responsibility across the entire value stream.

Learn more about the Scaled Agile Framework: download SAFe Foundations.

Learn more about our Scaled Agile Framework Training and Certification Classes.

A Hammerhead Shark versus James Bond in Speedos

I’ve often found that most of the questioning about the worth of agile tends to come from the Project Management community. That’s not a criticism of PMs but an acknowledgement that for them it’s probably more difficult to see how this agile concept can work.

Traditionally PMs have tended to need their eyes pointing in different directions – one on the day-to-day development activities of the team, the detailed planning and daily progress, and one on the bigger picture, the long-term roadmap and strategic planning side of a project. And, unless you’re a Hammerhead Shark – this is always going to be a tricky feat.

The trouble with agile, or more accurately, the trouble with some people’s interpretation of agile, is that it can be seen as an excuse to focus on just the tactical side of planning, which leaves PMs wondering what happens to all the stuff their other eye is usually pointing at.

So does being agile really mean ignoring the high level strategic side of managing and planning a project? Will the scent of burning Prince2 manuals soon pervade?

Fortunately this is not what agile means at all – in fact Scrum, which we all know and love, is pretty keen to remind us that we should still do release planning, risk management, and all those important things, it just doesn’t presume to tell us how to do them (in much the same way as it doesn’t tell us how to breathe, eat, sleep or do any other number of bodily functions we should still be doing whilst we’re Scrumming). What we are left to figure out for ourselves, as fully capable agile dudes, is how to ensure that we can stay agile for the long haul, which means having a sustainable and scalable approach to agility.

So how does that work? Is it really possible to add the governance, compliance, risk management and high level planning elements of managing a project to an agile approach without losing the agility? (Let’s hope so, for agile’s sake, because it is clearly and undeniably necessary).

Well, yes, of course, it is possible – otherwise agile just wouldn’t work. But it has to be done in a certain way. Let’s face it – you wouldn’t send James Bond out in a full suit of armour, a wetsuit, a padded ski suit and a parachute every time he went on a mission. Not only would it be a tad cumbersome, it would also be unnecessary (given that sometimes he gets away with just a small pair of Speedos). What you would do is give him exactly the right amount of kit required for a given situation. The same applies to agile. What’s needed is exactly the right amount of governance, planning and compliance for a given project – no more, no less.

So hang on – what have we got so far? James Bond in Speedos and a hammerhead shark. Which one is the PM? Well in a way it’s both, and neither. Confused? Good. Me too.

And I guess that is the point. A PM’s job is not easy and while they would love to be 007 in Speedos (figuratively) – agile, unencumbered, able to work quickly and focus on getting the job done – they still need that hammerhead with one eye on all the ‘other stuff’.

I don’t think we can ever get rid of all that ‘other stuff’. It’s necessary and important. But we can minimise it so that only the right amount of ‘other stuff’ is put in place and we do what NEEDS TO BE DONE, building up from a minimal set (should I mention Speedos again?) rather than starting out with everything, including the wetsuit and the parachute. This then, in essence, is the key to disciplined agile.

The PM still needs and will always need to be able to look at both the strategic and tactical side of a project, but with this approach maybe they need be less of a hammerhead. With agile self-organising teams the tactical planning side of a project is very much a team effort and, along with release and sprint burn-downs, daily stand-ups and sprint retrospectives, the tactical management is much less of an overhead.

So maybe, now, a normal shaped head will do, with just two eyes and some kind of innovative mechanism that will turn that head, allowing the PM to focus on the strategic but throw a glance towards the tactical when necessary.

Or maybe I’m just sticking my neck out.

Learn more about lightweight essential governance for agile projects.

Read the article: Agile and SEMAT - Perfect Partners.

Alpha State Cards in Practice: Experiences from an Agile Trainer

Having had the opportunity to incorporate the Alpha State Cards within some recent training classes I have been delighted to see the enthusiastic reaction they receive. The cards are great at bringing what can sometimes be complex and difficult concepts to life and making them of real practical value to people.

In each Agile training class I deliver now, I use the cards to support an exercise that challenges the groups to determine the current status of a sample software development project. In the discussions afterwards, the groups have consistently reported:

  • They were really impressed by how the cards enabled more effective team communication and collaboration
  • The cards demonstrated in both a tangible and visual way that significant progress was being made, as the Alpha State Cards made it quick and easy to cut through the “noise” to the heart of what mattered
  • The whole experience showed how quick it was to come to a useful conclusion and each group found it an enjoyable way to work

When asked how the cards might support other team activities the feedback included the following:

  • The cards offer a great set of objectives to assist with task identification, task planning and prioritization activities
  • The states on the card provide a basis for simplifying governance objectives and evaluation criteria, as well as making the whole process leaner
  • Revisiting the card abacus during iteration reviews would demonstrate iteration progress from a state perspective
  • The cards can advance and retreat along the abacus, to reflect significant change impacting one or more Alphas

I get a lot of satisfaction from using the cards in these courses as they really act to break down the barriers early in the course by helping the attendees to relax and to build their confidence working collaboratively together. They also gain a sense of experiencing something not only new, but highly effective too, and something they can easily apply back on their own projects to add value and improve ways of working.

For me as a trainer and coach, the cards significantly contribute towards an enhanced learning experience, and in so doing, increase the knowledge retention for the key learning points. The bottom line is it’s best to use the cards in a group situation to truly appreciate their value.

The Alpha State Cards, games and further guidance are available here if you want to try them yourself.

There is also now an Alpha State Card LinkedIn group, which is a great place to share ideas and ask questions about using the cards.

Balancing Agility with Governance

It’s not often that I get the opportunity to help facilitate at an agile conference, but yesterday I did just that. I had the pleasure of helping Ian Spence deliver his session at RallyON 2013 in London. The theme of the session was about balancing the goals of agility with the need for governance, compliance and standards.

Most of us by now are familiar with the agile manifesto, and how it states “while we value the things on the right, we value the things on the left more” i.e. individuals and interactions are more valuable than processes and tools, but there is still some value in the latter. The point of the session was that we need to achieve a balance between agility and other things like governance, compliance and standards – things which are very often thought of as conflicting with agile and therefore “the enemy”! This is especially true in large organisations. But people whose job is to implement governance regimes, ensure compliance, and that standards are followed, are also people – people that agile development teams need to interact with.

Anyway, theory over, it was time to play some games – card games to be precise. Ian introduced Alpha State Cards, a simple tool for understanding project health and progress, by focusing on underlying performance indicators – indicators that are essential to all software endeavors regardless of method, process, life-cycle or practices being followed.

We only played a couple of these games: the first was using the cards to understand the state of an example project, the second to determine the required state of key project indicators before a team would be ready to start sprinting. But it was enough to see that a simple lightweight card-based approach could be a useful addition to one's agile toolkit, and help facilitate conversations between different stakeholders in an entirely method-neutral manner.

Ian then showed us how, using the cards to create checkpoints, a lean and lightweight governance model can be quickly constructed: one that is based on objective outcomes, rather than documentation.

The games, and the cards, are both available here if you want to try them out.

Use Case 2.0 – Slices and Relationships: Extension

Since the introduction of Use-Case 2.0 we have received a number of questions about use-case slices and in particular how they relate to the concepts of include, extend and generalization. In this blog we will look at the impact of the extend relationship on use-case slicing.

What’s the Problem?

An extend relationship can be used to factor out optional behaviour from a use-case narrative. It is particularly useful in the following situations:

  1. Where the optional behaviour will be part of a separately purchased extension
  2. Where different customers require different variations of the same behaviour
  3. Where already implemented use cases need to be extended
  4. Where additions need to be made to previously frozen or signed off use cases

Consider a hotel management system with which customers can make online room reservations. As shown in figure 1, the primary use cases would be “Reserve Room”, “Check In Customer” and “Check Out Customer”.

Figure 1 – Hotel Management System Use Cases

Now let’s consider what happens if the hotel management system being built is to be a modular commercial product with an optional waiting list feature. This feature allows a customer to be put on a waiting list in the case where the room they like is already booked. The customer will then be informed when the room becomes available or can have the room reserved automatically within a given timeframe. This feature could easily be captured within use case “Reserve Room” but since it is an optional feature, it is factored out into an extension use case.

Figure 2 – The re-factored Reserve Room Use Case

Now, to provide a little more context, let’s first have a look at the use-case narrative of the original Reserve Room use-case.

Figure 3 – Reserve Room use-case narrative

Without the extending use case we would have had only one use case to slice up - Reserve Room.  Consider the following example Reserve Room use-case slices.

Figure 4 - Use-case slices for the Reserve Room use case

The question now is what will happen to these slices when we make use of extension:

  • Does an extending use case have its own use-case slices?
  • Does using extend change the number of use-case slices?
  • Does using extend have an impact on any existing use-case slices?

Does an extending use case have its own use-case slices?
The answer is yes. Using extend means that we move behaviour from one use case to another; we start by literally cutting and pasting text between the two use-case narratives. More specifically we take out one or more alternative flows and place them in a use case of their own. In this case the Alternative Flows AF16, 17, 18 and 19, which are all about the waiting list, would be moved to the new Handle Waiting List use case.

We could have left all the behaviour related to handling a waiting list in the Reserve Room use case. By using extend we have made optional behaviour explicit. In the case of extension, the extending use case is performed in the context of the original use case but without the original use case’s knowledge. This means that any use-case slice that requires behaviour of the extension use case must belong to the extension use case. So, extension use cases do have their own use-case slices.

Does extension change the number of use-case slices?
Before refactoring we had one use case with its set of use-case slices. The question is what will happen to this set when we factor out the optional behaviour using the extension. Most likely the total number of use-case slices will remain the same because any alternative flow significant enough to get moved to an extension use-case would probably have got its own slice or slices.

Does extension have an impact on the use-case slices?
Yes and no. Yes, because in the use-case slices we must refer to the right flows of the right use cases: the original use case or the extension use case. No, because the stories and test conditions remain the same independent of the use case they belong to.

Some Examples

Let’s first have a look at the refined narrative of the Reserve Room use case and the narrative of the Handle Waiting List use case.

Figure 5 – Updated use-case narrative

And below you will find use-case slices from the extending Handle Waiting List use case.

Figure 6 - Use-case slices of the Handle Waiting List use case

Notice that in the example use-case slice above:

  1. The basic flow of the Reserve Room is always required because the Handle Waiting List (extension) use case cannot be performed without it.
  2. Alternative Flow 16 of the original Reserve Room use case has become the basic flow of the Handle Waiting List (extension) use case.
  3. Alternative Flows 17, 18 and 19 of the original Reserve Room use case have become Alternative Flows 1, 2 and 3 respectively of the Handle Waiting List (extension) use case.

Final words

So, as you can see, use-case slices are as effective for use cases and use-case models that use the extension mechanism as for those that don’t. In the next blog in this series we will examine the effect of the generalization relationship on the slicing of use cases.

This post was co-authored with Ian Spence.

Useful links:

Use Case 2.0 Training Classes

What does it mean for the enterprise to be agile?

Closely allied to establishing the business objectives in adopting agile practices, an understanding of what it means for an enterprise to be agile should be clear to everyone in the enterprise. This post summarizes what it means for an enterprise to be agile from the perspective of the senior executives and stakeholders.

“Agile” is a set of behaviors that help a business achieve its objectives. The most prevalent agile practice, Scrum, defines a set of project management-based behaviors that help practitioners (especially software practitioners) achieve those objectives. However, little is said in Scrum about how to be agile outside of the immediate environment of the Scrum teams. Team agility does not automatically engender enterprise agility.

Deciding where a so-called value chain starts and ends will vary considerably according to the individual enterprise, depending on factors such as size, business area, degree of specialization, and vendors and suppliers as part of the larger value chain (or even ‘ecosystem’). However, this is a bit like a “5 WHYS” analysis: you have to recognize where it makes sense to stop. Mostly, a company’s corporate boundary makes a natural place to stop (though ideally the whole external supply chain would be synchronized and agile). However, this may be too great a challenge for many organisations to begin with, so smaller organizational units and business units within the enterprise may have to suffice for the initial vision and implementation.

As a reference point, for a hardware-based product company, the groups that might be considered for inclusion in the scope for enterprise agility could look like: Sales, Marketing, HR, Executive Management, Software Engineering, Hardware Engineering, Product Definition, Product Releasing, Product Testing, Technical Documentation, Project Management, Programme Management, Quality Assurance. Where any of these groups are excluded, there will probably be a detrimental reduction in overall agility.

Here are some of the major characteristics that an agile enterprise will typically exhibit, at the ‘manager’ and ‘senior executive’ levels (some apply more to some groups than others):

  • Commitment through close involvement and engagement with agile teams
  • Removal of organisational impediments and issues
  • Flexibly determining release content and being responsive to change: based on sustainable organisational capacity and economic value (including cost of delay); taking into account (test) results and feedback
  • Be Servant Leaders: inspire, motivate, lead by example: including: allowing teams to self-organise - “Self-organisation does not mean that workers instead of managers engineer an organisation design. It does not mean letting people do whatever they want to do. It means that management commits to guiding the evolution of behaviours that emerge from the interaction of independent agents instead of specifying in advance what effective behaviour is.” – Philip Anderson, The Biology of Business
  • Demonstrating trust, especially in avoiding delving into (and controlling) the detail: but note also that trust is engendered by successful delivery
  • Focusing on throughput of (valuable) work rather than on 100% Resource Utilization
  • Recognizing the differences between repeatable and highly variable knowledge work (avoid purely “widget engineering”)
  • Evolving legacy practices into new (e.g. by evaluating and challenging old Ways of Working): powerful corporate forces can be afoot, so this is not easy.

Leffingwell’s Scaled Agile Framework provides a suitable structure for scaling Scrum to enterprise levels and fills in many of the executive roles and functions required for success with agile at the enterprise level.

The Rudiments of Scalability

What does scalability mean, and when do you have to consider it?

The scaling of software and systems agility is still very much in the early stages of evolution, such that there doesn’t seem to be any clear consensus on exactly what is meant by scaling, or rather, where that starts and ends.

In this post, I’ll use Scrum as my “baseline” and reference agile way of working, since that is where by far the most of my experience lies, but you can substitute other agile paradigms.

Scalability in this context refers to the innate ability to apply single team-based agile techniques in successively larger and interdependent organisational units. I often think of this as having two dimensions:

  • Teams working on the same product, project or programme. Typically, one or two teams (and at maximum three) will work on a product owned by a single Product Owner, and Scrum works entirely natively in that situation. But where several products within a project or programme have dependencies between each other, a degree of coordination is required between them to ensure overall project/programme/system integrity. This can be to minimise waste, for example in ensuring that dependent components are developed just-in-time for when they are needed (no sooner, and obviously no later). It can also be to satisfy technical dependencies (especially in component-based development teams), part of the very essence of project, product and programme development.
  • Teams working on disparate products but within the same organisation. It may also be highly desirable to be able to use broadly similar agile practices across many (or all) parts of the development organisation. For example, this reduces ‘learning curve’ overheads when teams are formed or rearranged. It also helps contain and manage the cost of operational and tool support in large organisations, an important consideration for any business, even though we should beware of the tail wagging the dog.

There is no question that Scrum can be applied to a large number of teams. In small products, projects and programmes, there are typically a “handful” of teams that need to collaborate, and the original concept of Scrum-of-Scrums works perfectly well. To some extent, this is also further scalable, in both dimensions described above. Things start to get a lot trickier, however, beyond 6-12 teams. This is partly because I have never come across much detail on how Scrum-of-Scrums should work that isn’t bespoke to a particular organisation or scenario. It is an even bigger headache, though, to manage an ever-increasing number of teams that need to ‘cross-coordinate’. Examples of Scrum-of-Scrum-of-Scrums (and beyond) have been tried and made to work (or so I have been led to believe), so this might handle a limit of up to 50 teams, at its best, but it introduces hierarchies of ‘reporting’ and function that generally have pretty severe disadvantages too.

This returns me to the original question of what we mean by scaling agility. To many organisations, scalability to 50 teams would indeed seem ambitious and round about the upper limit. My experience has been with organisations like Nokia and Symbian, where we were dealing with a phone product development of 200+ teams, at peak. This can truly be called Enterprise Agility. Scrum-of-Scrums begins to look a little lightweight in this context.

Without some method of scaling for the enterprise, it will be pretty well impossible to achieve a coordinated system (or enterprise or system-of-systems etc) that always runs, that delivers value to stakeholders in small increments, that is responsive to change, delivers just in time (only) and minimises the Cost of Delay.

Unsurprisingly, the businesses that I have worked with over the years have all placed great importance on the predictability of schedule and the predictability of quality (whether they are using agile methods or not). After all, these characteristics are the basis of survival in an increasingly fierce and competitive marketplace.

In a highly complex product being built using changing/emergent technology and having frequently changing priorities for features, it is still reasonable to set business deadlines based on forecasts of market need, economic value, and quantified Cost of Delay. It isn’t realistic to expect dozens (or hundreds) of teams to simply self-organise; it’s a risky approach almost bound to fail. A way to combine Scrum with higher-level release and portfolio planning becomes vital if we want to preserve the benefits that agile can bring (e.g. small increments of value at product quality, fast feedback, ability to react to change).

Shown here is a picture summarising the model for scaling agility defined by Dean Leffingwell; you can read about it in many sources including his blog. This was the basis of the model in use at Nokia. In a later post, I intend to talk about the details of the model, with some modifications and minor enhancements we adopted. For example, the model allows for just enough planning (and at the right level) to be able to define the next release, and beyond that in an increasingly less detailed way.

It quickly becomes clear that any adoption of scaled agile practices requires commitment from more parts of the organisation than small-scale Scrum does. This raises a number of questions for the enterprise aspiring to reap agile benefits. Some of these arise for any ‘size’ of agile adoption but require even clearer answers in an enterprise agile transformation:

  • Will all affected parts of the product development chain adopt agile behaviour? It is so often the case that, at best, senior executives endorse the practice of agile techniques, but do not consider that it is something that will (or even should) affect their own behaviour. It can be regarded by leaders as something ‘they’ do, but obviously we, as the decision-makers, need not be constrained by its patterns. A little knowledge is dangerous (in the wrong hands). Having learnt that agile practices embrace change (changing priorities and product scope), there can be a tendency to assert too many changes on the development organisation and assume there is no associated cost at all. Some change is of course healthy, but too much change (let’s call it ‘churn’) has a high cost and is also indicative of a business in chaotic mode.
  • What are the business objectives in adopting agile practices?
  • What does it mean for the enterprise to be agile?
  • What is the initial willingness of the workforce to become agile?
  • Will it be acceptable for the product architecture to emerge or is there a light (but significant) steering design required?
  • How should large product teams be organised? In sufficiently large and complex products, is it feasible to have feature-based teams, or do component-based teams become a necessity in practice?
  • Does the enterprise have an organisational structure that will support scaled agility? For example, how collaborative are those parts of the organisation dealing with system/product testing, requirements definition, code-line integration (configuration management)?
  • How robust are the underlying Technical Practices (for designing, building and testing)? Scrum itself quickly reveals flaws in how teams actually develop software, but these become magnified exponentially when scaling agile practices.

In subsequent posts, I hope to tackle a number of these issues in more detail.
