The Importance of Practice-based Development for Sustaining Agile Change

Most software development teams do not lack access to defined techniques or processes; rather, the available information quickly builds up to become unwieldy, presenting teams with too much of it, and often with far too much “friction” and “noise” – people using different terminology for similar ideas, or the same terminology for different ideas, causing unnecessary debate and confusion and distracting everyone from the primary goal of adding value to the software product.

Often the most valuable nuggets of professional practice become buried and lost inside larger descriptions of apparently highly prescriptive processes, which, considered as a whole, are heavyweight, impractical for many purposes, and become unpopular and discredited over time.

While it is critical that teams are locally empowered to adopt the practices that will work best for them, at the organisational level there are many drivers and benefits that require commonality of language and approach. These include effective communication with stakeholders, transparency of project status and progress for the purposes of governance, and project continuity and sustained improvement despite change-overs in suppliers, contractors and employees over time.

Many organisations struggle to establish cohesive delivery cultures and approaches across the many types of projects they run, each with its own internal challenges and structures.

Effective work practices can be lost to faddism. Dr. Ivar Jacobson, chairman of Ivar Jacobson International and a father of modern business engineering, is often quoted as saying that software development is as trendy as the fashion industry.

“Instead of a true engineering discipline for software, what we see today is a tendency to adopt new ideas based on popular fashion rather than appropriateness; a lack of a sound, widely accepted theoretical basis; a huge number of methods, whose differences are little understood; a lack of credible experimental evaluation and validation; and a split between industry practice and academic research.” – Real Software Engineering, Ivar Jacobson and Ed Seidewitz


There must be a better way

What if we had some common ground – a common terminology that describes the things that are fundamentally true and critically important for all software development? A way to share and combine good ideas and practices from many sources within this shared common framework? Such an approach would drive consistent delivery across an organisation while still giving teams flexibility in how they work.

The above content was taken from Part 2 of Ivar Jacobson International's Creating Sustainable Change epub series. To read the full piece, register for the series here.

Creating Sustainable Agile Change

In 2011, Marc Andreessen, co-founder of Netscape and investor in many Silicon Valley start-ups, penned an article in the Wall Street Journal titled “Why Software is Eating the World”, which outlined how all companies now rely on software to drive their business. Whether software is at the core of a business such as Google, an integral part of an organisation’s go-to-market strategy such as Walmart’s, or critical for a government agency to communicate with millions of citizens, software is changing the world. Organisations are increasingly reliant on software to differentiate themselves from their competitors, respond to regulatory or legislative requirements, improve legacy systems, get to market faster, deliver new and innovative services or products to customers – and the list goes on.

Software has now become core to any business, and many change initiatives are driven by the need to deliver better, faster, cheaper solutions. Those organisations that can introduce change, adopt it and sustain it will have a competitive advantage, while those unable to embrace change will fall behind.

In this fast-paced, responsive world, software development teams are adopting agile techniques to speed up development times and reduce risks, while simultaneously becoming more responsive to the needs of their customers. Many organisations have kick-started their agile journey, and many have successfully introduced agile on a team basis; however, the real challenge is ensuring that agile can scale and is sustainable as corporate plans and personnel evolve over time. Different people will have varying views and ideas about how to apply and scale agile, and good corporate practices are often lost as a change initiative matures. Helping organizations embed, sustain and scale agile ways of working is at the heart of Ivar Jacobson International’s (IJI) expertise, skillset and intellectual property.

We have produced a multi-part epub series to share with readers our approach to creating sustainable change. You can register for the series here.

700 Engineers, 72 Teams, 3 Continents and Self-Sufficiency in less than 9 months

Cell phones, smartphones and tablets are pervasive. We live in a mobile world of wirelessly connected devices that have transformed the way we live, work and play. The Internet is becoming so ubiquitous that half of the world’s population will be connected by 2017 [1]. In 2012, 26% of Internet traffic originated with non-PC devices, but by 2017 the non-PC share of Internet traffic will grow to 49% [2].

Behind the scenes of the services and technology that we often now take for granted are large equipment vendors who produce the software and equipment for mobile and fixed network operators around the globe.

Ivar Jacobson International provided its consulting services to one particular business unit at a large telecommunications equipment vendor. The unit had approximately 1,500 employees based in Europe and Asia, with roughly half of them involved in software development.

Business Drivers for Change

In 2013, due to intensely competitive pressures in a fierce global market, the telecommunications vendor decided it needed to change the way it organized its software teams in a particular division and the way it went about delivering software. It urgently needed to create competitive advantage by:

  • Improving responsiveness (delivering what customers really needed)
  • Increasing delivery precision (delivering product when customers needed it)
  • Building greater quality into the finished product.

The root cause of the issues lay in the existing, very traditional, stove-piped development process, with many handovers between teams. An agile approach was identified as the best way of meeting the improvement needs and transforming the software development operation. However, simply adopting Scrum would not be enough, due to the sheer scale of the organization. What was needed was an approach that could scale, both to the number of people involved and to the large programs of work being undertaken. Based on positive experience of working together previously, the vendor selected Ivar Jacobson International (IJI) in mid-2013 to advise on the way forward and assist with the agile adoption.

Read the full Case Study

SAFe 3.0

Ivar Jacobson International remains committed to the Scaled Agile Framework – delivering Leading SAFe (SAFe Agilist) and SAFe Program Consultant (SPC) Certifications as well as providing SAFe consulting services.

We are currently putting together our 2015 SAFe training schedule and plan to expand our geographic reach as well as the number of classes that will be available. Watch for dates and times coming soon!

On July 28, 2014, a new version of SAFe was released. This release included extensive refinements to many elements of the methodology infrastructure, updates to most articles, as well as new content and guidance that helps enterprises better organize around value delivery, and improve coordination of large value streams.

IJI’s upcoming courses – the SAFe Program Consultant class on October 6, 2014, and the Leading SAFe class on November 17, 2014 – will both teach the new release of the Scaled Agile Framework, SAFe 3.0.

Ivar Jacobson International is the only European SAFe partner with two SPCT instructors. Our SAFe program consultants have years of experience guiding large enterprises through successful agile implementations.

The Power of Checklists

Surgeons, astronauts, airline pilots and software professionals. What do all these people have in common? Well, for one, all of these professionals are very highly trained – in most cases it takes many years to reach the point where you can practice without supervision.

But even highly trained, experienced professionals can have a bad day, and make the occasional mistake. The problem is, if you’re an astronaut, airline pilot, or surgeon, and you make a mistake, lives can be lost. Software development, perhaps, is somewhat less life-critical, most of the time.

Simple checklists can help reduce human error dramatically. Some reports suggest that surgical checklists introduced by the World Health Organization have helped reduce mortality rates in major surgery by as much as 47%. Neil Armstrong had a checklist printed on the back of his glove to ensure he remembered the important things as he made history as the first person to walk on the moon.

So if checklists can save lives, keep aircraft in the air, and help take people to the moon and back, why not utilize them to keep software projects on track, and help maximize the delivery of value, and minimize the risk of project failure?

Checklists help highly trained professionals focus on, and remember, the stuff that is important and critical to the success of the endeavor they are working on. Unlike traditional process documentation, checklists are, by definition, lean, light and concise, so they work well with agile development. The point is that they don’t burden a professional with lots of extra things to remember, nor try to be prescriptive about how things are done – experienced professionals can generally be trusted to do the job properly and to make the right decisions when circumstances demand it – a checklist simply acts as an “aide-mémoire” so that nothing vital is forgotten.

So what does a software project checklist look like? Fortunately, some smart people have already done some work in this area, identifying a core set of checklists that can be applied to any software project, regardless of the practices being applied, the life-cycle being followed, or the technology or languages being used. They have been found particularly effective when used in conjunction with agile approaches such as Scrum. These checklists are available in card form as Alpha State Cards, or as an iOS app.
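To make this concrete, here is a minimal sketch in Python of how a checklist-driven alpha might be modelled. It is an illustration only, not IJI’s actual card content – the state names follow the Essence kernel’s Requirements alpha, but the checklist items are paraphrased:

from dataclasses import dataclass, field

@dataclass
class State:
    name: str
    checklist: list[str]
    done: set[int] = field(default_factory=set)   # indices of ticked items

    def achieved(self) -> bool:
        return len(self.done) == len(self.checklist)

@dataclass
class Alpha:
    name: str
    states: list[State]   # ordered: a state is reached once all earlier ones are

    def current_state(self) -> str:
        reached = "not yet conceived"
        for state in self.states:
            if not state.achieved():
                break
            reached = state.name
        return reached

requirements = Alpha("Requirements", [
    State("Conceived", ["Stakeholders agree a new system is needed",
                        "The users of the system are identified"]),
    State("Bounded",   ["The purpose of the system is clear",
                        "Success criteria are agreed"]),
])
requirements.states[0].done = {0, 1}   # tick off the first checklist
print(requirements.current_state())   # -> Conceived

Holding each state to its checklist, rather than to opinion, is what makes the cards quick to use in a project status discussion.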

You can learn more about the checklists by attending this free webinar.

Your feedback is welcome!

Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank over the last 18 months, helping them introduce new methods of working and IBM’s Collaborative Lifecycle Management tooling to support those methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA’s Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank’s IT culture away from disparate production of Microsoft Word documents and Visio diagrams towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices, the creation and delivery of training material, guidance documentation in text and video form, the founding of a Community of Practice (CoP), advanced training and development programmes for Champions within the CoP, mentoring support for project teams adopting the new methods and tools, and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been to roll out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. The Unified Modelling Language (UML) already offers the deployment diagram as a means to show how software components execute upon middleware and hardware, plus the other elements that are required to deliver a fully working system. Critics argue that the UML deployment diagram offers little more than pictures, lacking rich enough semantics for tool-based validation; furthermore, it carries insufficient information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous to UML models in that they capture the elements of an infrastructure design, the relationships between those elements, and views of the design via diagrams. To achieve this, a different modelling language is used: the Topology Modelling Language (TML).

The method which underpins the use of TML requires that several topologies are created when considering deployment architectures for a system, with each topology refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology and its role is two-fold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones) within which Nodes sit that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. This stack may already exist or may still need to be specified and provisioned, but either way this is a level of detail that does not need to be considered while the deployment context is being determined. And to help determine the system context, actors may be included in the topology to maintain focus on scenarios of use.

An example logical topology is shown in the image below:


You can see two locations on the diagram, ‘Internet’ containing a primary actor and ‘Data Centre’ with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML provides a feature whereby technology units may be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction; the benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level with a focus on performance, robustness, throughput and resiliency. Design details such as processor architectures, operating system versions, inter-process messaging solutions and the like should be deferred for now.

An example physical topology is shown in the image below:


In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains two nodes and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around the name.

The Physical Topology contains mainly conceptual units and is thus not a complete design, so one or more Deployment Topologies are created to finalise the design:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level, a single conceptual server may be realised by several concrete servers to represent a load-balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real choice, whether that be Windows, Red Hat Linux, or another platform.

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank’s IT estate, together with strict constraints on the relationships that may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs much more quickly and with greater quality than before, resulting in faster progress through internal governance and less re-work. Furthermore, all the bank’s Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

Story Points, Lean Principles and Product Development Flow

A “story point” is a unit of measure commonly used to describe the relative “size” of a user story (an agile requirements artifact) when it is being estimated. In many cases, a Fibonacci-style sequence of 0, 1, 2, 3, 5, 8, 13, 21, ... is used during estimation to indicate this relative size. One reason for this is to increase the speed at which an estimate is reached: for example, it is more difficult to agree on the difference between a 5 and a 6 than between a 5 and an 8.
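As a toy illustration of why the coarse scale speeds agreement (a sketch, not part of any formal method): estimates are snapped to the nearest value on the scale, so near-ties such as 5-versus-6 simply cannot arise.

# The article's scale; gaps widen as sizes grow.
SCALE = [0, 1, 2, 3, 5, 8, 13, 21]

def nearest_point(raw_estimate: float) -> int:
    # Snap a raw "gut feel" number to the closest value on the scale.
    return min(SCALE, key=lambda p: abs(p - raw_estimate))

print(nearest_point(6))   # -> 5: "6" is not a choice, so the 5-vs-6 debate never starts
print(nearest_point(7))   # -> 8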

It is very common for agile teams to use story points to describe the “level of effort” to complete a user story; that is to say, a user story estimated at 5 points should be expected to take five times as long as a user story estimated at 1. On the other hand, it is also very common for people to include effort, risk and uncertainty (or complexity) in the definition of a story point – that is, an 8 is also four times riskier, and four times more uncertain, than a 2.

Mike Cohn has stated that it is a mistake to do this:

“I find too many teams who think that story points should be based on the complexity of the user story or feature rather than the effort to develop it. Such teams often re-label “story points” as “complexity points.” I guess that sounds better. More sophisticated, perhaps. But it's wrong. Story points are not about the complexity of developing a feature; they are about the effort required to develop a feature.”

So what do we do with this?

In Donald Reinertsen’s groundbreaking work “The Principles of Product Development Flow” we get a clear sense of how “batch sizes” affect product development flow. Without going into much detail, it is fair to say that story points describe the “batch size” of a user story, where batch size is commonly defined as the “quantity of product worked on in one process step”.

Reinertsen describes a number of principles relating batch size to product development flow. For example, Principle B1, “The Batch Size Queueing Principle: Reducing batch size reduces cycle time”. Cycle time is how long a story takes – the level of effort to complete it. Smaller stories require less effort than large ones. This principle supports the story point as a relative unit of effort, as Cohn suggests.
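To see B1 in action, here is a toy queueing sketch (a minimal model of our own, not Reinertsen’s): work items arrive at a steady rate, are released only when a full batch has accumulated, and the whole batch completes together. Average cycle time climbs steeply with batch size:

def average_cycle_time(batch_size: int, n_items: int = 120,
                       arrival_interval: float = 1.0,
                       service_time_per_item: float = 0.5) -> float:
    # Items arrive one per arrival_interval; a batch starts once its last
    # item has arrived and the workstation is free; all items in a batch
    # complete together. Cycle time = completion minus arrival.
    free_at = 0.0
    total = 0.0
    for start in range(0, n_items, batch_size):
        batch = range(start, min(start + batch_size, n_items))
        arrivals = [i * arrival_interval for i in batch]
        ready = max(arrivals[-1], free_at)
        done = ready + service_time_per_item * len(batch)
        free_at = done
        total += sum(done - a for a in arrivals)
    return total / n_items

for b in (1, 5, 20):
    print(f"batch size {b:2d}: average cycle time {average_cycle_time(b):5.1f}")

With these (arbitrary) parameters the average cycle time grows from 0.5 to 4.5 to 19.5 time units as the batch grows from 1 to 5 to 20 items – the queueing penalty of big batches, before risk or variability is even considered.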

There are other principles that might not be so supportive of this perspective.

For example, looking at the second batch principle, B2, “The Batch Size Variability Principle: Reducing batch size reduces variability in flow”, we begin to see the challenges of treating story points as a “relative level of effort comparison” only. If a larger story brings significantly increased variability, should we still “expect” a “5” to be the same as five “1”s? All we can really expect is more variability: increased risk, schedule delays and unknowns.

Let’s look at a few more principles.

B3: The Batch Size Feedback Principle: Reducing batch size accelerates feedback.
In our Scrum processes, for example, a large batch means (per B1) a longer cycle time: the product owner doesn’t see the work product as quickly and, in some cases, begins to lose faith, or worries and increases the pressure on, and interruptions of, the team. Fast feedback is a cornerstone of agile and of product development flow.

Large batches also lead to B7, “The Psychology Principle of Batch Size: Large batches inherently lower motivation and urgency”. We like to see things get done; it makes us happy. As humans, it takes us more time to get going on a huge job, whereas a simple one we might just knock out and move on to the next.

Very little good comes from large batch sizes where product development flow is concerned. There are 22 batch size principles, and none of them is supportive of large batches.

For example, some of them are:
B4: The Batch Size Risk Principle: Reducing batch size reduces risk.
B5: The Batch Size Overhead Principle: Reducing batch size reduces overhead.
B6: The Batch Size Efficiency Principle: Large batches reduce efficiency.
B8: The Batch Size Slippage Principle: Large batches cause exponential cost and schedule growth.

And so on. Large stories are bad. Really bad. So what can we do about this in our agile software development process? The most important thing is that we cannot “expect” an “8” to take four times as long as a “2” – the principles of product development flow tell us that this is an unrealistic expectation.

We need to understand that, regardless of size, these are estimates – not “exactimates”, as The Agile Dad says. We need to accept that our estimate of a larger story is less accurate than our estimate of a smaller one.

We need to understand that, since we cannot commit to an unknown, it is unrealistic – and violates the lean principle of respect for people – to ask a team to commit to large stories.

We need to understand how batches affect our queues, our productivity, predictability and flow. We should study and apply product development flow.

We need to understand that our flow improves with smaller stories, so learning the skills to break stories down into smaller sizes will help. We may decide, for example, never to allow anything larger than a “5” into a sprint.

With that said, if we have “8”s in our backlog that for some reason cannot be broken down into smaller stories, we should compensate in our velocity load estimates for a given sprint. For example, if we have an estimated velocity of 20 and all we have are stories estimated as “8”, we might want to schedule only two of these stories in the sprint, leaving us some capacity margin for safety.
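That arithmetic is simple enough to sketch (the 20% safety margin is an assumed figure that reproduces the example above):

def stories_to_schedule(velocity: int, story_points: int,
                        safety_margin: float = 0.2) -> int:
    # How many equally sized stories fit while reserving a capacity margin.
    usable = velocity * (1 - safety_margin)    # e.g. velocity 20 -> 16 usable points
    return int(usable // story_points)

print(stories_to_schedule(velocity=20, story_points=8))   # -> 2 stories (16 of 20 points)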

One thing to note, however, is that smaller stories might not be the best solution for geographically disparate teams; per B17, “The Proximity Principle: Proximity enables small batch sizes”, it might be more effective to hand a remote team a larger story, which they can then break into smaller stories locally for efficiency. We need to understand the economics of batch hand-off sizes and balance our efforts accordingly through reflection and adaptation.

Managing Non-Functional Requirements in SAFe

Managing non-functional requirements (NFRs) in software development has always been a challenge. These “system capabilities” – such as how fast a page loads, how many concurrent users the system can sustain, or how vulnerable we can afford to be to denial-of-service attacks – have traditionally been ascribed to “quadrant four” of Brian Marick’s agile testing quadrants; that is, tests that are technology-facing and that critique the product. That said, it has never been clear *why* this is so, as this information can be critical for the business to clearly understand.

In the Scaled Agile Framework (SAFe), NFRs are represented as a symbol bolted to the bottom of the various backlogs in the system, indicating that they apply to all of the other stories in the backlog. One of the challenges of managing them lies in our testing strategies: when do we accept them, if they represent a “constant” or “persistent constraint” on all the rest of the requirements?

This paper advances an approach to handling NFRs in SAFe which promotes the concept that NFRs are more valuable when considered as first-class objects in our business-facing testing and dialogue. It suggests that the business would be highly interested in knowing, for example, how many concurrent users the system can sustain online. If you’re not sure about this, just ask the business people around the healthcare.gov project! One outcome of this approach is that a process emerges that reduces our need to treat NFRs as a special class of requirements at all.

If we expose NFRs to the business, in a language and manner that creates shared understanding of them, we can avoid surprises while solving a major challenge.

Please consider the following Gherkin example:

Feature: Online performance

  In order to ensure a positive customer experience while on our website
  I’d like acceptable performance and reliability
  So that the site visitor will not lose interest or valuable time

  Scenario: Maximum concurrent signed-in user page response times
    Given there are 1,000 people logged on
    When they navigate to random pages on the site
    Then no response should take longer than 4 seconds

  Scenario: Maximum concurrent signed-in user error responses
    Given there are 1,000 people logged on
    When they navigate to random pages on the site for 15 minutes
    Then all pages are viewed without any errors

These are pretty straightforward, easy-to-understand test scenarios. If they were managed like any other feature in the system, their creation, elaboration and implementation would serve as a “forcing function”, deriving value in the form of shared understanding between the business and the development team. These directly executable specifications could also be automated so that they run against every build of the software. This fast feedback is very important to development flow: if we check in a change, perhaps a configuration parameter or a new library, that breaks any NFR, we know immediately what changed (and where to go look!). Something else that is very valuable (and often overlooked!) is that each build then serves as a critical, ongoing baseline for comparison of performance and other system capabilities.
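To show how such scenarios can be wired up, here is a sketch of step definitions for the first scenario above, using Python’s behave library. The base URL, page list and raw-HTTP approach are placeholder assumptions for illustration – a real load test would drive a dedicated load-generation tool from behind these same steps:

# features/steps/performance_steps.py
import random
import time
from concurrent.futures import ThreadPoolExecutor

import requests
from behave import given, then, when

BASE_URL = "https://staging.example.com"   # hypothetical test environment
PAGES = ["/", "/products", "/account", "/search?q=test"]

@given("there are {count} people logged on")
def step_logged_on(context, count):
    context.session_count = int(count.replace(",", ""))   # "1,000" -> 1000

@when("they navigate to random pages on the site")
def step_navigate(context):
    def visit(_):
        start = time.monotonic()
        requests.get(BASE_URL + random.choice(PAGES))     # one page view
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=context.session_count) as pool:
        context.timings = list(pool.map(visit, range(context.session_count)))

@then("no response should take longer than {limit:d} seconds")
def step_check_times(context, limit):
    worst = max(context.timings)
    assert worst <= limit, f"slowest page took {worst:.1f}s (limit {limit}s)"

Because behave matches the step text directly, the Gherkin feature file remains the single source of truth for the NFR.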

Any NFR expressed in this fashion becomes a form of negotiation. It makes visible economic trade-off possibilities that might not otherwise be well understood by the business. For example, if push came to shove, would there still be business value if, under sustained load, some page responses stretched to 5 seconds?

Another benefit of writing the test first is that it increases the dialogue about *how* we will implement the NFR scenario, which helps to ensure, almost by definition, that a “testable design” emerges.

This approach to requirements/test management is known as “Behavior-Driven Development” (BDD) or “Specification by Example”. The question of how and when to implement these stories in the flow remains a challenge, and the remainder of this article addresses that challenge directly, describing one solution in SAFe.

The recommendation is to implement the NFR as an executable requirement, using natural-language tools like Cucumber or SpecFlow (which support Gherkin) or Fit/FitNesse (which uses natural language and tables), as soon as it is accepted as an NFR in an iteration as part of the architectural flow. Create a Feature in the Program backlog that describes implementation of the actual NFR (load, capacity, security, etc.) and treat it like any other feature from that point. Have the system team discuss, describe and build the architectural runway to drive the construction of the systems that will support the testing of them. Use the stories as acceptance against the architectural runway, if that is appropriate. If you do not implement the actual test itself right away (not recommended), at least wire it up to result in a “Pending” test failure (not really recommended either, but sketched below). When the scenarios are running in your continuous integration (CI) environment, the story can be accepted. With regard to CI, keep in mind that some of these tests, with large data sets or long up-time requirements, will take a while to complete, so it is very important to separate them from your fast-failing unit tests.
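As a sketch of that “Pending” wiring, assuming behave as the test runner: tag the not-yet-implemented scenario @pending in the feature file, and skip it from a hook so that it still shows up in every test report rather than silently disappearing:

# features/environment.py
def before_scenario(context, scenario):
    # Scenarios tagged @pending are skipped but still listed in reports,
    # keeping unimplemented NFRs visible to the whole value stream.
    if "pending" in scenario.tags:
        scenario.skip(reason="NFR test not yet implemented")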

The next important step is to make these tests visible to the business and to the development team. For the business, one way to make them visible, along with your other customer-facing acceptance tests, is to use a tool like Relish, which can publish them along with markup and images as well as navigation and search.

Another recommendation in this approach is to build a “quality” dashboard using the testing quadrants described earlier. Each quadrant reports a pass/fail/pending status that can be used for governance and management of the system: when all quadrants are green, you can release. You can get quite creative with this approach and use external data sources, such as Sonar and Cast (coverage and code quality tools, respectively), and even integrate Q3 exploratory testing results, for example. There is work to be done in this area; hopefully someone will write a Jenkins plugin or add this to a process management tool.
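As a toy sketch of that dashboard logic, with hard-coded placeholder statuses standing in for feeds from CI, Sonar, Cast and exploratory testing notes:

# Release gate over the four agile testing quadrants (statuses are placeholders).
QUADRANT_STATUS = {
    "Q1 unit/component tests":   "green",
    "Q2 functional acceptance":  "green",
    "Q3 exploratory testing":    "pending",
    "Q4 NFR/performance tests":  "green",
}

def can_release(statuses: dict[str, str]) -> bool:
    # The gate opens only when every quadrant reports green.
    return all(status == "green" for status in statuses.values())

for quadrant, status in QUADRANT_STATUS.items():
    print(f"{quadrant:26s} {status}")
print("release gate open:", can_release(QUADRANT_STATUS))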

Using this approach you will always know the status of your NFRs and get the information you need in a timely fashion, while there is still time to react. It helps to eliminate surprises and removes the need for a major (unknown-cost) effort at the end of your development cycle. In the case above, even if these tests had only been marked “Pending”, you would at least have known that the status of these NFRs was unknown, which increases trust and shares the responsibility across the entire value stream.

Learn more about the Scaled Agile Framework: download SAFe Foundations.

Learn more about our Scaled Agile Framework Training and Certification Classes.

A Hammerhead Shark versus James Bond in Speedos

I’ve often found that most of the questioning about the worth of agile tends to come from the project management community. That’s not a criticism of PMs, but an acknowledgement that for them it’s probably more difficult to see how this agile concept can work.

Traditionally, PMs have needed their eyes pointing in different directions – one on the day-to-day development activities of the team, the detailed planning and daily progress, and one on the bigger picture, the long-term roadmap and strategic planning side of a project. And, unless you’re a hammerhead shark, this is always going to be a tricky feat.

The trouble with agile – or, more accurately, with some people’s interpretation of agile – is that it can be seen as an excuse to focus only on the tactical side of planning, which leaves PMs wondering what happens to all the stuff their other eye is usually pointing at.

So does being agile really mean ignoring the high-level strategic side of managing and planning a project? Will the scent of burning PRINCE2 manuals soon pervade?

Fortunately this is not what agile means at all – in fact Scrum, which we all know and love, is pretty keen to remind us that we should still do release planning, risk management, and all those important things; it just doesn’t presume to tell us how to do them (in much the same way as it doesn’t tell us how to breathe, eat, sleep or do any number of other bodily functions we should still be performing whilst we’re Scrumming). What we are left to figure out for ourselves, as fully capable agile dudes, is how to ensure that we can stay agile for the long haul, which means having a sustainable and scalable approach to agility.

So how does that work? Is it really possible to add the governance, compliance, risk management and high level planning elements of managing a project to an agile approach without losing the agility? (Let’s hope so, for agile’s sake, because it is clearly and undeniably necessary).

Well, yes, of course it is possible – otherwise agile just wouldn’t work. But it has to be done in a certain way. Let’s face it – you wouldn’t send James Bond out in a full suit of armour, a wetsuit, a padded ski suit and a parachute every time he went on a mission. Not only would it be a tad cumbersome, it would also be unnecessary (given that sometimes he gets away with just a small pair of Speedos). What you would do is give him exactly the right amount of kit for the given situation. The same applies to agile: what’s needed is exactly the right amount of governance, planning and compliance for a given project – no more, no less.

So hang on – what have we got so far? James Bond in Speedos and a hammerhead shark. Which one is the PM? Well in a way it’s both, and neither. Confused? Good. Me too.

And I guess that is the point. A PM’s job is not easy, and while they would love to be 007 in Speedos (figuratively) – agile, unencumbered, able to work quickly and focus on getting the job done – they still need that hammerhead with one eye on all the ‘other stuff’.

I don’t think we can ever get rid of all that ‘other stuff’. It’s necessary and important. But we can minimise it so that only the right amount of ‘other stuff’ is put in place and we do what NEEDS TO BE DONE, building up from a minimal set (should I mention Speedos again?) rather than starting out with everything, including the wetsuit and the parachute. This, in essence, is the key to disciplined agile.

The PM still needs, and will always need, to be able to look at both the strategic and tactical sides of a project, but with this approach maybe they can be less of a hammerhead. With agile self-organising teams, the tactical planning side of a project is very much a team effort, and, along with release and sprint burn-downs, daily stand-ups and sprint retrospectives, tactical management is much less of an overhead.

So maybe, now, a normal-shaped head will do, with just two eyes and some kind of innovative mechanism to turn that head, allowing the PM to focus on the strategic but throw a glance towards the tactical when necessary.

Or maybe I’m just sticking my neck out.

Learn more about lightweight essential governance for agile projects.

Read the article: Agile and SEMAT - Perfect Partners.

Alpha State Cards in Practice: Experiences from an Agile Trainer

Having had the opportunity to incorporate the Alpha State Cards into some recent training classes, I have been delighted to see the enthusiastic reaction they receive. The cards are great at bringing what can sometimes be complex and difficult concepts to life, and at making them of real practical value to people.

In each agile training class I deliver now, I use the cards to support an exercise that challenges the groups to determine the current status of a sample software development project. In the discussions afterwards, the groups have consistently reported that:

  • They were really impressed by how the cards enabled more effective team communication and collaboration
  • The cards demonstrated, in a tangible and visual way, that significant progress was being made, as they made it quick and easy to cut through the “noise” to the heart of what mattered
  • The whole experience showed how quickly a useful conclusion could be reached, and each group found it an enjoyable way to work

When asked how the cards might support other team activities, the feedback included the following:

  • The cards offer a great set of objectives to assist with task identification, task planning and prioritization activities
  • The states on the cards provide a basis for simplifying governance objectives and evaluation criteria, as well as making the whole process leaner
  • The card “abacus” can be revisited during iteration reviews to demonstrate iteration progress from a state perspective
  • The cards can advance and retreat along the abacus to reflect significant change impacting one or more Alphas

I get a lot of satisfaction from using the cards in these courses, as they really act to break down barriers early in the course, helping attendees to relax and build their confidence in working collaboratively together. They also gain a sense of experiencing something not only new but highly effective, and something they can easily apply back on their own projects to add value and improve ways of working.

For me as a trainer and coach, the cards contribute significantly to an enhanced learning experience and, in so doing, increase knowledge retention for the key learning points. The bottom line: it’s best to use the cards in a group situation to truly appreciate their value.

The Alpha State Cards, games and further guidance are available here if you want to try them yourself.

There is also now an Alpha State Card LinkedIn group, which is a great place to share ideas and ask questions about using the cards.
