Project Management

MoSCoW Anxiety

According to Wikipedia, MoSCoW is a prioritization technique and a core aspect of agile software development. Its purpose is to focus a team on the most important requirements, for example in order to meet a deadline.
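For readers who have not used the technique, here is a minimal sketch of what the classification might look like in practice. The category names follow the usual MoSCoW scheme; the requirements and their priorities are invented purely for illustration.

```python
from enum import Enum

# The four MoSCoW categories: Must have, Should have, Could have, Won't have (this time).
class Priority(Enum):
    MUST = "Must have"
    SHOULD = "Should have"
    COULD = "Could have"
    WONT = "Won't have (this time)"

# Invented backlog for illustration: requirement -> priority agreed with stakeholders.
backlog = {
    "Customer can log in": Priority.MUST,
    "Customer can reset password": Priority.SHOULD,
    "Dark mode": Priority.COULD,
    "Offline synchronization": Priority.WONT,
}

def plan(backlog):
    """Group requirements by priority so the team can see what the deadline really depends on."""
    grouped = {p: [] for p in Priority}
    for requirement, priority in backlog.items():
        grouped[priority].append(requirement)
    return grouped

for priority, requirements in plan(backlog).items():
    print(f"{priority.value}: {', '.join(requirements) or '(none)'}")
```

The value of the exercise lies entirely in the spread across the categories; a backlog in which everything lands in the "Must have" bucket gives the team nothing to work with, which brings us to the problem below.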

I know of many project teams that struggle to use this technique because their stakeholders are unwilling to do the prioritization or to accept the technique itself. If you have ever participated in a project where every requirement was classified as "must have", I am sure you know what I mean.

What can you do when this happens to you? Well, valid options that might come to mind are running and hiding, or staging a coup to install a decent Product Owner. However, before you go to such extremes you might want to try another option first.

Basically what we have here is a kind of anxiety, MoSCoW anxiety if you will. Anxiety? Yes! In my experience many stakeholders simply become afraid that they will not get what they have asked for when they are asked to classify requirements below the "must have" level. This makes perfect sense when you consider that many projects deliver only a (small) part of the promised functionality, and a lot of stakeholders have felt let down by IT more than once.

Scaling Agile Teams by Ivar Jacobson

Traditionally, many large software organizations have one group to write requirements, other groups to design and code, still others to test, and so on. Every group thus has some form of specialist competency. This is a kind of "siloed" organization. Project work is moved from group to group, with hand-offs that result in delay and inefficiency because time and important information are lost at each hand-off. This is not agile.

Learning by example by Ivar Jacobson

In the work that we do, people often want a recipe for developing software: a series of steps that predictably produce a result. Recipes are good, whether in cooking or in other areas, but they are not enough, and not everything that is interesting can be reduced to a simple recipe.

Over the years I've had the chance to observe how people learn. Reflecting on how I learn new things as well, I've come to the conclusion that many people, myself included, don't learn very well from following a recipe. In fact I'm rather hopeless at following step-by-step instructions.  As a kid I liked to tear things apart, figure out how they worked, and then put them back together. And most of the time they actually worked when I did eventually get them back together.

Taking something apart and putting it back together is a special case of learning by example, where the thing you are taking apart is the example. Once you've done this enough, you can start improvising and designing new ways to solve the problem.

A lot of software development works the same way - whether it is a piece of code or a requirements specification, a lot can be learned from tearing apart a good example, understanding why it is good and how it works, and then, over time, applying those lessons to new problems. In fact, given the choice between templates and a good example, I'll choose the good example every time. Even if you don't understand all the principles right away, most of us are clever enough to copy the parts that work, even the ones we don't yet understand, and to be creative in the areas where we need to. Over time we learn, and the need to mimic goes away. This is the way that all of us learned our native tongues, and the approach still serves us well today.

"Earning" earned value by Ivar Jacobson

Traditional project management approaches focus on planning in detail, assigning the resulting tasks to people, and then tracking "progress" as measured by completed tasks. The problem with measuring progress this way is that completing a task, while important, is hard to correlate with progress against the overall goal - just because you've completed 20% of the tasks does not mean that you're 20% done - and for tasks that take a long time to complete, the self-reported estimates of "percent complete" are often merely wishful thinking.

My preference is to measure progress in concrete, verifiable terms - in the form of tested scenarios, following an iterative project management approach. In other words, planning works iteration by iteration, with each iteration developing and testing one or more scenarios. At the end of each iteration you have a set of developed and tested scenarios, which makes progress easier to measure: knowing that you've developed and tested 20 out of 100 scenarios is a lot more meaningful than knowing that you've completed 20% of the tasks - especially if those tasks are focused on creating documentation rather than running, tested code. Scenarios also correlate nicely with business value - each scenario should be useful to at least some subset of the stakeholders. In my view, only when you've successfully tested a scenario can you claim to have "earned value".
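To make the contrast concrete, here is a minimal sketch of this way of counting progress; the scenario names and status values are invented for illustration and do not come from any particular project.

```python
# A minimal sketch of "earned value" counted as tested scenarios rather than completed tasks.
# Scenario names and statuses below are invented for illustration.

scenarios = {
    "Withdraw cash": "tested",        # developed and demonstrated with passing tests
    "Check balance": "tested",
    "Transfer funds": "in progress",  # started, but not yet demonstrated end to end
    "Monthly report": "not started",
}

def earned_value(scenarios):
    """Progress = tested scenarios / all scenarios; only a tested scenario counts as earned."""
    tested = sum(1 for status in scenarios.values() if status == "tested")
    return tested / len(scenarios)

print(f"Scenarios tested: {earned_value(scenarios):.0%}")  # 50% for the sample data above
```

Note that a scenario that is merely "in progress" contributes nothing to the total, and that is the point: partial credit is exactly where the wishful thinking creeps in.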

Measuring Project Success and Managing Expectations by Ivar Jacobson

There are a number of studies that cite poor performance of software projects - the Standish Group's Chaos Report being one of the more often cited (an old version from 1995 is posted here, and although the data is old the conclusions have not changed dramatically). The gist of these studies is that the majority of projects (as many as 70%) fail when measured against their original schedule, budget, and expected features. I would be the last to argue with the general conclusion: it is very hard to manage a project to success.

There is, however, something in the assumptions behind these studies that rings hollow: that the initial schedule, budget, and expectations for a project are a reliable milepost against which to measure. Most projects are vaguely conceived at best - they often lack a clear understanding of why they should exist and what problems they need to solve, and many are rife with disagreements about what success looks like. At their initiation they are usually poorly scoped and vaguely purposed, and their funding is often assigned as an allocation from an arbitrarily set budget. Their schedules, at least those produced at the start of the project, are largely speculative - a mixture of gut feel and guesswork with little basis in reality. Measuring project performance against the initial schedule, scope, and budget is of little value except to illustrate the point that there is a large disconnect between the expectations of business sponsors and the ability of teams to deliver against those expectations. There are, to be sure, rampant problems with performance, but there are also widespread problems with expectations that are just as important to address.

Where should we start? The first place is probably project funding and measurement. The important thing to measure is whether the project produced (or exceeded) the business value expected of it. If a delay caused a market window of opportunity to be missed, that is significant, but it is the decline in value delivered that needs to be measured, not a schedule variance that cannot be correlated with economic activity. Forcing a focus on the business value produced would also put the right attention on the role of the business in following through on its assertions of the value that will accrue from having the solution. Requesting projects based on business needs has an opportunity cost - choosing one project over another should affect the value delivered to shareholders - and accountability for assertions made by the line of business is just as important as accountability for project delivery.

If we shift our attention to value delivered rather than meeting schedule and budget, we may free the development team to find better ways to deliver that value, which may or may not include the initial set of features envisioned by the business sponsor. Initial feature lists are usually vaguely conceived and don't provide a very good target for delivery. Work is usually required to ascertain the real needs behind this initial list of "features", some of which contribute to satisfying real needs but many of which are simply good starting points for discussion. It may very well take longer than expected to solve the real problems (it usually does, as we all tend to be more optimistic than we should be about how long things will take).

The problem is that most teams are set up to fail from the start. By measuring them against budgets and schedules based on arbitrary assumptions, and often on a poor understanding of the real business value that needs to be produced, we leave them constantly struggling against a plan that cannot possibly succeed. Measuring against the initial schedule, budget, and expected features is not merely meaningless; it is actually part of the problem. We need to shift our focus to better articulating the problems to be solved and the needs to be satisfied, and to measuring the business value produced. Once we start to do that, we can focus on the plans and milestones needed to ensure the delivery of business value.