Ivarblog

Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank for the last 18 months, helping them introduce new methods of working and IBM’s Collaborative Lifecycle Management tooling to support those methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA’s Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank’s IT culture away from disparate production of Microsoft Word documents and Visio diagrams towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices, the creation and delivery of training material, guidance documentation in text and video form, the founding of a Community of Practice (CoP), advanced training and development programmes for Champions within the CoP, mentoring support for project teams adopting the new methods and tools, and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been to roll out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. UML already offers the deployment diagram as a means to show how software components execute on middleware and hardware, together with the other elements required to deliver a fully working system. Critics argue, however, that the UML deployment diagram offers little more than pictures, lacking semantics rich enough for tool-based validation, and that it carries too little information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous to UML models in that they capture the elements of an infrastructure design, the relationships between elements, and views of the design via diagrams. To achieve this, a different modelling language is used: the Topology Modelling Language (TML).

The method which underpins the use of TML requires that several topologies be created when considering deployment architectures for a system, with each topology refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology, whose role is two-fold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones), within which sit the Nodes that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. The stack may already exist or may still need to be specified and provisioned; either way, this is a level of detail that does not need to be considered while the deployment context is being determined. To help determine the system context, actors may be included in the topology to maintain focus on scenarios of use.
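To make these concepts concrete, here is a minimal sketch in plain Python of the kind of structure a logical topology captures. It is purely illustrative: the classes, element names and traceability links below are invented for this post and are in no way the RSA/TML API.

```python
# Illustrative sketch only: plain Python stand-ins for TML concepts
# (Location, Node, Component, Actor). All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    uml_component: str              # traceability link back to the UML model

@dataclass
class Node:
    """Placeholder for a hardware/middleware stack, detailed later."""
    name: str
    hosts: list[Component] = field(default_factory=list)

@dataclass
class Location:
    """A physical or logical location, e.g. a data centre or security zone."""
    name: str
    nodes: list[Node] = field(default_factory=list)
    actors: list[str] = field(default_factory=list)

# The shape of the example below: a primary actor on the Internet, and two
# nodes in the data centre hosting the in-scope components (names invented).
logical_topology = [
    Location("Internet", actors=["Customer"]),
    Location("Data Centre", nodes=[
        Node("Web Node", hosts=[Component("Online Banking UI", "uml::OnlineBankingUI")]),
        Node("App Node", hosts=[Component("Payments Service", "uml::PaymentsService")]),
    ]),
]
```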

An example logical topology is shown in the image below:

You can see two locations on the diagram, ‘Internet’ containing a primary actor and ‘Data Centre’ with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML provides a feature whereby technology units may be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction. The benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level, with a focus on performance, robustness, throughput and resiliency. Design details such as processor architectures, operating system versions, inter-process messaging solutions and the like are deferred for now.
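A small sketch may help picture the conceptual flag. Again this is an invented illustration, not the TML tooling: each logical node is realised by a stack of technology units, and a unit marked conceptual deliberately leaves its details open.

```python
# Illustrative only: a physical topology refines each logical node with a
# stack of technology units; conceptual units are deliberately under-specified.
from dataclasses import dataclass

@dataclass
class Unit:
    kind: str                  # e.g. "x86 Server", "Operating System"
    conceptual: bool = True    # shown with braces around the name in RSA

@dataclass
class Stack:
    realises_node: str         # the logical node this stack realises
    units: list[Unit]

web_stack = Stack("Web Node", units=[
    Unit("x86 Server"),        # conceptual: processor architecture not chosen
    Unit("Operating System"),  # conceptual: vendor and version deferred
    Unit("HTTP Server"),       # conceptual: product selection deferred
])

# High-level validation (capacity, resiliency) can proceed while every
# detailed design decision remains open.
assert all(unit.conceptual for unit in web_stack.units)
```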

An example physical topology is shown in the image below:

In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and a physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains the two nodes, and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around its name.

Because the Physical Topology consists mainly of conceptual units it is not yet a complete design, so one or more Deployment Topologies are created to finalise it:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level, a single conceptual server may be realised by several concrete servers to represent a load-balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real one, whether that be Windows, Red Hat Linux or another platform.
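As a rough sketch of that realisation step (the host names, addresses and versions below are invented purely for illustration, and this is not the TML API):

```python
# Illustrative only: in a deployment topology each conceptual unit is
# realised by one or more concrete, fully specified units.
from dataclasses import dataclass

@dataclass
class ConcreteServer:
    hostname: str
    ip: str
    os: str

# One conceptual x86 server realised by two concrete servers, giving a
# load-balanced pair. All values are invented for illustration.
realisation = {
    "{x86 Server}": [
        ConcreteServer("web01.bank.internal", "10.0.1.11", "Red Hat Enterprise Linux"),
        ConcreteServer("web02.bank.internal", "10.0.1.12", "Red Hat Enterprise Linux"),
    ],
}
```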

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank’s IT estate, together with strict constraints on the relationships that may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs much more quickly and to a higher quality than before, resulting in faster progress through internal governance and less re-work. Furthermore, all the bank’s Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

Use Case 2.0 – Slices and Relationships: Extension

Since the introduction of Use-Case 2.0 we have received a number of questions about use-case slices, in particular how they relate to the concepts of include, extend and generalization. In this blog we will look at the impact of the extend relationship on use-case slicing.

What’s the Problem?

An extend relationship can be used to factor out optional behaviour from a use-case narrative. It is particularly useful in the following situations:

  1. Where the optional behaviour will be part of a separately purchased extension
  2. Where different customers require different variations of the same behaviour
  3. Where already implemented use cases need to be extended
  4. Where additions need to be made to previously frozen or signed off use cases

Consider a hotel management system with which customers can make online room reservations. As shown in figure 1, the primary use cases would be “Reserve Room”, “Check In Customer” and “Check Out Customer”.

Figure 1 – Hotel Management System Use Cases

Now let’s consider what happens if the hotel management system being built is to be a modular commercial product with an optional waiting-list feature. This feature allows a customer to be put on a waiting list when the room they want is already booked. The customer will then be informed when the room becomes available, or can have the room reserved automatically within a given timeframe. This behaviour could easily be captured within the “Reserve Room” use case, but since it is an optional feature it is factored out into an extension use case.

Figure 2 – The re-factored Reserve Room Use Case

Now, to provide a little more context, let’s first have a look at the use-case narrative of the original Reserve Room use-case.

Figure 3 – Reserve Room use-case narrative

Without the extending use case we would have had only one use case to slice up - Reserve Room.  Consider the following example Reserve Room use-case slices.

Figure 4 - Use-case slices for the Reserve Room use case

The question now is what will happen to these slices when we make use of extension:

  • Does an extending use case have its own use-case slices?
  • Does using extend change the number of use-case slices?
  • Does using extend have an impact on any existing use-case slices?

Does an extending use case have its own use-case slices?
The answer is yes. Using extend means that we move behaviour from one use case to another; we start by literally cutting and pasting text between the two use-case narratives. More specifically we take out one or more alternative flows and place them in a use case of their own. In this case the Alternative Flows AF16, 17, 18 and 19, which are all about the waiting list, would be moved to the new Handle Waiting List use case.

We could have left all the behaviour related to handling a waiting list in the Reserve Room use case; by using extend we have made the optional behaviour explicit. With extension, the extending use case is performed in the context of the original use case but without the original use case’s knowledge. This means that any use-case slice that requires behaviour of the extension use case must belong to the extension use case. So, extension use cases do have their own use-case slices.
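The bookkeeping is easy to see in a small sketch. The structure below is invented for illustration (it is not a tool’s API, and the flow descriptions are elided or invented); it shows the waiting-list flows moving out of Reserve Room, and a slice that therefore belongs to Handle Waiting List.

```python
# Illustrative sketch of use cases, flows and slices; not a tool's API.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    basic_flow: str
    alternative_flows: dict[str, str]

reserve_room = UseCase(
    "Reserve Room",
    basic_flow="Reserve an available room",
    alternative_flows={"AF1": "...", "AF15": "..."},  # AF16-AF19 moved out
)

handle_waiting_list = UseCase(
    "Handle Waiting List",
    basic_flow="Put customer on waiting list",                     # was AF16
    alternative_flows={"AF1": "...", "AF2": "...", "AF3": "..."},  # were AF17-AF19
)

@dataclass
class Slice:
    name: str
    use_case: UseCase      # a slice belongs to exactly one use case
    flows: list[str]       # the flows this slice exercises

# This slice requires extension behaviour, so it belongs to the extension
# use case, even though performing it also runs Reserve Room's basic flow.
join_waiting_list = Slice("Join waiting list", handle_waiting_list, ["basic_flow"])
```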

Does extension change the number of use-case slices?
Before refactoring we had one use case with its set of use-case slices. The question is what happens to this set when we factor out the optional behaviour using extension. Most likely the total number of use-case slices will remain the same, because any alternative flow significant enough to be moved to an extension use case would probably already have had its own slice or slices.

Does extension have an impact on the use-case slices?
Yes and no. Yes, because in the use-case slices we must refer to the right flows of the right use cases: the original use case or the extension use case. No, because the stories and test conditions remain the same regardless of which use case they belong to.

Some Examples

Let’s first have a look at the refined narrative of the Reserve Room use case and the narrative of the Handle Waiting List use case.

Figure 5 – Updated use-case narrative

And below you will find use-case slices from the extending Handle Waiting List use case.

Figure 6 - Use-case slices of the Handle Waiting List use case

Notice that in the example use-case slice above:

  1. The basic flow of Reserve Room is always required, because the Handle Waiting List (extension) use case cannot be performed without it.
  2. Alternative Flow 16 of the original Reserve Room use case has become the basic flow of the Handle Waiting List (extension) use case.
  3. Alternative Flows 17, 18 and 19 of the original Reserve Room use case have become Alternative Flows 1, 2 and 3 respectively of the Handle Waiting List (extension) use case.

Final words

So, as you can see, use-case slices are as effective for use cases and use-case models that use the extension mechanism as for those that don’t.  In the next blog in this series we will examine the effect of the generalization relationship on the slicing of use cases.

This post was co-authored with Ian Spence.

Useful links:

Use Case 2.0 Training Classes

Shop Floor Agility Resistance

Agile software development can be an answer to over-prescriptive, over-documented, command-and-control-style development. Agility is by no means anti-methodology, anti-documentation or anti-control; the idea is to restore balance and put people first.

In my experience most people think that it is primarily customers and management who have to be convinced that adopting agile practices will lead to better, faster, cheaper and/or happier software development. They also assume that transitioning to agile practices on the shop floor is a piece of cake: it is about more freedom, craftsmanship and collaboration, so who would object to that?

Well, in reality there are lots of reasons why analysts, developers, testers and people in similar roles object to agile. For example, there are people who object to teamwork because it threatens their knowledge-based status. Or what about team members who consistently refuse to update their tasks, preventing the team from having an up-to-date sprint burndown chart? Another example is team members who just cannot focus on one or two user stories, always do work outside the scope of the sprint and never finish their tasks. Normal project or people management, you say? What about team members who tell you that they do not have the required discipline and are not willing to try? Or what about staff who get violent when they are asked to move to another office space in order to create a co-located environment? Read More

MoSCoW Anxiety

According to Wikipedia, MoSCoW is a prioritization technique and a core aspect of agile software development. Its purpose is to focus a team on the most important requirements, for example to meet a deadline.
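For readers who have not used the technique, a minimal worked example of the classification (the requirements below are invented):

```python
# Minimal MoSCoW illustration: classify requirements, plan the Must haves first.
from enum import IntEnum

class MoSCoW(IntEnum):
    MUST = 1
    SHOULD = 2
    COULD = 3
    WONT = 4            # "Won't have this time"

backlog = [
    ("Take online payments", MoSCoW.MUST),
    ("Email booking confirmation", MoSCoW.SHOULD),
    ("Loyalty points scheme", MoSCoW.COULD),
    ("Multi-currency support", MoSCoW.WONT),
]

# Sorting by priority keeps the team focused on the deadline-critical work.
for requirement, priority in sorted(backlog, key=lambda item: item[1]):
    print(f"{priority.name:6} {requirement}")
```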

I know of many project teams that struggle with this technique because their stakeholders are unwilling to do the prioritization, or to accept the technique at all. If you have ever participated in a project where all requirements were classified as must have, I am sure you know what I mean.

What can you do when this happens to you? Well, options that might come to mind are running and hiding, or staging a coup to install a decent Product Owner. However, before you go to such extremes you might want to try another option first.

Basically what we have here is a kind of anxiety: MoSCoW anxiety, if you will.  Anxiety? Yes!  In my experience many stakeholders simply become afraid that they will not get what they have asked for when they are asked to classify requirements below the must-have level. This makes perfect sense when you consider that many projects deliver only a (small) part of the promised functionality, and that a lot of stakeholders have felt let down by IT more than once. Read More

Semat – what happens? by Ivar Jacobson

I would like to draw your attention to three recent blog entries: http://sematblog.wordpress.com/

1) "You are a developer - what is in Semat for you".

2) "Agile in everything".  One of the underlying principles of Semat is that working with methods needs to be agile (not just the methods themselves but working with them).  This implies features not previously found in how to define, use and adapt methods.

3) "A Major Milestone: On the way to a new standard".  An RFP of a standard based on the key ideas of Semat has been issued by OMG.  Letters of Intent are due on November 22, 2011; submissions are due on February 22, 2012.

This is very good progress, but honestly I don't feel the acceptance of the RFP is a sufficient step to declare success.  In the blog "A Major Milestone: On the way to a new standard", we finish by saying:

"Getting the RFP approved by OMG was one of the major milestones of Semat. Quoting Churchill: “Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.” Now we need to create something that will go beyond anything previously done by any standards body working with methods: getting the standard adopted by the broad developer community.

This is a challenge that cannot be overestimated.  This requires new fresh ideas that are not typical for standard bodies and methods work.  Fortunately, the Semat teams have several such new ideas. ‘Separation of concerns’ and ‘agile in everything’ will guide us, but more is needed.”

We have fresh new ideas for how to describe methods and practices in a very lightweight way, ideas that will significantly improve readability.  The kernel will allow us not just to learn practices easily but, most importantly, to use them during real work.  Earlier approaches have been completely silent on use, whereas modern approaches such as Kanban and Lean rely on similar ideas.

The number of people working on Semat has more than doubled over the last couple of months.  New chapters of Semat have been set up in China and Latin America.  Still, we would like to welcome more talented people to work with us.

--Ivar

Use Cases – What is New? by Ivar Jacobson

Use cases as we deal with them today have gone through a major face-lift.  As we refine and improve them, we are careful not to impact any of the things that are key to their popularity and success.  Without really changing the key ideas, the impact of the changes is dramatic: the result is a fundamentally more efficient way of developing software than with the original use cases.

What is new about use cases?
The impact comes essentially from two areas: user stories and aspect-orientation.  The result is that we have adapted use cases for backlog-driven development and for managing cross-cutting concerns.

User stories:
In the past we had two concepts: use cases and scenarios.  Scenarios were a kind of user story.  In 2003 we introduced the concept of a use-case module (published in a paper [1] and in the aspect book [2]).  A use-case module was a slice through the system: it included a use case (or a part of a use case), its analysis counterpart, its design, its code and its test.  Influenced by Scrum and user stories we have sharpened these concepts and improved the terminology.  Now we talk about use cases, stories and use-case slices.  Thus we now have:
1)    Use cases are, as they have always been, sets of structured stories (user stories if you want) in the form of flows of events, special requirements and test scenarios.
2)    Each story is managed and implemented as a use-case slice, which takes the story from its identification through its realization, implementation and test, allowing the story to be executed.
3)    Thus a use-case slice is all the work that goes together with a particular story.  Each story, and thus its slice, is designed to be a proper backlog element, realized within an iteration or a sprint.
4)    The use-case strategy (starting from a use-case model) makes it significantly easier than the traditional user-story strategy to identify the right user stories to work with, and to understand how the ones selected make up the whole system.  As the use cases are now developed slice by slice, the size of the use cases is no longer a problem!
Thus, use cases are what they have always been.  Stories are abstract scenarios à la user stories.  Use-case slices are use-case modules made smaller, suitable as backlog entries.  The terms scenario and use-case module will thus be replaced by story and use-case slice, removing the ambiguity between the abstract story-like scenarios and the concrete test scenarios.
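A small sketch of the resulting planning model (all names invented, not a tool’s API): slices, not whole use cases, are the backlog elements pulled into sprints.

```python
# Illustrative only: use-case slices as backlog elements planned per sprint.
from dataclasses import dataclass

@dataclass
class UseCaseSlice:
    story: str
    use_case: str
    sprint: int | None = None    # set when the slice is pulled into a sprint

backlog = [
    UseCaseSlice("Reserve an available room", "Reserve Room"),
    UseCaseSlice("Handle payment declined", "Reserve Room"),
    UseCaseSlice("Join waiting list", "Handle Waiting List"),
]

# A big use case is no planning problem: it enters development one slice
# at a time, while the use-case model still shows the whole.
backlog[0].sprint = 1
backlog[1].sprint = 2
```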

Note: this can be compared with the user story approach where:
1)    The stories are captured as a set of unstructured user stories.
2)    Each user story is managed and implemented as one-or-more user story slices, which take the story from its identification through its realization, implementation and test allowing the user story to be executed.
3)    If a user story is too much to implement in one go, the story is sliced up into a number of smaller user stories and the original user story is disposed of. This illustrates that it is the user-story slices that are implemented, not the user stories.
4)    Additional story types, such as Epic and Theme, are added to act as placeholders for user stories that we know will have to be sliced before they can be implemented.

Aspects:
Aspect-orientation has inspired us to deal not just with application-specific use cases (functional requirements) but also with infrastructure use cases (non-functional use cases).  The latter are dealt with as cross-cutting concerns, allowing us to add behavior to an existing system without actually changing its code.  Examples of such non-functional behavior are persistency, logging of transactions and security.  This has helped us to deal with requirements (and their realizations) for systems of systems (enterprise systems, product lines, service-oriented architectures), and for partial systems such as frameworks and patterns.  See our book on aspects [2].
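The idea is notation-independent; as a loose analogy only (this is not the notation from the book), a Python decorator can add a cross-cutting concern such as transaction logging without touching the existing code:

```python
# Loose analogy for a cross-cutting concern: logging added around existing
# behaviour without editing it.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """Wrap func with entry/exit logging, leaving its body untouched."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("enter %s%r", func.__name__, args)
        result = func(*args, **kwargs)
        logging.info("exit %s -> %r", func.__name__, result)
        return result
    return wrapper

@logged                          # behaviour added from the outside
def transfer(amount, source, target):
    return f"moved {amount} from {source} to {target}"

transfer(100, "savings", "current")
```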

Thus, the key ideas have not changed, but they have been augmented with features that support backlog-driven development and working with non-functional requirements.

Use cases with stories and story slices address many of the issues now raised with the user-story-only strategy.  Use cases with cross-cutting concerns address many of the problems analysts have raised with non-functional requirements.  To people who have already adopted use cases the changes do not seem large, but their impact on the way we develop software is dramatic.

-- Ivar

[1] Ivar Jacobson, “Use Cases and Aspects – Working Seamlessly Together”, Journal of Object Technology, July–August 2003.

[2] Ivar Jacobson and Pan-Wei Ng, Aspect-Oriented Software Development with Use Cases, Addison-Wesley, 2005.

Use-cases – why successful and popular? by Ivar Jacobson

I am pleased, honored and gratified that use cases are still a popular way of working with requirements.  Googling “use case” yields six times more hits than Googling “user story”, but software development should not be driven by popularity; instead we should use the most practical way of working.  And of course we have learnt something from other techniques.  For instance, as I will discuss in my next blog, user stories and aspect-orientation have inspired us to make use cases even better while maintaining their core values.

The popularity of use cases has led to some misunderstandings and some distortions of the original technique.  This is natural, and while it is encouraging to see authors take the original concept and adapt it to solve new problems, some of the misconceptions and distortions have clouded the original vision.

Common misunderstandings

Before further discussing the improved use cases, let’s first discuss common misunderstandings about use cases as we have had them since their inception (1986-1992).  Many people believe that:

1) Use cases are for requirements only, which is not true. In fact, from the very beginning, they have also been used to analyze behavior, design software, and drive test identification, just to name a few uses.

2) Use cases are heavyweight, i.e. that you need to write a fat specification of each use case, which is also not true.  In fact, use cases can range from exceptionally lightweight (a brief description only), to lightweight (just an outline of the flows), to comprehensive (full descriptions of all behavior), with every variation in between.  For most systems an outline can be very valuable and yet still be very lightweight.  Today we express this in a better way: when describing use cases, focus on the essentials, on what can serve as placeholders for conversations.

3) Use cases are a technique for decomposing the behavior of a system, which is also not true.  Some authors have introduced levels of decomposition, and others try to show use cases “calling” other use cases as if they were subroutines.  Neither of these is right.  A use case shows how a system delivers something of value to a stakeholder. Use cases that need to be “composed” in order to provide value are not real use cases.

4) Use cases are complicated.  In fact, using use cases, if done right, makes a system easier to understand.

  • It is impossible to understand what a system does from looking at many hundreds of user stories; the equivalent use-case model might express the system’s behavior in a few handfuls of use cases.
  • A user is represented by a stick figure, a use case by an oval, and their interconnection by a simple line.
  • The relationship between a use case and its scenarios is likewise very easy to represent.
  • To solve this problem with user stories, people have started to invent concepts such as themes and epics, making a case that the user story by itself is an incomplete concept.
  • The use-case approach can accommodate a wide range of levels of detail without introducing new and potentially confusing concepts.

5) Use cases are seen as being good only for green-field development, which of course is not true.  They are great for explaining large legacy systems, since with such systems there is often little or no documentation left.  Use-case modeling is a cheap technique that is easy to get started with to capture the usages of a system.

What people like about use cases

The reason use cases have become so widely accepted is that, since their introduction, they have proven useful in so many ways in software development.

1) A use-case model is a picture, as already mentioned, which allows you to describe even a complex system in an easy-to-understand way, and which tells in simple terms what the system is going to do for its users.

2) Use cases give value to a particular user, not to an unidentifiable user community.

3) Use cases are test cases, so when you have specified your use cases you have also, after complementing them with test data, specified your test scenarios.

4) Use cases are the starting point to design effective user experiences, for instance for a web site.

5) Use cases ‘drive’ the development through design and code.  Each use case is a number of scenarios; each scenario is implemented and tested separately.

Moving forward
As we refine and improve use cases we are careful to make sure that we don’t impact any of these things that are key to their popularity and success.  In my next blog I will describe how we adapted use cases to backlog-driven development and to managing cross-cutting concerns.

-- Ivar

Semat – moving forward by Ivar Jacobson

Over the last several months I have been very silent, but not inactive: I have been hard at work with a dozen other people on moving Semat forward.  You will soon hear a lot more from us, but in the meantime I would like to give you a quick update on our progress.

As you may recall, the Grand Vision of Semat was to re-found software engineering based on a widely agreed upon kernel representing the essence of software engineering.  The kernel would include elements covering societal and technical needs that support the concerns of industry, academia and practitioners.

The troika (Bertrand, Richard and I) were pleased, honored and gratified to find that, in a short period of time, a dozen corporate and academic organizations and some three dozen well-known individuals from the fields of software engineering and computer science became signatories in support of the vision.  In addition, more than 1,400 other supporters agreed to the call.

In November 2010, the troika agreed that we would move the work on the kernel to OMG (Object Management Group) to get the proper governance we needed.  Since then we have been working in three different but overlapping groups on three tasks:

Moving the development of the kernel to OMG.

In order to move the work to OMG, an RFP (request for proposals) first had to be issued.  A couple of people from Semat have worked together with a couple of OMG members to specify an RFP for what is now called ‘A domain-specific language and a kernel of essentials for software engineering’.  In early December 2010 an early version of this RFP was presented to the Analysis and Design Task Force of OMG in Santa Clara, where it was very well received.  Several other OMG members have since joined us to work on the RFP, which will be published within a few weeks.  On March 21-24 the RFP will be discussed at an OMG meeting in Arlington/Washington DC.  We hope and expect it to be approved, after which the work on proposals can start.  Anyone can submit a proposal, and so will we.

Our proposal for a kernel

Semat itself will of course not submit a proposal in response to the RFP, but key players are now working together to continue the work we started within Semat.  There is one team lead, Paul MacMahon, who along with 12-15 participants will now continue the work in a couple of tracks.  The idea of doing architectural spikes continues.  The plan is still to deliver something that can be used by industry by April 1.  Personally, I think the work has slowed down because of the work with OMG and the continued work on Semat itself, which I will describe next.  However, we will deliver something of interest, and of value, within a couple of months.

The kernel is just a first step in the Grand Vision of Semat.  However, much more work needs to be done.

Even if the development of the kernel now has been moved under the OMG’s umbrella, Semat still has a lot of work to do. We need for example to:

  • be a demanding “customer” of OMG, making sure that as a community, we get what we want,
  • support the community in its effort to get reusable practices,
  • move the work to the academic community to inspire the development of new curricula and useful research.

Thus, a vision for the next couple of years is needed.  A team of eight people has been working for more than a month to develop a proposal for a Three-Year Vision of Semat, which should be published within a couple of weeks.  We will focus on the impact we expect to have on three key user groups: practitioners, industry and academia.  The impact should be measurable, not just hand-waving.  How we will work to achieve the results specified in the vision will be discussed separately; first we want to agree on where we want to go.

As I am sure you understand, working to ensure that the vision of Semat becomes reality is a challenging task to say the least.  However, it is one well worth the effort.  Please join us.

More accurate requirements: Who framed Roger Rabbit?

Last June at Innovate 2010 in Florida, Kurt Bittner envisioned the new role and responsibilities of the next generation business analyst. If you were not able to attend, his presentation is available online so you can check it out: Transforming the role of the Business Analyst. The new role and responsibilities are needed to provide solutions for ongoing problems that a lot of companies are faced with, such as:

  • Users expecting functionality they did not initially ask for
  • Users demanding functionality they will never use
  • Contradictory or conflicting requirements

In order to be more successful, a number of changes have to be made and lessons learned. One of them is that business analysts need to focus on desired outcomes rather than features. Another is that business analysts need to probe into root causes rather than being satisfied with just identifying wants. Focusing on outcomes and unravelling root causes can be hard work, and it is easy to mix them up or to get stuck. A smarter way is to be more aware of the language used for questioning, and of context frames. Read More

Dutch post: Clearer requirements: Choose the right packaging (“Meer heldere requirements: Kies de juiste verpakking”)

Last June at IBM Innovate 2010 (Florida), my colleague Kurt Bittner presented his vision of the new role and responsibilities of the next generation of business analysts. If you did not get the chance to attend his presentation, have a look at it on Slideshare: Transforming the role of the Business Analyst. Below are some observations, or frequently occurring problems, that led to his vision:

  • Users expect different functionality from what they originally asked for.
  • Users demand functionality they will never use.
  • Users give contradictory or conflicting requirements. Read More