UML

Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank over the last 18 months helping them introduce new methods of working, and IBM’s Collaborative Lifecycle Management tooling to support these methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA’s Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank’s IT culture away from disparate production of Microsoft Word documents and Visio diagrams towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices, the creation and delivery of training material, guidance documentation in text and video form, the founding of a Community of Practice (CoP), advanced training and development programmes for Champions within the CoP, mentoring support for project teams adopting the new methods and tools, and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been to roll out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. UML already offers the deployment diagram as a means to show how software components execute upon middleware and hardware, plus the other elements that are required to deliver a fully working system. Critics argue, however, that the UML deployment diagram offers little more than pictures, lacking semantics rich enough for tool-based validation; furthermore, it carries insufficient information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous to UML models in that they capture the elements of an infrastructure design, the relationships between elements, and views of the design via diagrams. To achieve this a different modelling language is used: the Topology Modelling Language (TML).

The method which underpins the use of TML requires that several topologies are created when considering deployment architectures for a system, with each topology refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology, whose role is two-fold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones) within which Nodes sit that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. This stack may already exist or may still need to be specified and provisioned, but either way this is a level of detail that does not need to be considered while the deployment context is being determined. And to help determine the system context, actors may be included in the topology to maintain focus on scenarios of use.

An example logical topology is shown in the image below:


You can see two locations on the diagram, ‘Internet’ containing a primary actor and ‘Data Centre’ with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.
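
The structure just described can be captured with a few toy types. The following Java sketch is purely illustrative; the class names and element names (Customer, Web Node, and so on) are invented stand-ins, not the actual TML API:

    import java.util.ArrayList;
    import java.util.List;

    // Toy model of a logical topology: locations contain nodes, nodes host
    // components, and actors keep the focus on scenarios of use.
    public class LogicalTopologySketch {
        record Actor(String name) {}
        record Component(String name, String umlComponentRef) {}  // traceability link

        static class Node {  // a placeholder: no hardware/middleware detail yet
            final String name;
            final List<Component> hosted = new ArrayList<>();
            Node(String name) { this.name = name; }
        }

        static class Location {
            final String name;
            final List<Node> nodes = new ArrayList<>();
            final List<Actor> actors = new ArrayList<>();
            Location(String name) { this.name = name; }
        }

        public static void main(String[] args) {
            Location internet = new Location("Internet");
            internet.actors.add(new Actor("Customer"));  // primary actor

            Location dataCentre = new Location("Data Centre");
            Node web = new Node("Web Node");
            web.hosted.add(new Component("Online Banking UI", "uml://Design#OnlineBankingUI"));
            Node app = new Node("Application Node");
            app.hosted.add(new Component("Payments Service", "uml://Design#PaymentsService"));
            dataCentre.nodes.add(web);
            dataCentre.nodes.add(app);

            System.out.println(dataCentre.name + " has " + dataCentre.nodes.size() + " nodes");
        }
    }

The point to note is that a node carries no hardware or middleware detail at this stage; it merely hosts components within a location, and each component keeps a reference back to its UML equivalent.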

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML provides a feature whereby technology units may be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction; the benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level with a focus on performance, robustness, throughput and resiliency. Design details such as processor architectures, operating system versions, inter-process messaging solutions and the like should be deferred for now.

An example physical topology is shown in the image below:


In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and a physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains two nodes and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around the name.
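
One way to picture the conceptual marker is as a flag on each technology unit that a validator can inspect; the braces convention above maps naturally onto it. A minimal Java sketch, with invented type names rather than the real TML metamodel:

    import java.util.List;

    // Toy view of physical topology units: a conceptual unit (shown with
    // braces around its name in RSA) is not yet fully specified.
    public class ConceptualUnitSketch {
        record Unit(String name, boolean conceptual) {}

        // A topology is ready for deployment only when nothing is conceptual.
        static boolean fullySpecified(List<Unit> units) {
            return units.stream().noneMatch(Unit::conceptual);
        }

        public static void main(String[] args) {
            List<Unit> physicalTopology = List.of(
                new Unit("{x86 Server}", true),
                new Unit("{Operating System}", true),
                new Unit("Database Server", false));

            // false here: conceptual units remain, so detail is still deferred
            System.out.println(fullySpecified(physicalTopology));
        }
    }

Until every unit has been realised by a non-conceptual equivalent, the topology remains a high-level validation aid rather than a deployable specification.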

The Physical Topology contains mainly conceptual units and is therefore not a complete design, so one or more Deployment Topologies are created to finalise it:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level, a single conceptual server may be realised by several concrete servers to represent a load-balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real one, whether that be Windows, Red Hat Linux, or another platform.
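
That realisation step amounts to a one-to-many mapping from each conceptual unit to concrete ones. A hedged Java sketch; the host names, addresses and platform details are invented for illustration:

    import java.util.List;
    import java.util.Map;

    // Toy view of a deployment topology: one conceptual server from the
    // physical topology is realised by two concrete servers, representing
    // a load-balanced pair.
    public class DeploymentRealisationSketch {
        record Server(String hostname, String ipAddress, String os) {}

        public static void main(String[] args) {
            Map<String, List<Server>> realisations = Map.of(
                "{x86 Server}", List.of(
                    new Server("appsrv01", "10.0.1.11", "Red Hat Linux"),
                    new Server("appsrv02", "10.0.1.12", "Red Hat Linux")));

            realisations.forEach((conceptual, servers) ->
                System.out.println(conceptual + " -> " + servers));
        }
    }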

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank’s IT estate, as well as strict constraints on the relationships which may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs much more quickly and to a higher quality than before, resulting in faster progress through internal governance and less rework. Furthermore, all the bank’s Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.
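
The relationship constraints in such a customisation can be imagined as a whitelist of valid hosting pairs, checked as units are wired together. The following Java sketch is a hypothetical illustration, not IJI’s actual implementation:

    import java.util.Map;
    import java.util.Set;

    // Toy constraint checker: only whitelisted hosting relationships
    // between unit types are permitted on the palette.
    public class HostingConstraintSketch {
        static final Map<String, Set<String>> ALLOWED_HOSTING = Map.of(
            "server", Set.of("os"),
            "os", Set.of("database", "application-server"));

        static boolean canHost(String hostType, String guestType) {
            return ALLOWED_HOSTING.getOrDefault(hostType, Set.of()).contains(guestType);
        }

        public static void main(String[] args) {
            System.out.println(canHost("server", "os"));       // true
            System.out.println(canHost("server", "database")); // false: needs an OS in between
        }
    }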

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

Taking the temperature of UML by Ivar Jacobson

More than twelve years have passed since UML, the Unified Modeling Language, became a standard. During these years the perception of UML has ranged from the heights of the heavens to the depths of the earth.

At the beginning of the 1990s there were 26 published methods on object-orientation, most with their own notation. It was to address at least the notation problem that UML was conceived. As most of you probably know, Grady Booch, Jim Rumbaugh and I initiated the design of what became UML, but very quickly many other great people joined the effort and OMG undertook to launch the result. UML quickly made most other notations superfluous. Along with the notations, many methods also fell by the wayside, leaving the Unified Process, which had embraced UML, as a de-facto standard alongside UML. UML’s adoption spread like wildfire.

Will MDD Ever Come to Fruition? by Ivar Jacobson

I am often asked the question: “Will MDD ever be successful, and will the tools ever really be put in place to support it?” I was recently asked this again, so here are my thoughts.

The following is solely my opinion and can be argued or agreed with, but it comes from 15+ years of experience building application and data models, working with modeling tools including ERwin, Rational Rose and others, writing two books on UML, and working directly with clients who are modeling.

Model Driven Development in concept is great, but to date the tools, and the people who would have to use them, have not been able to keep up with the concepts. There are some very good tools on the market, like Telelogic Tau, Rational RoseRT and a few others, which do a fairly complete job of generating the needed code, but this is usually focused on the "systems" space and has not translated well to IT, as it is based on states and on generating the logic from those states.

On the IT side we do have similar concepts, but they start from business process models, using tools like WebSphere Business Modeler and similar tools from BEA, for example, which connect to business process runtime engines and generate the connection points of existing applications to automate communication and business process execution.

This all said, the uptake of MDD has not matched that of other areas, for what I believe are three reasons:

1. Developers are developers because they want to write code; they don't see models as complete enough, nor do they see it as their role to build models beyond simple architectural structures.

2. Most projects today start with something already in production. Modeling the architectures, use cases and processes is quite good for understanding what needs to be done, how it connects together and how to prioritize work, but this makes it difficult to generate a system that is only a piece to connect up, or an update.

3. I believe this third reason can stand on its own, but it also lends itself to the first two: the creation of a "black box". Using MDD tools creates a black-box effect where runtime generation, transformations and other constructs are managed by a proprietary engine which is difficult to alter and manage.

a. Developers often think they can do it better and faster, and don't want to rely on something they cannot touch.

b. Because of the black-box approach, it often requires a rewrite of the engines that have already been put in place for the existing systems, causing added cost that nobody is willing to fund.

We have tried similar types of technologies in the past, and few of those have been successful either. Runtime data access is a great example: tools and companies popped up in the late 90s that created object-to-data mapping and automated the creation of the runtime access, claiming to be much faster and cheaper than doing it yourself. Yes, this was good, at least in theory, but few bought into it. Why? The black-box approach, hard to test, developers thinking they can write faster algorithms, management not trusting something it cannot see, and so on. This is very similar to MDD and its lack of success, in my opinion.

That all said, I do have some colleagues who are using MDD on some very interesting projects, building components for airplanes for example, which they feel are quite successful, but these also seem to be few and far between.

UML 2.0 by Ivar Jacobson

For many years I travelled regularly to Japan. I became acquainted with many companies and many individuals. Several of my books were translated into Japanese, so I gained many “friends” there. However, I had not been back to Japan since 2001, so I was very excited when I hit the ground there two days ago.

I was invited to give a keynote at a UML conference in Tokyo. My presentation was on the Essential Unified Process with the Essential Unified Modeling Language, EssUP+EssUML. You already know what EssUP is from my previous postcards, but I have so far not described our work on EssUML.

UML was originally developed by a very tight team with Grady Booch, Jim Rumbaugh and me as members. We were nicknamed The Three Amigos, a term I never really adopted myself. Once we had defined what we wanted to achieve, other methodologists from around the world were invited to participate. The core of UML came from the work by the three of us, no doubt about that, but several other individuals made significant contributions. Grady, Jim and I had regular teleconferences in which most decisions were taken. We worked very efficiently. The first reasonably well-defined version of the language was UML 1.1, adopted by OMG in the fall of 1997, less than a year after we started to work seriously together. This must be a record for any kind of language standardization effort.

UML 2.0 was the next step in the evolution of UML, and it was achieved in a very different way, following standard OMG procedures. Now a committee of more than twenty persons took over. While we had worked on UML 1 in an agile way, the committee worked on UML 2 in anything but an agile way. At the UML World conference in Austin, Texas, June 16, 2005, during a panel discussion on UML, Steve Cook said something to the effect of “UML 1 was excellent, whereas UML 2 is terrible.”

UML 2 is too big. Its specification is about 1,000 pages long! It includes many good ideas but it is written by tool vendors for tool vendors. No practitioner will read this specification.

There is a lot to be said about what happened in moving from UML 1 to UML 2. UML 1 needed a consolidation period for experience gathering and fine-tuning, which it unfortunately never got. Instead, the work on UML 2 started with a largely new group of people willing and wanting to change anything they didn’t fancy.

Grady and I were not involved in the UML 2 work. Many questionable changes were made. As an example, I found out that the committee had decided to change use cases from being what they always had been since I first introduced them to being something completely different. They claimed that I really didn’t understand what use cases were, and they had to fix it! Yes, this is a true story.

Once the decision to change use cases was taken, and I was informed about it, I wrote an email to the committee and expressed my severe concerns. I told them that I would not be able to support UML 2 if they changed the fundamental meaning of use cases. That email had no effect; the committee knew better! Thus I had to bring my concerns to top management at Rational. Rational made it clear to the committee that if changes of this nature were made, the user community would react very negatively and the reputation of the language would be seriously damaged. Rational expressed that this was unacceptable and more or less threatened to walk away from the UML 2 effort. Our two participants on the UML 2 committee were instructed to be very cautious with changes and to consult with a team of Rational experts before accepting any changes.

As you can understand this period was quite dramatic for anyone involved. There was a time when I believed I would have to publicly denounce UML 2. Fortunately, the UML committee came to their senses and I didn’t need to take such a dramatic step. Still, UML grew too much under the influence of a large committee. Today all UML fans suffer from this mistake.

However, at its roots UML is good. The basic techniques have been proven in practice over many years. As an example, Ericsson could claim that it used UML already in 1969, because at that time we used component diagrams, sequence diagrams, collaboration diagrams, state charts and state transition diagrams (a combination of state charts and activity diagrams).

Thus on one hand UML has a sound base, but on the other hand it has become too bulky. We know that with less than 20% of UML you can model more than 80% of all software systems. What we need to do is extract the 20% that people need and define it precisely. We have started doing this, and we call this 20% EssUML. EssUML is not a new language. It doesn’t even change UML. It is just a proper subset of UML.

EssUML is one more thing, though. It presents UML in a way that is attractive to developers, quite unlike the UML 2 specification, whose intended audience is meta-modellers and tool builders. To achieve this we use a technique similar to the one used for EssUP: every language element is represented by a card or a set of cards, and its description is usage-centric (or use-case-centric).

Now you may ask yourself: what are the 80% that we don’t need in UML? In this case the expression “the devil is in the details” could not be truer. The UML elements and their graphical representation in diagrams are simply overloaded with too many nice-to-haves that are rarely useful (or used). Essential UML is, as the name indicates, about extracting what is truly essential. Exactly what this means in terms of diagram types and element types is a bit early to say, but my personal opinion is that, for example, Activity Diagrams (and all their associated element types and detail) do not qualify as essential. Experience shows that it is not cost-effective to develop and maintain this kind of diagram, especially if you also produce textual flow-of-events descriptions.

While they are not really defined as part of the language, there is often (at least in tools) an artificial division into various types of structural diagrams, such as class diagrams, component diagrams, deployment diagrams, and package diagrams. I think this division often misleads people into believing that they need all these diagram types and that the types are distinctly different from each other, which is of course not true. Anyway, going forward we will define this, and also specifically what the 80% that we don’t need is. Every diagram and element type will eventually have a card describing its essentials, child elements, relationships and so on. In particular, the card should focus on describing the use cases of the diagram/element.

As an example, a card representing the component diagram (if we choose to have such a diagram type) could look like this:
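
Purely as an illustration, and with invented content, such a card might read:

    Component Diagram                                      [EssUML card]
    Essentials:     components and the interfaces they provide and require
    Child elements: component, interface, port, connector
    Relationships:  dependency, realisation
    Use cases:      show how the system is partitioned into replaceable
                    parts; trace which components depend on which interfaces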

Of course a card doesn’t contain enough information about the component diagram, so we provide a guideline that goes a bit deeper. In most cases this is enough, as developers will learn more as they use the diagram in practice.

The guideline is 2-4 pages long and describes the essential content of the diagram, its use cases, hints and tips, and common mistakes. It also references the OMG specification and other UML books. In general, however, developers don’t read much. There is a law of nature:

Law of Nature: People don’t read books

Over the years I have come to realize that even if people buy books, it does not necessarily mean that they read them. This is certainly true for manuals such as process manuals or language specifications. I believe we can get everyone in a team to read the cards. If we write really well, we might get up to 25% of developers to read the guidelines, but almost no one will read the books being referenced. Additionally, we of course have to acknowledge that the essential 20% varies a bit depending on the specific situation at hand, i.e., the underlying business reason for modelling in the first place can vary:

  • Are the diagrams/models key ingredients in driving the understanding and design as we go, or are they primarily useful as documentation for, e.g., the next project?
  • Is our intent to generate code for large parts of the system, or do we model simply to illustrate key aspects of the system and its design?
  • What is the size, complexity and longevity of the product/project?

The answers to questions such as these will of course have an impact on the process we use and follow, the amount of documentation we produce, and on how much we model and at what level of detail.

As you know, I am a strong believer that intelligent agents are the most efficient way to capture and disseminate knowledge, and that is of course as applicable to practices, as in EssUP, as it is to technological underpinnings like EssUML. I strongly believe in having intelligent agents provide context-sensitive advice and training online. They can help you by performing mundane tasks, recognizing design patterns that you may want to apply, reviewing your work, and so on. These are some of the things that WayPointer can help you with.

My keynote in Tokyo was very well received. I was invited to come back to this exciting city, and I will be back real soon. It had been five years since I was last here; as it looks now, I will be back almost every month. This is wonderful and I really look forward to it.