UML

Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank over the last 18 months, helping them introduce new methods of working and IBM's Collaborative Lifecycle Management tooling to support those methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA's Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank's IT culture, away from the disparate production of Microsoft Word documents and Visio diagrams and towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices; the creation and delivery of training material; guidance documentation in text and video form; the founding of a Community of Practice (CoP); advanced training and development programmes for Champions within the CoP; mentoring support for project teams adopting the new methods and tools; and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been rolling out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. The Unified Modelling Language (UML) already offers the deployment diagram as a means to show how software components execute on middleware and hardware, together with the other elements required to deliver a fully working system. Critics argue, however, that the UML deployment diagram offers little more than pictures: its semantics are not rich enough for tool-based validation, and it carries insufficient information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous to UML models in that they capture the elements of an infrastructure design, the relationships between those elements, and views of the design via diagrams. To achieve this a different modelling language is used: the Topology Modelling Language (TML).

The method which underpins the use of TML requires that several topologies are created when considering deployment architectures for a system, each refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology, and its role is two-fold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones) within which Nodes sit that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. The stack may already exist or may still need to be specified and provisioned, but either way this is a level of detail that does not need to be considered while the deployment context is being determined. To help determine that context, actors may be included in the topology to maintain focus on scenarios of use.
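To make this concrete, here is a minimal sketch of what a Logical Topology captures, written in plain Python rather than TML (the tool's actual file format is not reproduced here); all class names, instance names and the uml_ref values are hypothetical, chosen only to mirror the example that follows.

# Hypothetical sketch (not real TML): a Logical Topology captures locations,
# placeholder nodes, the components they host, and actors.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    uml_ref: str  # traceability link back to the equivalent UML Component

@dataclass
class Node:
    # A placeholder for an as-yet-unspecified hardware/middleware stack.
    name: str
    hosts: list = field(default_factory=list)

@dataclass
class Location:
    # A physical or logical location, e.g. a data centre or security zone.
    name: str
    nodes: list = field(default_factory=list)
    actors: list = field(default_factory=list)

# Mirrors the example below: an 'Internet' location holding a primary actor,
# and a 'Data Centre' whose two nodes host the in-scope components.
logical_topology = [
    Location("Internet", actors=["Online Customer"]),
    Location("Data Centre", nodes=[
        Node("Web Node", hosts=[Component("Web UI", uml_ref="...")]),
        Node("App Node", hosts=[Component("Order Service", uml_ref="...")]),
    ]),
]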

An example logical topology is shown in the image below:

[Image: example Logical Topology]

You can see two locations on the diagram, ‘Internet’ containing a primary actor and ‘Data Centre’ with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML allows technology units to be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction. The benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level, with a focus on performance, robustness, throughput and resiliency. Design details such as processor architectures, operating system versions and inter-process messaging solutions can be deferred until later.
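Continuing the hypothetical Python sketch above, a Physical Topology can be pictured as stacks of units realising the logical nodes, where a boolean stands in for TML's conceptual marking (rendered in RSA as braces around the unit name); the unit types shown are illustrative assumptions, not the bank's actual palette.

# Hypothetical continuation of the earlier sketch: a Physical Topology
# realises each logical node with a stack of technology units; a unit is
# conceptual until it is fully specified in a later topology.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    unit_type: str           # e.g. "x86 Server", "Operating System"
    conceptual: bool = True  # True while the unit remains abstract

@dataclass
class Stack:
    realises: str  # name of the logical node this stack realises
    units: list = field(default_factory=list)

app_stack = Stack(realises="App Node", units=[
    Unit("{x86 Server}", "x86 Server"),              # braces: conceptual
    Unit("{Operating System}", "Operating System"),
    Unit("{Database}", "Database"),
])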

An example physical topology is shown in the image below:

[Image: example Physical Topology]

In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains two nodes and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around the name.

The Physical Topology contains mainly conceptual units and is therefore not a complete design, so one or more Deployment Topologies are created to finalise it:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level a single conceptual server may be realised by several concrete servers, to represent a load-balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real one, be that Windows, Red Hat Linux or anything else.
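That realisation step can be sketched as a simple mapping, again hypothetically; the host names, IP addresses and operating system below are invented for illustration and are not taken from the bank's estate.

# Hypothetical sketch: a Deployment Topology realises one conceptual unit
# with several concrete units, e.g. a load-balanced pair of servers.
realisation = {
    "{x86 Server}": [  # the conceptual unit from the Physical Topology
        {"hostname": "appsrv01", "ip": "10.0.1.11", "os": "Red Hat Linux"},
        {"hostname": "appsrv02", "ip": "10.0.1.12", "os": "Red Hat Linux"},
    ],
}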

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank's IT estate, together with strict constraints on the relationships which may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs much more quickly and to a higher quality than before, resulting in faster progress through internal governance and less re-work. Furthermore, all the bank's Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.
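The flavour of those relationship constraints can be conveyed with a small hypothetical validation sketch. The real customisation was built on RSA's Deployment Planning extension mechanisms, which are not reproduced here, and the unit types below are invented.

# Hypothetical sketch of 'hosted-on' constraints between unit types; only
# the listed parent types may directly host a unit of the given child type.
ALLOWED_HOSTING = {
    "Application": {"Application Server"},
    "Application Server": {"Operating System"},
    "Operating System": {"x86 Server"},
}

def may_host(child_type: str, parent_type: str) -> bool:
    # True if a unit of child_type may sit directly on a unit of parent_type.
    return parent_type in ALLOWED_HOSTING.get(child_type, set())

assert may_host("Operating System", "x86 Server")
assert not may_host("Application", "x86 Server")  # must go via middleware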

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

What Drives Me by Ivar Jacobson


“The best way to predict the future is to invent it!” (Alan Kay)

A few days ago, a very simple but thought-provoking question was put to me: “what is it that drives me?” The simple truth is that I do not know. But I do know what it is that does not drive me. It is not about money. Actually, it has never been about money. Neither is it about power. I am happy to step aside and I am happy to delegate both up and down. It is not about popularity, but I do like to be appreciated for what I do.

No, it has to do with helping others improve themselves over and over again. I get a kick out of seeing others become successful because I helped them. It was like that in the late 1960s and the ‘70s, when the Ericsson AXE system beat all competition and won every contract thanks to being component-based. It was the same when Rational was successful because of UML and Objectory, and Telelogic because of SDL. I am happy when people are successful thanks to use cases.


Will MDD Ever Come to Fruition? by Ivar Jacobson

I am often asked: “Will MDD ever be successful, and will the tools ever really be put in place to support it?” I was asked this again recently, so here are my thoughts.

The following is solely my opinion and can be argued or agreed with, but it comes from 15+ years of experience building application and data models, building modeling tools including ERwin and Rational Rose, writing two books on UML, and working directly with clients who are modeling.

Model Driven Development is great in concept, but to date the tools, and the people who would have to use them, have not been able to keep up with the concepts. There are some very good tools on the market, such as Telelogic Tau and Rational RoseRT, which do a fairly complete job of generating the needed code, but this is usually focused in the "systems" space and has not translated well to IT, as it is based on states and on generating the logic from those states.

On the IT side we have similar concepts, but they start from business process models, using tools like WebSphere Business Modeler and similar tools from BEA, which connect to business process runtime engines and generate the connection points between existing applications to automate communication and business process execution.

All that said, the uptake of MDD has not matched that of other areas, for what I believe are three reasons:

1. Developers are developers because they want to write code; they don't see models as complete enough, nor do they see it as their role to build models beyond simple architectural structures.

2. Most projects today start with something already in production. Modeling the architectures, use cases and processes is quite good for understanding what needs to be done, how it connects together and how to prioritize work, but this makes it difficult to generate a system that is only a connecting piece or an update.

3. The third reason can stand on its own, but it also feeds the first two: the creation of a "black box". MDD tools create a black-box effect in which runtime generation, transformations and other constructs are managed by a proprietary engine that is difficult to alter and manage.

a. Developers often think they can do it better and faster themselves, and don't want to rely on something they cannot touch.

b. Because of the black-box approach, it often requires a rewrite of the engines that have already been put in place for the existing systems, causing added cost that nobody is willing to fund.

We have tried similar types of technology in the past, and few of those were successful either. Runtime Data Access is a great example: tools and companies popped up in the late '90s offering object-to-data mapping and automated creation of the runtime access layer, which they claimed was much faster and cheaper than doing it yourself. Yes, this was good, at least in theory, but few bought into it. Why? The black-box approach, the difficulty of testing, developers thinking they can write faster algorithms, management not trusting something it cannot see, and so on. In my opinion this is very similar to MDD and its lack of success.

That all said, I do have some colleagues who are using MDD on some very interesting projects, building components for airplanes for example, which they feel are quite successful; but these also seem to be few and far between.

UML 2.0 by Ivar Jacobson

For many years I travelled regularly to Japan. I became acquainted with many companies and many individuals. Several of my books were translated into Japanese, so I got many “friends” there. However, I had not been back to Japan since 2001, so I was very excited to return when I landed there two days ago.

I was invited to give a keynote at a UML conference in Tokyo. My presentation was on the Essential Unified Process with the Essential Unified Modeling Language, EssUP+EssUML. You already know what EssUP is from my previous postcards, but I have so far not described our work on EssUML.

UML was originally developed by a very tight team with Grady Booch, Jim Rumbaugh and me as members. We were nicknamed The Three Amigos – a term I never really adopted myself. Once we had defined what we wanted to achieve, other methodologists from around the world were invited to participate. The core of the UML came from the work of the three of us, no doubt about that, but several other individuals made significant contributions. Grady, Jim and I had regular teleconferences in which most decisions were taken. We worked very efficiently. The first reasonably well-defined version of the language was UML 1.1, which was adopted by the OMG in the fall of 1997, less than a year after we started to work seriously together. This must be a record for any kind of language standardization effort.

UML 2.0 was the next step in the evolution of UML, and it was achieved in a very different way, following standard OMG procedures. Now a committee of more than twenty people took over. While we had worked on UML 1 in an agile way, the committee worked on UML 2 in anything but an agile way. At the UML World conference in Austin, Texas, June 16, 2005, during a panel discussion on UML, Steve Cook said something to the effect of “UML 1 was excellent, whereas UML 2 is terrible.”

UML 2 is too big. Its specification is about 1,000 pages long! It includes many good ideas but it is written by tool vendors for tool vendors. No practitioner will read this specification.

There is a lot to be said about what happened in moving from UML 1 to UML 2. UML 1 needed a consolidation period for experience gathering and fine-tuning, which unfortunately it never got. Instead, the work on UML 2 started with a largely new group of people, willing and wanting to change anything they didn’t fancy.

Grady and I were not involved in the UML 2 work. Many questionable changes were made. As an example, I found out that the committee had decided to change use cases from being what they always had been since I first introduced them to being something completely different. They claimed that I really didn’t understand what use cases were, and they had to fix it! Yes, this is a true story.

Once the decision to change use cases was taken, and I was informed about it, I wrote an email to the committee and expressed my severe concerns. I told them that I would not be able to support UML 2 if they changed the fundamental meaning of use cases. That email had no effect; the committee knew better! Thus I had to bring my concerns to top management at Rational. Rational made it clear to the committee that if changes of this nature were made, the user community would react very negatively and the reputation of the language would be seriously damaged. Rational expressed that this was unacceptable and more or less threatened to walk away from the UML 2 effort. Our two participants in the UML 2 committee were instructed to be very cautious with changes and to consult with a team of Rational experts before accepting any changes.

As you can understand this period was quite dramatic for anyone involved. There was a time when I believed I would have to publicly denounce UML 2. Fortunately, the UML committee came to their senses and I didn’t need to take such a dramatic step. Still, UML grew too much under the influence of a large committee. Today all UML fans suffer from this mistake.

However, at its roots UML is good. The basic techniques have been proven practical over many years. As an example, Ericsson could claim that it was using UML as early as 1969, because at that time we used component diagrams, sequence diagrams, collaboration diagrams, state charts and state transition diagrams (a combination of state charts and activity diagrams).

Thus on the one hand UML has a sound base, but on the other hand it has become too bulky. We know that with less than 20% of UML you can model more than 80% of all software systems. What we need to do is extract the 20% that people need and define it precisely. We have started doing this, and we call this 20% EssUML. EssUML is not a new language. It doesn’t even change UML. It is just a proper subset of UML.

EssUML is one more thing, though. It presents UML in a way that is attractive to developers, quite unlike the UML 2 specification, whose intended audience is meta-modellers and tool builders. To achieve this we use a similar technique to the one used when presenting EssUP. Every language element is represented by a card or a set of cards, and its description is usage-centric (or use-case centric).

Now you may ask yourself: what are those 80% that we don’t need in the UML? In this case the expression “the devil is in the details” could not be truer. The UML elements and their graphical representation in diagrams are simply overloaded with too many nice-to-haves that are rarely useful (or used). Essential UML is, as the name indicates, about extracting what is truly essential. What this means exactly in terms of diagram types and element types is a bit early to say, but my personal opinion is that, for example, Activity Diagrams (and all their associated element types and detail) do not qualify as essential. Experience shows that it is not cost-effective to develop and maintain this kind of diagram, especially if you also produce textual flow-of-events descriptions.

While these are not really defined as part of the language, there is often (at least in tools) an artificial division into various types of structural diagram, such as class diagrams, component diagrams, deployment diagrams and package diagrams. I think this division often misleads people into believing that they need all these diagram types and that the types are distinctly different from each other, which is of course not true. Going forward we will define this, and also specify exactly what the 80% that we don’t need are. Every diagram and element type will eventually have a card describing its essentials, child elements, relationships and so on. In particular, the card should focus on describing the use cases of the diagram or element.

As an example, a card representing the component diagram (if we choose to have such a diagram type) could look like this:

[Image: example card for the component diagram]

Of course a card doesn’t contain enough information about the component diagram, so we provide a guideline that goes a bit deeper. In most cases this is enough, as developers will learn more as they use the diagram in practice.

The guideline is 2-4 pages long and describes the essential content of the diagram, its use cases, hints and tips, and common mistakes. It also references the OMG specification and other UML books. However, in general developers don’t read much. There is a law of nature:

Law of Nature: People don’t read books

Over the years I have come to realize that even if people buy books, it does not necessarily mean that they read them. This is certainly true for manuals such as process manuals or language specifications. I believe we can get everyone in a team to read the cards. If we write really well we might get up to 25% of the developers to read the guidelines, but almost no one will read the books being referenced. Additionally, we of course have to acknowledge that the essential 20% varies a bit depending on the specific situation at hand, i.e. the underlying business reason for modelling in the first place can vary:

  • Are the diagrams/models key ingredients in driving the understanding and design as we go, or are they primarily useful as documentation to, e.g., the next project?
  • Is our intent to generate code for large parts of the system or do we model to simply illustrate key aspects of the system and its design?
  • What is the size, complexity and longevity of the product/project?

The answers to questions such as these will of course have an impact on the process we use and follow, the amount of documentation we produce, and of course how much and how detailed we model.

As you know, I am a strong believer that intelligent agents are the most efficient way to capture and disseminate knowledge, and that of course applies to practices, as in EssUP, as well as to technological underpinnings like EssUML. I strongly believe in having intelligent agents provide context-sensitive advice and training online. They can help you by performing mundane tasks, recognizing design patterns that you may want to apply, reviewing your work, and so on. These are some of the things that WayPointer can help you with.

My keynote in Tokyo was very well received. I was invited to come back to this exciting city, and I will be back really soon. It had been five years since my last visit, but as it looks now I will be back almost every month. This is wonderful and I really look forward to it.

UML by Ivar Jacobson

October 30, 2003

The largest conference on object technology is OOPSLA. The first OOPSLA conference was held in 1986, and I have attended all of them but one; I have presented papers, given tutorials or sat on panels each time. Last year I decided not to go. I wanted to break the rule (of always going to OOPSLA) so that I wouldn't become a slave to it. This year the conference took place in Los Angeles, or more precisely in Anaheim, the home of Disneyland.

OOPSLA is a meeting-place for pioneers in software technology from all over the world. Everywhere there are friends from many years back, and we really get together and discuss what we are doing, what we want to do, what we think is hot and what we need to change.

This year I participated in two panels: one on Reuse and Repositories, the other on Model Driven Architecture. The most interesting and passionate was the one on MDA, thanks largely to my friend Dave Thomas (also on the advisory board of Jaczone), who dared to question UML, as well as MDA as envisioned by the OMG, in a very direct way.

Dave: "UML is currently a language with no conventional syntax, and not even a comprehensive standard way for tool interchange (XMI is both awkward and inconsistently implemented). UML was designed by committee, most of whom are methodologists and hence have little experience with language design. UML lacks a simple semantic account as an executable language. There is no published semantic account although proponents claim it is well defined via action semantics."

In general (not only from Dave) I understand the critique against UML as follows: UML is designed by people who have no experience of traditional language design, which means that the language is a mess. Even worse, they have no experience of real software development; they come from the methodology camp with no real software product background. These people have never developed any real software.

Of course, in a setting like OOPSLA, Dave's words fell on fertile soil. It was a very popular standpoint.

While I am not completely positive about everything in UML, I heard nothing really new in Dave's position. To me it was déjà vu. I have heard the same critique for more than thirty years. I heard it when I introduced component-based development with visual modelling back in 1967. I heard it when we developed SDL, which became a world standard within telecommunications in 1976. And now I hear it again.

I was originally very critical of how we worked with UML 1.0. Some people wanted a very intuitive approach: define the language with basically just a notation, some meta-model diagrams and plain English text. UML 1.0 became hopeless to understand. I advocated that we should apply classical language design techniques: start with an abstract syntax, build a static semantic model on top of the syntax, and introduce dynamic semantic domains. I didn't suggest that we should define the operational semantics formally. I made this compromise primarily because most people wouldn't understand it and very few would care; moreover, this approach wouldn't differ from most other language designs. I have personally done a lot of language design work, when we defined SDL and CHILL and also when working on my doctoral thesis, so from a practical point of view I couldn't recommend it. Also, I felt comfortable that we could add it later.

During the work on UML 1.1 my approach was adopted. We didn't formalize the dynamic semantic domains; we defined the operational semantics textually only.

Thus it is not correct to say that language-design amateurs defined UML. My own thesis was a definition of a modelling language using classical language design techniques.

It is also not true that UML was designed by people with no practical experience of software development. I was not the only one with many years of hard work in real software development behind me. The UML team was also backed by tens of thousands of developers working behind the scenes and applying the technologies being proposed. We had reference groups all around the world that helped us understand the needs to be satisfied.

This was UML 1.1.

With UML 2.0 the situation changed. A lot of new people entered the scene, and a committee took over. With "a fresh look", they changed everything they didn't like, and they added whatever their "fresh look" told them should be added. Personally, I was so bored by committee work that I stayed out of it almost completely (I had done language design three times before, so I couldn't feel any excitement). I said "almost" because I had to get involved on one occasion. That was when the committee decided to remove use cases from UML. Well, they removed the kind of use cases we had always had, replaced them with something different… and then they called this new thing use cases. They were so arrogant that they told me, without a blink, that I had never understood use cases, but that now they had got them right. They voted in the committee and agreed to remove our use cases. Luckily, people outside the committee set things straight and put use cases back into UML.

Many members of that committee had never produced any useful software. Some members were really competent, but the politics became a burden. The committee took a lot of time; they were delayed by years, and the resulting UML 2.0 proposal is very complex. Still, I am optimistic. Interest in UML is so high, and there are so many people willing to fix whatever is potentially broken, that I am convinced without a single doubt that UML will continue to be successful. It represents very much what a good part of the software community wants to have.

However, this part is not in the majority. Today it may be 20% of the software community; the remaining 80% have not adopted visual modelling at all. Twenty years ago, I believe, less than 5% used visual modelling.

Thus I believe the criticism of UML has roots other than bad language design. The critique represents symptoms, not the root cause. I believe its roots lie in the fact that some people don't believe in visual modelling as such, but think that textual languages work better.

The critique is healthy. UML will most certainly become better defined. We will get better and better tools. The tools will make software development less complex; we will be more productive, get higher quality, and get software much faster, very much thanks to UML and how you can apply it.

Anyway, it was a very good panel. People love a good fight. Dave was the clear winner, but that was a given. He was playing on home ground.

In the aftermath of the heated discussion, someone said that whether you are for or against UML or MDA rests on religious grounds. That is something I can wholeheartedly agree with. On the other hand, this is what I heard when I introduced components, use cases, … I have been very honest about it: I have never been able to prove that my approach is better than anyone else's. At best, I have been able to make it believable.

With UML and MDA we have a similar situation. It is very much about gut feeling. Either your gut feeling tells you that UML and MDA are good, or it does not. Your way of working will be coloured by your feelings.

Regarding UML, I have absolutely not a single doubt about its long-term success. About MDA, I am very sympathetic. I know that MDD (model-driven development) has been successful and will become even more successful in the future. Whether MDA will be successful is a question of execution, i.e., how well OMG and its members succeed in implementing the vision. I will support them, and I wish them very good luck.