MDA

Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank over the last 18 months, helping them introduce new methods of working and IBM’s Collaborative Lifecycle Management tooling to support these methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA’s Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank’s IT culture away from the disparate production of Microsoft Word documents and Visio diagrams towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices, the creation and delivery of training material, guidance documentation in text and video form, the founding of a Community of Practice (CoP), advanced training and development programmes for Champions within the CoP, mentoring support for project teams adopting the new methods and tools, and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been to roll out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. The Unified Modelling Language (UML) already offers the deployment diagram as a means to show how software components execute upon middleware and hardware, plus the other elements that are required to deliver a fully working system. Critics argue that the UML deployment diagram offers little more than pictures, lacking sufficiently rich semantics for tool-based validation; furthermore, it carries too little information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous to UML models in that they capture the elements of an infrastructure design, the relationships between those elements, and views of the design via diagrams. To achieve this, a different modelling language is used: the Topology Modelling Language (TML).

The method which underpins the use of TML requires that several topologies be created when considering deployment architectures for a system, with each topology refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology, and its role is twofold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones) within which Nodes sit that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. This stack may already exist or may still need to be specified and provisioned, but either way this is a level of detail that does not need to be considered while the deployment context is being determined. And to help determine the system context, actors may be included in the topology to maintain focus on scenarios of use.
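
TML is a tooling notation inside RSA rather than a programming language, but as a rough illustration of the concepts just described, the sketch below models a logical topology in plain Python. It is illustrative only, not the TML format or the RSA API, and all of the element names are hypothetical.

    # Illustrative sketch only -- not the actual TML format or RSA API.
    # It models the logical-topology concepts: Locations containing Nodes,
    # Nodes hosting Components, Actors for context, and traceability links
    # back to the source UML model. All names are hypothetical.
    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class Component:
        name: str
        uml_component_ref: Optional[str] = None  # hypothetical link to the equivalent UML Component


    @dataclass
    class Node:
        """Placeholder for an as-yet-unspecified stack of hardware and middleware."""
        name: str
        hosts: List[Component] = field(default_factory=list)


    @dataclass
    class Actor:
        name: str


    @dataclass
    class Location:
        """A physical or logical location, e.g. a data centre or security zone."""
        name: str
        nodes: List[Node] = field(default_factory=list)
        actors: List[Actor] = field(default_factory=list)


    # A tiny logical topology: an 'Internet' location with a primary actor,
    # and a 'Data Centre' location with two nodes hosting components.
    logical_topology = [
        Location("Internet", actors=[Actor("Customer")]),
        Location("Data Centre", nodes=[
            Node("Web Node", hosts=[Component("Web UI", uml_component_ref="UML::WebUI")]),
            Node("App Node", hosts=[Component("Order Service", uml_component_ref="UML::OrderService")]),
        ]),
    ]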

An example logical topology is shown in the image below:


You can see two locations on the diagram, ‘Internet’ containing a primary actor and ‘Data Centre’ with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML provides a feature whereby technology units may be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction; the benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level with a focus on performance, robustness, throughput and resiliency. Design details such as processor architectures, operating system versions, inter-process messaging solutions and the like should be deferred for now.
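
Continuing the earlier sketch, and again purely for illustration (not the actual TML format or RSA API), the following shows how a physical topology might be represented: each logical node is realised by a stack of technology units, and any unit still marked conceptual tells us the design is not yet ready to be taken to a deployment topology. The unit names are hypothetical.

    # Illustrative sketch only -- not the actual TML format or RSA API.
    # A physical topology realises each logical node with a stack of units;
    # conceptual units (e.g. "some x86 server") deliberately retain abstraction.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Unit:
        kind: str                 # e.g. "x86 Server", "Operating System", "Database"
        name: str
        conceptual: bool = True   # True while the unit is not yet fully defined


    @dataclass
    class Stack:
        realises_node: str        # name of the logical node this stack realises
        units: List[Unit] = field(default_factory=list)


    # Hypothetical stack realising the 'App Node' from the logical topology;
    # the braces mimic RSA's convention of marking conceptual units.
    app_stack = Stack("App Node", units=[
        Unit("x86 Server", "{x86 Server}"),
        Unit("Operating System", "{Linux OS}"),
        Unit("Application Server", "{Java EE Server}"),
    ])


    def unresolved_units(stack: Stack) -> List[Unit]:
        """Units that still need realising before a deployment topology can be finalised."""
        return [u for u in stack.units if u.conceptual]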

An example physical topology is shown in the image below:


In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains two nodes and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around the name.

Because the Physical Topology consists mainly of conceptual units it is not yet a complete design, so one or more Deployment Topologies are created to finalise the design:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level, a single conceptual server may be realised by several concrete servers to represent a load-balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real product, whether that be Windows, Red Hat Linux or another platform.
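
Purely as an illustration of this final refinement step (again, not the actual TML format, and with invented server names, addresses and versions), the sketch below shows a single conceptual server being realised by two fully specified servers, as you might for a load-balanced pair.

    # Illustrative sketch only -- server names, IP addresses and versions are invented.
    # A conceptual unit from the physical topology is realised by one or more
    # fully specified units; two concrete servers here model a load-balanced pair.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class ConcreteServer:
        hostname: str
        ip_address: str
        operating_system: str
        patch_level: str


    @dataclass
    class Realisation:
        conceptual_unit: str                              # e.g. "{x86 Server}"
        realised_by: List[ConcreteServer] = field(default_factory=list)


    app_servers = Realisation(
        conceptual_unit="{x86 Server}",
        realised_by=[
            ConcreteServer("appsrv01", "10.20.1.11", "Red Hat Enterprise Linux 6", "6.4"),
            ConcreteServer("appsrv02", "10.20.1.12", "Red Hat Enterprise Linux 6", "6.4"),
        ],
    )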

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank’s IT estate, together with strict constraints on the relationships which may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs much more quickly and to a higher quality than before, resulting in faster progress through internal governance and less rework. Furthermore, all the bank’s Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

Will MDD Ever Come to Fruition? by Ivar Jacobson

I am often asked the question: “Will MDD ever be successful, and will the tools ever really be put in place to support it?” I was asked this again recently, so here are my thoughts.

The following is solely my opinion and can be argued or agreed with, but it comes from 15+ years of experience building application and data models, building modeling tools including ERwin, Rational Rose and others, writing two books on UML, and working directly with clients who are modeling.

Model Driven Development is great in concept, but to date the tools, and the people who would have to use them, have not been able to keep up with the concepts. There are some very good tools on the market, such as Telelogic Tau, Rational RoseRT and a few others, which do a fairly complete job of generating the needed code, but this is usually focused on the "systems" space and has not translated well to IT, as it is based on state machines and generating the logic from those states.
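
To illustrate what I mean by generating the logic from states, here is a small, hand-written sketch of the kind of dispatch code such a tool might produce from a state model. It is not actual output from Telelogic Tau or RoseRT, and the states and events are invented.

    # Hand-written illustration only -- not output from Telelogic Tau or RoseRT.
    # A declarative state model (states, events, transitions) is turned into
    # executable dispatch logic, which is essentially what these tools generate.
    from enum import Enum, auto


    class State(Enum):
        IDLE = auto()
        CONNECTED = auto()
        CLOSED = auto()


    # (current state, event) -> (next state, action)
    TRANSITIONS = {
        (State.IDLE, "connect"):    (State.CONNECTED, lambda: print("opening link")),
        (State.CONNECTED, "close"): (State.CLOSED,    lambda: print("closing link")),
    }


    class Connection:
        def __init__(self) -> None:
            self.state = State.IDLE

        def handle(self, event: str) -> None:
            try:
                next_state, action = TRANSITIONS[(self.state, event)]
            except KeyError:
                raise ValueError(f"event '{event}' not allowed in state {self.state.name}")
            action()
            self.state = next_state


    conn = Connection()
    conn.handle("connect")   # IDLE -> CONNECTED
    conn.handle("close")     # CONNECTED -> CLOSED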

On the IT side we have similar concepts, but they start from business process models, using tools such as WebSphere Business Modeler and similar tools from BEA, which connect to business process runtime engines and generate the connection points between existing applications to automate communication and business process execution.

This all said, the uptake of MDD has not matched that of other areas, for what I believe are three reasons:

1. Developers are developers because they want to write code; they don't see models as complete enough, nor do they see it as their role to build models beyond simple architectural structures.

2. Most projects today start with something already in production. Modeling the architectures, use cases and processes is quite good for understanding what needs to be done, how it connects together and how to prioritize the work, but it is difficult to generate a system that is only a piece to be connected up, or an update to something that already exists.

3. I believe this third reason can stand on its own, but it also feeds the first two: the creation of a "black box". Using MDD tools creates a black-box effect where runtime generation, transformations and other constructs are managed by a proprietary engine that is difficult to alter and manage.

a. Developers often think they can do it better and faster themselves, and don't want to rely on something they cannot touch.

b. Because of the black-box approach, a rewrite of the engines already in place for the existing systems is often required, causing added cost that nobody is willing to fund.

We have tried similar types of technology in the past, and few of those have been successful either. Runtime data access is a great example: tools and companies popped up in the late 90s that created object-to-data mappings and automated the creation of the runtime access, which they claimed was much faster and cheaper than doing it yourself. Yes, this was good at least in theory, but few bought into it. Why? The black-box approach, hard to test, developers think they can write faster algorithms, management doesn't trust something they cannot see, and so on. This is very similar to MDD and its lack of success, in my opinion.

That all said, I do have some colleagues who are using MDD on some very interesting projects, building components for airplanes for example, which they feel are quite successful, but these also seem to be few and far between.

UML by Ivar Jacobson

October 30, 2003

The largest conference on object technology is OOPSLA. The first OOPSLA conference was in 1986 and I have attended all of them but one. And I have presented papers, or given tutorials, or been on panels each time. Last year I decided not to go. I wanted to break the rule (to go to OOPSLA) so that I wouldn't become a slave to it. This year the conference took place in Los Angeles, or more precisely in Anaheim - the home of Disneyland.

OOPSLA is a meeting-place for pioneers in software technology from all over the world. Everywhere there are friends from many years back, and we really get together and discuss what we are doing, what we want to do, what we think is hot and what we need to change.

This year I participated in two panels, one on Reuse and Repositories, another on Model Driven Architecture. The most interesting and passionate one was the one on MDA. Much thanks to my friend Dave Thomas (also on the advisory board of Jaczone) who dared to question the UML as well as MDA, as envisioned by OMG, in a very direct way.

Dave: "UML is currently a language with no conventional syntax, and not even a comprehensive standard way for tool interchange (XMI is both awkward and inconsistently implemented). UML was designed by committee, most of whom are methodologists and hence have little experience with language design. UML lacks a simple semantic account as an executable language. There is no published semantic account although proponents claim it is well defined via action semantics."

In general (not only from Dave) I understand the critique against UML as follows. UML was designed by people who have no experience of traditional language design, which means that the language is a mess. Even worse, they have no experience of real software development; they come from the methodology camp with no real software product background. These people have never developed any real software.

Of course, in a setting like OOPSLA, Dave's words fell on very fertile soil. It was a very popular standpoint.

While I am not completely positive about everything in UML, I heard nothing really new in Dave's position. To me it was déjà vu. I have heard the same critique for more than thirty years. I heard it when I introduced component-based development with visual modelling back in 1967. I heard it when we developed SDL, which became a world standard in 1976 within telecommunications. And now I hear it again.

I was originally very critical of how we worked with UML 1.0. Some people wanted a very intuitive approach, defining the language with basically just a notation, some meta-model diagrams and plain English text. UML 1.0 became hopeless to understand. I advocated that we should apply classical language design techniques: start with an abstract syntax, build a static semantic model on top of the syntax, and introduce dynamic semantic domains. I didn't suggest that we should define the operational semantics formally. I made this compromise primarily because most people wouldn't understand it and very few people would care. Moreover, this wouldn't be different in approach from most other language designs. I have personally done a lot of language design work, when we defined SDL and CHILL, and also when working on my doctoral thesis. Thus from a practical point of view I couldn't recommend it. Also, I felt comfortable we could add it later.

During the work with UML 1.1 my approach was adopted. We didn't formalize the dynamic semantic domains. We defined the operational semantics textually only.

Thus it is not correct to say that language-design amateurs defined UML. My own thesis was a definition of a modelling language using classical language design techniques.

It is also not true that UML was designed by people with no practical experience of software development. I was not the only one with many years of hard work in real software development. The UML team was also backed by tens of thousands of developers working behind the scenes and applying the technologies being proposed. We had reference groups all around the world that helped us understand the needs to satisfy.

This was UML 1.1.

With UML 2.0 the situation changed. A lot of new people entered the scene, and a committee took over. With "a fresh look", they changed everything they didn't like. And they added what their "fresh look" told them should be added. Personally, I was so bored working in a committee that I stayed out of the work almost completely (I had done language designs three times before, so I couldn't feel any excitement). I said "almost" because I had to get involved on one occasion. That was when the committee decided to remove use cases from UML. Well, they removed the kind of use cases we have had and replaced them with something different…and then they called this new thing use cases. They were so arrogant that they told me, without a blink, that I had never understood use cases, but now they had got them right. They voted in the committee and agreed to remove our use cases. Luckily, people outside the committee set things straight and put use cases back into UML.

Many members of that committee had never produced any useful software. Some members were really competent, but the politics became a burden. The committee took a lot of time, it was delayed by years, and the resulting proposal, UML 2.0, is very complex. Still, I am optimistic. The interest in UML is so high and there are so many people who are willing to fix what is potentially broken. Thus, I am convinced without a single doubt that UML will continue to be successful. It represents very much what a good part of the software community wants to have.

However, this part is not in the majority. It may today be 20% of the software community. The remaining 80% have not adopted visual modelling at all. Twenty years ago, I believe, fewer than 5% used visual modelling.

Thus I believe the criticism of UML has other roots than bad language design. The critique represents symptoms, not the root cause. I believe that the root of the critique is that some people don't believe in visual modelling as such, and think instead that textual languages work better.

The critique is healthy. UML will most certainly become better defined. We will get better and better tools. The tools will make software development less complex; we will be more productive, get higher quality, and get software much faster, very much thanks to UML and how you can apply it.

Anyway, it was a very good panel. People love a good fight. Dave was the clear winner, but that was a given. He was playing on home ground.

During the aftermath of the heated discussion, someone said that whether you are for or against UML or MDA rests on religious grounds. That is something I can wholeheartedly agree with. On the other hand, this is what I heard when I introduced components, use cases, … I have been very honest about it. I have never been able to prove that my approach is better than anyone else's. At best, I have been able to make it believable.

With UML and MDA we have a similar situation. It is very much about gut feeling. Either your gut feeling tells you that UML and MDA are good, or it does not. Your way of working will be coloured by your feelings.

Regarding UML, I have absolutely not a single doubt about its long-term success. About MDA, I am very sympathetic. I know that MDD (model-driven development) has been successful and will become even more successful in the future. Whether MDA will be successful is a question of execution, i.e., how well OMG and its members succeed in implementing the vision. I will support them, and I wish them very good luck.