
Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank over the last 18 months helping them introduce new methods of working, and IBM’s Collaborative Lifecycle Management tooling to support these methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA’s Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank’s IT culture, away from the disparate production of Microsoft Word documents and Visio diagrams towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices, the creation and delivery of training material, guidance documentation in text and video form, the founding of a Community of Practice (CoP), advanced training and development programmes for Champions within the CoP, mentoring support for project teams adopting the new methods and tools, and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been to roll out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. UML already offers the deployment diagram as a means to show how software components execute upon middleware and hardware, plus the other elements that are required to deliver a fully working system. Critics argue, however, that the UML deployment diagram offers little more than pictures, lacking semantics rich enough for tool-based validation; furthermore, it carries insufficient information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous with UML models in that they capture the elements of an infrastructure design, the relationships between elements, and views of the design via diagrams. To achieve this a different modelling language is used, the Topology Modelling Language (TML).

The method that underpins the use of TML requires that several topologies be created when considering deployment architectures for a system, each refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology, and its role is twofold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones) within which Nodes sit that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. This stack may already exist or may still need to be specified and provisioned, but either way it is a level of detail that does not need to be considered while the deployment context is being determined. To help frame that context, actors may be included in the topology to maintain focus on scenarios of use.
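To make these concepts concrete, here is a minimal sketch of a logical topology in plain Java. The class and member names are hypothetical illustrations, not the TML metamodel or any RSA API; the sketch simply shows how Locations contain Nodes, Nodes host Components, and Components trace back to their UML sources.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical illustration of logical-topology concepts; not the RSA/TML API.
    class Component {
        final String name;
        final String umlSourceId; // traceability link back to the source UML element
        Component(String name, String umlSourceId) {
            this.name = name;
            this.umlSourceId = umlSourceId;
        }
    }

    class Node {
        // A placeholder for some yet-unspecified stack of hardware and middleware.
        final String name;
        final List<Component> hostedComponents = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    class Location {
        // A physical or logical location, e.g. a data centre or a security zone.
        final String name;
        final List<Node> nodes = new ArrayList<>();
        Location(String name) { this.name = name; }
    }

    public class LogicalTopology {
        public static void main(String[] args) {
            Location dataCentre = new Location("Data Centre");
            Node appNode = new Node("Application Node");
            appNode.hostedComponents.add(
                new Component("PaymentsService", "uml://components/PaymentsService"));
            dataCentre.nodes.add(appNode);
            System.out.println(dataCentre.name + " hosts "
                + appNode.hostedComponents.size() + " component(s)");
        }
    }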

An example logical topology is shown in the image below:

[Image: an example logical topology]

You can see two locations on the diagram: ‘Internet’, containing a primary actor, and ‘Data Centre’, with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML provides a feature whereby technology units may be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction. The benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level, with a focus on performance, robustness, throughput and resiliency. Design details such as processor architectures, operating system versions, inter-process messaging solutions and the like should be deferred for now.
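As a rough illustration of the conceptual-unit idea, the Java sketch below (hypothetical names again, not the TML metamodel) marks a unit as conceptual until it is realised by something concrete, which is just the kind of check a tool can run when validating a design at this level.

    // Hypothetical sketch of a technology unit that may remain conceptual.
    class Unit {
        final String name;        // shown as "{x86 Server}" while conceptual
        final boolean conceptual; // true: not fully defined, details deferred
        Unit realisedBy;          // set later, e.g. by a deployment topology
        Unit(String name, boolean conceptual) {
            this.name = name;
            this.conceptual = conceptual;
        }
        boolean isFullyDefined() {
            // Complete once the unit is concrete, or realised by a complete unit.
            return !conceptual || (realisedBy != null && realisedBy.isFullyDefined());
        }
    }

    public class PhysicalTopologyCheck {
        public static void main(String[] args) {
            Unit server = new Unit("{x86 Server}", true);
            System.out.println(server.name + " fully defined? " + server.isFullyDefined());
            server.realisedBy = new Unit("x86 server, fully specified", false);
            System.out.println(server.name + " fully defined? " + server.isFullyDefined());
        }
    }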

An example physical topology is shown in the image below:

[Image: an example physical topology]

In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains two nodes and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around the name.

The Physical Topology still contains mainly conceptual units, so it is not yet a complete design; one or more Deployment Topologies are therefore created to finalise the design:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level, a single conceptual server may be realised by several concrete servers to represent a load-balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real solution, such as Windows or Red Hat Linux.
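Continuing the same hypothetical sketch, a deployment topology could realise one conceptual server with several concrete servers, each now carrying the full details:

    import java.util.List;

    // Hypothetical sketch: one conceptual server realised by concrete servers.
    class ConceptualServer {
        final String role;               // e.g. "Web Tier"
        List<ConcreteServer> realisedBy; // several servers => load balancing / standby
        ConceptualServer(String role) { this.role = role; }
    }

    class ConcreteServer {
        final String hostname;
        final String ipAddress;
        final String osVersion; // the non-specific OS is now pinned down
        ConcreteServer(String hostname, String ipAddress, String osVersion) {
            this.hostname = hostname;
            this.ipAddress = ipAddress;
            this.osVersion = osVersion;
        }
    }

    public class DeploymentTopologyExample {
        public static void main(String[] args) {
            ConceptualServer webTier = new ConceptualServer("Web Tier");
            webTier.realisedBy = List.of(
                new ConcreteServer("web01", "10.0.1.11", "Red Hat Enterprise Linux"),
                new ConcreteServer("web02", "10.0.1.12", "Red Hat Enterprise Linux"));
            System.out.println(webTier.role + " realised by "
                + webTier.realisedBy.size() + " concrete servers");
        }
    }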

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank’s IT estate, together with strict constraints on the relationships that may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs far more quickly and to a higher quality than before, resulting in faster progress through internal governance and less re-work. Furthermore, all the bank’s Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

Do we need Event-Driven Architecture? by Ivar Jacobson

A software system with an event-driven architecture (EDA) is built around the idea that events are the most significant elements in the system: they are produced somewhere in the system and consumed somewhere else.

The business value is that you can easily extend such a system with new things that are ready to produce or consume the events already in place. Of course you can add new events as you go.

Yes, this is absolutely great.  If you build something new there is no reason why you shouldn’t use this kind of architecture.  However, focusing on the events is not the only thing you should do.   

Instead, you should build an architecture in which you have components or services and some kind of “channel” between some of these components. Over a channel an event can flow from one component (the producer) to another (the consumer). These components are loosely coupled and can exist in a distributed world. Some of these events are such that you broadcast them to anybody that has subscribed to them.
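As a minimal sketch of the components-with-channels idea (all names here are my own hypothetical ones), a channel can broker events from a producer to whichever consumers have subscribed:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Minimal sketch of a channel brokering events between loosely coupled components.
    class Channel<E> {
        private final List<Consumer<E>> subscribers = new CopyOnWriteArrayList<>();

        void subscribe(Consumer<E> consumer) {
            subscribers.add(consumer);
        }

        void publish(E event) {
            // Broadcast the event to every subscribed consumer.
            for (Consumer<E> subscriber : subscribers) {
                subscriber.accept(event);
            }
        }
    }

    public class ComponentsWithChannels {
        public static void main(String[] args) {
            Channel<String> orders = new Channel<>();
            // Two consumer components subscribe; the producer knows nothing about them.
            orders.subscribe(e -> System.out.println("Billing saw: " + e));
            orders.subscribe(e -> System.out.println("Shipping saw: " + e));
            orders.publish("OrderPlaced#42"); // the producer component emits an event
        }
    }

Note that the producer and consumers never reference each other, only the channel; that is what keeps them loosely coupled and free to be distributed.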

Thus don’t constrain your architecture to just be event-driven. There is really no money to be saved by doing just that. Let it be components with channels. The channels I am talking about were already adopted in the telecom standard SDL back in 1982. In EDA the channel is basically a mechanism for brokering events. In the OMG standard CORBA from the early 1990s it was called the “Event Service”. What a coincidence! Actually, one way of thinking about EDA conceptually is that it is all that CORBA was meant to be, but in the Web/Internet world.

The most interesting components are services.  You get service-oriented architecture at the same time, and more. 

However, those of you who think this is fundamentally new have really not done your homework. It is probably true that the three-letter combination EDA is new, just as it once was for SOA. We have also got some great new platforms that make it easier to implement these ideas.

Over the years I have seen trends in the component world that put more focus on the components than the channels (and thus the events) between them.  Other times it has been the other way around.  However, there is absolutely no reason to choose.  You should allow for both. But what we don’t need are more buzzwords. They don’t help us at all. 

To summarize, you should go for a component architecture without any compromises. This is what made the Ericsson AXE system such an incredible success story more than 30 years ago. And thanks to its architecture it is still probably the best-selling product of its kind in the world. However, Ericsson had to build its own infrastructure for managing components with channels, since such solutions didn’t exist at the time.

Of course, this is still new to people who have not previously developed a component architecture.  Thus those people have to come up to speed and that means training and mentoring.  And, to start with you need some good technical practices.  It is as easy as that! 

SOA by Ivar Jacobson

March 30, 2004

Before being invited to Tallahassee, I had never heard of it. I flew into the city in the morning and back out in the evening. I spent a day with the State of Florida. Bill Lucas did a wonderful job of making me feel very welcome, and everyone I met was very friendly and interested in my work. I enjoyed my day very much. One of the questions we discussed was web services. Over the last couple of years, services have become important elements for describing and building software. As with everything new, the software world has a tendency to believe that something fundamentally different has surfaced and that a new way of thinking is required. As a consequence we have got a whole arsenal of new concepts around the concept of services. We have got “service-oriented architectures”, “on demand”, “utility computing”...you name it. However, there is nothing fundamentally new with services. To organize software in services is an old practice.

Services were once a very important construct in RUP, actually in the version of RUP that we called 3.8. (It was the version prior to Rational buying my company, so it was called Objectory 3.8.) Unfortunately, the RUP team thought that downplaying services in RUP would make it significantly simpler. I disagreed with this opinion, but accepted it because almost everything else was adopted. It was very hard to argue for service-oriented design when the concept hadn’t hit the software industry. With Service-Oriented Architecture (SOA) on the table, the need is there.

In 1998, I wrote about services in the Unified Software Development Process book: apart from providing use cases to its users, every system also provides a set of services to its customers. I made a distinction between the end-users of the system and the customer who purchases the system for its users. For instance, a bank system has users, who may be clients of the bank, and the bank itself is a customer of the system (perhaps buying it from some system integrator). A customer acquires a suitable mix of services. Through these services the system will provide the necessary use cases for the users to do their business:

  • A use case specifies a sequence of actions: a thread is initiated by an actor, followed by interactions between the actor and the system, and completed and stopped after having returned a value to the actor. Usually, use cases don’t exist in isolation. For instance, the Withdraw Money use case assumes that another use case has created a user account and that the user’s address and other user data are accessible.
  • A service represents a coherent set of functionally related actions - a package of functionality - that is employed in several use cases. A customer of a system usually buys a mix of services to give its users the necessary use cases. A service is indivisible in the sense that the system needs to provide it completely or not at all.
  • Use cases are for users, and services are for customers. Use cases cross services, that is, a use case requires actions from several services. A service usually provides several use cases or parts of several use cases.

In the Unified Process, the service concept is supported in analysis (platform-independent modelling) by service packages. The following can be noted about service packages:

  • A service package contains a set of functionally related classes.
  • A service package is indivisible. Each customer gets either all classes in the service package or none at all. Thus a service package is a configuration unit.
  • When a use case is realized, one or more service packages may be participants in the realization. Moreover, it is common for a specific service package to participate in several different use-case realizations.
  • A service package often has very limited dependencies on other service packages.
  • A service package is usually of relevance to only one or a few actors.
  • The functionality defined by a service package can, when designed and implemented, be managed as a separate delivery unit. A service package can thus represent some “add-in” functionality of the system. When a service package is excluded, so is every use case whose realization requires the service package.
  • Service packages may be mutually exclusive, or they may represent different aspects or variants of the same service. For example, “spell checking for British English” and “spell checking for American English” may be two different service packages provided by a system. You configure the system with one or the other, but maybe not with both (see the sketch after this list).
  • The service packages constitute an essential input to subsequent design and implementation activities, in that they will help structure the design and implementation models in terms of service subsystems. In particular, the service subsystems have a major impact on the system’s decomposition into binary and executable components. This is of course only true if the development is going top-down with no reuse of existing components: legacy systems, packaged solutions, web services. And the fact is, we develop more and more with reusable components.
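As a hypothetical sketch (the class names are mine, not from the book), a configuration could treat each service package as an indivisible unit and reject mutually exclusive variants, as in the spell-checking example above:

    import java.util.LinkedHashSet;
    import java.util.Set;

    // Hypothetical sketch: service packages as indivisible configuration units.
    class ServicePackage {
        final String name;
        final String variantOf; // non-null for mutually exclusive variants of a service
        ServicePackage(String name, String variantOf) {
            this.name = name;
            this.variantOf = variantOf;
        }
    }

    class SystemConfiguration {
        private final Set<ServicePackage> packages = new LinkedHashSet<>();

        void add(ServicePackage pkg) {
            // Reject a second variant of the same service: one or the other, not both.
            for (ServicePackage existing : packages) {
                if (pkg.variantOf != null && pkg.variantOf.equals(existing.variantOf)) {
                    throw new IllegalArgumentException("Variant of '" + pkg.variantOf
                        + "' already configured: " + existing.name);
                }
            }
            packages.add(pkg); // all of the package's classes ship, or none at all
        }
    }

    public class ConfigureSystem {
        public static void main(String[] args) {
            SystemConfiguration config = new SystemConfiguration();
            config.add(new ServicePackage("Spell checking (British English)", "spell-checking"));
            // Adding the American English variant as well would throw:
            // config.add(new ServicePackage("Spell checking (American English)", "spell-checking"));
            System.out.println("Configured successfully");
        }
    }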

By structuring the system according to the services it provides, we prepare for changes in individual services, since such changes are likely to be localized to the corresponding service package. This yields a robust system that is resilient to change.  

Given that most software today is developed with ready-made components, why would you want to design an analysis model (a platform-independent model) with service packages? There is one good reason: we still need to understand what we are doing. Building software is about understanding: understanding components developed by different vendors, divisions, and teams. An analysis model, maybe even just a partial model, used as a starting point helps you overcome these difficulties.