Architecture

Introducing Collaborative Lifecycle Management & Deployment Planning at a Major UK Bank

IJI has been engaged with a major UK bank over the last 18 months, helping them introduce new methods of working and IBM's Collaborative Lifecycle Management tooling to support those methods. Business Analysts now capture their requirements in Rational Requirements Composer (RRC), Solution Architects create their designs in Rational Software Architect (RSA) using the Unified Modelling Language (UML), Infrastructure & Security Architects add deployment topologies to the designs using RSA's Deployment Planning extension, and everything is underpinned by Work Items and Source Control in Rational Team Concert (RTC).

This programme of change marks a major shift in the bank's IT culture, away from the disparate production of Microsoft Word documents and Visio diagrams towards a supportable solution of collaborative, model-driven architecture and design. IJI has contributed to the specification of new practices, the creation and delivery of training material, guidance documentation in text and video form, the founding of a Community of Practice (CoP), advanced training and development programmes for Champions within the CoP, mentoring support for project teams adopting the new methods and tools, and customisation of the Rational toolset to deliver specific capabilities required by the IT teams.

One significant aspect of our engagement has been to roll out the Deployment Planning extension to RSA. This add-on delivers features for the design and specification of deployment infrastructure. The Unified Modelling Language (UML) already offers the deployment diagram as a means to show how software components execute upon middleware and hardware, plus the other elements that are required to deliver a fully working system. Critics argue that the UML deployment diagram offers little more than pictures, lacking rich enough semantics for tool-based validation; furthermore, it carries insufficient information to enable useful integrations with industry-standard build and provisioning engines.

The Deployment Planning extension replaces the UML deployment diagram with a new modelling concept called a topology. Topologies are analogous to UML models in that they capture the elements of an infrastructure design, the relationships between those elements, and views of the design via diagrams. To achieve this a different modelling language is used: the Topology Modelling Language (TML).

The method that underpins the use of TML requires that several topologies be created when considering deployment architectures for a system, with each topology refining the previous one and introducing ever greater levels of detail. The first is the Logical Topology, and its role is twofold:

  • Understand the deployment context by adding TML elements to represent physical and logical Locations (e.g. data centres, security zones) within which Nodes sit that host the in-scope software Components.
  • Ensure traceability with source UML models by creating TML equivalents of Components and Actors.

TML nodes are best thought of as placeholders for some stack of hardware and middleware. This stack may already exist or may still need to be specified and provisioned, but either way this is a level of detail that does not need to be considered while the deployment context is being determined. And to help determine the system context, actors may be included in the topology to maintain focus on scenarios of use.
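
For readers who prefer code to pictures, here is a minimal sketch in Java of these logical-topology concepts. It is purely illustrative: real topologies are created graphically in RSA and stored in the tool's own format, and every class and name below is hypothetical.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative stand-ins for Logical Topology elements (hypothetical names).
    class Component {                  // traced back to an equivalent UML Component
        final String name;
        Component(String name) { this.name = name; }
    }

    class Node {                       // placeholder for a hardware/middleware stack
        final String name;
        final List<Component> hostedComponents = new ArrayList<>();
        Node(String name) { this.name = name; }
        void host(Component c) { hostedComponents.add(c); }
    }

    class Location {                   // physical or logical location, e.g. a data centre
        final String name;
        final List<Node> nodes = new ArrayList<>();
        Location(String name) { this.name = name; }
        void add(Node n) { nodes.add(n); }
    }

    public class LogicalTopologySketch {
        public static void main(String[] args) {
            Location dataCentre = new Location("Data Centre");
            Node webTier = new Node("Web Tier");
            webTier.host(new Component("Online Banking UI"));
            dataCentre.add(webTier);
            System.out.println(dataCentre.name + " contains node " + webTier.name
                + " hosting " + webTier.hostedComponents.size() + " component(s)");
        }
    }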

An example logical topology is shown in the image below:

[Image: example logical topology]

You can see two locations on the diagram, ‘Internet’ containing a primary actor and ‘Data Centre’ with two nodes hosting the components to be deployed. Each component is linked to its equivalent UML Component, and an example link is shown in the ‘Properties’ view.

Once the Logical Topology is sufficiently complete, a Physical Topology is created to refine the infrastructure design and begin specifying the technology which will be used for deployment:

  • Nodes are realised with physical stacks of hardware, operating systems, databases, networking, and so on.
  • Additional infrastructure is included as required to complete the system.

TML provides a feature whereby technology units may be labelled conceptual, meaning that the unit (e.g. an x86 server) is not fully defined and thus retains a certain level of abstraction. The benefit for system architects and designers is that a physical topology can be used to validate a deployment solution at a high level, with a focus on performance, robustness, throughput and resiliency; design details such as processor architectures, operating system versions, inter-process messaging solutions and the like should be deferred for now.
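
As a rough illustration of the conceptual flag, the hypothetical Java fragment below marks a unit as conceptual and labels it the way RSA diagrams do, with braces around the name; none of this is the extension's actual API, just a sketch of the idea.

    // Hypothetical sketch of a technology unit that may be marked conceptual.
    class TechnologyUnit {
        final String name;
        final boolean conceptual;      // true = abstraction retained, details deferred
        TechnologyUnit(String name, boolean conceptual) {
            this.name = name;
            this.conceptual = conceptual;
        }
        String label() {
            // RSA diagrams denote conceptual units with braces around the name.
            return conceptual ? "{" + name + "}" : name;
        }
    }

    public class PhysicalTopologySketch {
        public static void main(String[] args) {
            TechnologyUnit server = new TechnologyUnit("x86 Server", true);
            System.out.println(server.label());   // prints {x86 Server}
        }
    }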

An example physical topology is shown in the image below:

[Image: example physical topology]

In RSA’s ‘Project Explorer’ on the left, you can see that we have a logical and physical topology. Traceability between the two is achieved via an Import Diagram, visible on the left of the diagramming pane. The import contains two nodes and each is realised by a specific stack of technology; each stack is conceptual, denoted by braces around the name.

Because the Physical Topology contains mainly conceptual units, it is not yet a complete design, so one or more Deployment Topologies are created to finalise it:

  • Conceptual units are realised with equivalent, non-conceptual units.
  • Full details are added such as server names, IP addresses, patch versions, communication protocols, port numbers, etc.

At this level, a single conceptual server may be realised by several concrete servers to represent a load-balancing or hot-standby infrastructure solution. Furthermore, a non-specific operating system must now be realised by a real product, whether that be Windows, Red Hat Linux or another platform.
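
A hypothetical Java sketch of that final refinement step: one conceptual application server realised by a load-balanced pair of concrete servers, with full deployment details added (all hostnames, addresses and versions below are invented).

    import java.util.List;

    // Invented deployment-level details for a concrete server.
    record ConcreteServer(String hostname, String ipAddress, String operatingSystem) {}

    public class DeploymentTopologySketch {
        public static void main(String[] args) {
            // The conceptual "{Application Server}" is realised by two concrete
            // servers behind a load balancer:
            List<ConcreteServer> realisation = List.of(
                new ConcreteServer("appsrv01", "10.0.1.11", "Red Hat Enterprise Linux 6.4"),
                new ConcreteServer("appsrv02", "10.0.1.12", "Red Hat Enterprise Linux 6.4"));
            realisation.forEach(s -> System.out.println(
                s.hostname() + " / " + s.ipAddress() + " / " + s.operatingSystem()));
        }
    }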

IJI was tasked with extending the Deployment Planning palette with new topology units that best represent the bank's IT estate, together with strict constraints on the relationships that may exist between units. The resulting solution has enabled Architects to specify infrastructure and security designs much more quickly and to a higher quality than before, resulting in faster progress through internal governance and less rework. Furthermore, all the bank's Architects are seeing huge benefits from working in a more collaborative fashion using Rational Software Architect and Collaborative Lifecycle Management.

Learn more about IJI's Supportable Solutions approach, part of the Three Imperatives for Mainstream Agile Adoption.

Do we need Event-Driven Architecture? by Ivar Jacobson

A software system with an Event-Driven Architecture (EDA) is built around the idea that events are the most significant elements in the system, and that they are produced in one part of the system and consumed in another.

The business value is that you can easily extend such a system with new things that are ready to produce or consume events already in place in the system. Of course you can add new events as you go.

Yes, this is absolutely great.  If you build something new there is no reason why you shouldn’t use this kind of architecture.  However, focusing on the events is not the only thing you should do.   

Instead, you should build an architecture in which you have components or services and some kind of "channel" between some of these components. Over a channel an event can flow from one component (the producer) to another (the consumer). These components are loosely coupled and can exist in a distributed world. Some of these events are broadcast to anybody who has subscribed to them.
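
Here is a minimal in-process sketch of the components-with-channels idea in Java. A real system would put a distributed broker behind the channel; the component and event names are invented for illustration.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // A channel carries events from producers to every subscribed consumer.
    class Channel<E> {
        private final List<Consumer<E>> subscribers = new ArrayList<>();
        void subscribe(Consumer<E> consumer) { subscribers.add(consumer); }
        void publish(E event) {                // broadcast to all subscribers
            subscribers.forEach(s -> s.accept(event));
        }
    }

    public class ComponentsWithChannels {
        public static void main(String[] args) {
            Channel<String> payments = new Channel<>();
            // Two loosely coupled consumers; the producer knows nothing about them.
            payments.subscribe(e -> System.out.println("Ledger saw: " + e));
            payments.subscribe(e -> System.out.println("Fraud check saw: " + e));
            payments.publish("PaymentAccepted(id=42)");   // the producing component
        }
    }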

Thus don't constrain your architecture to be just event-driven; there is really no money to be saved by doing only that. Let it be components with channels. The channels I am talking about were already adopted in the telecom standard SDL back in 1982; in EDA the channel is basically a mechanism for brokering events, and in the OMG standard CORBA from the early 1990s it was called the "Event Service". What a coincidence! Actually, one way of thinking about EDA conceptually is that it is everything CORBA was meant to be, but in the Web/Internet world.

The most interesting components are services.  You get service-oriented architecture at the same time, and more. 

However, those of you who think this is fundamentally new have really not done your homework. It is probably true that the three-letter combination EDA is new, as SOA once was. We have also got some great new platforms that make it easier to implement these ideas.

Over the years I have seen trends in the component world that put more focus on the components than the channels (and thus the events) between them.  Other times it has been the other way around.  However, there is absolutely no reason to choose.  You should allow for both. But what we don’t need are more buzzwords. They don’t help us at all. 

To summarize, you should go for a component architecture without any compromises. This is what made the Ericsson AXE system such an incredible success story more than 30 years ago, and thanks to its architecture it is still probably the best-selling product of its kind in the world. However, Ericsson had to build its own infrastructure for managing components with channels, since such solutions didn't exist at the time.

Of course, this is still new to people who have not previously developed a component architecture.  Thus those people have to come up to speed and that means training and mentoring.  And, to start with you need some good technical practices.  It is as easy as that! 

EA Failed Big Way! by Ivar Jacobson

Enterprise Architecture has failed in a big way!

Around the world, introducing an Enterprise Architecture (EA) has been an initiative at most financial institutions (banks, insurance companies, government agencies, etc.) for the last five years or so, and it is not over. I have been working with such companies and have helped some of them avoid making the worst mistakes. Most EA initiatives failed; my guess is that more than 90% never really resulted in anything useful.

Why did people fail? There are many reasons, but they can all be summarized by the word smart. They were not smart when they selected a solution. They were not smart when they selected a way of working. They were not smart when they organized their business and IT resources. Building an EA is not rocket science.

There are two common reasons specific to EA failure:

  1. Focus on paper-ware instead of executable software.  When enterprise architects work in an ivory tower without caring about what can be implemented, they produce too many models and too much documentation without executable solutions.  Enterprise architectures should be implemented incrementally, starting as early as possible. We call such architectures executable EAs. 
  2. Big gaps between layers instead of seamless relationships.  Usually there are several layers, such as a business layer, an application layer, a data layer and a technical layer. Huge gaps between these layers result in very brittle architectures. It is like trying to stand on a skateboard which is on top of another skateboard, which in turn is on top of yet another skateboard, etc. To have a chance, these skateboards need to behave like one, which is hard enough.  Thus the relationship between the business layer and the application and data layers is not straightforward, and the relationship between the application layer and the data layer is as hard to manage as it was 20-30 years ago when we used methods like functional decomposition or structured analysis and design. It is amazing that people haven't learnt anything from component-based development (with or without objects).

There are many other mistakes that people have made, many of which are related to organizational change in general. Examples include lack of business support for EA, failure to communicate the scope and purpose of EA, no strong IT leadership, etc. Beyond avoiding these common pitfalls, succeeding at EA takes using best practices for modern software development, avoiding upfront academic modeling, building both top down and bottom up, and looking upon the whole enterprise system as a system of interconnected systems.

Many of the companies that failed are now looking for the next silver bullet: Service-Oriented Architecture (SOA). To me, SOA is what EA should have become. SOA can be described as EA++, Enterprise Architecture made better. SOA is clearly on the right path, but again, adopting it requires that you work smart!

SOA by Ivar Jacobson

March 30, 2004

Before being invited to Tallahassee, I had never heard of it. I flew into the city in the morning and back in the evening, and spent the day with the State of Florida. Bill Lucas did a wonderful job of making me feel very welcome, and everyone I met was very friendly and interested in my work. I enjoyed my day very much. One of the questions we discussed was web services. Over the last couple of years, services have become important elements for describing and building software. As with everything new, the software world has a tendency to believe that something fundamentally different has surfaced and that a new way of thinking is required. As a consequence we have got a whole arsenal of new concepts around the concept of services: "service-oriented architectures", "on demand", "utility computing"... you name it. However, there is nothing fundamentally new about services. Organizing software into services is an old practice.

Services were once a very important construct in RUP, actually in the version of RUP that we called 3.8. (It was the version prior to Rational buying my company, so it was called Objectory 3.8.) Unfortunately, the RUP team thought that downplaying services in RUP would make it significantly simpler. I disagreed with this opinion, but accepted it because almost everything else was adopted. It was very hard to argue for service-oriented design when the concept hadn’t hit the software industry. With Service-Oriented Architecture (SOA) on the table, the need is there.

In 1998, I wrote about services in The Unified Software Development Process book: apart from providing use cases to its users, every system also provides a set of services to its customers. I made a distinction between the end-users of the system and the customer who purchases the system for its users. For instance, a bank system has users who may be clients of the bank, and the bank itself is a customer of the system (perhaps buying it from some system integrator). A customer acquires a suitable mix of services, and through these services the system provides the necessary use cases for the users to do their business:

  • A use case specifies a sequence of actions: a thread is initiated by an actor, followed by interactions between the actor and the system, and completed and stopped after having returned a value to the actor. Usually, use cases don’t exist in isolation. For instance, the Withdraw Money use case assumes that another use case has created a user account and that the user’s address and other user data are accessible.
  • A service represents a coherent set of functionally related actions - a package of functionality - that is employed in several use cases. A customer of a system usually buys a mix of services to give its users the necessary use cases. A service is indivisible in the sense that the system needs to provide it completely or not at all.
  • Use cases are for users, and services are for customers. Use cases cross services; that is, a use case requires actions from several services, and a service usually provides several use cases or parts of several use cases (see the sketch below).
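
To make the last point concrete, here is a small, hypothetical Java sketch in which the Withdraw Money use case crosses two services, each of which could equally well serve other use cases; all names are invented.

    // Two services (hypothetical names), each a coherent package of functionality.
    interface AccountService { void debit(String account, long amount); }
    interface NotificationService { void send(String user, String message); }

    // The Withdraw Money use case crosses both services.
    class WithdrawMoney {
        private final AccountService accounts;
        private final NotificationService notifications;
        WithdrawMoney(AccountService accounts, NotificationService notifications) {
            this.accounts = accounts;
            this.notifications = notifications;
        }
        void execute(String user, String account, long amount) {
            accounts.debit(account, amount);                 // action from one service
            notifications.send(user, "Withdrew " + amount);  // action from another
        }
    }

    public class UseCasesCrossServices {
        public static void main(String[] args) {
            AccountService accounts = (account, amount) ->
                System.out.println("Debited " + amount + " from " + account);
            NotificationService notifications = (user, message) ->
                System.out.println("To " + user + ": " + message);
            new WithdrawMoney(accounts, notifications).execute("alice", "ACC-1", 100);
        }
    }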

In the Unified Process, the service concept is supported in analysis (platform-independent modelling) by service packages. The following can be noted about service packages:

  • A service package contains a set of functionally related classes.
  • A service package is indivisible. Each customer gets either all classes in the service package or none at all. Thus a service package is a configuration unit.
  • When a use case is realized, one or more service packages may be participants in the realization. Moreover, it is common for a specific service package to participate in several different use-case realizations.
  • A service package often has very limited dependencies toward other service packages.
  • A service package is usually of relevance to only one or a few actors.
  • The functionality defined by a service package can, when designed and implemented, be managed as a separate delivery unit. A service package can thus represent some “add-in” functionality of the system. When a service package is excluded, so is every use case whose realization requires the service package.
  • Service packages may be mutually exclusive, or they may represent different aspects or variants of the same service. For example, “spell checking for British English” and “spell checking for American English” may be two different service packages provided by a system. You configure the system with one or the other, but maybe not with both.
  • The service packages constitute an essential input to subsequent design and implementation activities, in that they will help structure the design and implementation models in terms of service subsystems. In particular, the service subsystems have a major impact on the system’s decomposition into binary and executable components. This is of course only true if development proceeds top-down with no reuse of existing components: legacy systems, packaged solutions, web services. And the fact is, we develop more and more with reusable components.

By structuring the system according to the services it provides, we prepare for changes in individual services, since such changes are likely to be localized to the corresponding service package. This yields a robust system that is resilient to change.  

Given that most software today is developed with ready-made components, why would you want to design an analysis model (a platform-independent model) with service packages? There is one good reason: we still need to understand what we are doing. Building software is about understanding: understanding components developed by different vendors, divisions and teams. An analysis model, maybe even just a partial model, used as a starting point helps you overcome these difficulties.

Software Reuse by Ivar Jacobson

February 8, 2004

Many organizations are trying to achieve software reuse. Many of them think that they will achieve it if they just put their components in some sort of repository and then expect others to reuse their valuable assets. I never cease to be surprised that people still do that. We have said for at least a decade that you don’t get any substantial reuse by just posting components. It seems as if people, instead of learning from others’ experience, always have to make the same mistakes over and over again.

The book I wrote with Martin Griss and Patrik Jonsson on software reuse is very explicit about how to achieve it. The subtitle of the book is: Architecture, Process and Organization for Business Success.

It all starts with Architecture. You design an architecture that identifies which components are reusable and which are not: a layered architecture. Usually two layers on top of middleware are enough. The top layer contains application components, which are not designed to be reusable. The second layer contains the domain components intended for reuse by several applications in the top layer. The only practical way to get a domain component is by designing an architecture in which the component has a clear role; without this architectural setting, a component will never really be reused.

Process comes next. The way you develop domain components is different from how you develop application components: they are documented for reuse. We suggested in our book that you design a façade for each domain component. This façade contains an extract of the models of the component, defining how the component can be reused. If the domain is big enough (e.g. banking, telecom, airline), the façades can, through careful design, evolve into a more powerful tool: a domain-specific language (DSL). Instead of designing on top of the interfaces of the domain components, expressed through their façades, you design using the domain-specific language.
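
To make the façade idea concrete, here is a hypothetical Java sketch: an application component reuses a domain component only through its façade, which exposes just the extract intended for reuse (all names are invented).

    // Inside the domain component: classes not offered for direct reuse.
    class ClearingEngine { void clear(String from, String to, long amount) { /* domain logic */ } }
    class SettlementLedger { void record(String from, String to, long amount) { /* domain logic */ } }

    // The facade: the extract of the domain component that is offered for reuse.
    class PaymentsFacade {
        private final ClearingEngine clearing = new ClearingEngine();
        private final SettlementLedger ledger = new SettlementLedger();
        public void settle(String from, String to, long amount) {
            clearing.clear(from, to, amount);
            ledger.record(from, to, amount);
        }
    }

    // An application component in the top layer, built against the facade alone.
    public class TransferApplication {
        public static void main(String[] args) {
            new PaymentsFacade().settle("ACC-1", "ACC-2", 250);
            System.out.println("Transfer settled via the domain component's facade");
        }
    }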

Domain components also come with a kit containing tools for their reuse, examples of how to reuse them, and other things that help in getting them reused.

Developing application components is different. Application components do not need to be documented for reuse; instead they are designed using domain components and through interaction with other application components.

Finally, Organization. In order to get successful reuse, you also need to develop an organization that maps 1:1 to the architecture. This means that for each component in the architecture there is a corresponding responsibility in the organization. Higher-level components are the responsibility of a subsystem-owner; lower-level components are the responsibility of an individual developer. Without such a simple mapping between architecture and organization it will be practically impossible to get good reuse. Such an organization will become an expert organization: people will become experts in a subject matter, and it grows people who want to create quality products. I call such an organization an architecture-based organization, as opposed to a project-based organization. A project-based organization has a line manager for each project, and that manager cares only about his own project. Such an organization is counterproductive to reuse.

In an architecture-based organization, the project managers are not line managers. Instead they own the budget for their project, and with this budget they can buy resources from the line organization (that is, the subsystem-owners with their domain-expert developers) to get their projects done on time. The line organization can work on several projects concurrently, prioritizing them based on delivery dates. Yes, there may be conflicts, but these are usually not large. The total costs are substantially lower because the domain components are developed and maintained by experts; thus, in the case of conflicts, more resources can be added to remove the conflict, and still the costs are much lower than with a project-based organization.

The approach I just presented also has the advantage that you grow the reusable components as you work on real projects. You start with an empty architecture, but you fill it as you go. You don’t sit there and speculate about what might be reusable; instead you design the components in the architecture that are needed first. They will probably not have stabilized during the first project, so they may have to grow through several projects before they really become reusable without redesign.

Achieving reuse is not easy. People have tried simple solutions like class libraries and component libraries into which component designers post their components, hoping that other people will reuse them. We have seen very little reuse come this way. Managers have thought the problem was a lack of incentives, so they have sponsored all kinds of initiatives to get reuse. This is ignorant. You will get reuse, but only by standing on top of an integration of A, P and O: Architecture, Process and Organization. In another postcard I will tell you how you can move from a project-based organization to an architecture-based organization without a revolution. It is really fascinating to see how easily it can be done, given that you are smart. However, that is not a problem, since you are smart, aren’t you?