The CMDB of the Future - Part 2
Automating the Service Model
January 14, 2014
Tom Lubinski

Start with: The CMDB of the Future - Part 1

Automating SOME of the Service Model Definition

The first step in making any system more automatic (and thus more reliable) is to dramatically reduce the amount of manual maintenance required. It is also essential that the methodology be deterministic rather than heuristic, since heuristic approaches are prone to subtle errors. Let's use a concrete example to explore ways this can be done.

Consider a Java application that is built on a standard message-oriented integration platform. The application itself might be dependent on a JMS server and a few specific message queues, as well as an Oracle database, a Tomcat server, and several JVMs, all running on a couple of VMware virtual machines (VMs).

Using a strictly manual approach, one would list each individual component on which the application is dependent, and in doing so build the service model or CMDB table.
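
To make the contrast concrete, here is a minimal sketch of what such a manually maintained table might look like (the component names and addresses are hypothetical):

```python
# A hypothetical, manually maintained service model: every component the
# "Order Processing" application depends on must be listed by hand, and
# kept current by hand as the environment changes.
MANUAL_SERVICE_MODEL = {
    "Order Processing": [
        ("JMS-SERVER", "jms01.example.com:7222"),
        ("JMS-QUEUE",  "ORDERS.QUEUE.1"),
        ("ORACLE-DB",  "ORDERSDB"),
        ("TOMCAT",     "tomcat01.example.com:8080"),
        ("TOMCAT-APP", "http://tomcat01.example.com:8080/orders"),
        ("JVM",        "tomcat01.example.com:jvm-1"),
        ("VMWARE-VM",  "vm-esx-042"),
        ("VMWARE-VM",  "vm-esx-043"),
    ],
}
```

Every entry is a maintenance liability: rename a host or move a queue, and the model silently drifts out of date.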

However, we can be smarter and automate much of the CMDB definition process by taking advantage of known dependencies between data types. For example, a TOMCAT-APP runs within a TOMCAT server, which is itself a JVM running on a specific HOST. Similar logic can be applied to JMS Topics, WebLogic Applications, and many other component types.

By applying these rules, it is possible to specify just the name of the JMS-TOPIC and the URL of the TOMCAT-APP, and all the related components of the application can be derived automatically. With just two entries in the CMDB it is possible to deterministically describe the service model for this application ... and subsequently aggregate the important metrics for each component to calculate and present a measure of the health state of that application, as well as alert on exceptional conditions.
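
As a sketch of how such a derivation might work (the rule table and the find_parent discovery hook are illustrative assumptions, not a description of any particular product's implementation):

```python
# Known containment rules between component types, mirroring rules like
# "a TOMCAT-APP runs within a TOMCAT server, which is a JVM on a HOST".
PARENT_TYPE = {
    "TOMCAT-APP": "TOMCAT",
    "TOMCAT":     "JVM",
    "JMS-TOPIC":  "JMS-SERVER",
    "JMS-SERVER": "JVM",
    "JVM":        "HOST",
}

def derive_model(seeds, find_parent):
    """Expand seed entries into a full component list by walking the
    containment rules; find_parent resolves each concrete parent
    instance (in practice, from discovery or monitoring data)."""
    model = []
    for ctype, name in seeds:
        model.append((ctype, name))
        while ctype in PARENT_TYPE:
            parent_type = PARENT_TYPE[ctype]
            name = find_parent(ctype, name)  # concrete parent instance
            model.append((parent_type, name))
            ctype = parent_type
    return model

# A stub resolver for illustration; just two seed entries are needed.
find_parent = lambda ctype, name: f"parent-of({ctype}:{name})"
seeds = [("JMS-TOPIC", "ORDERS.TOPIC.1"),
         ("TOMCAT-APP", "http://tomcat01:8080/orders")]
print(derive_model(seeds, find_parent))
```

The rule table is small, static, and reusable across applications; only the two seed entries are application-specific.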


This technique can be even more effective if the identifier for each top-level component can be readily associated with the application that uses it. For example, if a queue associated with the "Inventory" application is named INVENTORY.QUEUE.1 and the one used by "Order Processing" is ORDERS.QUEUE.1, it is easy to parse the names, automatically populate the CMDB service model with the proper dependencies, and update it dynamically with zero manual intervention.
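
A sketch of that parsing step, assuming a site naming convention of APP-TOKEN.QUEUE.n (both the convention and the token-to-application table are assumptions for illustration):

```python
import re

# Hypothetical site convention: queue names begin with an application
# token, e.g. "INVENTORY.QUEUE.1" belongs to the Inventory application.
QUEUE_NAME = re.compile(r"^(?P<app>[A-Z]+)\.QUEUE\.\d+$")
APP_BY_TOKEN = {"INVENTORY": "Inventory", "ORDERS": "Order Processing"}

def app_for_queue(queue_name):
    """Return the owning application parsed from a queue name, or None
    if the name does not follow the convention."""
    m = QUEUE_NAME.match(queue_name)
    return APP_BY_TOKEN.get(m.group("app")) if m else None

assert app_for_queue("INVENTORY.QUEUE.1") == "Inventory"
assert app_for_queue("ORDERS.QUEUE.1") == "Order Processing"
```

Run on a schedule against the live list of queues, logic like this keeps the service model current without anyone editing the CMDB.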

In working with dozens of larger organizations, SL Corporation has seen growing adoption of these techniques and best practices to achieve greater reliability and automation in monitoring critical applications. But there is a big change in the IT landscape that influences CMDB evolution in an entirely different way.

Automating ALL of the Service Model Definition

The introduction of virtualization technology has spawned a significant revolution in computing, somewhat akin to the transition from the discrete semiconductor components of the 1950s to the era of fully integrated circuits. What used to take hundreds of individual components wired together on a circuit board could now be embedded in a single chip and stamped out in huge volumes. The integrated circuits of today routinely contain millions or even billions of components on a single chip.

Though still in the early stages, something similar has happened with virtualization and software as a service. In an instant you can provision a virtual machine with a complete operating system of your choice (Infrastructure as a Service). You can also include middleware components like a full-featured Application Server (Platform as a Service). With custom application software, we are not as far along ... you still need to wire lots of virtualized components together to make a complex integrated application.

It is easy to see the obvious benefits of virtualization in providing faster and cheaper access to computing power. But not everyone recognizes that there is a more subtle, yet powerful, force at work here ... one that can significantly enhance the ability to automate the monitoring of the health state of complex applications and to make it deterministic. It used to be that IT would order hardware and, after it arrived, manually configure the operating system and other platform software required by application developers. Not so any more.


With virtualization, the provisioning of a new "platform" is completely data-driven. This means that configuration information about the physical location, IP address, service names and ports for all components is maintained in a file or database table (referred to as metadata) and is used to deploy the requested components.
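
The shape of such metadata varies by provisioning tool; a hypothetical record might look like this (names, addresses, and fields are assumptions for illustration):

```python
# Hypothetical provisioning metadata: everything needed to deploy each
# component is kept as data, including its dependencies on other
# components.
DEPLOYMENT_METADATA = [
    {"component": "orders-tomcat",
     "type": "TOMCAT",
     "host": "vm-esx-042.example.com",
     "ip": "10.20.0.42",
     "ports": {"http": 8080},
     "depends_on": ["orders-db", "orders-jms"]},
    {"component": "orders-db",
     "type": "ORACLE-DB",
     "host": "vm-esx-043.example.com",
     "ip": "10.20.0.43",
     "ports": {"listener": 1521},
     "depends_on": []},
    {"component": "orders-jms",
     "type": "JMS-SERVER",
     "host": "vm-esx-043.example.com",
     "ip": "10.20.0.43",
     "ports": {"jms": 7222},
     "depends_on": []},
]
```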

This deployment metadata can do "double duty" and be used to configure a comprehensive monitoring solution to accompany each application. Ideally, every component of a complex application, from infrastructure to middleware to custom processes, along with all of their interdependencies, would be identified in the metadata, providing precisely the functionality that has eluded developers of the current generation of CMDBs.
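
Continuing the sketch above, deriving monitoring targets and CMDB dependency edges from that same metadata could be as simple as:

```python
def monitoring_from_metadata(metadata):
    """Derive monitoring targets and CMDB dependency edges from the
    same metadata used to provision the components. A sketch only: a
    real system would emit whatever format its monitoring tool consumes."""
    checks, edges = [], []
    for rec in metadata:
        # One reachability check per declared service port.
        for port_name, port in rec["ports"].items():
            checks.append({"target": f'{rec["host"]}:{port}',
                           "check": f'{rec["type"]}-{port_name}-up'})
        # One CMDB relationship per declared dependency.
        for dep in rec["depends_on"]:
            edges.append((rec["component"], "DEPENDS_ON", dep))
    return checks, edges

checks, edges = monitoring_from_metadata(DEPLOYMENT_METADATA)
```

Because the monitoring configuration is generated rather than hand-entered, it can never disagree with what was actually deployed.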

Eventually, this "CMDB of the Future" will be one and the same as the deployment metadata used in provisioning systems. At a minimum it would be generated automatically. There is already a well-known CMDB data definition consisting of a repository of Configuration Items (CIs) and the relationships between them. Up to now it has been difficult to construct a dependable CMDB, whether manually or heuristically. However, by automatically deriving it from provisioning metadata, we could finally see emerge a completely deterministic and reliable CMDB used to map the health state of underlying components to the business services that are dependent on them.

Clearly, we are quite a ways from this goal, but progress may be just ahead. Sometimes the most significant advances are not the latest and greatest that you read about in the trade press but instead are the ones going on quietly behind the scenes and without a lot of fanfare. I suspect that the next generation Configuration Management Database may be just such a thing.

ABOUT Tom Lubinski

Tom Lubinski is President and CEO, and Board Chairman, of SL Corporation, which he founded in 1983. Lubinski has been instrumental in developing SL's Graphical Modeling System Software (SL-GMS) and the more recent RTView software. Prior to starting SL Corporation, he attended the California Institute of Technology and developed a substantial consulting practice specializing in Object-Oriented Programming and Graphical Visualization Systems. He has more than 30 years of experience in the development of computer hardware systems and software applications.

Related Links:

The CMDB of the Future - Part 1
