Optimizing Business Processes with BTM
The Role of Business Transaction Management
November 27, 2010
Charley Rich

Gartner's Senior VP of Research, Peter Sondergaard, recently spoke at Gartner Symposium 2010 about four trends changing computing. As part of this, he emphasized that, while IT departments have spent the past 20 years focused inward on optimizing their own processes and costs, the emphasis is now shifting to IT's role in optimizing business processes.

IT's move toward optimizing business processes and controlling their costs is aimed at empowering businesses to be more competitive, at lower cost and with an improved customer experience. This is crucial in today's economy.

The key here is to enable IT teams to instantly see technology issues in the context of their business and to automatically predict, and even prevent, an issue's business impact. This allows companies to optimize the IT implementation of their business processes and, as a result, achieve greater productivity at a lower cost.

The Great Convergence

IT thought leaders have begun to realize that in order to successfully optimize business processes, they need to converge the disciplines of Business Transaction Management (BTM), Application Performance Management (APM) and Complex Event Processing (CEP). This convergence enables the correlation of operational metrics for IT applications, middleware and infrastructure with real-time visibility into business transactions and the business processes they make up.

Using this new convergence, businesses can avoid the all-too-common scenario in which, following a serious technology problem, representatives from each IT silo gather in a room for several hours to determine the root cause, with no clear consensus on whether the problem is of interest to IT alone or in fact impacts an important business process.

More effectively, this new approach uses a CEP engine at its core to constantly scan for patterns foretelling that a transaction is heading for a "business abnormal" state and thus, via an "early warning system," prevent the unnecessary costs involved in cleaning up the damage of a transactional mishap.
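To make the idea concrete, the following is a minimal sketch of such an early-warning pattern, not any particular product's implementation: a sliding window over transaction events that flags a flow as drifting toward a "business abnormal" state before a hard breach occurs. The event fields and thresholds are hypothetical.

```python
from collections import deque

# Hypothetical early-warning rule: if the rolling average latency of the
# last N "payment" transactions drifts above a warning threshold, flag the
# transaction flow as heading toward a "business abnormal" state.
WINDOW_SIZE = 20
WARN_LATENCY_MS = 800       # business-defined warning level (assumed)
ABNORMAL_LATENCY_MS = 2000  # hard SLA breach (assumed)

window = deque(maxlen=WINDOW_SIZE)

def on_transaction_event(event):
    """Evaluate one transaction event against the early-warning pattern."""
    window.append(event["latency_ms"])
    if len(window) < WINDOW_SIZE:
        return "learning"                    # not enough history yet
    avg = sum(window) / len(window)
    if avg >= ABNORMAL_LATENCY_MS:
        return "business_abnormal"           # damage already done
    if avg >= WARN_LATENCY_MS:
        return "early_warning"               # act now, before business impact
    return "business_normal"

# Example: a slow drift in payment latency trips the early warning
for latency in [300] * 20 + [900] * 20:
    state = on_transaction_event({"txn": "payment", "latency_ms": latency})
print(state)  # -> "early_warning"
```

A real CEP engine evaluates many such patterns concurrently, across event types and correlated transaction steps, but the principle is the same: detect the trend, not just the failure.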

Typically, when IT staff resolve a complex problem, they describe the resolution and share it with the other members of support. The next time the problem occurs, this resolution is reused by the assigned member of support. This is a reactive approach, and an expensive one. While the mean-time-to-repair is reduced after the first occurrence, support is still using a manual process to detect, and then resolve, the problem. The side effect can be personally painful, as when an IT administrator is woken up two days in a row to deal with the same problem. By this point the problem has been detected, a ticket has been opened and, most likely, users and business processes are already impacted. This is an expensive approach to problem management.

A better approach would be the following: after the first time the problem has occurred, it is described to the converged BTM solution as a situation, along with business rules describing how to resolve it before it has business impact. This approach, in effect, adds inference in order to predict problems, which is the key to proactive monitoring of business transactions and the business processes they realize.

In this scenario, the pattern describing the problem is detected and an automated resolution is immediately initiated before users are impacted or a business process is disrupted. Essentially, both the mean-time-to-know that a multi-tier composite application is no longer performing or behaving within its business-normal state and the mean-time-to-react to the issue have been reduced. Furthermore, the impact of the problem has been prevented via automated, dynamic invocation of business rules. This is a much better business outcome with contained cost, but one that can only be achieved with a solution that can detect, in real time, the patterns and dependencies occurring across your complex composite applications and dynamically take action.
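One simple way to picture the "situation plus business rule" pairing is as a mapping from a named pattern to an automated action. The sketch below is illustrative only; the situation names and remediation steps are assumptions, not a specific product's rule language.

```python
# A minimal sketch of "situation -> automated resolution" business rules.

def restart_queue_consumer(ctx):
    print(f"restarting consumer for queue {ctx['queue']}")

def reroute_orders_to_backup(ctx):
    print(f"rerouting {ctx['process']} traffic to a backup path")

def open_ticket(situation, ctx):
    print(f"no rule for {situation}; opening a ticket for support")

RESOLUTION_RULES = {
    # detected situation            automated action taken before impact
    "order_queue_depth_rising":     restart_queue_consumer,
    "payment_gateway_latency_up":   reroute_orders_to_backup,
}

def on_situation_detected(situation, context):
    """Invoke the pre-defined resolution as soon as the pattern is seen."""
    action = RESOLUTION_RULES.get(situation)
    if action is None:
        open_ticket(situation, context)   # fall back to the manual, reactive path
        return
    action(context)                       # proactive, automated remediation

# Example: the pattern captured after the first occurrence now resolves itself
on_situation_detected("order_queue_depth_rising", {"queue": "ORDERS.IN"})
```

The point is that the knowledge gained from the first painful occurrence is encoded once, so the second occurrence never reaches users at all.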

Over time, the system's capabilities and value continuously improve. This approach continues to help optimize business processes by automatically learning and adjusting to what is normal and abnormal for your business through analysis of the real-time data provided by BTM and its integration with legacy event monitoring systems. Imagine a spiral: a complex problem occurs, its resolution is specified, and over time the system learns to behave better and to handle more issues automatically. The 360-degree situational awareness this provides reduces costs and streamlines business processes in a cycle of continuous availability improvement. The key points here are an increase in automation, prediction and prevention, and a resulting decrease in operational cost and business process disruption.
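As a rough illustration of "learning what is normal," here is a small sketch that builds a statistical baseline for one metric and flags values that drift outside it. The three-standard-deviation threshold and the warm-up period are assumptions for the example.

```python
import math

class Baseline:
    """Incrementally learn what "normal" looks like for one metric
    (Welford's online mean/variance), then flag values that drift more
    than k standard deviations away as abnormal. k=3 is an assumption."""

    def __init__(self, k=3.0):
        self.n, self.mean, self.m2, self.k = 0, 0.0, 0.0, k

    def observe(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_abnormal(self, x):
        if self.n < 30:                        # need some history first
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) > self.k * std

# Example: learn the normal response time, then test a spike
baseline = Baseline()
for value in [200, 210, 195, 205, 198, 202] * 10:   # typical traffic
    baseline.observe(value)
print(baseline.is_abnormal(203))   # False: within the learned normal band
print(baseline.is_abnormal(900))   # True: drifted into abnormal territory
```

In practice the baselines would be kept per transaction type and time of day, but the spiral is the same: observe, learn, tighten the definition of normal, and catch deviations earlier each cycle.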

Calculating ROI on the Ground and in the Clouds

Cloud Computing, one of the four trends Sondergaard discussed, is no longer a future direction; it is being readily implemented today. While the benefits of Cloud Computing are many, managing the availability and performance of applications, business transactions and the business processes they actualize becomes far more difficult. Why? This complexity stems from the very benefit of flexibility the Cloud provides: elasticity through virtualization. It is now much harder to achieve the 360-degree situational awareness a business needs in order to reduce support costs, improve service levels and achieve its desired ROI. In a Cloud Computing environment, detecting transactions, applications and middleware messages in real time is much harder because the number of places they can run greatly expands and may constantly be in flux.

So how does one assess the benefits of business transaction management and whether or not it is yielding results in terms of optimizing business processes? ROI is calculated from the reduction in capital and operational expenditures and the avoidance of the lost revenue hidden in order fallout and customer attrition. This translates into the following benefits: fewer tickets at the service desk, reducing labor costs; an improved customer experience, preserving customer loyalty; reduced disruption to business processes, maintaining profitability; and compliance with service level agreements, steering clear of penalties.
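A back-of-the-envelope version of that calculation, using purely hypothetical figures, might look like this; every number would need to be replaced with your own measured baseline and savings.

```python
# Back-of-the-envelope ROI sketch with purely hypothetical figures.

annual_solution_cost = 250_000          # licenses, deployment, operations

annual_savings = {
    "fewer service desk tickets":   120_000,   # reduced labor cost
    "avoided order fallout":        200_000,   # lost revenue recovered
    "avoided SLA penalties":         75_000,
    "reduced business disruption":   90_000,   # profitability preserved
}

total_savings = sum(annual_savings.values())
roi_pct = (total_savings - annual_solution_cost) / annual_solution_cost * 100

print(f"Annual savings: ${total_savings:,}")
print(f"Annual cost:    ${annual_solution_cost:,}")
print(f"ROI:            {roi_pct:.0f}%")   # -> 94% with these assumed inputs
```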

Achieving these benefits enables a business to focus on using IT as it was intended: to deliver more services to customers and grow market share, rather than spending the majority of the time and money allocated to IT merely fixing problems in order to maintain current availability and performance levels.

In summary, with the recent worldwide recession, layoffs and cost-cutting efforts have turned the spotlight on IT infrastructure and on how advancements in enterprise technology can optimize business processes. This thought-leading approach of converging BTM with CEP and APM helps IT personnel squeeze stealth waste out of business processes by providing the full visibility, prediction and prevention necessary to reduce costs, improve service and manage risk.

About Charley Rich

Charley Rich is VP Product Management and Marketing at Nastel Technologies and has over 28 years of technical, hands-on experience working with large-scale customers to meet their application and systems management requirements. Prior to joining Nastel, Charley was Product Manager for IBM's Tivoli Application Dependency Discovery Manager software, where he co-authored an IBM Redbook, charted the product roadmap, managed an agile requirements process and was recognized for his accomplishments by winning the Tivoli General Manager's Award. Recently, Charley was granted a patent for an Application Discovery and Monitoring process.

Related Links:

www.nastel.com
