Gartner's Senior VP of Research, Peter Sondergaard, recently spoke at Gartner Symposium 2010 about four trends changing computing. As part of this, he emphasized that while IT departments have spent the past 20 years focused on optimizing internal processes and costs, the emphasis is now shifting to IT's involvement in optimizing business processes.
IT's move toward optimizing business processes and controlling their costs is aimed at empowering businesses to be more competitive at a lower cost and with an improved customer experience. This is crucial in today's economy.
The key here is to enable IT teams to instantly see technology issues in the context of their business and to automatically predict, and even prevent, their business impact. This allows companies to optimize the IT implementation of their business processes and, as a result, achieve greater productivity at a lower cost.
The Great Convergence
IT thought leaders have begun to realize that in order to successfully optimize business processes, they need to converge the disciplines of Business Transaction Management (BTM), Application Performance Management (APM) and Complex Event Processing (CEP). This convergence enables the correlation of operational metrics for IT applications, middleware and infrastructure with real-time visibility into business transactions and the business processes they compose.
Using this new convergence, businesses can avoid the all-too-common scenario where, following a serious technology problem, representatives from each IT silo gather in a room for several hours to determine the root cause, with no clear consensus on whether the problem is of IT interest only or in fact impacts an important business process.
More effectively, this new approach uses a CEP engine at its core to constantly scan for patterns foretelling that a transaction is heading for a "business abnormal" state and thus, via an "early warning system," prevent the unnecessary costs involved in cleaning up the damage of a transactional mishap.
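To make the "early warning" idea concrete, here is a minimal sketch, in Python, of the kind of pattern a CEP engine might evaluate. It is not any vendor's API; the class, thresholds and sample latencies are all illustrative assumptions.

```python
# A minimal sketch: a sliding-window latency trend that raises an early
# warning before the SLA is actually breached. All names and numbers
# here are hypothetical, not any vendor's API.
from collections import deque
from statistics import mean

class LatencyEarlyWarning:
    def __init__(self, sla_ms: float, warn_ratio: float = 0.8, window: int = 50):
        self.sla_ms = sla_ms                 # hard SLA threshold
        self.warn_ms = sla_ms * warn_ratio   # early-warning threshold
        self.samples = deque(maxlen=window)  # sliding window of recent latencies

    def observe(self, latency_ms: float) -> str:
        """Classify the transaction stream after each new measurement."""
        self.samples.append(latency_ms)
        avg = mean(self.samples)
        if avg >= self.sla_ms:
            return "business-abnormal"   # damage already done
        if avg >= self.warn_ms:
            return "early-warning"       # trending toward a breach: act now
        return "business-normal"

detector = LatencyEarlyWarning(sla_ms=2000)
for latency in (1500, 1800, 2100, 2400, 2700):   # a worsening trend
    print(detector.observe(latency))
# business-normal, early-warning, early-warning, early-warning, business-abnormal
```

The point of the sketch is the ordering: the "early-warning" state fires well before the averaged latency crosses the SLA, which is the window in which automated remediation is still cheap.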
Typically, when IT staff resolve a complex problem, they document the resolution and share it with the other members of support. The next time the problem occurs, the assigned member of support reuses this resolution. This is a reactive approach, and an expensive one. While the mean-time-to-repair is reduced after the first occurrence, support is still using a manual process to detect, and then resolve, the problem. The side effect can be personally painful: an IT administrator woken up two days in a row to deal with the same problem. By this point, the problem has been detected, a ticket has been opened and, most likely, users and business processes are already impacted. This is an expensive approach to problem management.
A better approach would be the following: after the first occurrence, the problem could be described as a situation to the converged BTM solution, along with appropriate business rules describing how to resolve the issue before it has business impact. This approach, in effect, adds inference in order to predict problems, the ultimate key to proactive monitoring of business transactions and the business processes they realize.
In this scenario, the pattern describing the problem is detected and an automated resolution is immediately initiated, before users are impacted or a business process is disrupted. Essentially, both the mean-time-to-know that a multi-tier composite application is no longer performing or behaving within a business-normal state and the mean-time-to-react to the issue have been reduced. Furthermore, the impact of the problem has been prevented via automated, dynamic invocation of business rules. This is a much better business outcome with contained cost, but one that can only be achieved with a solution that can, in real time, detect the patterns and dependencies occurring across your complex composite applications and dynamically take action.
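As a sketch of this detect-then-remediate loop, the snippet below registers a problem signature together with its automated resolution. The event fields, the matching pattern and the remediation are hypothetical; a real deployment would invoke actual middleware admin APIs and business rules.

```python
# Hypothetical sketch of rule-driven automated remediation; everything
# below is illustrative, not a real product's API.
RULES = []

def rule(pattern):
    """Register a remediation to fire whenever `pattern(event)` matches."""
    def wrap(action):
        RULES.append((pattern, action))
        return action
    return wrap

# The "situation" captured after the first manual diagnosis: a deep
# queue with no consumers attached.
@rule(lambda e: e["queue_depth"] > 10_000 and e["consumer_count"] == 0)
def restart_stalled_consumers(event):
    print(f"Restarting consumers for {event['queue']}")  # automated fix

def on_event(event):
    for pattern, action in RULES:
        if pattern(event):
            action(event)   # resolved before users notice; no 3 a.m. page

on_event({"queue": "ORDERS.IN", "queue_depth": 25_000, "consumer_count": 0})
```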
Over time, the system's capabilities and value continuously improve. This approach continues to help optimize business processes by automatically learning and adjusting to what is normal and abnormal for your business, via analysis of the real-time data provided by BTM and its integration with legacy event monitoring systems. Imagine a spiral: a complex problem occurs; its resolution is specified; and over time, the system learns to behave better and to handle more issues automatically. The 360-degree situational awareness this provides reduces costs and streamlines business processes in a cycle of continuous availability improvement. The key points here are an increase in automation, prediction and prevention, and a resultant decrease in operational cost and business process disruption.
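One simple way to picture the self-learning piece is a running baseline that adapts to what is "normal" as data arrives, flagging only genuine departures. The exponentially weighted model below is an assumption for illustration; commercial BTM solutions use far richer analytics.

```python
# Illustrative only: a running baseline that adapts "normal" over time.
# The exponentially weighted mean/variance is an assumed stand-in for
# the far richer models commercial solutions employ.
class AdaptiveBaseline:
    def __init__(self, alpha: float = 0.05, tolerance: float = 3.0, warmup: int = 10):
        self.alpha = alpha          # how quickly "normal" adapts
        self.tolerance = tolerance  # deviation (in std devs) deemed abnormal
        self.warmup = warmup        # observations before flagging begins
        self.mean = None
        self.var = 0.0
        self.n = 0

    def is_abnormal(self, value: float) -> bool:
        self.n += 1
        if self.mean is None:
            self.mean = value
            return False
        diff = value - self.mean
        abnormal = (self.n > self.warmup and
                    abs(diff) > self.tolerance * max(self.var, 1e-9) ** 0.5)
        # Update the baseline so "normal" keeps tracking the business.
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return abnormal

baseline = AdaptiveBaseline()
stream = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 101, 250]
print([v for v in stream if baseline.is_abnormal(v)])   # -> [250]
```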
Calculating ROI on the Ground and in the Clouds
Cloud Computing, one of the four trends Sondergaard discussed, is no longer a future direction; it is being implemented today. While the benefits of Cloud Computing are many, managing the availability and performance of applications, business transactions and the business processes they actualize becomes considerably more difficult. Why? The complexity stems from the very benefit the Cloud provides: flexibility, delivered as elasticity through virtualization. It is now much harder to achieve the 360-degree situational awareness a business needs in order to reduce support costs, improve service levels and achieve its desired ROI. In a Cloud Computing environment, detecting transactions, applications and middleware messages in real time is much harder, as the number of places they can run greatly expands and may be in constant flux.
So how does one assess the benefits of business transaction management and whether or not it is yielding results in terms of optimizing business processes? ROI is calculated from the reduction in capital and operational expenditures and the avoidance of lost revenue hidden in order fallout and customer attrition. This translates into the following benefits (a back-of-the-envelope calculation follows the list):

- Fewer tickets at the service desk, reducing labor costs
- An improved customer experience, preserving customer loyalty
- Reduced disruption to business processes, maintaining profitability
- Compliance with service level agreements, steering clear of penalties
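As promised, here is a sketch of how these benefit categories might roll up into an ROI figure. Every dollar amount is a hypothetical assumption, plugged in purely to show the arithmetic.

```python
# Back-of-the-envelope ROI calculation; all figures are hypothetical.
ticket_cost, tickets_avoided_per_year = 50.0, 4_000   # service-desk labor
downtime_cost_per_hour, hours_avoided = 10_000.0, 40  # business disruption
sla_penalties_avoided = 25_000.0                      # SLA compliance
solution_cost = 150_000.0                             # license + operations

annual_benefit = (ticket_cost * tickets_avoided_per_year
                  + downtime_cost_per_hour * hours_avoided
                  + sla_penalties_avoided)
roi = (annual_benefit - solution_cost) / solution_cost
print(f"Annual benefit: ${annual_benefit:,.0f}, ROI: {roi:.0%}")
# Annual benefit: $625,000, ROI: 317%
```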
Achieving these benefits enables a business to focus on using IT as intended: to deliver more services to customers and grow market share, rather than spending the majority of the time and money allocated to IT on merely fixing problems in order to maintain current availability and performance levels.
In summary, the recent worldwide recession, layoffs and cost-cutting efforts have turned the spotlight on IT infrastructure and how advancements in enterprise technology can optimize business processes. This new thought-leading approach of converging BTM with CEP and APM helps IT personnel squeeze stealth waste out of business processes by providing the full visibility, prediction and performance necessary to reduce costs, improve service and manage risk.
About Charley Rich
Charley Rich is VP Product Management and Marketing at Nastel Technologies and has over 28 years of technical, hands-on experience working with large-scale customers to meet their application and systems management requirements. Prior to joining Nastel, Charley was Product Manager for IBM's Tivoli Application Dependency Discovery Manager software, where he co-authored an IBM Redbook, charted the product roadmap, managed an agile requirements process and was recognized for his accomplishments by winning the Tivoli General Manager's Award. Recently, Charley was granted a patent for an Application Discovery and Monitoring process.