Optimizing ERP Application Performance in an Increasingly Complex Delivery Environment
May 23, 2013

Kieran Taylor
Broadcom


According to industry statistics, the average cost of downtime for a leading ERP system ranges between $535,780 and $838,100 per hour. Put another way, up to nearly $14,000 is lost every minute an ERP application is down. And that's just the tip of the iceberg, because poor application performance also exposes businesses to a wide range of risks, including lost competitive edge, wasted resources, a tarnished brand image, reduced customer satisfaction, increased financial and legal scrutiny, and non-compliance.

In essence, the health of ERP application performance is a proxy for business health, and fast, reliable applications have never been more important. However, the increased complexity of modern application delivery environments makes strong performance very difficult to ensure. As a result, many of the applications supporting businesses today are running at less than optimal levels, putting expensive and highly visible ERP investments on the line.

Business-critical ERP applications depend on a wide range of data center components working together, including databases, operating systems, servers, networks, storage, management tools and backup software. Within this complex environment there are many potential points of failure and performance degradation. Traditional approaches to managing application performance typically measure individual components, such as database efficiency, or other likely problem spots, such as the network. What they don't show, however, is the end-to-end performance of business transactions.

So how can enterprises ensure high-performing ERP applications today? First, businesses must flip the problem-diagnosis paradigm. It's no longer sufficient to look for opportunities to optimize individual components without understanding how those improvements translate into a better end-user experience.

Instead, businesses must proactively gain an understanding of the end-user experience; from there, they can trace back through all the different elements to identify where bottlenecks lie and what should be changed to resolve them.
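
As a rough illustration of this trace-back idea, the sketch below decomposes one transaction's end-to-end response time into delivery-chain segments and flags the largest contributor. The segment names and timings are invented for the example; a real APM tool would capture these measurements automatically.

```python
# Hypothetical illustration: decompose one transaction's end-to-end
# response time into delivery-chain segments, then trace the bottleneck.
# All segment names and timings below are invented for the example.

transaction = {
    "browser_render": 0.4,   # seconds spent in the end user's browser
    "network": 0.3,          # WAN/LAN transit time
    "app_server": 2.6,       # ERP application tier
    "database": 0.9,         # backend database
    "storage": 0.2,          # storage subsystem
}

end_to_end = sum(transaction.values())
bottleneck, worst = max(transaction.items(), key=lambda kv: kv[1])

print(f"End-to-end response time: {end_to_end:.1f}s")
print(f"Largest contributor: {bottleneck} ({worst:.1f}s, "
      f"{worst / end_to_end:.0%} of the total)")
```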

This proactive approach helps prevent end-user complaints from ever reaching the help desk, by which point it is likely too late and the damage may already be done.

It also helps organizations pinpoint the source of existing and potential performance problems quickly. To that end, businesses must monitor all transactions, all the time.

Sampling is not sufficient, because there is no guarantee that a performance problem will occur during a sampling interval, especially in this age of mobile devices, when end users are accessing applications around the clock.
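
A toy simulation makes the point. The rates and latencies below are invented, but they show how a monitor that samples only a fraction of traffic can miss an intermittent slowdown entirely, while capturing every transaction always records it.

```python
# Toy simulation: why sampling can miss intermittent slowdowns.
# One transaction in every 500 is pathologically slow; a monitor that
# samples 1% of traffic will often see none of them. Numbers are invented.
import random

random.seed(42)
SLOW_RATE = 1 / 500      # fraction of transactions that are slow
SAMPLE_RATE = 0.01       # fraction of traffic a sampling monitor sees
N = 10_000               # transactions in the observation window

latencies = [8.0 if random.random() < SLOW_RATE else 0.5 for _ in range(N)]
sampled = [t for t in latencies if random.random() < SAMPLE_RATE]

slow_total = sum(1 for t in latencies if t > 2.0)
slow_seen = sum(1 for t in sampled if t > 2.0)

print(f"Slow transactions that actually occurred: {slow_total}")
print(f"Slow transactions visible to the 1% sample: {slow_seen}")
```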

Second, businesses must have a consolidated view of all the variables impacting ERP application performance, from the end user's browser, across the network, through the data center and into the integrated subsystems. This is known as having a complete view across the ERP application delivery chain, and it is the key to gaining more control over it. Once a business understands the end-user experience and the complete picture supporting it, it can more effectively identify areas for acceleration that will result in faster transactions.
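
In sketch form, such a consolidated "single pane of glass" view amounts to rolling up per-segment timings from every monitored transaction into one summary, so the slowest tier is visible at a glance. The data structures and numbers here are hypothetical.

```python
# Sketch of a "single pane of glass" rollup: aggregate per-segment timings
# across many monitored transactions into one consolidated summary.
# Data structures and numbers are hypothetical.
from statistics import mean

# Each record holds one transaction's timings, in seconds, per segment.
transactions = [
    {"browser": 0.4, "network": 0.3, "app_server": 2.1, "database": 0.8},
    {"browser": 0.5, "network": 0.4, "app_server": 3.0, "database": 1.1},
    {"browser": 0.3, "network": 0.2, "app_server": 2.4, "database": 0.7},
]

segments = transactions[0].keys()
summary = {seg: mean(t[seg] for t in transactions) for seg in segments}

# Print segments slowest-first, the way a dashboard would rank them.
for seg, avg in sorted(summary.items(), key=lambda kv: -kv[1]):
    print(f"{seg:12s} avg {avg:.2f}s")
```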

No doubt, today's complex delivery environments make it more challenging than ever to ensure strong application performance. The good news is that new approaches to application performance management (APM), including a focus on end-user transaction performance, consolidation of all application delivery chain variables into a "single pane of glass," and 24x7 monitoring of all applications, can make it easier to ensure high performance, quickly and cost-effectively.

Kieran Taylor is Head of AIOps Product Marketing at Broadcom.
