According to industry statistics, the average cost of downtime for a leading ERP system can range between $535,780 and $838,100 per hour. Put another way, at the high end nearly $14,000 is lost every minute an ERP application is down. And that’s just the tip of the iceberg, because poor application performance also exposes businesses to a wide range of risks, including lost competitive edge, wasted resources, a tarnished brand image, reduced customer satisfaction, increased financial and legal scrutiny, and non-compliance.
In essence, the health of ERP application performance is a proxy for business health, and fast, reliable applications have never been more important. However, the increased complexity of modern application delivery environments makes it very difficult to ensure strong performance. As a result, many applications supporting businesses today are running at less than optimal levels, putting expensive and highly visible ERP investments on the line.
Business-critical ERP applications depend on a wide range of data center components working together, including databases, operating systems, servers, networks, storage, management tools and backup software. Within this complex environment there are many potential points of failure and performance degradation. Traditional approaches to managing application performance measure individual components, such as database efficiency, and other likely problem spots, such as the network. What they don’t reveal is the end-to-end performance of business transactions.
So how can enterprises ensure high-performing ERP applications today? First, businesses must flip the problem-diagnosis paradigm. It’s no longer sufficient to look for opportunities to optimize individual components without understanding how those improvements translate into a better end-user experience.
Instead, businesses must proactively gain an understanding of the end-user experience; from there, they can trace back through all the different elements to identify where the bottlenecks are and what should be changed to resolve them.
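To make the trace-back idea concrete, the sketch below shows one way it could work in Python: time the end-user transaction as a whole, attribute that time to each hop in the delivery chain, and rank the hops by their share of the total. The component names, the place_order function and the sleep times are hypothetical stand-ins for real instrumentation, not any particular APM product’s API.

```python
# A minimal sketch of the "outside-in" approach: measure the end-user
# transaction first, then attribute that time to each hop so the
# slowest component stands out. All names here are hypothetical.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def span(component):
    """Record wall-clock time spent in one segment of the delivery chain."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[component] = timings.get(component, 0.0) + time.perf_counter() - start

def place_order():
    # Each block stands in for a real hop (network, app server, database);
    # in practice these would be populated by agents, not sleeps.
    with span("network"):
        time.sleep(0.05)
    with span("app_server"):
        time.sleep(0.02)
    with span("database"):
        time.sleep(0.30)   # the bottleneck in this example

with span("end_to_end"):
    place_order()

# Trace back: rank components by their share of the end-user response time.
total = timings.pop("end_to_end")
for component, secs in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{component:12s} {secs:.3f}s ({secs / total:.0%} of transaction)")
```

Ranking hops by their share of the total response time points straight at the database in this example, instead of inviting blind tuning of every tier.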
This approach helps businesses stay proactive, preventing end-user complaints from ever reaching the help desk; by the time they do, it’s likely too late and the damage may already be done.
It also helps organizations pinpoint the source of existing and potential performance problems quickly. To this end, businesses must monitor all transactions, all the time.
Sampling is not sufficient because there’s no guarantee that a performance problem will occur during a sampling interval, especially in this age of mobile devices, when end users are accessing applications around the clock.
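A back-of-the-envelope simulation makes the risk of sampling tangible. Assuming, purely for illustration, a day of one-minute intervals, a single ten-minute slowdown at a random time, and a 5% random sample of intervals, the sketch below estimates how often sampling would see the problem at all:

```python
# Illustrative simulation: how often does a 5% random sample of one-minute
# intervals observe a single 10-minute slowdown in a 24-hour day?
# All figures are assumptions chosen for illustration, not measurements.
import random

def sample_catches_incident(sample_rate=0.05, minutes=24 * 60, incident_len=10):
    start = random.randrange(minutes - incident_len)
    incident_minutes = range(start, start + incident_len)
    sampled = {m for m in range(minutes) if random.random() < sample_rate}
    return any(m in sampled for m in incident_minutes)

trials = 10_000
caught = sum(sample_catches_incident() for _ in range(trials))
print(f"5% sampling observed the 10-minute slowdown in {caught / trials:.0%} of trials")
```

Under these assumed figures the sample catches the slowdown only about 40% of the time (1 - 0.95^10 ≈ 0.40), which is exactly the blind spot that monitoring every transaction closes.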
Second, businesses must have a consolidated view of all the variables impacting ERP application performance, from the end user’s browser, across the network, through the data center and into the integrated subsystems. This is known as having a complete view across the ERP application delivery chain, and it’s the key to gaining more control over that chain. Once a business understands the end-user experience and the complete picture supporting it, it can more effectively identify the areas for acceleration that will result in faster transactions.
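As a simple illustration of what a “single pane of glass” might look like at its smallest, the sketch below pulls one latency figure from each tier of a hypothetical delivery chain and lays them out side by side, so the slow tier can be spotted at a glance. The collector functions and the numbers they return are placeholders for real browser, network and server agents.

```python
# A minimal "single pane of glass": one latency figure per tier of a
# hypothetical delivery chain, rendered side by side. The collectors
# below are placeholders for real monitoring agents or APIs.
def browser_metrics():    return {"tier": "browser",    "p95_ms": 180}
def network_metrics():    return {"tier": "network",    "p95_ms": 95}
def app_server_metrics(): return {"tier": "app_server", "p95_ms": 60}
def database_metrics():   return {"tier": "database",   "p95_ms": 420}

delivery_chain = [browser_metrics, network_metrics,
                  app_server_metrics, database_metrics]

rows = [collect() for collect in delivery_chain]
worst = max(rows, key=lambda r: r["p95_ms"])

print(f"{'Tier':<12}{'p95 latency':>12}")
for r in rows:
    flag = "  <-- investigate" if r is worst else ""
    print(f"{r['tier']:<12}{r['p95_ms']:>10}ms{flag}")
```

The same idea scales from four placeholder dictionaries to full telemetry from every element of the chain.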
No doubt, today’s complex delivery environments make it more challenging than ever to ensure strong application performance. The good news is that new approaches to application performance management (APM), including focusing on end-user transaction performance, consolidating all application delivery chain variables in a “single pane of glass” approach, and monitoring all applications 24x7, can make it easier to ensure high performance, quickly and cost-effectively.
Kieran Taylor is Sr Director, Product & Solutions Marketing, APM & DevOps, CA Technologies.