APMdigest asked experts from across the IT industry for their opinions on what IT departments should be monitoring to ensure digital performance. Part 4 covers the infrastructure, including the cloud and the network.
Data performance monitoring is the most important aspect of ensuring digital performance. Data transformed into content dictates the end-user experience. Whether represented as text, images, video, or voxels in extended reality, that data requires continual monitoring to ensure quality of experience. Based on data performance, IT departments can determine how much investment is required to modify network and application components. Data visualization formats can also be adapted to function on the status quo infrastructure until the upgrade investments are in place.
Director, Technical Marketing, Quali
Monitor slow-flow data, collected by standard discovery tools, to map changes in the network, apps, data, location, or users. And monitor fast-flow data, collected by log analyzers, webcasters, and real-time discovery, to overlay those changes onto the dependency map.
Author and Strategist, iSpeak Cloud
BIG DATA TECHNOLOGIES
Organizations need to be able to monitor the big data technologies that modern applications increasingly rely upon. These apps need fast access to technologies such as Hadoop, Kafka, Spark and HBase in order to make business-critical decisions across all verticals, including financial services, retail, manufacturing, healthcare and telecom. For example, in finance, fraud detection leverages streaming data from systems like Kafka and Spark Streaming to collect and process information in order to detect irregular patterns and prevent fraudulent transactions. Streaming apps like these have complex distributed architectures and produce massive volumes of constantly changing data, which makes them susceptible to performance issues that jeopardize the important business processes they support. It's critical that enterprises monitor these modern data apps with a strong application performance management (APM) platform that has end-to-end observability and AI-driven automation at the core.
CEO, Unravel Data
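The "irregular patterns" example above can be sketched in miniature. The following is a hypothetical illustration only, a stand-in for a real Kafka/Spark Streaming pipeline, with the stream stubbed as a plain list: each transaction is compared against a rolling baseline of recent amounts, and sharp deviations are flagged.

```python
from collections import deque
import statistics

def detect_irregular(amounts, window=20, threshold=3.0):
    """Flag transactions more than `threshold` standard deviations
    above the rolling mean of the previous `window` amounts."""
    history = deque(maxlen=window)
    flagged = []
    for i, amt in enumerate(amounts):
        if len(history) == window:
            mean = statistics.mean(history)
            stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
            if (amt - mean) / stdev > threshold:
                flagged.append(i)
        history.append(amt)
    return flagged

# A stream of mostly small card transactions with one large outlier.
stream = [20, 22, 19, 21, 23, 20, 18, 22, 21, 20,
          19, 23, 22, 20, 21, 19, 22, 20, 21, 23, 5000]
print(detect_irregular(stream))  # → [20], the outlier's index
```

In production, the same windowed comparison would run inside a stream processor consuming from Kafka, which is exactly where the distributed complexity (and the need for monitoring) comes in.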
Ever since servers of all types began processing transactions between businesses and web/mobile devices, there has been a need for millisecond performance between the end user and the mainframe back-end data repository. Along this path, there are numerous moving parts that contribute to a delightful or disastrous user experience. Focus must be given to monitoring mainframe performance, as most web and mobile applications end with a purchase, a bank account deposit, or some other exchange that ultimately takes place on a back-end mainframe.
Performance Consultant, Compuware
One component that people often miss: the middleware. Whether it be on-premises ESBs, cloud-based iPaaS, or some combination, if the middleware has an issue, it can adversely impact the customer experience.
INTERACTIONS BETWEEN CLOUDS
As organizations increasingly adopt multi-cloud strategies, there's a growing need to monitor not just the performance (speed, reliability) of each individual cloud platform, but also the interactions between those platforms. When deploying a multi-cloud environment, fast, reliable interoperability across multiple cloud regions and providers can be the key to strong performance for the entire end-to-end componentized application.
CEO and Founder, Catchpoint
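One simple way to make those inter-cloud interactions visible is to probe each cross-provider path and summarize the round-trip times per path. This sketch uses stubbed measurement functions and illustrative region names; a real monitor would time actual HTTP or TCP round trips between regions.

```python
import statistics

def probe(measure, samples=5):
    """Collect `samples` round-trip measurements (seconds) from the
    callable `measure` and summarize them."""
    times = [measure() for _ in range(samples)]
    return {"median": statistics.median(times), "max": max(times)}

def cross_cloud_report(probes):
    """probes maps (source, destination) path tuples to measurement
    callables; returns per-path summaries so slow hops stand out."""
    return {path: probe(fn) for path, fn in probes.items()}

# Stubbed measurements standing in for real inter-region probes.
report = cross_cloud_report({
    ("aws-us-east", "gcp-europe"): lambda: 0.09,
    ("aws-us-east", "azure-us-west"): lambda: 0.03,
})
print(report)
```

Tracking these per-path summaries over time turns "the app feels slow" into "the aws-to-gcp hop regressed", which is the point of monitoring interactions rather than individual clouds.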
Digital performance is a full-stack affair, so it's essential to monitor the network at the wire level all the way to the applications and real user experience.
The most important metric for IT teams to monitor is the performance of the network.
VP and General Manager, Viavi Enterprise and Cloud Business Unit
Digital performance monitoring requires complete network visibility to expose hidden problems and soon-to-be problems.
Senior Manager, Solutions Marketing, Ixia Solutions Group, a Keysight Technologies business
Networks can be needy: they often require constant attention to ensure continuous uptime and to properly defend against cyberattacks. Traditional symptom-based SNMP monitoring isn't enough to ensure (or enhance) digital performance; IT teams need to leverage proactive network monitoring and contextualized visibility. Waiting for a problem to occur and then trying to track down the source with an outdated map or protocols means more time spent troubleshooting and increased MTTR, leaving less time and brain space for strategic updates or other digital performance optimizations. Instead of always operating in crisis mode, network teams should employ network automation to continuously monitor for underlying faults, identify problems in context to speed recovery, and proactively enforce best practices.
Product Specialist, NetBrain
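The proactive, baseline-driven approach described above can be reduced to a small sketch: compare live metrics against a known-good baseline and surface anything drifting before users report symptoms. The metric names, values, and tolerance here are assumptions; a real system would pull live counters via SNMP or a device API rather than hard-coded dictionaries.

```python
def baseline_check(current, baseline, tolerance=0.2):
    """Compare live counters against a known-good baseline; return the
    metrics drifting more than `tolerance` (fractional) from baseline."""
    drifting = {}
    for metric, base in baseline.items():
        value = current.get(metric, 0)
        if base == 0:
            # Any activity on a normally-zero counter (e.g. errors) is drift.
            if value > 0:
                drifting[metric] = (base, value)
        elif abs(value - base) / base > tolerance:
            drifting[metric] = (base, value)
    return drifting

baseline = {"if_errors_per_min": 0, "cpu_pct": 35, "bgp_peers_up": 4}
live     = {"if_errors_per_min": 12, "cpu_pct": 38, "bgp_peers_up": 3}
print(baseline_check(live, baseline))
# → {'if_errors_per_min': (0, 12), 'bgp_peers_up': (4, 3)}
```

Running a check like this continuously, with the (baseline, current) pair attached to each alert, is what provides the "problems in context" that shortens recovery.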
Ensuring digital performance today can be a proxy for keeping employees productive: when networks are slow, people are slow. The adoption of SD-WAN is enticing for large enterprises seeking to lower costs or increase flexibility for remote locations, but it's not a silver bullet, and what's missing from the discussion is end-to-end performance baselining. SD-WAN has no effect on issues outside the WAN, and the application delivery path is constantly changing, so monitoring before, during, and after an SD-WAN deployment, across the entire connection, is essential to finding and fixing the issues that degrade user experience, no matter where they occur.
VP of Product, AppNeta
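Before-and-after baselining of the kind described can be reduced to a small comparison: capture end-to-end latency samples before the SD-WAN change, capture them again after, and check the median against a regression budget. The sample values, budget, and function name here are hypothetical.

```python
import statistics

def compare_baselines(before_ms, after_ms, budget_pct=10):
    """Compare end-to-end latency samples (milliseconds) taken before
    and after a change; report whether the median regressed past the
    allowed `budget_pct`."""
    b = statistics.median(before_ms)
    a = statistics.median(after_ms)
    change_pct = (a - b) / b * 100
    return {"before_ms": b, "after_ms": a,
            "change_pct": round(change_pct, 1),
            "regressed": change_pct > budget_pct}

before = [42, 40, 45, 41, 43, 44, 40, 42]   # pre-deployment samples
after  = [55, 52, 58, 54, 53, 56, 51, 57]   # post-deployment samples
print(compare_baselines(before, after))
```

The same comparison applied per path (office to SaaS app, office to data center) is what localizes a regression to inside or outside the WAN.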
Read What You Should Be Monitoring to Ensure Digital Performance - Part 5, the final installment, with some recommendations you may not have thought about.