APMdigest asked experts from across the IT industry for their opinions on what IT departments should be monitoring to ensure digital performance. Part 4 covers the infrastructure, including the cloud and the network.
Start with What You Should Be Monitoring to Ensure Digital Performance - Part 1
Start with What You Should Be Monitoring to Ensure Digital Performance - Part 2
Start with What You Should Be Monitoring to Ensure Digital Performance - Part 3
DATA
Data performance monitoring is the most important aspect of ensuring digital performance. Data transformed into content dictates the end-user experience. Data represented as text, images, video, or voxels in extended reality requires continual monitoring to ensure quality of experience. IT departments can determine the investment required to modify network and application components based on data performance. Data visualization formats can also be modified to run on the status quo infrastructure until the upgrade investments are in place.
Dos Dosanjh
Director, Technical Marketing, Quali
Monitor slow-flow data, collected by standard discovery tools, to map changes in network, apps, data, location, or users; and fast-flow data, collected by log analyzers, webcasters, and real-time discovery, to overlay changes on the dependency map.
Jeanne Morain
Author and Strategist, iSpeak Cloud
BIG DATA TECHNOLOGIES
Organizations need to be able to monitor the big data technology that modern applications are increasingly reliant upon. These apps need fast access to technologies such as Hadoop, Kafka, Spark and HBase in order to make business critical decisions across all verticals, including financial services, retail, manufacturing, healthcare and telecom. For example, in finance, fraud detection leverages streaming data from systems like Kafka and Spark Streaming to collect and process information in order to detect any irregular patterns and prevent fraudulent transactions. Streaming apps like these have complex distributed architectures and produce massive volumes of data that is constantly changing. This makes them susceptible to performance issues, jeopardizing the important business processes they support. It's critical that enterprises monitor these modern data apps with a strong application performance management (APM) platform that has end-to-end observability and AI-driven automation at the core.
Kunal Agarwal
CEO, Unravel Data
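The fraud-detection pattern described above, flagging irregular transactions in a stream, can be illustrated with a minimal sketch. In production this would consume from Kafka and run on a framework like Spark Streaming; here the stream, the window size, and the threshold are all hypothetical stand-ins, and the stream is simulated with a plain list so the logic is self-contained.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(amounts, window=20, threshold=3.0):
    """Flag transaction amounts that deviate sharply from a sliding baseline.

    Returns the indices of transactions whose amount differs from the
    rolling mean by more than `threshold` standard deviations.
    """
    history = deque(maxlen=window)
    flagged = []
    for i, amount in enumerate(amounts):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(amount - mu) / sigma > threshold:
                flagged.append(i)
        history.append(amount)
    return flagged

# Simulated stream: steady small purchases followed by one outlier.
stream = [20.0, 22.5, 19.0, 21.0] * 10 + [5000.0]
print(detect_anomalies(stream))  # prints [40]: only the outlier is flagged
```

The same sliding-window idea scales out in a real streaming engine, where each partition of the Kafka topic would maintain its own rolling statistics.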
MAINFRAME
Ever since servers of all types began processing transactions between businesses and web and mobile devices, there has been a need for millisecond performance between the end user and the mainframe's back-end data repository. Along this path, numerous moving parts contribute to a delightful or disastrous user experience. Focus must be given to monitoring mainframe performance, as most web and mobile applications end with a purchase, a bank account deposit, or some other exchange that ultimately takes place on a back-end mainframe.
Kelly Vogt
Performance Consultant, Compuware
MIDDLEWARE
One component that people often miss: the middleware. Whether it be on-premises ESBs, cloud-based iPaaS, or some combination, if the middleware has an issue, it can adversely impact the customer experience.
Jason Bloomberg
President, Intellyx
INTERACTIONS BETWEEN CLOUDS
As organizations increasingly adopt multi-cloud strategies, there's a growing need to monitor not just the performance (speed, reliability) of individual cloud infrastructures, but also the interactions between these platforms. When deploying a multi-cloud environment, fast, reliable interoperability between multiple cloud regions and providers can be the key to strong performance for end-to-end componentized applications.
Mehdi Daoudi
CEO and Founder, Catchpoint
NETWORK
Digital performance is a full-stack affair, so it's essential to monitor the network at the wire level all the way to the applications and real user experience.
Jason Bloomberg
President, Intellyx
The most important metric for IT teams to monitor is the performance of the network.
Douglas Roberts
VP and General Manager, Viavi Enterprise and Cloud Business Unit
Digital performance monitoring requires complete network visibility to expose hidden problems and soon-to-be problems.
Keith Bromley
Senior Manager, Solutions Marketing, Ixia Solutions Group a Keysight Technologies business
Networks can be needy: They often require constant attention to ensure continuous uptime and properly defend against cyberattacks. Traditional symptom-based SNMP monitoring isn't enough to ensure (or enhance) digital performance – IT teams need to leverage proactive network monitoring and contextualized visibility. Waiting for a problem to occur and then trying to track down the source with an outdated map or protocols results in more time troubleshooting and increased MTTR, leaving less time and brain space for making strategic updates or otherwise optimizing digital performance. Instead of always operating in crisis mode, network teams need to employ network automation to continuously monitor for underlying faults, identify problems in context to speed recovery and proactively enforce best practices.
Jason Baudreau
Product Specialist, NetBrain
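The proactive approach described above, monitoring for underlying faults against a learned baseline rather than waiting for user-reported symptoms, can be sketched as follows. This is a simplified illustration, not any vendor's implementation: the device names, sample values, and the two-sigma threshold are all hypothetical.

```python
from statistics import mean, stdev

def check_against_baseline(baseline, current, max_sigma=2.0):
    """Compare a device's current latency samples to its historical baseline.

    Returns True (alert) when the current average drifts more than
    `max_sigma` standard deviations above the baseline mean, surfacing
    degradation before it becomes a user-visible outage.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return mean(current) > mu + max_sigma * max(sigma, 1e-9)

# Hypothetical per-device latency baselines (ms) from normal operation.
baselines = {
    "core-switch-1": [2.1, 2.3, 2.0, 2.2, 2.4],
    "edge-router-7": [11.0, 12.5, 11.8, 12.1, 11.4],
}
latest = {
    "core-switch-1": [2.2, 2.3, 2.1],     # healthy
    "edge-router-7": [25.0, 27.5, 26.2],  # degrading: investigate now
}
for device, samples in latest.items():
    if check_against_baseline(baselines[device], samples):
        print(f"ALERT: {device} latency above baseline")
```

Run continuously across the inventory, a check like this converts symptom-based firefighting into scheduled, contextual alerts.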
SD-WAN
Ensuring digital performance today can be a proxy for keeping employees productive. When networks are slow, people are slow. The adoption of SD-WAN is enticing for large enterprises seeking to lower costs or increase flexibility for remote locations, but it's not a silver bullet, and what's missing from this discussion is end-to-end performance baselining. SD-WAN has no effect on issues outside of the WAN, and the application delivery path is constantly changing, so monitoring the entire connection before, during, and after SD-WAN deployment is essential to finding and fixing issues that degrade user experience, no matter where they occur.
Sean Armstrong
VP of Product, AppNeta
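The before/during/after baselining argued for above can be sketched as a comparison of latency medians per path segment, so a regression shows up even when it occurs outside the WAN. The segments, measurements, and tolerance ratio below are hypothetical, a minimal illustration rather than a monitoring product's method.

```python
from statistics import median

def compare_baselines(before_ms, after_ms, tolerance=1.2):
    """Compare per-segment latency medians before and after an SD-WAN cutover.

    Returns a verdict per path segment: 'degraded' when the post-cutover
    median exceeds the pre-cutover median by more than `tolerance`x.
    """
    verdicts = {}
    for segment in before_ms:
        ratio = median(after_ms[segment]) / median(before_ms[segment])
        verdicts[segment] = "degraded" if ratio > tolerance else "ok"
    return verdicts

# Hypothetical measurements (ms) across the full delivery path.
before = {"lan": [1.0, 1.2, 1.1], "wan": [40.0, 42.0, 41.0], "app": [5.0, 5.5, 5.2]}
after  = {"lan": [1.1, 1.0, 1.2], "wan": [30.0, 31.0, 29.5], "app": [9.0, 9.5, 8.8]}
print(compare_baselines(before, after))
# prints {'lan': 'ok', 'wan': 'ok', 'app': 'degraded'}
```

Here the WAN itself improved after cutover, yet the user still experiences a slowdown because the app tier regressed, which is exactly why the baseline must span the entire connection rather than the WAN alone.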
Read What You Should Be Monitoring to Ensure Digital Performance - Part 5, the final installment, with some recommendations you may not have thought about.