In 2013, APMdigest published a list called 15 Top Factors That Impact Application Performance. Even today, this is one of the most popular pieces of content on the site – and for good reason. The whole concept of Application Performance Management (APM) starts with identifying the factors that impact application performance, and then doing something about them. However, in the fast-moving world of IT, many aspects of application performance have changed in the three years since the list was published, and many new experts have come on the scene. So APMdigest is updating the list for 2016, and you will be surprised how much it has changed.
Start with Top Factors That Impact Application Performance 2016 - Part 1
Part 2 of this list covers more challenges in the environment, including containers, microservices and issues with the network.
6. VIRTUALIZATION AND CONTAINERIZATION
Applications today are disaggregated into multiple components that may be deployed as highly virtualized or containerized workloads. As a result, gaining visibility into traffic flows to understand how the different components interact is paramount for IT operations to provide the best user experience for the applications.
VP of Product Management, Gigamon
Over the last decade, we have seen the commoditization of the cloud, and the trend toward running applications on virtualized hardware continues to evolve to an even higher level of modularization and compartmentalization: containers, microservices, software-defined networks, virtual storage, and more. The trend is toward small, self-contained, independent components that act as recyclable, multi-purpose building blocks. All of that may allow for faster and cheaper development and operation of complex systems, but complexity increases when it comes to APM, tuning, monitoring, logging, and debugging. The respective tools need to be able to see and analyze all physical and virtual components and how they interact, and allow developers and ops teams to make sense of all those data points.
Senior Director of Product Marketing, Loggly
As enterprises adopt microservices and continuous delivery methodologies, the number of independent applications and web services is growing exponentially. Isolating an application performance issue in an environment with hundreds or thousands of interdependent services can be challenging if they are not instrumented and monitored in real time. Manually instrumenting these microservices and setting static thresholds is a very difficult task, if not an impossible one. Enterprises need to automatically discover these large numbers of microservices, dynamically baseline their performance, collect deep diagnostics, and alert when performance deviates from the normal baseline.
Director, Product Marketing, AppDynamics
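The dynamic baselining described above can be sketched in a few lines: a rolling window learns a metric's normal range, and any observation far outside it is flagged. The window size, the 3-sigma rule, and the 30-sample warm-up below are illustrative assumptions, not any vendor's actual algorithm.

```python
from collections import deque
from statistics import mean, stdev

class DynamicBaseline:
    """Rolling baseline for one service metric (e.g. response time in ms)."""

    def __init__(self, window=100, sigmas=3.0):
        self.samples = deque(maxlen=window)  # recent observations only
        self.sigmas = sigmas

    def observe(self, value):
        """Record a sample; return True if it deviates from the baseline."""
        anomalous = False
        if len(self.samples) >= 30:  # wait for a minimal baseline first
            mu, sd = mean(self.samples), stdev(self.samples)
            anomalous = sd > 0 and abs(value - mu) > self.sigmas * sd
        self.samples.append(value)
        return anomalous

baseline = DynamicBaseline()
for t in range(60):
    baseline.observe(100 + (t % 5))   # normal latency: 100-104 ms
print(baseline.observe(500))          # a 500 ms spike -> True
```

In a real deployment the baseline would typically be seasonal (hour-of-day, day-of-week) rather than a flat rolling window, which is exactly why static thresholds fall short.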
Modern applications increasingly rely on stateless microservices, are often paired with stateful data services (such as NoSQL databases, Kafka, and Hadoop), and are deployed in containers or on serverless architectures. As the application substrate changes, so do the factors that impact the performance of these applications. These factors include how the various microservices interact with each other, their availability, and the correlation of issues like errors, latency, and throughput across services – no individual service's performance is paramount in itself. Also, as orchestration systems like Mesos, Kubernetes, and Docker Swarm become more critical, application performance will increasingly depend on how effectively these systems manage resources, for both the applications and the underlying infrastructure.
VP Marketing, OpsClarity
8. SERVICE DESIGN
In today's Everything-as-a-Service, hybrid application world, the biggest impact on performance comes from ignoring core principles of proximity and context between services and users. Gone are the days of tightly coupled applications, data, and infrastructure. Whether an application is born in the cloud or connected to it, the biggest problem I have seen with today's modern applications is ignoring solid user-centered design principles. Services that integrate multiple applications, microservices, and clouds require more finesse in balancing not only the last-mile connectivity from the user to the service, but also the connections among the services, microservices, and data themselves. Experience teaches us that applying the same assumptions one would to traditional client/server applications will not work with today's modern applications. Instead of looking only at a specific microservice, service, or component, performance testing must take into account the impact on overall performance – or risk production issues that are difficult to identify or pinpoint because of the multifaceted nature of these solutions. Improper design of a service with respect to user location and experience is a recipe for disaster.
Strategist and Author, iSpeak Cloud
9. SERVER-SIDE CODE
Although users are showing less and less tolerance for slow applications, the primary reason for poor performance continues to be inefficient server-side code. And as components become increasingly interconnected, determining the cause of slowness or faults takes longer, creating the need for end-to-end APM.
VP of Market Development and Insights, AppDynamics
10. NETWORK LATENCY
Latency is the top factor that impacts application performance. The most well-developed application will be terribly slow if latency between users and servers is high, and the most poorly developed application can garner all kinds of praise when everything is local. I believe low latency is the single most important asset that IT managers can have on their networks. You should focus on any way to reduce end-to-end delay. This includes reducing the various contributors to latency, such as processing delay, queuing delay, serialization delay, and last, but certainly not least, propagation delay.
Senior Consultant and Founder of RootPerformance
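The four contributors named above can be combined into a back-of-the-envelope delay model. The constants below – per-hop processing and queuing times, and fiber propagation at roughly 200,000 km/s – are illustrative assumptions, not measurements:

```python
def end_to_end_latency_ms(payload_bytes, link_mbps, distance_km,
                          processing_ms=0.5, queuing_ms=2.0, hops=1):
    """Rough one-way network delay from its four classic contributors."""
    # Serialization: time to clock the packet's bits onto the link.
    serialization_ms = (payload_bytes * 8) / (link_mbps * 1000)
    # Propagation: signal travel time in fiber (~200,000 km/s = 200 km/ms).
    propagation_ms = distance_km / 200.0
    # Processing, queuing, and serialization recur at every hop.
    per_hop_ms = processing_ms + queuing_ms + serialization_ms
    return hops * per_hop_ms + propagation_ms

# 1500-byte packet, 100 Mbps links, 4000 km path, 5 hops
print(end_to_end_latency_ms(1500, 100, 4000, hops=5))
```

Even in this toy model, propagation dominates on long paths – which is why moving content closer to users (the next quote's territory) matters so much.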
Poorly peered Internet relationships and congestion remain the top contributors to latency in web and mobile apps, even when using a Content Delivery Network. Seventy-five percent of an application's page load time comes from network latency. Even after fixing poorly constructed HTML and ensuring your app has no blocking calls, you still have the Internet to deal with.
Chief Architect, Cedexis
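When the public Internet itself is the bottleneck, one common mitigation is to steer each user to whichever endpoint is currently fastest for them, based on real-user latency measurements. A minimal sketch of that idea – the endpoint names and sample data below are hypothetical:

```python
from statistics import median

def pick_endpoint(latency_samples):
    """Return the endpoint with the lowest median observed latency.

    latency_samples maps endpoint name -> recent round-trip times in ms,
    e.g. collected from real-user monitoring beacons in the browser.
    """
    return min(latency_samples, key=lambda name: median(latency_samples[name]))

samples = {
    "cdn-east": [34, 31, 88, 30],   # occasional congestion spike
    "cdn-west": [52, 55, 51, 54],
}
print(pick_endpoint(samples))       # -> cdn-east (median 32.5 ms)
```

Using the median rather than the mean keeps one congested sample from flipping the routing decision.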
Applications can become overloaded by changes in the business environment. More jobs, workloads, or users can in turn negatively affect performance. For example, if your Microsoft Exchange send and receive queue lengths grow for mailbox databases, or if users are experiencing logon latency, you must look not only at the processing and memory resources of the Exchange servers, but also at the availability of flash storage.
VP of Engineering, Comtrade Software
12. RESOURCE AVAILABILITY
Application performance is most impacted by resource availability. As applications migrate to digital media, it can be difficult for businesses to gauge the resources necessary to deliver consistent performance. It is critical to leverage solutions that can dynamically detect and scale resource requirements through virtualization, orchestration, and automation, providing the cloud-like elasticity and agility that organizations require for successful and consistent application performance.
Director of Application Delivery Solutions, Radware
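The elasticity described above is usually implemented as a feedback loop: compare observed utilization against a target and resize the pool accordingly. The formula below follows the style of the Kubernetes Horizontal Pod Autoscaler (desired = ceil(current × observed / target)); the 60% CPU target is an assumption, and real autoscalers add tolerances and cooldown periods on top of this:

```python
import math

def desired_replicas(current_replicas, current_cpu, target_cpu=0.60):
    """HPA-style scaling decision: grow when hot, shrink when idle."""
    return max(1, math.ceil(current_replicas * current_cpu / target_cpu))

print(desired_replicas(4, 0.90))  # overloaded: scale 4 -> 6
print(desired_replicas(4, 0.30))  # underused:  scale 4 -> 2
```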
13. CACHING BOTTLENECKS
Application speed and scalability are forever intertwined. With load-triggered elastic scaling in the cloud, finding every application bottleneck and applying strategic caching architectures are more important than ever. At times, code or database queries in a given framework or architecture cannot be refactored quickly enough. Failure to cache wherever possible translates directly into a higher number of host instances, which means a higher cost of doing business in the cloud. Bottlenecks tend to move around within an evolving application that has a high release rate and a lot of developers. A solid APM technology can quickly and automatically identify a deviation from baseline performance at any tier, essentially safeguarding user experience, brand reputation, and digital trust. Hosting an application in the cloud without an APM tool watching for bottlenecks is throwing money away.
CTO, HITS Inc.
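The payoff of "cache wherever possible" is easy to see in miniature: memoizing one hot read path collapses repeated backend hits into a single call. The product lookup below is a hypothetical stand-in for a slow database query.

```python
import time
from functools import lru_cache

CALLS = {"db": 0}  # count how often we actually hit the "database"

def fetch_product_from_db(product_id):
    """Stand-in for a slow database query (hypothetical)."""
    CALLS["db"] += 1
    time.sleep(0.05)  # simulate query latency
    return {"id": product_id, "name": f"product-{product_id}"}

@lru_cache(maxsize=1024)
def get_product(product_id):
    """Cached read path: repeated lookups skip the database entirely."""
    return fetch_product_from_db(product_id)

for _ in range(100):
    get_product(42)          # one DB call instead of one hundred
print(CALLS["db"])           # -> 1
```

The hard parts in production are the ones this sketch omits – invalidation, TTLs, and cache sizing – which is why a shifting bottleneck can reappear behind a cache that was once effective.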
Read Top Factors That Impact Application Performance 2016 - Part 3, covering how the application interacts with the backend and the front end.