In 2013, APMdigest published a list called 15 Top Factors That Impact Application Performance. Even today, it is one of the most popular pieces of content on the site. And for good reason – the whole concept of Application Performance Management (APM) starts with identifying the factors that impact application performance, and then doing something about them. However, in the fast-moving world of IT, many aspects of application performance have changed in the three years since the list was published. And many new experts have come on the scene. So APMdigest is updating the list for 2016, and you will be surprised how much it has changed.
Part 5 is the final installment of the list of top factors that impact application performance.
27. CODE INTEGRATION
As application topologies become more and more distributed, the need for seamless code integration between applications in new releases has become a significant factor in application performance. This is especially true in the case of expanding IT departments when new employees are not always familiar with the application topologies and dependencies in an organization.
Founder & CEO, Correlsense
28. PACE OF INNOVATION
Developers are reacting to unrelenting pressure from the business to implement more business functionality in less time, at a lower cost of development, and then to evolve that code more frequently. These pressures have driven a tremendous amount of innovation in process areas like Agile and DevOps, and in new languages (PHP, Python, Ruby, Node.js) that collectively improve developer productivity. But all of these process and technology improvements abstract the developer from the performance characteristics of their code. Docker is just the latest example of this. So the number one factor that impacts application performance is that the pace of innovation in application stacks, in response to business pressures, makes measuring and ensuring application performance more difficult. This is THE challenge that the APM vendors must address.
29. LACK OF TESTING
Not testing performance early in development, and not testing it later in production. Today's tools make it easier to "shift left," moving performance testing into the development cycle so that all new code can have not only unit, smoke, and functional tests, but also performance tests that detect performance regressions and defects before the code becomes part of the project. Allowing code that performs poorly into a project increases the cost of addressing the defect later. Adding performance testing as a "shift right" into production ensures that the production system truly can scale and perform well when demand is higher than a development or pre-prod test would simulate. Testing in production also allows third-party components to be tested as part of an integrated performance load test. You don't want a third-party feature to be the blocking item that can't perform at scale.
Sr. Evangelist, SOASTA
The biggest factor that impacts application performance is a lack of experience, which includes knowledge. Performance (meaning transactional performance and scalability) gets plenty of lip service, but how many people really test for performance at every build? Think about a scalable and fast architecture from day one, from the messaging platform to the backend to the use of Angular to the load balancers: everything has an impact. A culture of testing at every build, and setting clear SLAs, drives true performance. There is no way around it.
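The "test for performance at every build" advice above can be sketched as a minimal performance regression test that fails the build when code exceeds a latency budget. The function under test and the budget here are hypothetical stand-ins; a real budget would come from the SLAs the team has set:

```python
import time

# Hypothetical function under test -- stands in for any new code path.
def build_report(rows):
    return sorted(rows, key=lambda r: r["id"])

# Illustrative latency budget; in practice this comes from your SLA.
LATENCY_BUDGET_SECONDS = 0.5

def test_build_report_performance():
    rows = [{"id": i} for i in range(100_000, 0, -1)]
    start = time.perf_counter()
    build_report(rows)
    elapsed = time.perf_counter() - start
    # Fail the build if the new code regresses past the budget.
    assert elapsed < LATENCY_BUDGET_SECONDS, f"performance regression: {elapsed:.3f}s"
```

Run under any test runner on every build, a check like this catches a regression the moment it is introduced, rather than after the code has shipped.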
30. INEFFICIENT COMMUNICATION
Over the past decade, IT organizations have heavily invested in APM and UEM solutions to become aware of potential performance issues even before consumers of the service feel the pain. New-generation APM tools go even further, with infrastructure discovery, analytics, and deep code analysis to refine and speed up diagnosis when something goes wrong. This is all good, but it must be recognized that these same organizations tend to squander these efficiency gains through immature communication processes. I believe that no matter how fast IT becomes aware of an application performance issue, today the top factor that impacts application performance and customer experience is really the ability, or inability, of the IT organization to respond quickly enough to prevent the issue from getting bigger and the performance from deteriorating even further.
Senior Director of Product Marketing, IT Alerting & IoT, Everbridge
31. CHANGE
Numerous factors can impact application performance – a mistake in design, application defects, insufficient capacity and many others. However, for any of these factors to impact the application, a change has to happen. Application, infrastructure, data, workload or capacity – something must change for performance to deteriorate. Hence, the top factor that impacts application performance is change. To ensure maximum performance it is critical to know "what's changed?" and to detect early the changes that are causing negative impact. Today, most application performance management tools still focus mainly on application transaction performance and availability. Leading vendors have started to explore application logs for additional information about application behavior. Change is the key missing piece required to manage application performance. Change detection, correlation of changes with performance events, and risk assessment of changes are critical capabilities IT Operations needs to become truly proactive in maintaining optimal application performance.
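As a rough illustration of the change-correlation capability described above, the sketch below pairs each performance event with the changes that preceded it within a time window. The records and the window size are invented for the example; a real tool would pull changes from a deploy pipeline or CMDB and events from an APM alert stream:

```python
from datetime import datetime, timedelta

# Illustrative records: recorded changes and detected performance events.
changes = [
    {"what": "app v2.3 deploy", "at": datetime(2016, 5, 1, 10, 0)},
    {"what": "db index dropped", "at": datetime(2016, 5, 1, 14, 30)},
]
events = [
    {"what": "p95 latency spike", "at": datetime(2016, 5, 1, 10, 7)},
]

def correlate(changes, events, window=timedelta(minutes=15)):
    """Pair each performance event with changes that preceded it within the window."""
    suspects = []
    for e in events:
        for c in changes:
            if timedelta(0) <= e["at"] - c["at"] <= window:
                suspects.append((e["what"], c["what"]))
    return suspects

print(correlate(changes, events))
# -> [('p95 latency spike', 'app v2.3 deploy')]
```

A change landing shortly before a spike becomes the prime suspect for diagnosis, which is exactly the "what's changed?" question the quote argues most tools leave unanswered.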
32. UNKNOWN UNKNOWNS
From reading APM reviews on IT Central Station, I see that a common theme is that an "unknown unknown" is what most concerns IT and DevOps managers. Examples of "unknown unknowns" that impact app performance include the way an application responds to unanticipated application behavior (e.g., "80% of users are coming from mobile devices!"), user behavior (e.g., "We didn't expect users to keep hitting that button."), and/or load (e.g., "A traffic spike of 600% during the summer!?").
Founder and CEO, IT Central Station