APMdigest's 2017 Application Performance Management Predictions is a forecast by the top minds in APM today. Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2017. Part 4 covers cloud, containers and microservices.
31. APM AND ITOA EVOLVE TO ADDRESS CLOUD
As companies choose new cloud management platforms for on-demand service delivery to the business, IT support teams will increasingly look for new application and infrastructure monitoring systems for those on-demand services. These new monitoring systems will need to support the chosen cloud platform and new cloud-native technologies being deployed on these platforms, plus be compatible with on-demand provisioning systems (i.e., do you want monitoring with that service?) and support the dynamic scaling inherent in today's cloud management platforms. Choosing the right cloud-compatible monitoring technology will increase IT agility, allowing their customers to self-provision services while maintaining control and providing quality services.
VP Products, SL Corporation
We finally see medium and large enterprises expanding into the cloud and truly leveraging full cloud capabilities (self-service, elasticity, self-healing). Incident recovery in the cloud becomes much easier through automated resource allocation, replacement, and rollbacks. As a result, the focus of IT Operations Analytics shifts from minor incidents to problems that are far more difficult to resolve. APM and ITOA technologies will introduce a number of key capabilities to address this shift: visibility into the state of a complete cloud stack and its history; support for the dynamic nature of the cloud (e.g., identifying spun-up and stopped server instances and containers, tracking instances moved across the physical infrastructure, mapping server instances and containers to source images and provisioned software baselines); understanding of changes to cloud resources and cloud infrastructure; and automatic analysis of all this information, generating insights that can trigger manual or automated operations actions to prevent issues, facilitate incident recovery, and remediate underlying problems.
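The "identify spun-up and stopped instances" capability described above can be reduced to a very simple core: diffing successive inventory snapshots. The sketch below is a hypothetical illustration (the instance names and polling model are assumptions, not any vendor's API):

```python
# Hypothetical sketch: detect dynamic cloud changes by diffing two
# inventory snapshots (the set of instance IDs seen at each poll).
def diff_inventory(previous: set, current: set) -> dict:
    """Return which instances appeared or disappeared between polls."""
    return {
        "started": sorted(current - previous),
        "stopped": sorted(previous - current),
    }

before = {"web-1", "web-2", "worker-1"}
after = {"web-2", "worker-1", "worker-2"}

print(diff_inventory(before, after))
# → {'started': ['worker-2'], 'stopped': ['web-1']}
```

A real monitoring system would feed each diff into its topology model, so that a "stopped" instance is retired from dashboards instead of firing a false host-down alert.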
32. APM PROVIDES CLOUD NATIVE VISIBILITY
Traditional Application Performance Management (APM) and monitoring solutions are not designed for cloud native environments, where many (hundreds, even thousands of) microservices run concurrently. How do you trace a particular business transaction? How can you tell, from all the available monitoring data, that a microservice is slow when the service is dynamic and the environment changes completely within a few seconds? New solutions on the market address these needs – it is a highly active area with many startups. Cloud native adoption is relatively new and mostly at the proof-of-concept stage in large enterprises, but 2017 will see the major APM vendors begin to address the particular needs of this market segment.
Principal Analyst, Ovum
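Tracing one business transaction across hundreds of concurrent microservices usually starts with one simple mechanism: every hop carries the same correlation ID. The sketch below is illustrative only (the header name and function names are assumptions, not a specific product's interface):

```python
# Illustrative sketch: tag every hop of a business transaction with one
# correlation ID, so logs from many services can be reassembled later.
import uuid

def log(corr_id: str, message: str) -> None:
    print(f"corr_id={corr_id} {message}")

def handle_request(headers: dict) -> dict:
    """Reuse the caller's correlation ID, or mint one at the edge."""
    corr_id = headers.get("X-Correlation-ID") or str(uuid.uuid4())
    # Every log line and every downstream call carries the same ID,
    # so the transaction can be traced no matter where each piece ran.
    log(corr_id, "checkout started")
    return {"X-Correlation-ID": corr_id}  # forwarded to the next service

downstream_headers = handle_request({"X-Correlation-ID": "abc-123"})
```

Searching the aggregated logs for one `corr_id` then yields the full story of one transaction, even when the participating service instances no longer exist.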
In 2017, we expect an increase in customer deployment of cloud native workloads, requiring an APM solution that is dynamic and cognitive enough to address the complexities of these new workloads.
Offering Management, Application Insights, IBM
Read Arun Biligiri's blog: 3 Focus Points for Future of Application Management
Driven, in part, by a motivation to be cloud-native, we will see a shift in focus from "outside-in" APM to instrumentation provided at the source by the app developer. And just as network monitoring is now the domain of the switch vendor, Platform- and Infrastructure-as-a-Service providers will step up the level of native instrumentation too.
Chief Evangelist, Moogsoft
33. APM OPTIMIZED FOR HYBRID CLOUD
As customers shift content to the cloud, 2017 will drive APM solutions to be optimized for hybrid application management. A generalized APM solution that manages cloud native and traditional on-prem applications together is a step forward from today's specialized point solutions for cloud native workloads.
Offering Management, Application Insights, IBM
Visibility Creates a Competitive Advantage for the Hybrid Enterprise: After deploying a hybrid environment, which can be complex and difficult to manage, the work is just beginning for the enterprise. The process continues as application requirements and business needs evolve. So, to increase agility, IT is always evaluating and adopting cloud services and related technologies like PaaS, containers and microservices to deliver applications faster. IT organizations will keep an eye on the longer-term to ensure that they can scale as usage increases. We expect to see greater adoption of application and network management functionality to ensure visibility into the hybrid cloud, creating more trust in IT and alignment to business objectives.
Sr. Director, Technology Strategist, Riverbed
Read Sean Applegate's blog: Trends and Predictions for 2017
IT infrastructure monitoring today involves physical and virtual resources that are both persistent and ephemeral. With data centers becoming hybrid in a multi-cloud eco-system, application and network monitoring have also become more complex and multi-dimensional. While analytics, telemetry, correlation, automation, and remediation are important, visualization and end-user experience are rising as critical elements in successfully managing and operating virtualized and cloud based deployments. With so much innovation taking place, in 2017 and beyond IT monitoring will be required to continue its transformation to help operators achieve backend operational efficiency in hybrid, multi-cloud data centers by delivering visibility and management across virtual and physical, as well as persistent and ephemeral environments.
VP of Product Management and Marketing, PLUMgrid
APM will have the greatest impact in traditional enterprises such as banks, which are looking to use prebuilt applications both on-premises and in the cloud, applying APM across their entire stack.
CEO, Unravel Data
34. TAKING RESPONSIBILITY FOR SAAS
Enterprise IT teams realize they can't outsource responsibility for SaaS. 2017 will be the year that Enterprise IT teams realize that they are responsible for the performance of 3rd party SaaS applications such as Office 365 and G-Suite. They've been able to skate by for a few years, but now that these applications are officially business critical, they will realize that they don't have the ability to monitor, measure or manage these services. This will impact their organizational role as they move from being the builder of servers to being the broker of cloud services.
35. RETHINKING APM FOR SERVERLESS
In 2017, serverless application architectures will grow significantly, bringing about a rethinking of application performance monitoring, with more concern for the application's Quality of Experience. In addition, APM will start to expand from application performance to application performance and delivery monitoring. Real User Monitoring (RUM) will become more fully integrated with APM solutions as a measure of application Quality of Experience. Lastly, APM customers will accelerate the use of APM data for automated application optimization and delivery.
VP of Marketing and Business Development, Cedexis
As applications use serverless technologies to deliver end-customer functionality, APM services will also have to start managing the end-to-end transaction, including "serverless functions".
VP of Product Marketing, Sumo Logic
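Because a serverless function has no long-running host for an agent to live on, instrumentation typically moves into the function itself, for example as a wrapper around the handler. The sketch below is a hedged illustration (the handler, event shape, and metric destination are assumptions):

```python
# Hedged sketch: instrument a serverless-style handler with a timing
# wrapper, since there is no persistent host for an APM agent.
import functools
import time

def timed(handler):
    """Record wall-clock duration for each invocation of a handler."""
    @functools.wraps(handler)
    def wrapper(event):
        start = time.perf_counter()
        try:
            return handler(event)
        finally:
            duration_ms = (time.perf_counter() - start) * 1000
            # In practice this metric would be shipped to an APM backend.
            print(f"{handler.__name__} took {duration_ms:.1f} ms")
    return wrapper

@timed
def resize_image(event):
    # Hypothetical business logic for one function invocation.
    return {"status": "ok", "size": event["size"]}

print(resize_image({"size": "256x256"}))
```

The `try/finally` matters: the duration is recorded even when the handler raises, so failed invocations still show up in performance data.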
36. CLOUD DRIVES IT OPS TO FOCUS ON BUSINESS SERVICES
IT Operations will move from an infrastructure and on-premises orientation to a cloud and business services orientation. Today's cloud-first world is forcing IT operations professionals to view the health of business services in their entirety and in real time, not just by the individual uncorrelated pieces of infrastructure that make it up. With business services as the lens, IT Operations will take a more predictive approach to preventing service outages by proactively identifying the symptoms of problems and being able to quickly remediate issues using automation all while doing so in a traceable and governed manner. Separate tools that rely on multiple systems of record and expensive integrations will fade, and costly maintenance of on-premises systems will reduce over time.
Senior Director, ServiceNow
37. APM FOR MICROSERVICES AND CONTAINERS
Dynamic application environments such as containerized applications and microservices architectures are becoming increasingly popular. Many organizations are expected to start using these technologies in their production environments as opposed to previous years where the technologies were used primarily in testing environments. IT departments should look to adopt the right APM solution to simplify the management of these complex environments and to contend with their dynamic resource requirements.
Applications Manager Market Analyst, ManageEngine
The tectonic shift in the application substrate marked by microservices and containerization will reach enterprise-level maturity and adoption in 2017, requiring a new approach to application monitoring. APM for dynamic microservices will need to evolve beyond code-tracing methodologies and adopt data science approaches that collect and correlate metrics in real time from thousands of microservices, classify the metadata, understand the characteristics of the services, and quickly identify patterns that point to anomalies in system or application behavior.
VP Marketing, OpsClarity
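The "data science" approach described above can be illustrated with its simplest building block: flagging a metric sample as anomalous when it sits far from the recent mean, measured in standard deviations. This is a minimal sketch, not the sophisticated classification a real system would use; the latency values are invented:

```python
# Minimal sketch: z-score anomaly check over a window of recent samples.
import statistics

def is_anomaly(history, sample, threshold=3.0):
    """True if `sample` is more than `threshold` std devs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > threshold

latencies_ms = [100, 102, 98, 101, 99, 100, 103, 97]
print(is_anomaly(latencies_ms, 101))  # normal sample
print(is_anomaly(latencies_ms, 450))  # sudden spike
```

Run per service and per metric, a check like this needs no hand-set thresholds, which is what makes it viable across thousands of short-lived microservices.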
The rapid adoption of containers and microservices makes it imperative to rethink APM in the context of highly dynamic applications and infrastructure. Just collecting Docker performance metrics is not enough; APM solutions need to support these loosely coupled (and at times, ephemeral) services as first-class citizens to ensure the end-customer experience.
VP of Product Marketing, Sumo Logic
2017 will be the year of the container application, delivered as single-serving entities for just-in-time services. As microservice-based architectures become mainstream, there will be a need for microservice-based application monitoring, built on the premise of short-lived application instances that can run nearly anywhere. Likewise, with the proliferation of converged and hyper-converged infrastructure (including web-scale cloud providers), classic APM that focuses solely on the compute layer will be insufficient. In a hyper-converged environment, you will need hyper-converged monitoring that is aware of both the application dependencies and the underlying infrastructure dependencies.
38. MICROSERVICES AND CONTAINERS: DISTRIBUTED TRACING
2017 will be the year of fully integrated solutions that can do distributed tracing, because today's generic APM solutions are not a good fit for the paradigm shift that microservices bring.
CEO and Co-Founder, RisingStack
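At the heart of distributed tracing is the span model: each unit of work records which trace it belongs to and which span called it, so a cross-service request can be reassembled into a call tree. The sketch below is hand-rolled for illustration and is not any real tracing library's API:

```python
# Hand-rolled sketch of the span model behind distributed tracing.
import time
import uuid

class Span:
    def __init__(self, name, trace_id=None, parent_id=None):
        self.name = name
        # One trace_id is shared by every span in a single request.
        self.trace_id = trace_id or str(uuid.uuid4())
        self.span_id = str(uuid.uuid4())
        self.parent_id = parent_id
        self.start = time.time()

    def child(self, name):
        """A downstream call inherits trace_id and points back at us."""
        return Span(name, trace_id=self.trace_id, parent_id=self.span_id)

root = Span("api-gateway")
auth = root.child("auth-service")
orders = root.child("order-service")

# All three spans share one trace_id, so a collector can rebuild the tree.
assert auth.trace_id == root.trace_id == orders.trace_id
assert orders.parent_id == root.span_id
```

In a real deployment the `trace_id`/`parent_id` pair is propagated between services in request headers, and each service ships its finished spans to a central collector.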
39. MICROSERVICES AND CONTAINERS: DISCOVERY AND MAPPING
More focus will be placed on the ability to easily map interdependencies between components within the data center, for both internal and public cloud. Microservices and containers, as an initial trend, will set the overall tone in more traditional IT departments, requiring the basic ability to map dependencies dynamically in real time. We will see more discovery functionality coming from APM vendors.
Founder and CEO, Correlsense
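Discovery-based dependency mapping can be reduced to folding a stream of observed caller→callee calls into an adjacency map. The sketch below is a hypothetical illustration (the service names are invented, and real discovery would source the call pairs from network or instrumentation data):

```python
# Hypothetical sketch: build a service dependency map from observed calls.
from collections import defaultdict

def build_dependency_map(observed_calls):
    """observed_calls: iterable of (caller, callee) pairs."""
    graph = defaultdict(set)
    for caller, callee in observed_calls:
        graph[caller].add(callee)
    # Sets collapse duplicate observations; sort for stable output.
    return {service: sorted(deps) for service, deps in graph.items()}

calls = [
    ("frontend", "cart-service"),
    ("frontend", "auth-service"),
    ("cart-service", "postgres"),
    ("frontend", "cart-service"),  # duplicate observation, collapses
]
print(build_dependency_map(calls))
```

Rebuilding this map continuously, rather than from a static CMDB, is what lets the topology stay accurate as containers come and go.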
40. CONVERSATIONAL INTERFACES POSE NEW PERFORMANCE CHALLENGES
With the advent of conversational interfaces like Siri and Alexa, customer expectations are rising for "instant" interactions with the companies they engage with. And users are impatient: the software and infrastructure supporting these interactions must perform well, as the slightest delay can cause a user to abandon the interaction and move on to another brand. We expect that as companies begin to implement conversational interfaces and chatbots, the velocity at which IT will need to operate will only increase. Innovation requires velocity; only by moving quickly to address customer needs will companies maintain loyalty. But velocity also brings risk, as rapid change can lead to instability, so it will be important that IT departments carefully monitor both the technical applications and the business data contained therein to ensure they are achieving their desired business outcomes and not causing disruptions. Adopting these interaction models will also likely require new services running on new kinds of infrastructure, which introduces even more complexity to an already highly complex IT environment. As the user's requests flow through the multitude of applications and their infrastructure, it is important that the interactions be traced completely, end-to-end, to ensure that users are not negatively impacted as changes in the environment occur.
Head of Product Marketing, AppDynamics
Read 2017 Application Performance Management Predictions - Part 5, the final installment.