Industry experts — from analysts and consultants to users and the top vendors — offer thoughtful, insightful, and often controversial predictions on how APM and related technologies will evolve and impact business in 2018. Part 6 covers ITOA and data.
The amount of data facing ITOps practitioners is only going to grow in the coming year and teams will be faced with the increased challenge of finding the signal in the noise — and fast — to resolve incidents. As a result, it will be necessary for ITOps to reexamine previous assumptions around automation and responsibility.
Head of DevOps, PagerDuty
By utilizing smart data, which distills the traffic flows that traverse the service delivery infrastructure close to the source, in a distributed fashion, and compresses them into metadata, businesses can ensure they store only the information that holds real value. That information can then be used to gain meaningful, actionable insights, helping organizations gain a competitive edge while driving efficiency: rapid compression reduces the volume of stored data by an order of magnitude or more. Smart data already powers a range of service, operations, and business analytics across industries including automotive, manufacturing, and healthcare, and we expect its usage to increase dramatically in 2018. With the proliferation of IoT sensors, mobile devices, and digital services creating an abundance of data used by the applications and services that rely on hybrid cloud infrastructure, the ability to convert smart data into meaningful and actionable IT and business insights will help corporations thrive in 2018 and beyond.
Area VP, Strategy, NetScout
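The reduction described above (collapsing raw traffic flows into compact per-service metadata close to the source) can be sketched in a few lines. This is only an illustration of the idea, not any vendor's actual pipeline; the record fields (`service`, `bytes`, `latency_ms`) are invented for the example.

```python
from collections import defaultdict

def distill_flows(flow_records):
    """Collapse raw flow records into compact per-service metadata.

    Each record is a dict like {"service": "checkout", "bytes": 1420,
    "latency_ms": 12}. Instead of storing every record, we keep one
    summary row per service: the kind of order-of-magnitude reduction
    that "smart data" aims for.
    """
    summary = defaultdict(lambda: {"flows": 0, "bytes": 0, "max_latency_ms": 0})
    for rec in flow_records:
        s = summary[rec["service"]]
        s["flows"] += 1                 # flow count per service
        s["bytes"] += rec["bytes"]      # total traffic volume
        s["max_latency_ms"] = max(s["max_latency_ms"], rec["latency_ms"])
    return dict(summary)
```

Ten thousand raw records for one service become a single summary row, which is what makes the downstream analytics cheap to store and fast to query.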
APM CONVERGED DATA STORES
Today's APM tools, and monitoring in general, have discrete silos of data for time series, transactions, and logs. In 2018, we will begin to see the first converged data stores, which will unlock the ability to answer questions far more easily than today's tools allow.
VP of Market Development and Insights, AppDynamics
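To see why convergence matters, consider a question that today spans three separate tools: "what happened around this slow transaction?" A minimal in-memory sketch of answering it in one pass follows; the record shapes are invented for illustration, not taken from any product.

```python
def explain_slow_transaction(txn, metrics, logs, window_ms=500):
    """One cross-silo question, answered in one query: gather the metric
    samples and log lines surrounding a slow transaction.

    txn:     {"trace_id": str, "ts": epoch_ms, "duration_ms": int}
    metrics: [{"ts": epoch_ms, "name": str, "value": float}, ...]
    logs:    [{"ts": epoch_ms, "trace_id": str or None, "message": str}, ...]
    """
    lo = txn["ts"] - window_ms
    hi = txn["ts"] + txn["duration_ms"] + window_ms
    return {
        # metric samples inside the transaction's time window
        "metrics": [m for m in metrics if lo <= m["ts"] <= hi],
        # log lines that either share the trace id or fall in the window
        "logs": [entry for entry in logs
                 if entry.get("trace_id") == txn["trace_id"]
                 or lo <= entry["ts"] <= hi],
    }
```

With siloed stores, each of those list comprehensions is a separate query against a separate system, stitched together by hand; a converged store makes the join a single operation.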
Read Jonah Kowall's Blog: Looking Back at 2017 APM Predictions - Did They Come True?
APPLICATION-CENTRIC APPROACH TO BIG DATA
In the past, people were focused on learning the various big data technologies. It took time for users to understand, differentiate, and ultimately deploy them. There was a lot of debate and plenty of hype. Now that organizations have cut through the noise and figured all that out, they're concerned about actually putting their data to use. The enterprise doesn't really care about the technology being used. It's not important which distribution or database or analytics they're using, what matters is the result. The enterprise has realized this and we can expect to see an increased adoption of an application-centric approach to big data in the coming year.
CEO, Unravel Data
NEW PERFORMANCE METRICS
We'll see new patterns of backend performance problems stemming from the broader adoption of containers and microservices architectures. Our performance palette will expand to include new measurements such as micro-pause delays, herd effects, and cold-start times.
Sr. Product Manager, Riverbed
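As a rough illustration of what a micro-pause measurement might look like, the sketch below samples a timer loop and flags wake-ups that arrive much later than scheduled, the signature a GC pause or container CPU throttle would leave. The thresholds are arbitrary, chosen only for the demo.

```python
import time

def detect_micro_pauses(duration_s=0.2, tick_s=0.005, threshold_s=0.05):
    """Crude micro-pause detector: sleep in small ticks and record any
    wake-up that arrives much later than scheduled (e.g. a GC pause,
    CPU throttling, or noisy-neighbor contention in a container)."""
    pauses = []
    deadline = time.monotonic() + duration_s
    last = time.monotonic()
    while time.monotonic() < deadline:
        time.sleep(tick_s)
        now = time.monotonic()
        gap = now - last - tick_s   # how far past the scheduled wake-up
        if gap > threshold_s:
            pauses.append(gap)
        last = now
    return pauses
```

On an idle machine this returns an empty list; under CPU-quota throttling it surfaces the stall durations that a coarse per-minute CPU average would hide.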
DBA TAKES ON IT OPS ROLE
One of the most significant changes we will see in 2018 will be a shift toward a more collaborative relationship between IT infrastructure managers and database administrators (DBAs). As more applications run in the cloud, senior DBA managers will be able to take a more central role in troubleshooting problems and improving efficiency. DBAs will look beyond just the application and database to find and fix issues. Both IT and application teams will need tools that look deeply into the cloud infrastructure to identify the causes of performance and availability issues and provide accurate recommendations for addressing them.
President & CEO, SIOS Technology
2018 will see a de-emphasis on centralized cloud-based management and traditional data lakes, and a shift toward distributed analysis. This will be driven not only by explosive growth in IoT, but also by major cloud vendors' support for edge computing.
RESTFUL API DRIVES ITOA
The advent of efficient RESTful APIs on many services and applications, coupled with the maturation of time-series databases such as OpenTSDB and InfluxDB, will drive IT operations analytics toward more quantitative approaches and lead to advances in root cause analysis. This is due to the high storage efficiency of time-series databases and the speed with which their optimize-on-write approaches can accept data; it is now increasingly practical to track large volumes of quantitative data. RESTful API endpoints from applications and cloud services are rich in metrics, and the same types of APIs are efficient at accepting those metrics as data streams. With these large volumes of contemporaneous, high-cardinality time-series data sources, operations analysis will become possible at a far greater scale than before. Cross-correlation will yield forensic insight into failures. In contrast, predictive time-series analysis based on autoregressive/moving-average models, while mathematically practical, will fail to yield significantly valuable results on operations data, with rare exceptions.
Director of Sales Engineering, GroundWork
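As a concrete example of an API that is "efficient at accepting metrics," InfluxDB's HTTP write endpoint accepts line protocol, a compact plain-text format. Below is a minimal formatter for it; note that it deliberately skips the escaping of spaces and commas that a real producer must handle.

```python
def to_influx_line(measurement, tags, fields, ts_ns):
    """Render one sample in InfluxDB line protocol:

        measurement,tag=val field=val timestamp

    Integer fields carry an 'i' suffix; floats are written bare.
    NOTE: special characters (spaces, commas, '=') are not escaped here.
    """
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {ts_ns}"
```

A batch of such lines, newline-separated, can then be POSTed to the database's write endpoint, which is what makes high-volume metric ingestion over REST practical.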
IOT INFLECTION POINT
IoT apps need to get out of the hype cycle and address real-world pain points. Devices are still a hassle to set up, and they solve too few real-world use cases. Alexa has shown promise, but IoT platforms as a whole are too fragmented for developers to invest in learning them. Hardware manufacturers will agree on a common set of protocols and open up APIs so devices can work seamlessly.
In my opinion, the biggest occurrence in 2017 was that IoT reached peak hype, giving way to new hype cycles for machine learning (actually, an offshoot of IoT) and for artificial intelligence (a familiar topic area, and one that requires IoT data as fuel for its intelligence). We saw several companies making very large investments in IoT while others scaled back or reorganized their IoT teams. This combination of investment push and pull means that we're at an inflection point: projects now have to deliver results. IoT vendors invested ahead of demand, with all sorts of claims of IoT one-stop shopping. With more capacity in the industry than there is demand, I expect we will see players drop off or shift focus.
Chief Architect, IoT, Red Hat
We've now passed the point at which the human brain can cope with the complexity of modern applications. Meanwhile, businesses have never relied on digital services more than they do today. Every company is a digital company, and every critical IT issue has become a business issue. Therefore, APM solutions must evolve from being early performance-issue detection tools to providing much deeper insight into the other phases of the resolution process. This includes not only root cause diagnosis and identification capabilities, but also self-teaching capabilities that leverage big data and AI-based algorithms and require very limited initial configuration to deliver actionable insights and recommended remediation actions. This will allow DevOps teams to make the best possible decisions to resolve performance issues.
Senior Director of Product Marketing, IT Response Automation, Everbridge
Read 2018 Network Performance Management Predictions, the final installment.