As we enter 2018, businesses are busy anticipating what the new year will bring in terms of industry developments, growing trends, and hidden surprises. In 2017, the increased use of automation within testing teams (where Agile development boosted the speed of release) led to QA becoming much more embedded within development teams than would have been the case a few years ago. As a result, proper software testing and monitoring assumes ever greater importance.
The natural question is – what next? Here are some of the changes we believe will happen within our industry in 2018:
AI Breakthroughs Will Begin
Organizations will make breakthroughs with machine learning and artificial intelligence in 2018, especially when it comes to using this technology to better understand the data they collect.
Often, it's hard to see the physical manifestation of wider concepts like AI, but in our space, physical objects – “intelligent things” – fill that gap. Previously, IoT devices sent data off for limited onward processing; now, machine learning means devices can transform that same data into actionable insight themselves. Real-time feedback will change the behavior of our IoT devices for good.
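To make that concrete, here is a minimal sketch of the kind of on-device feedback loop described above. Everything in it is an assumption for illustration: the device, the window size, and the thresholds are invented, and the "learning" is deliberately simplified to a rolling statistical baseline rather than a trained model.

    # Hypothetical sketch: an IoT sensor that turns raw readings into action
    # locally, instead of only forwarding them to a server. Names and
    # thresholds are invented for illustration.
    from collections import deque
    from statistics import mean, stdev

    class SmartSensor:
        def __init__(self, window=50):
            self.history = deque(maxlen=window)   # rolling window of recent readings
            self.sample_interval = 60             # seconds between readings

        def score(self, reading):
            """Simple anomaly score: distance from the rolling mean in std devs."""
            if len(self.history) < 10:
                return 0.0
            mu, sigma = mean(self.history), stdev(self.history)
            return abs(reading - mu) / sigma if sigma else 0.0

        def ingest(self, reading):
            anomaly = self.score(reading)
            self.history.append(reading)
            # Real-time feedback: anomalous data changes the device's own
            # behavior, here by sampling more often until conditions normalize.
            self.sample_interval = 5 if anomaly > 3.0 else 60
            return anomaly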
Focus on Quality, Security and Resilience
Given the high number of major outages in 2017, it is evident that the industry has not been moving fast enough to address the explosive growth of the IoT and API economy. Some organizations are leading the way and achieving great things in both testing and monitoring; however, most are still disproportionately focused on speed rather than quality, security and resilience.
Looking into 2018, businesses will need to address the overall quality of their services as the competitive landscape evens out. This will result in a renewed focus on monitoring the customer experience and on extensive end-to-end testing, embedded within the delivery lifecycle.
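As a rough illustration of what such an embedded end-to-end check might look like, the sketch below walks a customer journey against a hypothetical storefront API; the base URL, endpoints and payloads are assumptions, not a real service. The same script could run as a gate in the delivery pipeline and as a recurring production monitor.

    # Minimal sketch of an end-to-end check on a customer journey. The service
    # and its endpoints are hypothetical.
    import requests

    BASE = "https://shop.example.com"

    def check_purchase_journey():
        session = requests.Session()
        # Step 1: the landing page renders.
        r = session.get(f"{BASE}/", timeout=5)
        assert r.status_code == 200, f"home page returned {r.status_code}"
        # Step 2: search returns results.
        r = session.get(f"{BASE}/search", params={"q": "widget"}, timeout=5)
        assert r.status_code == 200 and r.json().get("results"), "search failed"
        # Step 3: an item can be added to the basket.
        r = session.post(f"{BASE}/basket", json={"sku": "W-1001", "qty": 1}, timeout=5)
        assert r.status_code == 201, f"add-to-basket returned {r.status_code}"

    if __name__ == "__main__":
        check_purchase_journey()
        print("customer journey OK")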
Services Will be a Key Differentiator
In 2018, services will become more of a differentiating factor as product capabilities converge. Differentiation of services will come down to availability, ease of use and the consistency of a quality experience.
The increased reliance on IoT devices, their data and their management will also drive the need for high availability of the API services these devices talk to. Monitoring the availability of these APIs will be the critical factor in ensuring that the business can continue to run (especially in the manufacturing space), and that business intelligence data can be trusted by decision makers.
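A minimal sketch of such an availability check follows, assuming hypothetical health endpoints and an invented latency budget; in practice the alert hook would hand off to a paging or incident-management system rather than print.

    # Sketch of API availability monitoring. Endpoints, thresholds, and the
    # alert hook are all hypothetical.
    import time
    import requests

    ENDPOINTS = [
        "https://api.example.com/devices/health",
        "https://api.example.com/telemetry/health",
    ]

    def check_availability(url, timeout=3.0):
        """Return (is_up, latency_seconds) for a single health endpoint."""
        start = time.monotonic()
        try:
            r = requests.get(url, timeout=timeout)
            return r.status_code == 200, time.monotonic() - start
        except requests.RequestException:
            return False, timeout

    def run_once(alert=print):
        for url in ENDPOINTS:
            up, latency = check_availability(url)
            if not up:
                alert(f"DOWN: {url}")            # hand off to paging/alerting here
            elif latency > 1.0:
                alert(f"SLOW: {url} took {latency:.2f}s")

    if __name__ == "__main__":
        run_once()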
Customer Experience Will Become More Important Than Ever
Software testing and monitoring has historically been the realm of the IT team, whether the development teams on the testing side or operations on the monitoring side.
In 2018, the digital transformation drive underway in most enterprises, combined with the explosion of IoT devices and the data processing they generate, will draw the focus onto both the quality of the application and the overall customer experience.
Consequently, both testing and monitoring should be of significant interest to the Chief Operating Officer and the Chief Marketing Officer, resulting in more rounded testing with team members drawn from different parts of the business. That is a potential step change in the type of testing carried out, as well as in how visible monitoring results and testing success become within the business.
The Way We Validate Results Will Change
2018 will see the adoption of AI, in the form of machine learning, by major software vendors, who will embed it within their core applications. Machine learning will also become a standard platform for data analytics in new development initiatives. The IoT market will benefit most from this adoption, as the volume of data needing analysis grows exponentially.
This will challenge the testing community, as new ways of testing and validating the results produced by AI will need to be identified and embedded within the development lifecycle.
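One plausible shape for that validation is sketched below, assuming a hypothetical model object with a predict() method: because a learned model's outputs are probabilistic, the test asserts aggregate quality on a labelled hold-out set against an agreed accuracy floor, rather than asserting exact outputs the way a conventional functional test would.

    # Illustrative sketch of validating an ML component within the development
    # lifecycle. The model interface and the accuracy floor are hypothetical.
    def test_model_accuracy(model, labelled_samples, min_accuracy=0.9):
        correct = sum(
            1 for features, expected in labelled_samples
            if model.predict(features) == expected
        )
        accuracy = correct / len(labelled_samples)
        assert accuracy >= min_accuracy, (
            f"accuracy {accuracy:.2%} fell below the agreed floor of {min_accuracy:.0%}"
        )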