2018 Predictions: Rapid Transformation, Smart Data and Mission-Critical Connectivity
January 17, 2018

Michael Segal
NetScout

With more than one-third of IT professionals citing "moving faster" as their top goal for 2018, and an overwhelming 99 percent of IT and business decision makers noticing an increasing pace of change in today's connected world, it's clear that speed has become intrinsically linked to business success.

For companies looking to compete in the digital economy, this pace of transformation is driven by their customers and demands rapid software releases, agility through cloud services, and automation.

Speed becomes a primary business objective

As we look ahead to 2018, we therefore expect businesses to place increased focus on accelerating the development and deployment of applications and services while maintaining quality and cutting costs: objectives that pull in opposite directions. To achieve this, more and more companies will look to elastically expand their infrastructure by moving compute applications and storage workloads to the cloud, delivering services across a hybrid mix of on-premises and public cloud environments.

However, in the rush to embrace digital transformation (DX), organizations must ensure they don't lose sight of whether the hybrid cloud is delivering real business value. To evaluate its effectiveness, it is imperative that organizations continuously monitor their entire infrastructure for a 360-degree view of business services, infrastructure, and their interdependencies, enabling them to quickly identify current or potential problems.
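To make that idea concrete, the sketch below shows the skeleton of such a continuous check in Python. The service names, health endpoints, and dependency map are all hypothetical, and a real monitoring platform would collect far richer telemetry; this is only a minimal illustration of checking services and their interdependencies on a loop.

```python
import time
import urllib.request

# Hypothetical services and their dependencies; a real deployment would
# discover these from the infrastructure rather than hard-code them.
SERVICES = {
    "checkout":  {"url": "http://checkout.internal/health",  "depends_on": ["payments", "inventory"]},
    "payments":  {"url": "http://payments.internal/health",  "depends_on": []},
    "inventory": {"url": "http://inventory.internal/health", "depends_on": []},
}

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Probe a health endpoint; any error or non-200 response counts as unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection failures, timeouts, and HTTP errors
        return False

def check_all() -> dict:
    """One monitoring pass: check every service, then flag services whose
    dependencies are down, so the interdependency impact is visible."""
    status = {name: is_healthy(svc["url"]) for name, svc in SERVICES.items()}
    for name, svc in SERVICES.items():
        bad_deps = [d for d in svc["depends_on"] if not status[d]]
        if bad_deps:
            print(f"{name}: at risk, unhealthy dependencies: {bad_deps}")
        elif not status[name]:
            print(f"{name}: unhealthy")
    return status

if __name__ == "__main__":
    while True:          # continuous monitoring loop
        check_all()
        time.sleep(30)   # poll interval; tune to the environment
```

Even a toy loop like this makes the point: without the dependency map, an operator sees three independent health flags; with it, a failing payments service immediately surfaces checkout as at risk.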

Assuring networks and applications will be paramount

DX will also power a surge in momentum for the IoT, with the number of connected devices predicted to reach 23.14 billion in 2018. We expect the IoT to continue to touch all aspects of the digital economy, unlocking enormous benefits in a wide range of sectors, from agriculture to automotive.

With more and more IoT technologies underpinning critical applications, such as disaster monitoring and military situational awareness, and the number of IoT devices and use cases increasing, businesses will be under growing pressure to maintain connectivity and communication across a myriad of wireless and wired, physical and virtual, local and wide area networks. In 2018, assured delivery of IoT services will therefore become a key determinant of success.

As operators in the US and around the world take steps towards delivering 5G connectivity, IoT applications and services stand to benefit significantly from the technology's promise of truly ubiquitous, reliable, scalable, and cost-efficient device-to-device connectivity between nearby mobile devices. This will support use cases such as vehicle-to-vehicle communications, public safety, and mobile data offloading, as well as sensors deployed throughout a smart city. However, for 5G to be truly heralded as a success, organizations and governments will need to know how to assure the availability, reliability, responsiveness, and security of applications and services delivered across their networks.

Environmental data comes to the forefront

With the amount of data in the world predicted to increase at least 50-fold between 2010 and 2020, we'll also start to see growing emphasis on how that data is collected and stored. Collecting large volumes of raw log data from multiple applications and infrastructure components and sending it to a central location for storage and processing, for example, inflates both storage costs and the cost of communications over the Wide Area Network (WAN).
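A rough back-of-the-envelope sketch shows why. The log lines below are synthetic, but the comparison illustrates how much smaller a locally produced summary is than the raw lines it would replace, before anything crosses the WAN.

```python
import json
from collections import Counter

# Synthetic raw access-log lines, standing in for what one edge node
# might generate between shipping intervals.
raw_logs = [
    f"2018-01-17T10:00:{i % 60:02d}Z app=checkout "
    f"status={200 if i % 20 else 500} latency_ms={40 + i % 25}"
    for i in range(10_000)
]

# Naive approach: ship every raw line over the WAN for central storage.
raw_bytes = sum(len(line) for line in raw_logs)

# Smarter approach: aggregate at the source and ship only a summary.
statuses = Counter(line.split("status=")[1].split()[0] for line in raw_logs)
summary = json.dumps({"app": "checkout", "window": "10:00-10:01",
                      "status_counts": statuses})
summary_bytes = len(summary)

print(f"raw: {raw_bytes:,} bytes, summary: {summary_bytes:,} bytes "
      f"({raw_bytes / summary_bytes:,.0f}x reduction)")
```

The exact ratio depends on the workload, of course, but shipping a per-window summary instead of every raw line routinely shrinks the payload by several orders of magnitude.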

Furthermore, the surging demand for data has environmental implications: by 2020, 12 percent of the world's energy consumption will be attributable to our digital ecosystem, a share expected to grow at approximately 7 percent annually until 2030. Because these high costs and inefficiencies could seriously undermine the advantages big data brings, we expect to see more and more businesses take a smarter approach to data collection, organization, and processing, saving not only on storage but also on communications, electricity, and raw materials, and beginning the journey towards a greener, brighter data-driven future.

Data gets smarter

Smart data distills the essence of the traffic flows traversing the service delivery infrastructure in a distributed fashion, close to the source, and compresses it into metadata. By utilizing smart data, businesses can ensure they store only the information that holds real value. This information can then be used to gain meaningful, actionable insights, helping organizations gain a competitive edge while driving efficiencies: because the data is compressed close to where it is generated, the volume stored can be reduced by an order of magnitude or more.
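As a minimal illustration of the distillation idea, the sketch below collapses synthetic per-packet observations into one compact metadata record per flow, close to where the packets are observed. The record fields here are hypothetical; actual smart data implementations extract much richer service-level metadata.

```python
from collections import defaultdict

# Synthetic per-packet observations: (src, dst, dst_port, bytes).
# A real probe would see millions of these on the wire.
packets = [
    ("10.0.0.5", "10.0.1.9", 443, 1400),
    ("10.0.0.5", "10.0.1.9", 443, 1400),
    ("10.0.0.7", "10.0.1.9", 443, 200),
    ("10.0.0.5", "10.0.2.3", 53, 80),
] * 2500  # 10,000 packet records in total

# Distill close to the source: collapse packets into per-flow metadata.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, port, size in packets:
    f = flows[(src, dst, port)]
    f["packets"] += 1
    f["bytes"] += size

for (src, dst, port), stats in flows.items():
    print(f"{src} -> {dst}:{port} "
          f"packets={stats['packets']} bytes={stats['bytes']}")

print(f"{len(packets):,} packet records distilled "
      f"into {len(flows)} flow records")
```

Here 10,000 packet records reduce to just a handful of flow records, which is the essence of the order-of-magnitude savings described above: the analytics keep the who-talked-to-whom-and-how-much signal while the raw packets never need to be stored or shipped.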

Smart data is already used to power a range of service, operations, and business analytics across industries including automotive, manufacturing, and healthcare, and we expect its usage to increase dramatically in 2018. With the proliferation of IoT sensors, mobile devices, and digital services creating an abundance of data for the applications and services that rely on hybrid cloud infrastructure, the ability to convert smart data into meaningful and actionable IT and business insights will help corporations thrive in 2018 and beyond.

Michael Segal is VP of Strategy at NetScout