Why Network Change Initiatives Must be Data-Driven
January 13, 2021

Jay Botelho
LiveAction


Enterprise IT infrastructure never ceases to evolve, as companies continually re-examine and reimagine the network to incorporate new technology advancements and meet changing business requirements. But network change initiatives can be costly and time-consuming without a proactive approach to ensuring the right data is available to drive them.

Common network change initiatives today include cloud migrations, new SD-WAN deployments and adopting 802.11ax. Data should be your guide when removing, upgrading or replacing any IT infrastructure and managing a transition to such technologies. You must monitor key performance metrics for all network elements involved before, during and after every network change operation. Failing to do so elevates the risk of failed deployments, hidden performance issues and poor user experiences, both during rollout and long after.

Cloud migrations are a particularly timely network change initiative to examine in order to understand the importance of a data-driven approach. Given COVID-19's impact on how and where people leverage network resources, cloud adoption spiked in 2020. In fact, nearly 60% of enterprises expect cloud technology usage to exceed prior plans due to increasingly distributed and remote work as a result of the pandemic. Let's explore the consequences of a cloud migration project undertaken without the necessary data, how data impacts each step in the process and the visibility you need to succeed.

Cloud Migration Crises Abound Without Data

A cloud migration without foundational data can be an ugly affair. Without baselines for your existing network and application performance, you're likely to be greeted with a complex set of issues to untangle throughout the migration process, ranging from poor connectivity and higher latency to security problems.

For example, after migrating several key business applications, users might experience increased latency. But is it truly worse than before? Is it unacceptable? And are the migrated applications really the cause? Or could it be due to increased VPN connections and bandwidth consumption as more users working remotely attempt to access the new cloud services?

Without solid data from before the migration, these are difficult questions to answer. And they need to be answered quickly because a perceived degradation in performance will encourage employees to circumvent established processes requiring VPN usage to access key cloud-based applications such as Salesforce, WebEx or Zoom. This would change the workflow before a clear diagnosis can be made, and make things less secure by reducing your visibility into user activity and any suspicious anomalies. Data is the key to getting in front of just about every cloud migration issue.

The Role of Data Throughout the Cloud Migration Lifecycle

Rooting your migration in data and leveraging data-driven insights throughout the initiative can deliver end-to-end visibility from on-premises environments into the public cloud, and help ensure a successful rollout. From "Day 0" planning and "Day 1" deployment to "Day 2" ongoing monitoring and optimization, here's why data is king when it comes to cloud migrations:

1. Planning a cloud migration should start with establishing a baseline across your existing IT infrastructure. Here you'll measure key data and metrics to define what's "normal" for network performance levels, application performance trends, and behaviors across users, devices, key services and more. You'll leverage all this information to map out existing bandwidth usage and throughput patterns, SLA requirements and quality of service (QoS) policies. Without collecting and understanding this data upfront, you'll lack the context and specifics needed to truly determine, tune and control how your new cloud deployment is functioning. It's also critical to have solutions in place that ensure the visibility and data you can access pre-deployment carry over across the cloud migration.
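To make the baselining step concrete, here is a minimal sketch of the kind of per-application baseline you might compute from exported flow records. The CSV filename and column names (app, bytes, duration_s, rtt_ms) are hypothetical placeholders, not fields from any particular product.

```python
# Minimal baseline sketch: summarize exported flow records (hypothetical
# CSV columns: app, bytes, duration_s, rtt_ms) into per-application norms.
import csv
import statistics
from collections import defaultdict

def build_baseline(path="flows.csv"):
    rtts = defaultdict(list)        # per-app round-trip times (ms)
    throughput = defaultdict(list)  # per-app throughput samples (bytes/s)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            app = row["app"]
            rtts[app].append(float(row["rtt_ms"]))
            duration = float(row["duration_s"]) or 1.0  # avoid divide-by-zero
            throughput[app].append(float(row["bytes"]) / duration)

    baseline = {}
    for app in rtts:
        baseline[app] = {
            "rtt_p50_ms": statistics.median(rtts[app]),
            "rtt_p95_ms": statistics.quantiles(rtts[app], n=20)[-1],  # ~95th percentile
            "throughput_mean_bps": statistics.fmean(throughput[app]),
        }
    return baseline

if __name__ == "__main__":
    for app, stats in build_baseline().items():
        print(app, stats)
```

Even a rough summary like this gives you something objective to compare against once the same applications are running in the cloud.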

2. Implementing a new cloud deployment successfully will rely heavily on the data you've collected pre-rollout. The cloud migration phase itself will be a true test of how well your team has planned the initiative and whether you've established the historical baselines needed to effectively measure and manage post-migration. You'll need to quickly identify and resolve any network or application performance issues such as poor connectivity, high latency, unforeseen capacity limitations and degraded user experiences, as well as verify the SLAs and QoS policies you established during the planning process.

Whether you're migrating limited portions of your system, such as a few specific databases or servers, or an entire application stack or data center, you need deep, end-to-end visibility from on-prem into the public cloud, and into VPC traffic and the cloud services running through it.

Most cloud monitoring tools are burdensome to manage alongside existing monitoring products and can't provide a comprehensive view of network or application issues that extend across the hybrid environment. This goes for both monitoring dashboards from cloud providers themselves and specialized point solutions.

That's why it's critical to leverage advanced monitoring solutions capable of capturing network traffic that traverses the public cloud and converting it into flow data for in-depth, 360-degree performance analytics and visualization, all within the same integrated solution. Without this level of detail, you'll lack a complete understanding of traffic behavior, application usage and performance within your new cloud infrastructure, and you'll be unable to verify that the new implementation is working as planned.
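To illustrate what "converting traffic into flow data" means in practice, here is a rough, product-agnostic sketch that rolls already-parsed packet headers up into 5-tuple flow records. The packet dictionaries and field names are hypothetical and stand in for whatever your capture tooling produces.

```python
# Illustrative sketch of flow aggregation: group already-parsed packet
# headers (hypothetical dicts) by 5-tuple and roll them up into flow records.
from collections import defaultdict

def packets_to_flows(packets):
    """packets: iterable of dicts with src, dst, sport, dport, proto, bytes, ts."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first_ts": None, "last_ts": None})
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flow = flows[key]
        flow["packets"] += 1
        flow["bytes"] += pkt["bytes"]
        flow["first_ts"] = pkt["ts"] if flow["first_ts"] is None else min(flow["first_ts"], pkt["ts"])
        flow["last_ts"] = pkt["ts"] if flow["last_ts"] is None else max(flow["last_ts"], pkt["ts"])
    return flows

# Example: two packets in the same TCP conversation collapse into one flow record.
sample = [
    {"src": "10.0.0.5", "dst": "172.31.4.9", "sport": 51544, "dport": 443, "proto": "tcp", "bytes": 1200, "ts": 0.00},
    {"src": "10.0.0.5", "dst": "172.31.4.9", "sport": 51544, "dport": 443, "proto": "tcp", "bytes": 800,  "ts": 0.12},
]
print(packets_to_flows(sample))
```

The value of the flow representation is that the same analytics can be applied whether the packets were captured on-prem or in a cloud VPC, which is what makes a single, hybrid view possible.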

3. Effective cloud monitoring and optimization over the long term is heavily dependent on your level of visibility. The third and "final" phase of a cloud migration is all about continual improvement. Your goal is to continuously monitor the deployment, proactively identify and resolve emerging issues before they impact users, and optimize performance to meet your business' needs.

If you've established the baselines and visibility required to identify network and application performance issues across your cloud workloads, you should be able to take advantage of automated alerting that proactively "predicts" potential issues — or at least the warning signs that those issues might arise — so you can mitigate them before they impact the business.
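As a hedged illustration of baseline-driven alerting, the sketch below flags applications whose latest latency sample exceeds a stored 95th-percentile baseline (such as the one computed in the earlier planning sketch) by a configurable margin. The application names and numbers are made up.

```python
# A sketch of baseline-driven alerting: flag applications whose current
# latency sample exceeds the stored 95th-percentile baseline by a margin.
def check_latency(current_rtt_ms, baseline, margin=1.25):
    """current_rtt_ms: {app: latest rtt in ms}; baseline: per-app dicts with rtt_p95_ms."""
    alerts = []
    for app, rtt in current_rtt_ms.items():
        limit = baseline.get(app, {}).get("rtt_p95_ms")
        if limit is not None and rtt > limit * margin:
            alerts.append(f"{app}: {rtt:.1f} ms exceeds {limit * margin:.1f} ms threshold")
    return alerts

# Example with made-up numbers: the first app is within bounds, the second is not.
baseline = {"crm": {"rtt_p95_ms": 80.0}, "conferencing": {"rtt_p95_ms": 40.0}}
print(check_latency({"crm": 85.0, "conferencing": 120.0}, baseline))
```

In a real deployment the thresholds would come from your monitoring platform rather than a hand-built dict, but the comparison against a historical baseline is the essential idea.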

For instance, if you have multiple automated alerts of application latency exceeding your defined threshold, and these reports span a wide range of users, you can proactively test the latency from your location and quickly determine if the source is in the cloud. And if so, assuming you have a solution in place that can also capture network data from the cloud, you can quickly isolate the issue and the source, and take corrective action.
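And here is a rough sketch of that "test the latency from your location" step: timing a TCP handshake to the cloud-hosted service from your own site and comparing the result with what users are reporting. The hostname is a placeholder, and connection failures are simply recorded as missing samples.

```python
# Rough sketch: measure TCP connect latency from this site to a cloud endpoint
# (placeholder host/port) to help decide whether the latency source is the cloud side.
import socket
import time

def tcp_connect_latency_ms(host, port=443, samples=5, timeout=3.0):
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            results.append(None)  # treat failures as missing samples
    return results

if __name__ == "__main__":
    print(tcp_connect_latency_ms("app.example.com"))  # placeholder hostname
```

If latency measured from your site is normal while remote users are still reporting problems, the VPN path or last-mile connectivity becomes the more likely culprit than the cloud workload itself.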

End-to-end visibility that extends from the network to the cloud is a basic requirement for successful cloud migrations today. This is just one example of the many network change initiatives organizations often tackle, and the one major commonality across them all is that their success depends on data. Knowing your network through in-depth data will ensure your team is able to plan, deploy and optimize key network change initiatives that will better support and enable your business in 2021 and beyond.

Jay Botelho is Director of Engineering at LiveAction
