Why Visibility is Critical for DevOps Teams
October 01, 2018

Michael Segal
NetScout


According to recent reports, the majority of businesses now use cloud computing in one form or another. Innovation and agility are key to success in today's fast-moving, competitive environment, and with many legacy systems no longer able to keep up with the demands of digital transformation, it's little surprise that more than two thirds of enterprise workloads are now reported to be in the cloud.

As businesses look to capitalize on the benefits offered by the cloud, we've seen the rise of the DevOps practice which, in common with the cloud, offers businesses the advantages of greater agility, speed, quality and efficiency.

However, achieving this agility requires end-to-end visibility, based on continuous monitoring of applications throughout the software development life cycle (SDLC), in order to establish a common situational awareness. Without it, DevOps teams can find themselves hindered and innovation can stall.

Reaching Maturity

In simple terms, the role of a DevOps team is to produce new software, based on business needs, at very high speed and with the highest possible quality of user experience given the constraints under which it operates. A continuous delivery pipeline, for example, could mean as many as several releases a day, each of which requires code to be built, tested and integrated before being deployed, and each of which must deliver a responsive, reliable service with virtually no downtime.
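
To make that pipeline concrete, here is a minimal sketch in Python, purely illustrative: the stage functions build_artifact, run_tests, integrate and deploy are hypothetical placeholders for real tooling, and a failed gate halts the release so the feedback reaches the team immediately.

    # Purely illustrative sketch of a gated continuous delivery pipeline.
    # Each stage function is a hypothetical placeholder for real tooling
    # (build system, test runner, integration job, deployment system).

    def build_artifact(commit_id: str) -> bool:
        print(f"building {commit_id}")
        return True

    def run_tests(commit_id: str) -> bool:
        print(f"testing {commit_id}")
        return True

    def integrate(commit_id: str) -> bool:
        print(f"merging {commit_id} into the mainline")
        return True

    def deploy(commit_id: str) -> bool:
        print(f"deploying {commit_id}")
        return True

    def run_pipeline(commit_id: str) -> bool:
        """Run one release through the build, test, integration and deployment gates."""
        for stage in (build_artifact, run_tests, integrate, deploy):
            if not stage(commit_id):
                # A failed gate stops the release and feeds back to the team.
                print(f"{stage.__name__} failed for {commit_id}; release halted")
                return False
        return True

    if __name__ == "__main__":
        run_pipeline("commit-abc123")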

How effectively a DevOps team functions depends on its level of maturity, which is shaped by two factors. The first of these is the cultural dimension: the team's ability to collaborate effectively, owning the overall DevOps mission rather than only the specific objectives of the individual teams that comprise the whole, such as Operations or QA.

Until this aspect is mastered, developers tend to focus on the speed of software delivery, QA on testing predefined use cases, and Operations on monitoring the production environment. Each team works within its own domain, often siloed off from the others, without an effective feedback loop or a common situational awareness.

At this stage of organizational maturity, the DevOps team will be focused more on accelerating and optimizing the effectiveness of its individual domains using technologies such as version control management, continuous integration, automated testing, automated deployment and configuration management. Increasing DevOps maturity relies on additional technologies for continuous monitoring, improved visibility, telemetry, feedback loops, and situational awareness. Achieving this, however, can prove challenging.
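
As a rough illustration of what one of those feedback loops might look like in practice, the hypothetical Python sketch below checks production telemetry after a deployment and surfaces the result to every team rather than leaving it inside an Operations dashboard; fetch_error_rate and the threshold are assumptions for the example, not a reference to any particular monitoring product.

    # Illustrative sketch only: a simple post-deployment feedback check.
    # fetch_error_rate is a hypothetical stand-in for querying whatever
    # monitoring backend supplies production telemetry; the threshold is arbitrary.

    ERROR_RATE_THRESHOLD = 0.01  # flag anything above 1% of requests failing

    def fetch_error_rate(service: str) -> float:
        # Placeholder: a real implementation would query a monitoring backend.
        return 0.003

    def feedback_check(service: str) -> None:
        rate = fetch_error_rate(service)
        if rate > ERROR_RATE_THRESHOLD:
            # Surface the same signal to Dev, QA and Ops alike.
            print(f"{service}: error rate {rate:.2%} exceeds threshold; notify all teams")
        else:
            print(f"{service}: error rate {rate:.2%} within threshold")

    if __name__ == "__main__":
        feedback_check("checkout-api")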

Visibility and Insights

Consider a situation in which developers build the code for an application, QA tests it based on common use cases, and the release manager then oversees its integration into the mainline and its subsequent deployment. At this point, Operations might find a problem that only manifests at scale, requiring Dev teams to quickly pinpoint the issue and rectify it by developing new code that functions correctly in the production environment.

It's here that visibility is most crucial, providing all parties with common situational awareness. Rather than relying on Ops to highlight issues, in this example Dev teams can look into the system and see the same situation for themselves, and thereby better understand the parameters within which they need to work. Doing so saves time and creates more effective feedback loops, enabling teams to adjust the development and QA processes to detect similar issues earlier in the SDLC, or even prevent them from occurring altogether.

Achieving this level of visibility requires the use of smart data – metadata based on processing and organizing wire data at the point of collection, and optimizing it for analytics at the highest speed and quality. By analyzing every IP packet that traverses the network during a development cycle and beyond – in real time – smart data delivers meaningful and actionable insights, creating a common situational awareness for all teams. This then enables those teams, from developers through QA to IT Operations, to work together within constantly evolving parameters, avoiding any bottlenecks in the feedback loop.
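
The sketch below is a simplified, hypothetical illustration of that idea rather than any vendor's actual implementation: it reduces raw per-packet records into compact per-service metadata (bytes transferred, average latency, error count) that Dev, QA and Operations can all consume. The record fields and output shape are assumptions made for the example.

    # Hypothetical sketch: reducing raw wire-data records into compact
    # per-service metadata at the point of collection. The packet fields
    # and output shape are assumptions for illustration only.

    from collections import defaultdict
    from statistics import mean

    def summarize(packets):
        """Aggregate per-packet records into per-service metadata."""
        by_service = defaultdict(lambda: {"bytes": 0, "latencies_ms": [], "errors": 0})
        for pkt in packets:
            svc = by_service[pkt["service"]]
            svc["bytes"] += pkt["size"]
            svc["latencies_ms"].append(pkt["latency_ms"])
            if pkt["status"] >= 500:
                svc["errors"] += 1
        return {
            name: {
                "bytes": s["bytes"],
                "avg_latency_ms": round(mean(s["latencies_ms"]), 1),
                "error_count": s["errors"],
            }
            for name, s in by_service.items()
        }

    if __name__ == "__main__":
        sample = [
            {"service": "checkout", "size": 512, "latency_ms": 40, "status": 200},
            {"service": "checkout", "size": 620, "latency_ms": 95, "status": 503},
            {"service": "search", "size": 300, "latency_ms": 12, "status": 200},
        ]
        print(summarize(sample))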

Opportunity for Innovation

Digital transformation, and the role of the cloud within it, are integral to an organization's innovation. With more applications and services being migrated to the cloud, however, a host of new, unprecedented challenges are emerging.

This is particularly true for DevOps teams, charged with producing quality code at speed. Reaching the level of maturity at which they can function most efficiently and effectively requires silos of work to be broken down across the organization to foster a culture of collaboration and continuous communication. The visibility, insight and common situational awareness offered by smart data can help achieve this, freeing up the potential of DevOps and affording organizations a greater opportunity for innovation.

Michael Segal is VP of Strategy at NetScout