Complexity and Scale of Kubernetes Highlight Need for Observability and Optimization
May 11, 2021

Kubernetes is rapidly becoming the standard for cloud and on-premises clusters, according to the 2021 Kubernetes & Big Data Report from Pepperdata, based on a survey of 800 IT professionals.


However, Kubernetes is a complex technology, and companies are struggling to implement and manage these environments effectively. The complexity of big data applications makes resource optimization a real challenge. Unsurprisingly, when IT lacks granular visibility into the performance of big data workloads on Kubernetes, it is hard to optimize either performance or spend.
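As a rough illustration of the kind of granular, per-container visibility the report points to, here is a minimal Python sketch (an assumption for illustration, not taken from the Pepperdata study) that reads live CPU and memory usage from the Kubernetes metrics API with the official Python client. It assumes metrics-server is running in the cluster, a valid kubeconfig, and a hypothetical "spark-jobs" namespace.

# Hedged sketch: read per-container CPU/memory usage from the
# Kubernetes metrics API (metrics.k8s.io, served by metrics-server).
# The "spark-jobs" namespace is hypothetical.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

metrics = api.list_namespaced_custom_object(
    group="metrics.k8s.io", version="v1beta1",
    namespace="spark-jobs", plural="pods",
)

for pod in metrics.get("items", []):
    pod_name = pod["metadata"]["name"]
    for container in pod["containers"]:
        usage = container["usage"]   # e.g. {"cpu": "250m", "memory": "512Mi"}
        print(f'{pod_name}/{container["name"]}: cpu={usage["cpu"]} memory={usage["memory"]}')

Readings like these are the raw material for spotting over-provisioned or starved big data pods; purpose-built tools add the history, correlation, and recommendations on top.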

"Kubernetes is increasingly being adopted by our customers for big data applications. As a result, we see customers experiencing performance challenges," said Ash Munshi, CEO, Pepperdata. "This survey clearly indicates that these problems are universal and there is a need to better optimize these big data workloads."

The report states: "Kubernetes is extremely complicated. Manual monitoring cannot keep up, and proprietary solutions are unlikely to be up to the task. It’s always tempting, faced with a new tool like Kubernetes, to use a homegrown solution that already exists. But with Kubernetes, more custom solutions are required. General-purpose APM won’t cut it; companies need tools purpose built for big data workloads on Kubernetes."

The survey reveals a number of insights into how businesses are adopting Kubernetes for big data applications:

■ When asked about their goals for adopting Kubernetes for big data workloads, 30% of respondents said to "improve resource utilization for reduced cloud costs" (see the sketch after this list); 23% said to enable their migration to the cloud; 18% said to shorten deployment cycles; 15% said to make their platforms and applications cloud-agnostic; and 14% said to containerize monolithic apps.

■ Porting hundreds or thousands of apps over to Kubernetes can be challenging, and the biggest hurdles for survey respondents included initial deployment, followed by migration, monitoring and alerting, complexity and increased cost, and reliability, in that order.

■ The kinds of applications and workloads respondents are running, in order from most to least, include Spark (30%), Kafka (25%), Presto (23%), AI/deep learning workloads using PyTorch or TensorFlow (18%), and "other" (5%).

■ Surprisingly, despite how much the media writes about the move to public cloud, the survey found that 47% of respondents are using Kubernetes in private cloud environments. On-premises use made up 35%, and just 18% of respondents said they were using Kubernetes containers in public cloud environments.

■ 45% of Kubernetes workloads are in development and testing environments, as users work up to moving production workloads onto a new resource management framework; another 30% of respondents are doing proof-of-concept work.

■ 66% of respondents said 75–100% of their big data workloads will be on Kubernetes by the end of 2021.

■ IT operations was the clear leader in deploying Spark and other big data apps on Kubernetes, at 80%; engineering followed at 11%, and business unit developers accounted for just 9%.
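On the resource-utilization goal above, the basic lever in Kubernetes is the CPU and memory requests and limits declared on each container: requests that sit far above observed usage reserve capacity that is paid for but never used. The sketch below is an assumed, minimal example (not taken from the report) using the official Kubernetes Python client; the "spark-executor" name, image tag, and numbers are hypothetical and would in practice be set from observed usage rather than guesswork.

# Hedged sketch: declare CPU/memory requests and limits on a container,
# the primary knob for improving utilization and trimming cloud spend.
# The container name, image, and numbers are illustrative only.
from kubernetes import client

executor = client.V1Container(
    name="spark-executor",
    image="apache/spark:3.4.1",
    resources=client.V1ResourceRequirements(
        # Requests drive scheduling and bin-packing; keeping them close to
        # observed usage avoids reserving nodes for idle capacity.
        requests={"cpu": "2", "memory": "4Gi"},
        # Limits cap bursts; a memory limit above the request leaves Spark
        # some shuffle and GC headroom without over-reserving the node.
        limits={"cpu": "2", "memory": "6Gi"},
    ),
)

A pod template built around a container like this would then be applied through the usual client or kubectl workflow.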

Methodology: The survey was conducted in March 2021, among 800 participants from a range of industries, 72% of which worked at companies with between 500 and 5000 employees. 
