
Pepperdata Introduces Observability and Optimization for GPUs Running Big Data Applications

Pepperdata announced that the Pepperdata product portfolio now includes the ability to monitor Graphics Processing Units (GPUs) running big data applications like Spark on Kubernetes.

Workloads that harness tremendous amounts of data, such as machine learning (ML) and artificial intelligence (AI) applications, require GPUs, which were originally designed to accelerate graphics rendering. That extra processing power comes with a high price tag, and it requires near-constant monitoring for resource waste to get the best performance at the lowest possible cost.

Pepperdata now monitors GPU performance, providing the visibility needed for Spark applications running on Kubernetes and utilizing the processing power of GPUs. With this new visibility, companies can improve the performance of their Spark apps running on those GPUs and manage costs at a granular level.

Unlike traditional infrastructure monitoring, which is limited to the platform, the Pepperdata solution provides visibility into GPU resource utilization at the application level. Pepperdata also provides instant recommendations for optimization. Features include:

- Visibility into GPU memory usage and waste

- Fine-tuning of GPU usage through end-user recommendations

- Ability to attribute usage and cost to specific end-users
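
To ground what "Spark applications running on Kubernetes and utilizing GPUs" look like in practice, here is a minimal PySpark sketch of how such a job typically requests GPU resources from the Kubernetes scheduler. It is illustrative only: the cluster endpoint, container image, and discovery-script path are placeholders, and it does not describe Pepperdata's own configuration.

```python
from pyspark.sql import SparkSession

# Illustrative only: the endpoint, image, and discovery-script path are
# placeholders, not values taken from the announcement.
spark = (
    SparkSession.builder
    .appName("gpu-spark-example")
    .master("k8s://https://<cluster-endpoint>:443")
    .config("spark.kubernetes.container.image", "<spark-gpu-image>")
    # Ask Kubernetes for one NVIDIA GPU per executor ...
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.executor.resource.gpu.vendor", "nvidia.com")
    # ... and let each task claim that whole GPU.
    .config("spark.task.resource.gpu.amount", "1")
    # Script run inside the executor container to report the GPUs it can see.
    .config("spark.executor.resource.gpu.discoveryScript",
            "/opt/spark/getGpusResources.sh")
    .getOrCreate()
)
```

Application-level GPU visibility of the kind described above would then show how much of the requested GPU memory and compute each such job actually consumed.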

“Spark on Kubernetes is quickly becoming a dominant part of the compute infrastructure as data-intensive ML and AI applications proliferate,” said Ash Munshi, CEO, Pepperdata. “GPUs can handle these workloads, but they are expensive to buy and are power-intensive. Until now, there hasn’t been a way to view and manage the infrastructure and applications, which can lead to unnecessary waste and overspending for big data workloads. With Pepperdata, organizations can properly size their GPU hardware investments and have the confidence that they are utilizing them well.”

There are products on the market for monitoring GPUs, but they typically lack long-term metric storage, struggle to scale, and often do not correlate infrastructure metrics with applications. Pepperdata addresses these gaps by giving data center operators, data scientists, and ML/AI developers insight into who is using which resources, where waste can be eliminated so jobs can be tuned and prioritized, and how costs should be assigned to the right users or groups across the enterprise.
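
The announcement does not detail how Pepperdata implements chargeback, but the attribution idea itself can be sketched: once GPU usage samples are tagged with the submitting user, cost assignment and waste reporting are aggregations over those tags. The data shape and rate below are invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical samples: (user, GPU-hours actually used, GPU-hours allocated).
samples = [
    ("alice", 0.8, 2.0),
    ("bob",   1.5, 1.6),
    ("alice", 0.2, 1.0),
]

COST_PER_GPU_HOUR = 3.00  # assumed rate, not a figure from the announcement

totals = defaultdict(lambda: {"used": 0.0, "allocated": 0.0})
for user, used, allocated in samples:
    totals[user]["used"] += used
    totals[user]["allocated"] += allocated

for user, t in totals.items():
    wasted = t["allocated"] - t["used"]        # idle GPU-hours attributed to this user
    cost = t["allocated"] * COST_PER_GPU_HOUR  # chargeback at the allocated rate
    print(f"{user}: cost=${cost:.2f}, wasted GPU-hours={wasted:.1f}")
```

In a real deployment the samples would come from the monitoring pipeline rather than a hard-coded list, and rates would reflect actual hardware or cloud pricing.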

The Latest

Regardless of their scale, business decisions often take time, effort, and a lot of back-and-forth discussion to reach any sort of actionable conclusion ... Any means of streamlining this process and getting from complex problems to optimal solutions more efficiently and reliably is key. How can organizations optimize their decision-making to save time and reduce excess effort from those involved? ...

As enterprises accelerate their cloud adoption strategies, CIOs are routinely exceeding their cloud budgets — a concern that's about to face additional pressure from an unexpected direction: uncertainty over semiconductor tariffs. The CIO Cloud Trends Survey & Report from Azul reveals the extent of continued cloud investment despite cost overruns, and how organizations are attempting to bring spending under control ...


According to Auvik's 2025 IT Trends Report, 60% of IT professionals feel at least moderately burned out on the job, with 43% stating that their workload is contributing to work stress. At the same time, many IT professionals are naming AI and machine learning as key areas they'd most like to upskill in ...

Businesses that face downtime or outages risk financial and reputational damage, as well as the loss of partner, shareholder, and customer trust. One of the major challenges that enterprises face is implementing a robust business continuity plan. What's the solution? The answer may lie in disaster recovery tactics such as truly immutable storage and regular disaster recovery testing ...

IT spending is expected to jump nearly 10% in 2025, and organizations are now facing pressure to manage costs without slowing down critical functions like observability. To meet the challenge, leaders are turning to smarter, more cost-effective business strategies. Enter stage right: OpenTelemetry, no longer just an option but a strategic advantage ...

Amidst the threat of cyberhacks and data breaches, companies install numerous security measures to keep their business safely afloat. These measures aim to protect businesses, employees, and crucial data. Yet employees perceive them as burdensome. Frustrated with complex logins, slow access, and constant security checks, workers decide to bypass security setups entirely ...

[Image: Cloudbrink's Personal SASE services provide last-mile acceleration and reduced latency]

In MEAN TIME TO INSIGHT Episode 13, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations at EMA, discusses hybrid multi-cloud networking strategy ...

In high-traffic environments, the sheer volume and unpredictable nature of network incidents can quickly overwhelm even the most skilled teams, hindering their ability to react swiftly and effectively, potentially impacting service availability and overall business performance. This is where closed-loop remediation comes into the picture: an IT management concept designed to address the escalating complexity of modern networks ...

In 2025, enterprise workflows are undergoing a seismic shift. Propelled by breakthroughs in generative AI (GenAI), large language models (LLMs), and natural language processing (NLP), a new paradigm is emerging — agentic AI. This technology is not just automating tasks; it's reimagining how organizations make decisions, engage customers, and operate at scale ...

In the early days of the cloud revolution, business leaders perceived cloud services as a means of sidelining IT organizations. IT was too slow, too expensive, or incapable of supporting new technologies. With a team of developers, line of business managers could deploy new applications and services in the cloud. IT has been fighting to retake control ever since. Today, IT is back in the driver's seat, according to new research by Enterprise Management Associates (EMA) ...
