An Interview with BlueStripe CEO

Pete Goldin
APMdigest

In BSMdigest’s exclusive interview, Chris Neal, CEO of BlueStripe, talks about the next generation of Application Performance Management tools.

BSM: Is there something that most companies still do not understand about application performance management today?

CN: Most companies we talk to monitor and respond to application problems backwards. They begin with specialist tools that look at a single piece of the infrastructure supporting the application and try to piece together how transactions performed as they crossed each system. Every day I hear from IT leaders that this approach takes too long and involves too many people. The problem is that traditional application management solutions only manage parts of the application system, hoping that the whole is performing well.

BSM: When it comes to pinpointing the sources of performance problems, what are the traditional APM and BTM tools missing?

CN: For IT Operations and production support to manage application performance and availability, they need to monitor the transactions that are executing, the applications that run them, and the IT systems they depend on to determine why their transactions are slow or unavailable. Traditional APM vendors focus narrowly on the application code or app servers, but don’t see anything else. Today’s BTM tools focus just on the transaction, but have no ability to drill down to see why a transaction got stuck in a particular system.

BSM: What drove the founding of BlueStripe in 2007?

CN: The company founders all came from the experience of building Wily Technology to help manage what were then new Java applications. After that experience, it became clear that application management was changing beyond the need for code-level visibility: the challenge had moved to the platforms, infrastructure, and services needed to run the applications. New technologies like virtualization, SOA, and now private cloud are accelerating this change and increasing management complexity for IT Operations. As we spoke with IT executives across the Fortune 500, it became clear that a new approach was needed, one that manages the whole application system by monitoring both transactions and the IT systems they depend on.

BSM: What is APM 2.0?

CN: APM 2.0 is the next generation of application management. APM 1.0 delivered Java/.NET diagnostics for application developers. APM 2.0, led by BlueStripe’s FactFinder solution, delivers management of the whole production application system for IT Operations by monitoring both transactions and the IT systems they depend on. FactFinder works for any transaction, running on any TCP-connected application, in physical, virtual, and cloud environments. The best part about FactFinder and APM 2.0 is that transaction tracking is automatic (no pre-definition) and continuously updates any time a transaction path changes.

BSM: What is the advantage of having APM and transaction performance monitoring in the same tool?

CN: The advantage is that IT Operations starts with what your users are actually doing: initiating transactions. According to a recent Ziff Davis survey of about 1190 IT professionals, the top application management challenge is finding which component is slowing things down across complicated application systems. BlueStripe’s transaction performance monitoring capabilities let you follow the performance of transactions at each step in their execution, allowing a single support person to pinpoint exactly where the problem is in just a few minutes, without a bridge call. Once the problem is identified, BlueStripe’s APM capabilities enable drill-down into the server stack to see why the component is failing.
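BlueStripe has not published FactFinder’s internals, but the idea of per-hop transaction monitoring can be sketched in a few lines. The sketch below is purely illustrative: the `Hop` record, the component names, and the timings are all hypothetical, and it shows only the core step of finding where a slow transaction spent its time.

```python
from dataclasses import dataclass

@dataclass
class Hop:
    """One step in a transaction's path across the infrastructure (hypothetical record)."""
    component: str       # e.g. "web tier", "app server", "database"
    elapsed_ms: float    # time the transaction spent in this component

def slowest_hop(hops: list[Hop]) -> Hop:
    """Return the hop where the transaction spent the most time,
    i.e. the first place a support engineer would drill down."""
    return max(hops, key=lambda h: h.elapsed_ms)

# A hypothetical trace of one slow transaction:
trace = [
    Hop("load balancer", 2.1),
    Hop("web tier", 14.8),
    Hop("app server", 96.3),
    Hop("database", 412.7),
]

print(slowest_hop(trace).component)  # → database
```

In this toy trace the database hop dominates, so that is where the drill-down into the server stack would begin.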

BSM: Explain the meaning behind “If You Don’t Manage Everything, You Don’t Manage Anything”.

CN: That’s actually a quote from Forrester BSM expert J.P. Garbani, and what he meant was that IT Operations is responsible for application performance and availability. Period. That includes every transaction and the systems they run on. If your tools don’t give you a single view into transaction problems, the application platform, the systems underneath, and every dependency in between, your tools don’t manage anything.

BSM: What APM capabilities does a company need in order to confidently deploy business critical applications in the virtual environment?

CN: Virtualization breaks the relationship between the physical hardware and the operating system, which means that application management gets harder. Companies need an APM solution that can follow the transaction wherever it goes, even as it crosses physical and virtual boundaries.

Also, because virtualization means that the IT systems are constantly changing, APM tools need to be able to automatically update in real-time as changes occur.

Finally, companies need their APM tools to tie in how the virtual hardware itself impacts application performance. BlueStripe’s FactFinder is the only product that enables companies to do all these things.

BSM: What is the difference between managing a transaction in a virtual environment vs. a private cloud?

CN: Virtualization is about server technology; private cloud is about the way a company organizes and deploys virtualization as a service. In both cases, IT Operations needs to be able to follow the transaction across every component it touches. However, in a private cloud they also need management visibility that bridges the organizational divide between Operations and other groups.

BSM: What is the next step for APM technology in the cloud?

CN: IT Operations doesn’t get a pass on application performance just because they set up a private cloud. APM technology is going to be moving towards monitoring everything from the perspective of the transaction, instead of the backward systems-first approach favored by the old generation of tools. This is because the value of private cloud platforms will be judged on how well the infrastructure supports the transactions.

BlueStripe’s FactFinder can already provide monitoring of all transactions and the systems they depend on across every environment—physical, virtual, and cloud. FactFinder is leading the APM market when it comes to private cloud because FactFinder tracks and monitors transactions automatically, without requiring any knowledge or assistance from developers.

BSM: Did VMworld give you any insight into how APM or BTM will be changing in the near future?

CN: VMworld continued to confirm for us that virtualization and private cloud are seeing overwhelming adoption across the enterprise. Both of these technologies are accelerants to complexity that create real challenges for the groups responsible for managing application performance and availability.

Second, the show confirmed that cloud-adopting IT organizations are demanding the transaction monitoring capabilities of BTM mixed with some of the drill-down capabilities of APM. This demand has created great success for BlueStripe this year, and we expect it will increase as more companies hear about how transaction monitoring leads to faster problem isolation and resolution, while using fewer support team members.

About Chris Neal

Chris Neal is the CEO of BlueStripe. Before founding BlueStripe, Neal was VP of Field Operations for the Americas for Wily Technology, the Magic Quadrant Leader for Java EE Application Management. Neal brings over 15 years of leadership in enterprise infrastructure software sales, including Oracle & NetDynamics, a Java Application Server leader acquired by Sun Microsystems. Neal holds a BS in Business Administration from the University of North Carolina at Chapel Hill.

Related Links:

www.bluestripe.com

