Two APM Takeaways from Velocity Santa Clara 2014

Denis Goodwin

Last week my team and I spent several days at the Velocity conference on web application performance – arguably the one place where the most technical performance folks and the most business-focused web folks come together to focus solely on application performance. Anyone with a vested interest is there to learn, debate, and show off their latest and greatest products and ideas. As the team and I talked to customers, attended sessions, and visited vendors, a couple of trends stood out.


1. Continuous Development/Integration + Tools Fragmentation

Interestingly, while everyone was talking about continuous integration, very few solutions actually play well with each other – never mind being integrated into the same platform. I was particularly struck by the tremendous fragmentation in the market: many vendors are solving just one part of the problem. As the CEO of one exhibitor put it when I pressed him on the point, the specific pain points and their separate solutions are a big enough problem as it is; solving the larger problem of bringing all the parts together is almost insurmountable at this stage of the market's lifecycle. That is true for almost every APM vendor in the marketplace today.

Many of the folks I spoke with at the show are using several solutions simultaneously to measure the different parts of the full APM spectrum — user experience, performance, and availability. This seemed at odds with the buzz around DevOps and continuous integration and delivery. If users aren't able to standardize on a common tool to monitor their production web apps and APIs, how can they possibly be consistent in measuring the quality of their users' experience while delivering those apps via continuous integration? If multiple tools are already being used in production to measure user experience via synthetic monitoring, real-user monitoring, and load testing, how many more are being added in pre-production environments? And how does a team know what the varied data is telling them if each tool shows only an individual part, on its own terms?
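To make the fragmentation problem concrete, here is a minimal sketch of what teams end up writing by hand: thin adapters that map each tool's output into one common record so numbers can actually be compared across tools and environments. The tool names and field layouts below are invented for illustration, not taken from any real product.

```python
# Hypothetical adapters normalizing results from two monitoring tools into
# one common schema. Every field name here is illustrative.
from dataclasses import dataclass


@dataclass
class Measurement:
    source: str        # which tool produced the number
    metric: str        # what was measured, e.g. a page or check name
    value_ms: float    # latency normalized to milliseconds
    environment: str   # "production", "staging", ...


def from_synthetic(raw: dict) -> Measurement:
    # Imagined synthetic-monitoring payload: {"check": "home",
    # "latency_s": 1.2, "env": "production"} -- seconds, short env key.
    return Measurement("synthetic", raw["check"],
                       raw["latency_s"] * 1000.0, raw["env"])


def from_rum(raw: dict) -> Measurement:
    # Imagined real-user-monitoring payload: {"page": "home",
    # "load_ms": 1350, "environment": "production"} -- already in ms.
    return Measurement("rum", raw["page"],
                       float(raw["load_ms"]), raw["environment"])
```

Even in this toy form, the adapters show where the pain lives: every tool picks its own units, field names, and environment labels, and the glue code multiplies with each tool added.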

2. Load Testing = Very Popular Topic

It was interesting to see load testing get as much emphasis as it did, among both vendors and attendees. What really seemed to generate excitement was the importance of tightly connecting load testing and synthetic monitoring. Companies need the ability to apply load against their applications while simultaneously understanding the user experience – before going live. Without the ability to empathize with the end user, companies can never confidently deploy their applications. The move to continuous delivery and integration only amplifies the importance of stressing your applications on a regular basis and the need for tools that enable efficient load tests. It also calls for excellent diagnostic tools to speed fixes and, in turn, shorten time to market.
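The pairing described above – apply load, and measure the experience under that load – can be sketched in a few lines. This is a deliberately minimal illustration using only the Python standard library, not a stand-in for any vendor's product; the concurrency level, request count, and thresholds are all arbitrary.

```python
# Minimal load-test sketch: fire concurrent GET requests at an endpoint
# and summarize per-request latency, so "apply load" and "observe the
# user experience" happen in the same run.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean


def timed_get(url: str) -> float:
    """Issue one GET and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()  # consume the body, as a real browser would
    return time.perf_counter() - start


def run_load_test(url: str, concurrency: int = 10, total: int = 100) -> dict:
    """Apply `total` requests using `concurrency` workers; summarize latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: timed_get(url), range(total)))
    return {
        "requests": total,
        "mean_s": mean(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }
```

Run against a staging endpoint on each integration build, a sketch like this turns "does it hold up under load?" into a number – mean and 95th-percentile latency – that can gate a deploy, which is the connection between load testing and continuous delivery the conference buzz was pointing at.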

Is it possible that what's old is new again? Does continuous integration and delivery require a new way of doing old things, or is it simply applying the same approach to multiple environments simultaneously? It seems that bringing consistency to measurement and assessment methodologies across environments, coupled with continuous assessment and feedback, is key to ensuring that your software improves with each iteration. Equally important is an accessible, easy-to-deploy toolset that provides insights both to the people developing applications and to those supporting them in production.

Denis Goodwin is Director of Product Management, APM, AlertSite UXM, SmartBear Software.


Denis Goodwin at Velocity 2014

