Two APM Takeaways from Velocity Santa Clara 2014
July 07, 2014

Denis Goodwin
SmartBear


Last week my team and I spent several days at the Velocity conference for web performance and operations – arguably the one place where the most technical performance folks and the most business-focused web folks come together to focus solely on application performance. Anyone with a vested interest is there to learn, debate, and show off their latest and greatest products and ideas. As the team and I talked to customers, attended sessions, and visited vendors, I was struck by a couple of trends that stood out.


1. Continuous Development/Integration + Tool Fragmentation

Interestingly enough, while everyone was talking about continuous integration, very few solutions actually play well with each other – never mind being integrated into the same platform. I was particularly struck by the tremendous amount of fragmentation in the market. Many vendors are solving just one part of the problem. As the CEO of one exhibitor said to me when I pressed him on this point: the specific pain points and their separate solutions represent a big enough problem as it is – solving the larger problem of bringing all the parts together is almost insurmountable at this point in the market's lifecycle. This is true for almost every APM vendor in the marketplace today.

Many of the folks I spoke with at the show are using several solutions simultaneously to measure different parts of the full APM spectrum — user experience, performance, and availability. This seemed at odds with the buzz around DevOps and continuous integration and delivery. If users aren't able to standardize on a common tool to monitor their production web apps and APIs, how can they possibly be consistent in measuring the quality of their users' experience while delivering those apps via continuous integration? If multiple tools are already being used for synthetic monitoring, real-user monitoring, and load testing in production, how many more are being added in pre-production environments? And how does a team know what the varied data is telling them if each tool shows only an individual part, on its own terms?
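To make the comparison problem concrete, here is a minimal sketch of what reconciling two tools' output can look like. Both payload formats below are invented examples of the kind of per-tool reporting described above, not any real vendor's schema; the point is simply that someone has to normalize names and units before the numbers can be compared.

```python
# Sketch: normalizing two hypothetical tools' output into one schema.
# Both payload shapes are invented for illustration, not vendor formats.
from dataclasses import dataclass

@dataclass
class Measurement:
    source: str       # which tool produced the reading
    metric: str       # normalized metric name
    value_ms: float   # normalized unit: milliseconds

def from_synthetic(payload: dict) -> Measurement:
    # hypothetical synthetic-monitoring tool that reports in seconds
    return Measurement("synthetic", "page_load", payload["load_time_s"] * 1000)

def from_rum(payload: dict) -> Measurement:
    # hypothetical real-user-monitoring tool that reports in milliseconds
    return Measurement("rum", "page_load", payload["loadMs"])

readings = [
    from_synthetic({"load_time_s": 2.4}),
    from_rum({"loadMs": 3100.0}),
]
for r in readings:
    print(f"{r.source:>9}: {r.metric} = {r.value_ms:.0f} ms")
```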

2. Load Testing = Very Popular Topic

It was interesting to see load testing get as much emphasis as it did, among both vendors and attendees. What really seemed to generate excitement was the importance of tightly connecting load testing and synthetic monitoring. Companies need the ability to apply load against their applications while simultaneously measuring the user experience – before going live. Without that insight into what the end user actually experiences, companies can never confidently deploy their applications. The move to continuous delivery and integration only amplifies the importance of stressing your applications on a regular basis, and the need for tools that enable efficient load tests. It also calls for excellent diagnostic tools to speed fixes and thus shorten time to market.
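As a rough illustration of that idea – applying load and measuring the experience in the same run, gated before release – here is a minimal sketch using only the Python standard library. The endpoint URL, worker count, request count, and latency budget are all hypothetical placeholders, not any vendor's tooling.

```python
# Minimal sketch: apply concurrent load while recording user-facing
# latency, so load and experience are measured together before go-live.
# TARGET_URL, WORKERS, REQUESTS, and SLA_P95_SECONDS are assumptions.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "https://staging.example.com/checkout"  # hypothetical endpoint
WORKERS = 20           # concurrent virtual users
REQUESTS = 200         # total requests for this run
SLA_P95_SECONDS = 1.5  # hypothetical latency budget

def timed_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
            ok = resp.status == 200
    except OSError:  # URLError/HTTPError are OSError subclasses
        ok = False
    return time.perf_counter() - start, ok

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))

latencies = sorted(t for t, _ in results)
errors = sum(1 for _, ok in results if not ok)
p95 = latencies[int(len(latencies) * 0.95) - 1]

print(f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s errors={errors}")
if p95 > SLA_P95_SECONDS:
    raise SystemExit("p95 latency exceeds budget - do not promote this build")
```

The exit code makes the check usable as a gate in a continuous delivery pipeline: the build only promotes if the experience under load stays within budget.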

Is it possible that what's old is new again? Do continuous integration and delivery require a new way of doing old things, or simply the same approach applied to multiple environments simultaneously? It seems that bringing consistency to measurement and assessment methodologies across environments, coupled with continuous assessment and feedback, is key to ensuring that your software improves with each iteration. Equally important is an easy-to-deploy toolset that is accessible, and that provides insights both to the users developing applications and to those supporting them in production.
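One way to read that consistency point is as a single check definition, with a single set of pass/fail criteria, run against every environment. A hedged sketch, assuming hypothetical environment URLs and a deliberately simple definition of "healthy":

```python
# Sketch: one assessment, one set of criteria, applied across environments.
# The environment URLs and the check itself are illustrative placeholders.
import urllib.request

ENVIRONMENTS = {
    "staging": "https://staging.example.com/health",
    "production": "https://www.example.com/health",
}

def check(url: str) -> bool:
    """A single, shared definition of 'healthy' for every environment."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

for name, url in ENVIRONMENTS.items():
    print(f"{name}: {'PASS' if check(url) else 'FAIL'}")
```

Because pre-production and production are judged by the same yardstick, a regression surfaces as a difference in results rather than a difference in tooling.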

Denis Goodwin is Director of Product Management, APM, AlertSite UXM, SmartBear Software.


[Photo: Denis Goodwin at Velocity 2014]
