Two APM Takeaways from Velocity Santa Clara 2014
July 07, 2014

Denis Goodwin
SmartBear


Last week my team and I spent several days at the Velocity conference on web performance and operations – arguably the one place where the most technical of performance folks and the most business-focused web folks come together to focus solely on application performance. Anyone with a vested interest is there to learn, debate, and show off their latest and greatest products and ideas. As the team and I spent time talking to customers, attending sessions, and visiting vendors, I was struck by a couple of trends that stood out.


1. Continuous Development/Integration + Tool Fragmentation

Interestingly enough, while everyone was talking about continuous integration, very few solutions actually play well with each other – never mind being integrated into the same platform. I was particularly struck by the tremendous amount of fragmentation in the market. A lot of vendors are solving just one part of the problem. As the CEO of one exhibitor put it when I pressed him on this point: the specific pain points and their separate solutions are a big enough problem as it is – solving the larger problem of bringing all the parts together is almost insurmountable at this point in the market's lifecycle. This is true for almost every APM vendor in the marketplace today.

Many of the folks I spoke with at the show are using several solutions simultaneously in order to measure the different parts of the full APM spectrum — user experience, performance, and availability. This seemed at odds with the buzz around DevOps and continuous integration and delivery. If users aren't able to standardize on a common tool to monitor their production web apps and APIs, how can they possibly be consistent in measuring the quality of their users' experience while delivering those apps via continuous integration? If multiple tools are already being used in production to measure user experience via synthetic monitoring, real-user monitoring, and load testing, how many more are being added in pre-production environments? And how does a team know what the varied data is telling them if each tool only shows an individual part on its own terms?

2. Load Testing = Very Popular Topic

It was interesting to see load testing get as much emphasis as it did, among both vendors and attendees. What really seemed to generate excitement was the importance of tightly connecting load testing and synthetic monitoring. Companies need the ability to apply load to their applications while simultaneously understanding the user experience – before going live. Without the ability to empathize with the end user, companies can never confidently deploy their applications. The move to continuous delivery and integration only amplifies the importance of stressing your applications on a regular basis and the need for tools that enable efficient load tests. It also calls for excellent diagnostic tools that speed up fixes and, in turn, shorten time to market.
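To make that concrete, here is a minimal sketch of the kind of load test a team might run on a regular cadence, written against the Python standard library only; the target URL, concurrency level, and request count are hypothetical placeholders rather than any particular vendor's tooling.

# A minimal load-test sketch; TARGET_URL and the numbers below are hypothetical.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "https://example.com/"  # hypothetical endpoint under test
CONCURRENCY = 20                     # simultaneous virtual users
TOTAL_REQUESTS = 200                 # total requests to issue

def timed_request(url):
    """Issue one GET and return its latency in seconds, or None on error."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
        return time.perf_counter() - start
    except OSError:
        return None

def run_load_test():
    # Fan requests out across a thread pool to simulate concurrent users.
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(timed_request, [TARGET_URL] * TOTAL_REQUESTS))
    latencies = sorted(r for r in results if r is not None)
    errors = TOTAL_REQUESTS - len(latencies)
    print(f"requests: {TOTAL_REQUESTS}  errors: {errors}")
    if latencies:
        print(f"median latency:  {statistics.median(latencies):.3f}s")
        print(f"95th percentile: {latencies[int(len(latencies) * 0.95) - 1]:.3f}s")

if __name__ == "__main__":
    run_load_test()

A real tool adds ramp-up, realistic user journeys, and pass/fail thresholds wired into the build, but even a sketch like this shows how latency percentiles and error counts can be checked on every iteration before going live.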

Is it possible that what's old is new again? Do continuous integration and delivery require a new way of doing old things? Or is it simply a matter of applying the same approach to multiple environments simultaneously? It seems that bringing consistency to measurement and assessment methodologies across environments, coupled with continuous assessment and feedback, is key to ensuring that your software improves with each iteration. Equally important is an easy-to-deploy toolset that is accessible and provides insights both to the people developing applications and to those supporting them in production.

Denis Goodwin is Director of Product Management, APM, AlertSite UXM, SmartBear Software.


Denis Goodwin at Velocity 2014

