In BSMdigest’s exclusive interview, Al Sargent, Sr. Product Marketing Manager at VMware, discusses the Business Service Management news coming out of VMworld, and the monitoring and management challenges of the cloud.
BSM: Many companies in the Business Service Management space are exhibiting at, making announcements at and attending VMworld. Why is VMworld such an important event for monitoring and management software companies?
AS: The reason is that VMworld has become the de facto conference for modern datacenter management. It is more than just one vendor’s conference.
Two of the most important trends in datacenter management are virtualization and cloud computing. VMware is at the center of virtualization with vSphere, but in addition to that, VMware’s new vFabric cloud application platform provides customers with a pragmatic and evolutionary path to cloud computing.
BSM: What announcements did VMware make at VMworld to address the monitoring and management needs of the market?
AS: The main announcement was around the introduction of vFabric, and how Hyperic supports the monitoring of cloud applications. One key goal for Hyperic going forward is to provide best-in-class monitoring of cloud applications, whether running on our vFabric cloud application platform or a platform from another vendor.
Fulfilling this goal entails the following three capabilities: support for dynamic architectures and elastic capacity; extreme scalability to collect all the performance data from all the VMs in the data center; and monitoring a large number of application infrastructure components.
BSM: You mention the massive amount of performance data. Why is there so much more performance data coming out of the cloud?
AS: Think about a data center with a thousand servers, which is actually a pretty small datacenter. If you are collecting 1,000 metrics on each of those 1,000 virtual machines every minute, that is one million metrics per minute that need to be processed. Even a midsized firm can hit this level of metrics data.
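The arithmetic behind that figure can be sketched in a few lines; the numbers are the ones quoted in the interview, not measurements from any specific deployment:

```python
# Back-of-the-envelope metric volume for the scenario described above.
vms = 1_000              # virtual machines in the datacenter
metrics_per_vm = 1_000   # metrics collected from each VM
interval_minutes = 1     # collection interval in minutes

metrics_per_minute = (vms * metrics_per_vm) // interval_minutes
print(metrics_per_minute)  # 1000000 — one million metrics per minute
```

Halving the collection interval, or doubling the VM count at a seasonal peak, scales this throughput linearly, which is why the monitoring backend has to be sized for the surge rather than the steady state.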
BSM: So one factor is the number of VMs. Is another factor that there are more metrics coming out of each application because of all the changes that are going on?
AS: Exactly. That is a really good point. Another thing is this inexorable march towards web applications that streamline business processes and therefore need to accommodate surges in the business cycle. Every industry has these cycles, and applications need to be architected to accommodate the surges in demand that accompany them. As the infrastructure scales up and scales down, that is going to mean more changes in your datacenter and that is going to mean that at the peak of those surges you are going to have a lot of VMs throwing off a lot of performance metrics, and your monitoring tool has to be able to accommodate that.
BSM: So is it just a matter of scalability? The ability to handle many more metrics?
AS: There are three elements. One is the peak number of VMs. Second, the number of those VMs varies over time. And third, the stakes are so much higher. During these surge periods, if your application is slow or entirely unavailable, every minute of performance problems means significant lost revenue.
BSM: What are current monitoring technologies missing that prevents them from handling this new environment?
AS: We are going to see more metrics collected more and more frequently. Believe it or not, there are many monitoring tools that think it’s perfectly acceptable to capture metrics every 10 minutes. That might have worked fine in the late 90s when those tools first came out, but today that will not cut it.
We are seeing a need for monitoring tools that monitor very frequently – at intervals of one minute or less. There are two big drivers for this. One is the fact that consumer software is driving enterprise software innovation. Think of Twitter: users expect that when you post something to Twitter, it is immediately available to the world. Users today expect software to work in real time, and that expectation weaves its way into the requirements for monitoring tools and reporting metrics.
The second point goes back to surges in web application workloads due to business cycles. To accommodate those surges, you need to figure out how much you need to dynamically scale up your virtualized environment. Doing that confidently requires that you collect performance metrics very frequently.
Here’s an example: Let’s say you only spin up a new app server VM if you have four datapoints indicating that the app is running slowly, because you don’t want to spin up a new VM based on a single, possibly spurious data point. If you collect metrics once every 15 minutes – a common setting among legacy tools – it will be a whole hour before you spawn a new VM. No business can afford an entire hour of sluggishness in its critical apps. You can imagine the conversation that the business would have with IT.
But let’s say you collect metrics once a minute. In four minutes, you’ll have four datapoints, and can confidently spin up that new VM. IT responds quickly, and the business and customers are happy.
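The decision rule described above – act only when the last four datapoints all show the app running slowly – can be sketched as follows. The class name, threshold, and readings are illustrative assumptions, not Hyperic's actual API:

```python
from collections import deque

SLOW_THRESHOLD_MS = 2000   # response time considered "slow" (assumed value)
REQUIRED_DATAPOINTS = 4    # consecutive slow readings before acting

class ScaleUpDecider:
    """Hypothetical sketch of the rule from the interview: spin up a new
    app server VM only after N consecutive slow datapoints, so a single
    spurious reading never triggers a scale-up."""

    def __init__(self, threshold_ms=SLOW_THRESHOLD_MS, required=REQUIRED_DATAPOINTS):
        self.threshold_ms = threshold_ms
        self.window = deque(maxlen=required)  # keeps only the N latest readings

    def observe(self, response_time_ms):
        """Record one datapoint; return True when a new VM should be spun up."""
        self.window.append(response_time_ms)
        return (len(self.window) == self.window.maxlen
                and all(t > self.threshold_ms for t in self.window))

# One reading per collection interval; the third reading is a spurious fast one.
decider = ScaleUpDecider()
readings = [2500, 2600, 1200, 2700, 2800, 2900, 3000]
for i, r in enumerate(readings, start=1):
    if decider.observe(r):
        print(f"scale up after datapoint {i}")  # prints: scale up after datapoint 7
        break
```

Note how the collection interval directly sets the reaction time: with these four-datapoint semantics, 15-minute collection means up to an hour before the rule can fire, while 1-minute collection reacts within minutes.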
BSM: At VMworld, VMware announced that the introduction of the vFabric cloud application platform will drive IT as a service. Do you see vFabric helping users of the cloud move to a more business-centric view of IT service?
AS: Yes, vFabric lets IT move in that direction. IT can start to scale infrastructure more quickly in response to the needs of the business, and that frees them up to understand more about the cycles of the business. For instance, if IT serves a retail business, they can think about the major shopping days during the holidays, and when they’ll need to ramp up their infrastructure during those shopping days.
About Al Sargent
Al Sargent, Sr. Product Marketing Manager at VMware, handles product marketing for Hyperic, VMware's application monitoring product. He has 15+ years of experience in product management and marketing, business development, sales, and engineering at VMware, Oracle, Mercury and startups such as Wily Technology.