Performance Assurance - A Key to Virtual Desktop Success
January 05, 2012
Srinivas Ramanathan

Very often, when an enterprise starts on the virtual desktop journey, the focus is on the user desktop. This is only natural - after all, it is the desktop that is moving, from a physical system to a virtual machine.

Therefore, once a decision to try out VDI is made, the primary focus is to benchmark the performance of physical desktops, model their usage, predict the virtualized user experience and, based on the results, determine which desktops can be virtualized and which can't. This is what many people refer to as “VDI assessment”.

One of the fundamental changes with VDI is that the desktops no longer have dedicated resources. They share the resources of the physical machine on which they are hosted and they may even be using a common storage subsystem.

While resource sharing provides several benefits, it also introduces new complications. A single malfunctioning desktop can consume so many resources that it impacts the performance of all the other desktops. Whereas in the physical world the impact of a failure or a slowdown was minimal (if a physical desktop failed, it would affect only one user), in the virtual world it is much more severe (one failure can impact hundreds of desktops). Therefore, even in the VDI assessment phase, it is important to take performance considerations into account.
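
This is also why monitoring has to look at cross-desktop resource shares, not just per-desktop health. As a rough illustration (the desktop names, usage figures, and threshold below are hypothetical, not taken from any particular monitoring product), even a simple check like this sketch can flag a desktop consuming a disproportionate share of its host:

```python
def flag_noisy_desktops(samples, share_threshold=0.5):
    """Flag any desktop on a host that is consuming a disproportionate
    share of the CPU used by all desktops combined -- a candidate
    "noisy neighbor" that may be degrading everyone else's performance."""
    total = sum(samples.values()) or 1.0
    return [vm for vm, cpu in samples.items()
            if cpu / total > share_threshold]

# Hypothetical one-minute CPU averages (fraction of total host CPU)
# for four desktops sharing one physical server:
print(flag_noisy_desktops({"vdi-101": 0.55, "vdi-102": 0.08,
                           "vdi-103": 0.06, "vdi-104": 0.07}))
# -> ['vdi-101']
```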

In fact, performance has to be considered at every stage of the VDI lifecycle because it is fundamental to the success or failure of the VDI rollout. The new types of inter-desktop dependencies that exist in VDI have to be accounted for at every stage.

For example, in many of the early VDI deployments, administrators found that when they simply migrated physical desktops to VDI, backup or antivirus software became a problem. These software components were scheduled to run at the same time on all the desktops. When the desktops were physical, this didn't matter, because each desktop had dedicated hardware. With VDI, the synchronized demand for resources from all the desktops severely degraded virtual desktop performance. This went unanticipated because most designs and plans focused on the individual desktops.
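
The usual fix is to stagger such jobs instead of letting every desktop start them at the same minute. The sketch below shows one simple way this might be done (a minimal illustration; the desktop names and the two-hour window are hypothetical): each desktop's name is hashed to a stable offset within a maintenance window, so co-hosted desktops spread their scans out rather than demanding resources simultaneously.

```python
import hashlib
from datetime import datetime, timedelta

def staggered_start(desktop_name: str,
                    window_start: datetime,
                    window_minutes: int = 120) -> datetime:
    """Spread scheduled jobs (e.g., AV scans or backups) across a
    maintenance window by hashing the desktop name to a stable
    per-desktop offset in minutes."""
    digest = hashlib.md5(desktop_name.encode()).hexdigest()
    offset = int(digest, 16) % window_minutes
    return window_start + timedelta(minutes=offset)

# Example: three desktops that would otherwise all scan at 02:00
window = datetime(2012, 1, 5, 2, 0)
for vm in ("vdi-desktop-017", "vdi-desktop-018", "vdi-desktop-019"):
    print(vm, staggered_start(vm, window).strftime("%H:%M"))
```

Because the offset is derived from the name, each desktop keeps the same slot from night to night without any central coordination.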


Understanding the performance requirements of desktops also helps plan the virtual desktop infrastructure more efficiently. For example, desktops known to be heavy CPU consumers can be load-balanced across servers. Likewise, by assigning a good mix of CPU-intensive and memory-intensive user desktops to each physical server, it is possible to make optimal use of the existing hardware resources.
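
As a rough sketch of how assessment data might drive such placement (all host names and demand figures below are hypothetical), a greedy heuristic that minimizes each host's dominant resource naturally tends to pair CPU-heavy desktops with memory-heavy ones:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_used: float = 0.0   # fraction of host CPU committed
    mem_used: float = 0.0   # fraction of host memory committed
    desktops: list = field(default_factory=list)

def place_desktops(desktops, hosts):
    """Greedy placement: put each desktop on the host whose dominant
    resource (CPU or memory) stays lowest after the assignment."""
    # Place the largest desktops first so they don't get stranded.
    for name, cpu, mem in sorted(desktops, key=lambda d: -(d[1] + d[2])):
        best = min(hosts, key=lambda h: max(h.cpu_used + cpu,
                                            h.mem_used + mem))
        best.cpu_used += cpu
        best.mem_used += mem
        best.desktops.append(name)
    return hosts

# Hypothetical per-desktop demand (name, CPU, memory), each as a
# fraction of one host's capacity:
demand = [("cad-user", 0.30, 0.10), ("analyst", 0.05, 0.25),
          ("dev", 0.20, 0.20), ("office-1", 0.05, 0.05),
          ("office-2", 0.05, 0.05)]
for h in place_desktops(demand, [Host("esx-01"), Host("esx-02")]):
    print(h.name, h.desktops, round(h.cpu_used, 2), round(h.mem_used, 2))
```

Running this pairs the CPU-heavy "cad-user" with the memory-heavy "analyst" on one host, leaving both hosts with similar headroom on both resources.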

Lessons Learned

Taking this discussion one step further, it is interesting to draw a parallel with how server virtualization evolved and to see what lessons it holds for VDI.

A lot of the emphasis in the early days was on determining which applications could be virtualized and which could not. Today, server virtualization technology has evolved to a point where more virtual machines are deployed each year than physical machines, and almost every application server (except very old legacy ones) virtualizes fairly well. You no longer hear anyone asking whether a given application server can be virtualized. Virtualization vendors have shifted their focus from the hypervisor itself, having realized that performance and manageability are key to the success of server virtualization deployments.


VDI deployments could proceed more rapidly and more successfully if we learn from how server virtualization evolved. VDI assessment needs to expand its focus beyond just the desktop and look at the entire infrastructure. During VDI rollouts, attention has to be paid to performance management and assurance. To avoid a lot of rework and problem remediation down the line, performance assurance must be considered early in the process and at every stage. This is key to getting VDI deployed faster and at larger scale, with a strong return on investment (ROI).

To learn more about VDI performance, join the on-demand webinar "Top-5 Best Practices for Virtual Desktop Success".

About Srinivas Ramanathan

Srinivas Ramanathan is CEO and founder of eG Innovations. Prior to eG Innovations, he was a Senior Research Scientist at Hewlett-Packard Laboratories in Palo Alto, California. Ramanathan has extensive experience in Internet technologies, performance monitoring and management, and multimedia systems. He has co-authored more than forty technical papers and is a co-inventor on 14 US patents. Ramanathan has a PhD in Computer Science and Engineering from the University of California, San Diego, and a Master's in Computer Science from the Indian Institute of Technology, Chennai, India.
