Application Performance Management and the Cloud
May 27, 2014

Shahid Saleem
Quadnet


The lack of innovation in traditional data centers has given way to developments in the cloud. It offers flexible usage models such as Pay As You Go (PAYG) and multi-tenancy services, e.g. Amazon Web Services (AWS). The downside is that as the cloud's capacity grows (400k registrations, AWS-Quadnet 2014), it is more prone to blackouts and to security and compliance risks than we are led to believe.

The IT environment around the cloud has become more complex. The continued convergence of platforms and technologies has created additional challenges: virtualization of legacy data centers, cloud hosting, Software Defined Networks (SDN), remote access, mobility (BYOD), and additional unstructured Big Data, part of which stems from consumerism and encompasses User Generated Content (UGC) such as social media (voice/video).

The confluence of hardware and software layered over an existing network architecture creates complications in monitoring application and network performance, along with visibility blind spots such as bandwidth growth across the vertical network between VMs and physical servers, and security and compliance protocols for remote and cloud environments.

The interplay of these complexities (for example, packet loss, leaks and packet segmentation in a virtualized environment) can lead to delays of more than a few seconds in software performance synchronization. This can cause brownouts (lag, latency or degradation) and blackouts (crashes), which are detrimental to any commercial environment, such as a retail web UI where a 2 second delay in page loads (slow DNS) is far too much.
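
To make that 2 second budget concrete, here is a minimal sketch (in Python, with example.com as a placeholder URL) that times the DNS lookup separately from the full page fetch, so a slow lookup can be told apart from a slow server:

```python
# A minimal sketch of the kind of end-user timing behind the "2 second" budget.
# The URL and host are placeholders, not from the article.
import socket
import time
import urllib.request

def time_page_load(url: str, host: str) -> None:
    # Time the DNS lookup on its own.
    start = time.perf_counter()
    socket.getaddrinfo(host, 443)
    dns_ms = (time.perf_counter() - start) * 1000

    # Time the full HTTP round trip (connect + request + first bytes of body).
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1024)
    total_ms = (time.perf_counter() - start) * 1000

    print(f"DNS: {dns_ms:.0f} ms, page fetch: {total_ms:.0f} ms")
    if total_ms > 2000:
        print("Over the 2 second budget - users will notice.")

time_page_load("https://example.com/", "example.com")
```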

Many of the issues in a virtualized cloud lie in the hypervisor, which regularly changes IP addresses for VDIs. So the real measurement challenge becomes getting insight into the core virtualized server environment.
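
One way to cope with that IP churn is to key metrics by a stable VM identifier rather than by address. The sketch below assumes a hypothetical store fed by the hypervisor's inventory; the identifiers and metrics are illustrative, not from any real API:

```python
# A minimal sketch: keep each VM's metric history under a stable identifier
# (e.g. the hypervisor's VM UUID) and treat the IP address as a changeable attribute.
from collections import defaultdict

class VMMetricStore:
    def __init__(self):
        # vm_uuid -> list of (timestamp, metric_name, value)
        self.samples = defaultdict(list)
        # current ip -> vm_uuid, refreshed whenever the hypervisor reassigns IPs
        self.ip_to_uuid = {}

    def update_ip(self, vm_uuid: str, ip: str) -> None:
        self.ip_to_uuid[ip] = vm_uuid

    def record(self, ip: str, timestamp: float, metric: str, value: float) -> None:
        # Resolve the (possibly reused) IP to the stable VM identity first.
        vm_uuid = self.ip_to_uuid.get(ip)
        if vm_uuid is None:
            return  # unknown IP: drop or queue for later reconciliation
        self.samples[vm_uuid].append((timestamp, metric, value))

store = VMMetricStore()
store.update_ip("vm-1234-uuid", "10.0.0.7")
store.record("10.0.0.7", 1000.0, "latency_ms", 42.0)
store.update_ip("vm-1234-uuid", "10.0.0.21")  # hypervisor hands out a new IP
store.record("10.0.0.21", 1060.0, "latency_ms", 45.0)
print(len(store.samples["vm-1234-uuid"]))      # 2 - history survives the IP change
```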

When questioned, 79% of CTOs cited software as "very important" (Information Week study, 2010), yet only 32% of APM service providers actually use specialized monitoring tools for the cloud. Without deep insight into PaaS (Platform as a Service) and IaaS (Infrastructure as a Service), there is no visibility into the performance of applications and networks, so tracking degradation, latency and jitter becomes like finding a needle in the proverbial infrastructure haystack.
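
Once that visibility exists, latency and jitter are straightforward to derive from round-trip samples. A minimal sketch, with invented sample values and jitter taken as the mean absolute difference between consecutive samples (in the spirit of RFC 3550):

```python
# Derive latency and jitter from hypothetical round-trip time samples.
from statistics import mean

rtt_ms = [38.0, 41.5, 39.2, 120.4, 40.1, 42.3]  # illustrative values

latency = mean(rtt_ms)
jitter = mean(abs(b - a) for a, b in zip(rtt_ms, rtt_ms[1:]))

print(f"mean latency: {latency:.1f} ms, jitter: {jitter:.1f} ms")
# A sustained rise in either value is the degradation an APM tool should flag.
```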

The debate surrounding cloud visibility and transparency is yet to be resolved, partly because synthetic monitoring, probes and passive agents only provide a mid-tier picture of the cloud. A passive virtual agent can be used to gain deep insight into the virtualized cloud environment. As the cloud market becomes more competitive, suppliers are being forced to disclose IaaS/PaaS performance data. Currently 59% of CTOs run software in the cloud (Information Week, 2011) without any specialized APM solution, so one can only monitor the end user experience or the resources used (CPU, memory etc.) to get some idea of application and network performance over the wire.
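
That fallback, sampling resource usage as a rough proxy for application health, might look like the sketch below. It assumes the third-party psutil package, and the saturation thresholds are illustrative only:

```python
# Sample CPU and memory as a coarse proxy when no specialized APM agent is available.
import time
import psutil

def sample_resources(period_s: float = 5.0, samples: int = 3) -> None:
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=1)   # % CPU over a 1 s window
        mem = psutil.virtual_memory().percent  # % of RAM in use
        print(f"cpu={cpu:.0f}% mem={mem:.0f}%")
        if cpu > 90 or mem > 90:
            print("resource saturation - likely brownout territory")
        time.sleep(period_s)

sample_resources()
```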

The imperative is to ensure that your APM provider can cope with the intertwined complexities of the network, application, infrastructure and architecture. This means deploying a full arsenal of active and passive measuring tools, whether from a pure-play APM vendor or a full MSP (Managed Service Provider) offering end-to-end solutions that can set up, measure and translate outsourcing contracts and SLAs into critical, measurable metrics. Furthermore, new software and technology deployments can be compared to established benchmarks, allowing business decisions, such as application or hardware upgrades, to be made on current and relevant factual information: business transactions, end user experience and network/application efficacy.
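
As a sketch of that benchmark comparison, the snippet below measures the same transaction before and after a deployment and flags any regression beyond a tolerance; the figures and the 10% tolerance are illustrative assumptions, not from the article:

```python
# Compare a deployment's metrics against an established baseline.
from statistics import mean

baseline_ms = [210, 205, 220, 198, 215]  # response times before the change (illustrative)
current_ms  = [240, 255, 238, 260, 249]  # the same transaction after the change

def regression(baseline, current, tolerance=0.10):
    base, cur = mean(baseline), mean(current)
    change = (cur - base) / base
    return change, change > tolerance

change, regressed = regression(baseline_ms, current_ms)
print(f"mean response time changed by {change:+.0%}")
if regressed:
    print("beyond tolerance - investigate before committing to the upgrade")
```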

The convergence, consumerism, challenges and complexities surrounding the cloud have increased. So have the proficiencies of the leading APM providers in dealing with cloud complexity, using agentless data collection mechanisms such as injecting probes into middleware or using routers and switches embedded with NetFlow data analysers. The data is used to compile reports and dashboards on packet loss, latency, jitter and so on. These reports allow comparison of trends through semantic relationship testing, correlation and multivariate analysis, with automated and advanced statistical techniques allowing CTOs and CIOs to make real-time business decisions that provide a competitive advantage.
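
A minimal sketch of that correlation step: given per-interval packet loss and latency pulled from flow records, a Pearson coefficient shows whether the two move together. The figures below are invented for illustration:

```python
# Correlate packet loss with latency across measurement intervals.
from statistics import mean, stdev

loss_pct   = [0.1, 0.3, 0.2, 1.5, 2.0, 0.4]  # packet loss per interval (illustrative)
latency_ms = [40,  42,  41,  95, 110,  44]   # latency over the same intervals

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"loss/latency correlation: {pearson(loss_pct, latency_ms):.2f}")
# A value near 1 suggests the latency spikes and the packet loss share a cause.
```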

Shahid Saleem is Managing Partner & Chief APM Evangelist for Quadnet Ltd.
