Q&A Part One: IBM Talks About APM
January 26, 2012

In APMdigest's exclusive interview, Matthew Ellis, IBM Vice President of Service Availability and Performance, discusses APM, including cost concerns, APM in the cloud, and Gartner's 5 dimensions of APM.

APM: What have been IBM's most important advancements in APM in the last year?

ME: One of IBM’s most significant innovations was the introduction in 2011 of an agentless transaction tracking solution that works in harmony with our existing agent-based solution. This combination, which is unique in the market, gives our customers the best of both worlds – the ease-of-use and time-to-value of agentless tracking combined with the detailed information provided by an agent-based solution in the domains that need it. Agentless and agent-based data combine seamlessly to provide our customers with incremental value and a complete picture of their application transaction topologies.

APM: What is the secret to successful APM in the cloud?

ME: There are three keys to ensuring application performance in cloud-based infrastructures:

- Visibility beyond the firewall

- Robust SLA monitoring of public and private cloud infrastructure

- Tight integration to traditional monitoring

Getting performance data on individual cloud components is crucial to rapid problem isolation and diagnosis, but is often hindered by incompatible (or non-existent) instrumentation or an inability to share data in a meaningful way. Effective SLA monitoring involves watching every transaction that crosses the firewall boundary, and alerting when expectations aren’t being met.
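
To make the SLA-monitoring idea concrete, here is a minimal sketch in Python of the kind of check described above: each observed transaction's response time is compared against a target, and an alert is raised on a breach. The transaction names, targets, and print-based alerting are illustrative assumptions, not part of any IBM product API.

```python
# Hypothetical per-transaction SLA targets, in milliseconds.
SLA_TARGETS_MS = {
    "checkout": 2000,
    "search": 800,
}

def evaluate_transaction(name: str, response_time_ms: float) -> None:
    """Alert when a transaction crossing the firewall misses its SLA target."""
    target = SLA_TARGETS_MS.get(name)
    if target is not None and response_time_ms > target:
        # In a real deployment this would raise an event in the monitoring
        # console; printing stands in for that here.
        print(f"SLA ALERT: {name} took {response_time_ms:.0f} ms "
              f"(target {target} ms)")

evaluate_transaction("checkout", 3150.0)   # triggers an alert
evaluate_transaction("search", 420.0)      # within target, no alert
```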

Lastly, since moving applications to the cloud is a process, and very few IT divisions are 100% cloud-based at this point, it is critical that the APM data you get from the cloud tightly integrates with your existing traditional management solutions.

Ideally, you want an APM solution that is completely infrastructure-agnostic – you have exactly the same visibility, presented in the same way, whether the application is running natively on physical hardware, on an internal virtualized infrastructure, in the cloud, or some hybrid combination of all three.

APM: A recent study from TRAC Research shows cost management as a key APM concern. How does an organization find the right balance between how much money and time it can afford to spend on managing applications and how much visibility it can get?

ME: For each organization, the investment in APM is going to vary.  Of course, it is ultimately an ROI discussion.  For some, any incremental amount of increased visibility increases confidence in their support of critical applications and can be justified in improved availability or optimized performance of critical applications.  For others, there is a clear point of diminishing returns where further investment is no longer warranted.

We recommend a staged approach to APM deployment that allows simple, high-value goals to be achieved rapidly and enables further investment in greater visibility to be added seamlessly and incrementally.

APM: What are the steps you recommend?

ME: Many organizations start by simply monitoring the application response times that customers experience to ensure that application behavior is meeting their expectations.
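
As a rough illustration of this first stage, the sketch below times a user-facing request and flags it when it exceeds an expected response time. The URL and the two-second expectation are placeholders, not values from the interview.

```python
import time
import urllib.request

def probe(url: str, expected_seconds: float = 2.0) -> float:
    """Measure the end-to-end response time of a single request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()                      # include transfer time in the measurement
    elapsed = time.perf_counter() - start
    if elapsed > expected_seconds:
        print(f"Slow response: {url} took {elapsed:.2f}s")
    return elapsed

# probe("https://example.com/")          # uncomment to run against a real endpoint
```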

The next stage is to deploy our agentless transaction tracking solution, which can monitor applications across the infrastructure without investing in deep metric evaluation of all of the application components involved. The information learned with this part of our APM solution can show where applications are spending most of their time and suggest where richer instrumentation would be most beneficial.
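
The following sketch shows, under invented data, what transaction tracking output can reveal: given per-hop timings for tracked transactions, the totals per component make the slowest tier stand out as a candidate for deeper, agent-based instrumentation. The segment list is purely illustrative.

```python
from collections import defaultdict

segments = [                 # (transaction id, component, elapsed ms) -- sample data
    ("tx1", "web", 40), ("tx1", "app", 310), ("tx1", "db", 950),
    ("tx2", "web", 35), ("tx2", "app", 280), ("tx2", "db", 1100),
]

totals = defaultdict(float)
for _tx, component, ms in segments:
    totals[component] += ms

for component, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{component:>4}: {ms:.0f} ms total")   # db dominates -> instrument it first
```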

At this point we recommend installing local agents for deep monitoring of critical components to collect all of the information that can be important to maintaining optimal application behavior. Some customers opt to install deep monitoring on all of the components of their critical applications, and some go even deeper, capturing information sufficient to enable application debugging of production applications.

Different organizations and different applications have different needs. By providing a multi-layered APM solution that progresses through very simple steps from response time monitoring to different levels of transaction tracking and even application diagnostics, IBM is able to provide a solution that can be easily deployed and extended incrementally for even the most demanding organizations.

APM: Why do you feel the Gartner Magic Quadrant on APM named IBM as a Leader?

ME: IBM has a comprehensive vision of APM. IBM’s APM solution offers a combination of proven technology, industry-leading integration, and extensive breadth of coverage. In addition, IBM’s continued focus on ease of use, rapid time-to-value, and role-based pricing and packaging make our portfolio straightforward to adopt in production environments.

Gartner defines APM as having 5 dimensions: End-user Experience Monitoring, Discovery, Transaction Profiling, Deep-Dive Component Monitoring, and Performance Analytics. A unified solution incorporating each of these dimensions is critical to ensuring application performance, because it provides the context for action that modern operations require.

Click here to read Part Two of the Q&A with IBM VP Matthew Ellis, covering predictive analytics.
