APM Application Performance and the Internet of Things - Part 4
Delivering Essential Digital Experience Insights with Analytics
January 17, 2018

Pete Waterhouse
CA Technologies

In our third blog, we discussed how container technologies will become key to successfully delivering many IoT initiatives, and how the resulting complexity trade-off can only be addressed with new monitoring approaches. In this final blog, we'll discuss the applicability of analytics and machine learning – specifically, what constitutes an effective analytics platform and the facets needed for an extensible monitoring model.


Depending on the use case, the volume and variety of connected IoT data, together with the speed at which it needs to be consumed and analyzed, will challenge many organizations. Analytics become key – not only in a data management context, but in ingesting and correlating a deluge of both structured and unstructured information into meaningful insights that support rapid decision making. With IoT initiatives, the focus shifts from back-office 'keep the lights on' operational IT tactics to gaining more business-centric insight. Using the insurance sector as an example, this could include:

■ Risk management – carefully tracking storms and adverse weather conditions and proactively alerting policyholders of imminent risks.

■ Claims processing – in-vehicle sensor and telemetry data can provide warnings about poor driving patterns, with rewards and discounts offered to “safe drivers”.

■ Health and life insurance – the use of wearables and mobile apps allows the promotion of health-related programs and pay-as-you-go models which could be more attractive to younger customers.

Gaining this level of insight from data across many facets of a business necessitates a holistic monitoring approach. This includes addressing the big data challenge of IoT and unifying cross-functional teams (including IT, business and operational engineering professionals) around building a sustainable IoT business. Major considerations include:

1. Assessment and audit

Determine what monitoring tools currently exist within the organization and whether they can be extended to support a variety of IoT use cases. This shouldn't be limited to IT tools. In many manufacturing sectors, engineers have used M2M monitoring and testing tools for years, so time will be needed to understand them properly.

2. Leveraging an extensible platform

The types of data and the speed at which they need to be captured, transported, ingested and analyzed will vary. In retail scenarios, app experience data will need to be captured at the point of customer engagement, with collections of data periodically offloaded to the cloud for business insight – e.g. segmentation and sentiment analysis, usage patterns, etc. Irrespective of your business, seek out platforms with the scale and processing power needed to unify all data sources into a single point of analysis. Elements include:

Data capture and transport mechanisms – such as Apache Kafka, open source software engineered to handle data ingestion on a massive scale and to act as a fast-path gateway to processing engines like Apache Spark and Hadoop (a minimal capture sketch follows this list).

Data-store flexibility – the ability to economically store multiple data types for ingestion, correlation and analysis. This could include Elasticsearch for unstructured data, together with other proprietary and open source stores best suited for metrics, logs and topological data.

Advanced correlation capabilities – the analytics engine must be capable of delivering multi-dimensional insights. In many use cases, customer engagement data will need to be analyzed alongside application performance data, while the ability to correlate from the IoT application down to its supporting infrastructure becomes paramount for pinpointing more complex problems and guiding future IoT designs and improvements.
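
To make the capture-and-transport element concrete, here is a minimal Python sketch that publishes device telemetry to a Kafka topic for downstream engines such as Spark to consume. It assumes the open source kafka-python client, a local broker and an illustrative "iot-telemetry" topic; the device and field names are hypothetical.

# Minimal sketch: publish IoT sensor readings to Kafka for downstream
# processing. Broker address, topic and payload fields are assumptions.
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_reading(device_id, metric, value):
    # Send one telemetry event to the (hypothetical) "iot-telemetry" topic.
    event = {
        "device_id": device_id,
        "metric": metric,
        "value": value,
        "ts": int(time.time() * 1000),
    }
    producer.send("iot-telemetry", value=event)

publish_reading("vehicle-42", "harsh_braking_events", 3)
producer.flush()  # ensure buffered events reach the broker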

3. Adopting more extensive analytics

In addition to capture, transport and storage, analytics platforms should provide the variety of analytical techniques IoT monitoring requires. Methods such as statistical pattern discovery and recognition, complex event processing and topology analysis are beyond the scope of this blog, but monitoring tools should be able to draw on them to support, at a minimum:

Rapid root cause analysis – analyzing across the IoT application stack to help users pinpoint complex conditions and previously unknown root causes.

Behavioral learning and dynamic baselining – learning IoT application behaviors from metric patterns and automatically setting and adjusting thresholds (a simple baselining sketch follows this list).

Predictive modelling – determining future conditions and their impact on IoT application performance.

Contextual insight and layering – root-cause determination at a detailed component level (infrastructure, network…), presented in the context of IoT application performance.
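
To illustrate the baselining idea, the following Python sketch learns a rolling mean and standard deviation for a metric and flags readings that fall outside an adaptive threshold. The window size, sigma multiplier and sample latencies are illustrative assumptions, not any particular vendor's algorithm.

# Minimal sketch: dynamic baselining via a rolling mean/standard deviation.
from collections import deque
from statistics import mean, stdev

class DynamicBaseline:
    def __init__(self, window=60, sigmas=3.0):
        self.samples = deque(maxlen=window)  # sliding window of recent values
        self.sigmas = sigmas

    def update(self, value):
        # Returns True if the new value breaches the learned baseline.
        anomalous = False
        if len(self.samples) >= 10:  # require some history before judging
            mu, sd = mean(self.samples), stdev(self.samples)
            anomalous = abs(value - mu) > self.sigmas * max(sd, 1e-9)
        self.samples.append(value)  # thresholds adjust as behavior shifts
        return anomalous

baseline = DynamicBaseline()
for latency_ms in [120, 115, 130, 118, 122, 119, 125, 121, 117, 123, 480]:
    if baseline.update(latency_ms):
        print(f"latency {latency_ms} ms is outside the learned baseline")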

4. Increasing resilience

Certain IoT systems and applications carry too much societal and operational impact to be allowed to become unreliable. Testing and monitoring at a component level is critical, but the complex nature of these systems and their unique or expensive components often makes preproduction testing highly problematic. In such cases, examine the entire end-to-end IoT architecture, instrumenting and monitoring each critical IoT element for availability and performance while surfacing blind spots through predictive analysis and simulations. IoT testing may be necessary in a production context, so explore monitoring tools that strengthen techniques such as A/B testing and blue-green deployments.
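
As a simple illustration of how monitoring data can gate a blue-green cut-over, the sketch below compares error rates between the live ("blue") deployment and a candidate ("green") deployment receiving a slice of traffic. The request counts and the one percent regression budget are hypothetical.

# Minimal sketch: allow a blue-green cut-over only if the candidate's error
# rate stays within a small budget of the live deployment's. Values are
# illustrative.
def error_rate(errors, requests):
    return errors / requests if requests else 0.0

def safe_to_cut_over(blue, green, budget=0.01):
    # Compare green's error rate against blue's plus an allowed regression.
    return error_rate(green["errors"], green["requests"]) <= (
        error_rate(blue["errors"], blue["requests"]) + budget
    )

blue_stats = {"requests": 100_000, "errors": 150}  # live ("blue") deployment
green_stats = {"requests": 5_000, "errors": 12}    # candidate ("green") slice
print("cut over" if safe_to_cut_over(blue_stats, green_stats) else "hold")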

5. Visualizing to communicate

Having captured and analyzed IoT data, many stakeholders will benefit from the rapid presentation of contextualized information. Every IoT program will be different, so give preference to dashboarding solutions that allow data stores to be quickly navigated, searched and interpreted. Here again, extensibility is key, with modern tools such as Kibana flexing to deliver both business and operational insights. For business executives and analysts this could be a dashboard illustrating the impact of a new mobile app release on customer conversion goals, while DevOps teams could quickly identify which code changes correlate with meeting that outcome.
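
For teams that want to pull the same insight programmatically rather than through a dashboard, a query along these lines aggregates conversions by app release. It assumes the elasticsearch-py 8.x client, an "app-experience" index and hypothetical field names (app_version, plus converted as a 0/1 flag).

# Minimal sketch: aggregate conversions per app release from an assumed
# Elasticsearch index of app-experience events (elasticsearch-py 8.x style).
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="app-experience",
    size=0,  # we only want the aggregation, not individual documents
    aggs={
        "by_release": {
            "terms": {"field": "app_version"},
            "aggs": {"conversions": {"sum": {"field": "converted"}}},
        }
    },
)

for bucket in resp["aggregations"]["by_release"]["buckets"]:
    print(bucket["key"], int(bucket["conversions"]["value"]))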

6. Becoming standards savvy

There's a baffling array of protocols and standards across the IoT stack. Some, like Bluetooth, Wi-Fi and 4G for communications and transport, will be well understood, while others, like ZigBee and 6LoWPAN, perhaps less so. While it's easy to get lost in this soup, consider assembling cross-functional teams to determine the business context of each IoT initiative, and then assess the viability of emerging standards and how they can integrate with analytics systems.

In this 4-part series on IoT, we've outlined many modern monitoring approaches needed to meet the exacting service quality requirements of complex IoT applications. In a development context, we've discussed the importance of monitoring new frameworks and container platforms. We've also described the dual-role analytics plays in resolving critical IoT issues and correlating end-to-end data to gain the insights businesses need to drive better outcomes.

IoT technologies are rapidly evolving, so it would be naïve to claim all the answers. What's clear, however, is that as business is conducted and customers engage through connected devices and systems, monitoring must extend beyond its current operational purview to ensuring a flawless digital experience – as IoT systems are designed, developed and deployed. This requires solutions that deliver an integrated set of app experience analytics, application performance management and infrastructure management capabilities across the entire IoT fabric, built upon and backed by a powerful, open and extensible analytics platform.

Pete Waterhouse is Senior Strategist at CA Technologies