Software AG Releases TrendMiner 2021.R2

Software AG’s TrendMiner has released TrendMiner 2021.R2.

This new release extends the reach of the previously released notebook integration, allowing analytics experts to make their data model outputs available to the rest of the organization and giving operational experts better insights. The new multivariate Anomaly Detection Model can be trained on historical data representing optimal process conditions and then detect anomalies in new incoming data.

TrendMiner 2021.R2 also introduces self-service integration via webMethods.io. This allows contextual process information from other business applications to be taken into account, and workflows in external systems to be triggered by the new Anomaly Detection Model.

TrendMiner enables operational experts in process manufacturing industries to analyze, monitor and predict operational performance using sensor-generated time-series data. The goal of TrendMiner has always been to empower engineers with analytics for improving operational excellence, without the need to rely on data scientists. It brings data science to the engineers.

TrendMiner 2021.R2 extends the notebook capabilities of the previous release: custom-created data models can now be operationalized by deploying them into an embedded scoring/inference engine through "Machine Learning Model" tags. These tags are available to all TrendMiner users as if they originated from an enterprise historian or any other time-series data source. All existing TrendMiner capabilities can be applied to them, such as visualizing recent and historical data, searching for patterns or threshold values, and monitoring based on the machine learning model outputs.
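The idea of a model-backed tag that reads exactly like an ordinary time-series tag can be sketched as follows. This is a hypothetical illustration, not TrendMiner's API: a `ModelTag` produces its values by scoring the values of its input tags with a model function, while exposing the same read interface as a plain historian tag, so any consumer that works with tags works with it unchanged.

```python
from typing import Callable, Dict, List


class SeriesTag:
    """A plain time-series tag, e.g. originating from a historian (illustrative)."""

    def __init__(self, values: List[float]):
        self._values = values

    def read(self) -> List[float]:
        return self._values


class ModelTag(SeriesTag):
    """A 'Machine Learning Model' style tag (sketch of the concept only).

    Its values are computed by applying a model function to the aligned
    values of other tags, but consumers read it like any other tag.
    """

    def __init__(self, inputs: Dict[str, SeriesTag],
                 model: Callable[[Dict[str, float]], float]):
        self._inputs = inputs
        self._model = model

    def read(self) -> List[float]:
        names = list(self._inputs)
        columns = [self._inputs[n].read() for n in names]
        # Score each aligned sample of the input tags with the model.
        return [self._model(dict(zip(names, row))) for row in zip(*columns)]
```

For example, a `ModelTag` over temperature and pressure tags with a simple scoring function reads back a derived series, just as a search or monitor would consume any other tag.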

Nick Van Damme, Director of Products at TrendMiner, commented: “Classical data science depends on bringing process / asset know-how to data science (expert) teams and using their scripting, hacking and parsing skills to come to increased insights, in their expert silo. With the new TrendMiner capabilities we aim at breaking apart the traditional silo-approach and really bring the data scientist in the loop. While crafting the prepared data into something useful for themselves and others, they can work in close collaboration with all other TrendMiner users to contextualize the raw data with operational knowledge. Afterwards they have an easy out-of-the-box way to operationalize their findings within the organization, empowering others to get better and easier insights.”

The TrendMiner 2021.R2 release also offers a proprietary model for multivariate anomaly detection via the notebook and "Machine Learning Model" tag functionality described above. The TrendMiner Anomaly Detection Model is trained on a trend view containing normal operating conditions of the process. Once it has learned the desired process conditions, the model can detect anomalies in new incoming data: it either classifies a new datapoint as an outlier or not, based on a given threshold (anomaly class), or returns an anomaly score. The higher the anomaly score, the more likely it is that the datapoint is an outlier.
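As an illustration of the idea (not TrendMiner's proprietary model), a minimal multivariate anomaly detector can be built on the Mahalanobis distance: fit it only on data from normal operating conditions, then score new points by their distance from that learned distribution. The threshold value below is an assumption for the sketch.

```python
import numpy as np


class AnomalyModel:
    """Toy multivariate anomaly detector based on Mahalanobis distance.

    Trained only on normal operating data; points far from that
    distribution receive a high anomaly score.
    """

    def fit(self, X):
        # X: (n_samples, n_features) array of normal operating data.
        self.mean_ = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        # Small ridge term keeps the covariance matrix invertible.
        self.cov_inv_ = np.linalg.inv(cov + 1e-6 * np.eye(X.shape[1]))
        return self

    def score(self, x):
        # Higher score means the point is more likely an outlier.
        d = x - self.mean_
        return float(np.sqrt(d @ self.cov_inv_ @ d))

    def is_anomaly(self, x, threshold=3.0):
        # Anomaly class: outlier or not, given a score threshold.
        return self.score(x) > threshold
```

Training on, say, 500 samples of two well-behaved process variables, a point near the normal operating region scores low, while a point far outside it is flagged as an anomaly.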

Factories today are capturing and storing an enormous amount of data directly or indirectly related to the production process. All this data typically ends up in best of breed business applications serving specific operational purposes. All this contextual information residing in various business applications can give new insights for improving operational performance, if the operational experts can actually access that data. With the introduction of the integration add-on powered by webMethods.io within the TrendMiner platform, engineers can now create integrations to crucial business applications themselves. On top of that, the self-service integration via webMethods.io allows workflows to be created across the business applications on premises and in cloud solutions. This can for example be used to notify your colleagues with a MS Teams or Slack message and to simultaneously add a maintenance work request in SAP, when a TrendMiner monitor fires an alert.
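The alert-to-workflow pattern described above can be sketched as follows. This is a minimal illustration in Python, assuming a hypothetical alert structure; the payload shapes are illustrative, not webMethods.io's, Slack's, or SAP's actual APIs, and a real workflow would post them to the respective endpoints over HTTP.

```python
import json


def build_workflow_payloads(alert):
    """Build illustrative notification payloads for a monitor alert.

    The alert dict fields ('monitor', 'timestamp', 'score', 'asset') and
    the target payload formats are assumptions made for this sketch.
    """
    # Chat notification for colleagues (Slack/Teams style message body).
    chat_msg = {
        "text": (f"TrendMiner monitor '{alert['monitor']}' fired at "
                 f"{alert['timestamp']} (anomaly score {alert['score']:.2f})")
    }
    # Maintenance work request for the asset management system.
    work_request = {
        "equipment": alert["asset"],
        "description": f"Investigate anomaly on {alert['asset']}",
        "priority": "high" if alert["score"] > 0.9 else "medium",
    }
    return json.dumps(chat_msg), json.dumps(work_request)
```

One alert thus fans out into two payloads, which a workflow engine would deliver to the chat tool and the maintenance system in parallel.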

