
Using Monitoring to Bridge the Gap Between Process and Technology

Ivar Sagemo

For several decades now, IT infrastructure has been the fundamental engine of business processes. Going from the abstract idea of a business process to a smoothly running technical implementation of that process ought to be straightforward, right? But as we know, such is not the case. Technology has improved by leaps and bounds, but our ability to leverage it to our best business effect isn't nearly as well optimized. Too often, IT has become its own little world, all but divorced from the business side and unable to take into account business goals and strategies in the way services are managed.

Consider, for instance, solutions such as Microsoft BizTalk and the way BizTalk-driven processes are typically implemented:

• Company requirements translate into logistics rules and requests. For retail, for example, packages need to be shipped on time and warehouse stock replenished adequately, ideally at the lowest cost and with the highest reliability.

• The flow of information is then determined, and logical rules are assigned to make that happen. When a company specifies where shipments should go and which are rush orders, updates must be pushed to a website as the shipments move toward their destinations. Business logistics such as these take on a technical slant when processes are implemented, as sketched just below.
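
As a rough illustration of that translation, consider the sketch below. The names and structures are hypothetical stand-ins, not real BizTalk artifacts; BizTalk would express this through orchestrations and ports, but the shape of the logic is the same.

from dataclasses import dataclass

@dataclass
class Shipment:
    order_id: str
    destination: str
    is_rush: bool

def pick_lane(shipment: Shipment) -> str:
    # The business rule "rush orders ship first" becomes a routing decision.
    return "express" if shipment.is_rush else "standard"

def publish_tracking_update(order_id: str, status: str) -> None:
    # Stand-in for the "updates must be provided to a website" requirement;
    # in practice this would post to the customer-facing site.
    print(f"order {order_id}: {status}")

def on_shipment_moved(shipment: Shipment, new_location: str) -> None:
    # As the shipment moves toward its destination, the website is updated.
    publish_tracking_update(shipment.order_id, f"arrived at {new_location}")

s = Shipment("A-1001", "Oslo", is_rush=True)
assert pick_lane(s) == "express"
on_shipment_moved(s, "regional hub")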

Instead of focusing on whether information is actually getting from one location to another in a timely and accurate manner, monitoring services generally miss the mark, revolving around issues such as the CPU utilization of underlying systems or the available storage of associated databases.

Also problematic is that thresholds are typically determined arbitrarily and don't always correlate to the actual success or failure of the business processes they were derived from. This makes problems harder and slower to solve when they occur.
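
As a minimal sketch of the alternative, assume we can pull per-message end-to-end transit times from the integration platform (the sample data below is invented for illustration). The threshold is derived from the process's own history rather than picked arbitrarily, and the health check asks the question that actually matters: is information still arriving on time?

import statistics

def derive_threshold(historical_transit_secs: list[float]) -> float:
    # Base the threshold on the process's own history instead of an
    # arbitrary number: mean plus three standard deviations.
    mu = statistics.mean(historical_transit_secs)
    sigma = statistics.stdev(historical_transit_secs)
    return mu + 3 * sigma

def flow_is_healthy(recent_transit_secs: list[float], threshold: float) -> bool:
    # The question that matters: is information still getting from one
    # location to another in a timely manner?
    return all(t <= threshold for t in recent_transit_secs)

# A week of normal transit times (in seconds) yields the threshold;
# today's messages are then checked against it.
history = [4.1, 3.8, 5.0, 4.4, 4.9, 3.7, 4.2]
threshold = derive_threshold(history)
print(flow_is_healthy([4.5, 4.0], threshold))  # True: the process is on track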

It's also an essentially reactive approach: "Wait until something goes wrong and then fix it." Much better would be: "Anticipate what is likely to go wrong, and ensure that it doesn't."

And what happens when the process changes?

Imagine, for instance, that a new business system is brought in-house, such as a new sales tool involving a whole new data source. How easy or difficult is it for a BizTalk monitoring system to adapt in parallel? Usually, a series of manual modifications is needed, possibly by outside consultants specializing in BizTalk. This is slow, cumbersome, and operationally costly. It also introduces the possibility of inadvertent mistakes that could easily compromise monitoring when the whole point was to improve it.

Building a Better Mousetrap

Instead of that all-too-familiar paradigm, let's imagine something quite different.

• Smart discovery. What if BizTalk monitoring systems, once deployed, could automatically discover the business processes behind the IT implementation (how information flows, critical dependencies, normal performance at different times and under different conditions) and thus establish the accurate thresholds needed to ensure effective performance? One way this could work is sketched after this list.

• Intuitive design. What if, instead of having to call in a consultant when things go belly-up, IT people could look at a topological map and understand the issue themselves? What if they could drill down into that map, getting specific technical insight needed to fix the problem quickly?

• Out-of-box best practices. What if your BizTalk monitoring system already knew the kinds of monitoring problems other companies have faced, and the best ways to avoid those problems? What if your organization could benefit from that kind of insight without having to call in a consultant?
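
To make smart discovery slightly more concrete, here is a minimal sketch under one simplifying assumption: that the monitor can observe each flow's message rate over time. The class and its numbers are hypothetical. It learns a separate baseline for each hour of the day, so "normal" reflects different times and conditions rather than a single hand-set threshold, and it adapts as new observations arrive, which is also what lets it keep up when the process itself changes.

from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

class FlowBaseline:
    """Learns what 'normal' looks like for one message flow, hour by hour."""

    def __init__(self) -> None:
        self._samples: dict[int, list[float]] = defaultdict(list)

    def observe(self, when: datetime, messages_per_min: float) -> None:
        # Bucket observations by hour of day so the baseline captures
        # normal performance at different times and under different conditions.
        self._samples[when.hour].append(messages_per_min)

    def is_anomalous(self, when: datetime, messages_per_min: float) -> bool:
        samples = self._samples[when.hour]
        if len(samples) < 2:
            return False  # too little history yet to judge
        mu, sigma = mean(samples), stdev(samples)
        # Flag rates well outside the learned band for this hour.
        return abs(messages_per_min - mu) > 3 * max(sigma, 1e-9)

baseline = FlowBaseline()
for rate in (120.0, 118.0, 125.0, 122.0):
    baseline.observe(datetime(2024, 6, 3, 9), rate)   # typical 9 a.m. traffic
print(baseline.is_anomalous(datetime(2024, 6, 4, 9), 12.0))  # True: flow stalled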

While the goal of every organization is to take those great ideas developed at the process stage and carry them through to the final IT implementation, we know that solutions change after initial deployment: new processes, new partners, and changing business demands mean processes shift as the journey toward final implementation moves along. We'd like to think that monitoring could help in that journey.

Ivar Sagemo is CEO of AIMS Innovation.
