
Multi-Tenancy in an APM Context

Ivar Sagemo

No topic in IT today is hotter than cloud computing. And I find it interesting how the rapid adoption of cloud platforms has led to a reinvention of how many IT applications and services work at a fairly deep level — certainly including those in my own area of APM.

Multi-tenancy, for instance, is a concept that has really come into vogue with the advent of public cloud platforms. A public cloud is by definition a shared architecture. This means an indefinite number of users (tenants) may be utilizing it at any given time. For all of those customers, the cloud provider wants to offer key services such as authentication, resource tracking, information management, policy creation, etc. The only question is how to accomplish this most efficiently.

The most obvious idea would be to create a new instance of each service for each client. In this scenario, if the cloud has a thousand current clients, it also has a thousand iterations of a given service running simultaneously. Such an approach would be technically viable, but operationally wasteful — enormously complex, and therefore relatively slow and awkward to manage.

Multi-tenancy takes a different approach altogether. Instead of deploying new instances on a one-to-one basis with customers, the cloud host only needs to deploy one instance of a core application in total. That one instance, thanks to its sophisticated design, can then scale to support as many cloud customers as are necessary, logically sandboxing their data so as to keep them all completely separate from each other (even though the cloud architecture is in fact shared).
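To make the idea concrete, here is a minimal sketch of that logical sandboxing, assuming a shared relational store in which every row carries a tenant identifier and the platform, not the caller, applies the tenant filter. The table name, tenant names, and the scoped_query helper are illustrative only and are not drawn from any particular APM product.

    import sqlite3

    # One shared schema serves every tenant; each row is tagged with its tenant.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE apm_metrics (
            tenant_id  TEXT NOT NULL,
            service    TEXT NOT NULL,
            latency_ms REAL NOT NULL
        )
    """)
    conn.executemany(
        "INSERT INTO apm_metrics VALUES (?, ?, ?)",
        [
            ("cruise-line-a", "booking", 120.0),
            ("cruise-line-a", "billing", 95.0),
            ("cruise-line-b", "booking", 310.0),
        ],
    )

    def scoped_query(conn, tenant_id, service):
        """Return metrics for one service, visible only to the calling tenant."""
        # The tenant filter is applied by the platform layer, never by the caller,
        # so one tenant can never read another tenant's rows.
        rows = conn.execute(
            "SELECT service, latency_ms FROM apm_metrics "
            "WHERE tenant_id = ? AND service = ?",
            (tenant_id, service),
        )
        return rows.fetchall()

    print(scoped_query(conn, "cruise-line-a", "booking"))  # [('booking', 120.0)]
    print(scoped_query(conn, "cruise-line-b", "booking"))  # [('booking', 310.0)]

The essential design choice is that tenant scoping lives in the platform itself, so a single deployed instance can serve any number of tenants while keeping their data invisible to one another.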

From the perspective of the cloud host, this approach is substantially superior. It is operationally much simpler to install, integrate, and manage one instance instead of many. And from the perspective of the cloud customer, the benefits are just as impressive. A customer who is interested in APM (Application Performance Management) capabilities, for example, can get them without ever having to worry about buying, deploying, or managing an actual APM solution. All that's required is contracting with a cloud provider who offers them.

Imagine an organization that manages a fleet of cruise ships. Each ship offers its own logical services, based on its own information; for each ship, separate APM considerations apply. Such an organization could solve that problem by purchasing, rolling out, and continually managing an APM solution in-house, but IT infrastructure and IT service management isn't this organization's core strength; cruise ship management is. And APM on a moving target is tricky.

Now imagine that this organization discovers APM capabilities can be obtained from a trusted cloud provider, and that those capabilities will scale naturally to any number of ships. This may well prove the more attractive option of the two.

Setup time per server is roughly the five minutes it takes to install an agent. And because the cloud provider bills on a utility basis, the organization is only charged in proportion to actual service usage. All the benefits of modern APM are thus achieved, yet the costs and complexity involved are relatively low.
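As a back-of-the-envelope illustration of that utility-style billing, the sketch below totals agent-hours across a hypothetical fleet and multiplies by an assumed unit rate; the rate and usage figures are made up for the example and do not reflect any provider's actual pricing.

    # Illustrative utility-style billing: charge in proportion to actual usage.
    RATE_PER_AGENT_HOUR = 0.05  # assumed unit price, in dollars

    usage = {
        "ship-aurora":   {"agents": 4, "monitored_hours": 720},
        "ship-borealis": {"agents": 2, "monitored_hours": 310},  # joined mid-month
    }

    def monthly_bill(usage, rate):
        """Sum agent-hours across the fleet and multiply by the unit rate."""
        agent_hours = sum(u["agents"] * u["monitored_hours"] for u in usage.values())
        return agent_hours * rate

    print(f"${monthly_bill(usage, RATE_PER_AGENT_HOUR):.2f}")  # $175.00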

Naturally, this does put a bit more burden on the APM solution developer! Re-coding an application to support multi-tenancy in cloud architecture is not a trivial feat of software engineering.

But for developers willing to put in the time, the benefits generated in the marketplace are clearly worth the effort:

• A broader range of service and software delivery models, both traditional and SaaS, from which customers can easily choose to meet their needs

• A more direct focus on the core mission and less worry about IT infrastructure and overhead

• And for cloud hosts, simplified management, reduced costs and complexity, and a faster response to changing business conditions

For developers and organizations alike, it's a win-win situation.

Ivar Sagemo is CEO of AIMS Innovation.
