Multi-Tenancy in an APM Context

Ivar Sagemo

No topic in IT today is hotter than cloud computing. And I find it interesting how the rapid adoption of cloud platforms has led to a reinvention of how many IT applications and services work at a fairly deep level — certainly including those in my own area of APM.

Multi-tenancy, for instance, is a concept that has really come into vogue with the advent of public cloud platforms. A public cloud is by definition a shared architecture. This means an indefinite number of users (tenants) may be utilizing it at any given time. For all of those customers, the cloud provider wants to offer key services such as authentication, resource tracking, information management, policy creation, and so on. The only question is how best to accomplish this.

The most obvious idea would be to create a new instance of each service for each client. In this scenario, if the cloud has a thousand current clients, it also has a thousand instances of a given service running simultaneously. Such an approach would be technically viable, but operationally wasteful: enormously complex, and therefore relatively slow and awkward to manage.

Multi-tenancy takes a different approach altogether. Instead of deploying new instances on a one-to-one basis with customers, the cloud host only needs to deploy one instance of a core application in total. That one instance, thanks to its sophisticated design, can then scale to support as many cloud customers as are necessary, logically sandboxing their data so as to keep them all completely separate from each other (even though the cloud architecture is in fact shared).
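The logical sandboxing described above can be sketched in a few lines. This is a minimal illustration, not any real APM product's design: the `MetricsStore` class, the tenant IDs, and the `record`/`query` methods are all hypothetical names invented for the example. The point is that a single shared instance partitions all data by tenant, so every read is scoped to the caller's own tenant.

```python
from collections import defaultdict

class MetricsStore:
    """One shared instance serves every tenant; data is partitioned
    by tenant ID so no tenant can read another tenant's metrics.
    (Hypothetical sketch -- not a real APM product's API.)"""

    def __init__(self):
        # tenant_id -> list of (metric_name, value) samples
        self._data = defaultdict(list)

    def record(self, tenant_id, metric, value):
        self._data[tenant_id].append((metric, value))

    def query(self, tenant_id, metric):
        # Every read is keyed by the caller's tenant_id: this is the
        # "logical sandbox" that keeps tenants separate even though
        # the underlying store is shared.
        return [v for (m, v) in self._data[tenant_id] if m == metric]

store = MetricsStore()                        # a single shared instance
store.record("ship-alpha", "cpu_pct", 41.0)   # tenant A's data
store.record("ship-beta", "cpu_pct", 88.5)    # tenant B's data

# Each tenant sees only its own samples:
print(store.query("ship-alpha", "cpu_pct"))   # [41.0]
print(store.query("ship-beta", "cpu_pct"))    # [88.5]
```

In a production system the partition key would typically live in the storage layer (for example, a tenant column enforced on every query), but the principle is the same: one deployment, many logically isolated tenants.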

From the perspective of the cloud host, this approach is substantially superior. It is operationally much simpler to install, integrate, and manage one instance instead of many. And from the perspective of the cloud customer, the benefits are just as impressive. A customer who is interested in APM (Application Performance Management) capabilities, for example, can get them without ever having to worry about buying, deploying, or managing an actual APM solution. All that's required is contracting with a cloud provider who offers them.

Imagine an organization that manages a fleet of cruise ships. Each ship offers its own logical services, based on its own information; for each ship, separate APM considerations apply (APM on a moving target is tricky, after all). Such an organization might solve that problem by purchasing, rolling out, and continually managing an APM solution in-house, but IT infrastructure and IT service management are not this organization's core strength; cruise ship management is.

Now imagine that this organization discovers APM capabilities can be obtained from a trusted cloud provider, and that those capabilities will scale naturally to any number of ships. This may well prove the more attractive option of the two.

Setup time per server: roughly five minutes to install an agent. And because the cloud provider bills on a utility basis, the organization will only be charged in proportion to actual service usage. All the benefits of modern APM are thus achieved, yet the costs and complexity involved are relatively low.
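The utility-billing arithmetic is simple to illustrate. The rate below is an invented figure purely for illustration, not any real provider's pricing; the point is that the charge scales linearly with actual monitored usage rather than with a fixed license count.

```python
# Hypothetical utility-style billing: charge in proportion to actual
# usage, measured here in monitored server-hours.
RATE_PER_SERVER_HOUR = 0.05  # invented illustrative rate, $/server-hour

def monthly_charge(server_hours):
    """Return the charge for the given usage, rounded to cents."""
    return round(server_hours * RATE_PER_SERVER_HOUR, 2)

# Ten servers monitored around the clock for a 30-day month:
print(monthly_charge(10 * 24 * 30))  # 360.0

# Decommission five ships mid-season and the bill shrinks with usage:
print(monthly_charge(5 * 24 * 30))   # 180.0
```

Contrast this with a traditional perpetual license, where the cost is fixed regardless of how much of the deployment is actually in use.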

Naturally, this does put a bit more burden on the APM solution developer! Re-coding an application to support multi-tenancy in cloud architecture is not a trivial feat of software engineering.

But for developers willing to put in the time, the benefits generated in the marketplace are clearly worth the effort:

• A broader range of service/software models, including both traditional and SaaS models, from which customers can easily choose to meet their needs

• A more direct focus on the core mission and less worry about IT infrastructure and overhead

• And for cloud hosts, simplified management, reduced costs and complexity, and a faster response to changing business conditions

For developers and organizations alike, it's a win-win situation.

Ivar Sagemo is CEO of AIMS Innovation.
