No topic in IT today is hotter than cloud computing. And I find it interesting how the rapid adoption of cloud platforms has led to a reinvention of how many IT applications and services work at a fairly deep level — certainly including those in my own area of APM.
Multi-tenancy, for instance, is a concept that has really come into vogue with the advent of public cloud platforms. A public cloud is by definition a shared architecture. This means an indefinite number of users (tenants) may be utilizing it at any given time. For all of those customers, the cloud provider wants to offer key services such as authentication, resource tracking, information management, policy creation, etc. It's only a question of what the most efficient way to accomplish this might be.
The most obvious idea would be to create a new instance of each service for each client. In this scenario, if the cloud has a thousand current clients, it also has a thousand iterations of a given service running simultaneously. Such an approach would be technically viable, but operationally wasteful — enormously complex, and therefore relatively slow and awkward to manage.
Multi-tenancy takes a different approach altogether. Instead of deploying new instances on a one-to-one basis with customers, the cloud host only needs to deploy one instance of a core application in total. That one instance, thanks to its sophisticated design, can then scale to support as many cloud customers as are necessary, logically sandboxing their data so as to keep them all completely separate from each other (even though the cloud architecture is in fact shared).
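To make the idea concrete, here is a minimal sketch of that logical sandboxing in a single shared instance. The class and method names are hypothetical illustrations, not drawn from any particular APM product; the point is simply that every read and write is scoped by a tenant identifier, so one running service can keep many customers' data completely separate.

```python
# A minimal sketch of logical tenant sandboxing, assuming a simple in-memory
# store; TenantScopedStore and its methods are hypothetical illustrations,
# not taken from any specific APM product.
from collections import defaultdict


class TenantScopedStore:
    """One shared service instance whose data access is always keyed by tenant."""

    def __init__(self):
        # All tenants share the same process, but each gets its own partition.
        self._data = defaultdict(dict)

    def put(self, tenant_id: str, key: str, value: object) -> None:
        self._data[tenant_id][key] = value

    def get(self, tenant_id: str, key: str) -> object:
        # A tenant can only ever read from its own partition.
        return self._data[tenant_id].get(key)


if __name__ == "__main__":
    store = TenantScopedStore()  # a single shared instance serving every tenant
    store.put("cruise-line-a", "avg_response_ms", 120)
    store.put("cruise-line-b", "avg_response_ms", 340)
    # Each tenant sees only its own metrics, despite the shared architecture.
    print(store.get("cruise-line-a", "avg_response_ms"))  # 120
    print(store.get("cruise-line-b", "avg_response_ms"))  # 340
```

In a production system the same scoping discipline would typically be enforced at the database layer (for example, a tenant column on every table or per-tenant schemas), but the principle is identical: one instance, many logically isolated partitions.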
From the perspective of the cloud host, this approach is substantially superior. It is operationally much simpler to install, integrate, and manage one instance instead of many. And from the perspective of the cloud customer, the benefits are just as impressive. A customer who is interested in APM (Application Performance Management) capabilities, for example, can get them without ever having to worry about buying, deploying, or managing an actual APM solution. All that's required is contracting with a cloud provider who offers them.
Imagine an organization that manages a fleet of cruise ships. Each ship offers its own logical services, based on its own information; for each ship, separate APM considerations apply. Such an organization could solve that problem by purchasing, rolling out, and continually managing an APM solution in-house, but IT infrastructure and IT service management aren't this organization's core strengths; cruise ship management is. And APM on a moving target is tricky.
Now imagine that this organization discovers APM capabilities can be obtained from a trusted cloud provider, and that those capabilities will scale naturally to any number of ships. This may well prove the more attractive option of the two.
Setup is quick: installing an agent takes roughly five minutes per server. And because the cloud provider bills on a utility basis, the organization is charged only in proportion to its actual service usage. All the benefits of modern APM are thus achieved, while the costs and complexity involved stay relatively low.
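As a rough illustration of that utility billing model, the sketch below meters usage per tenant and charges in proportion to it. The usage unit (monitored host-hours), the rate, and the record format are all illustrative assumptions, not a description of any particular provider's pricing.

```python
# A hedged sketch of utility-style billing, assuming the provider meters a
# single usage unit (monitored host-hours); the rate and record format are
# illustrative assumptions, not any provider's actual pricing model.
from dataclasses import dataclass


@dataclass
class UsageRecord:
    tenant_id: str
    host_hours: float  # hours of agent-reported monitoring for one tenant


def monthly_charge(records: list[UsageRecord],
                   rate_per_host_hour: float) -> dict[str, float]:
    """Charge each tenant in proportion to its actual metered usage."""
    totals: dict[str, float] = {}
    for record in records:
        totals[record.tenant_id] = totals.get(record.tenant_id, 0.0) + record.host_hours
    return {tenant: round(hours * rate_per_host_hour, 2)
            for tenant, hours in totals.items()}


if __name__ == "__main__":
    usage = [
        UsageRecord("cruise-line-a", 720.0),   # one ship monitored all month
        UsageRecord("cruise-line-b", 2160.0),  # three ships monitored all month
    ]
    print(monthly_charge(usage, rate_per_host_hour=0.05))
    # {'cruise-line-a': 36.0, 'cruise-line-b': 108.0}
```

The customer that monitors more servers simply pays more; nobody pays for idle, pre-provisioned capacity.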
Naturally, this does put a bit more burden on the APM solution developer! Re-coding an application to support multi-tenancy in a cloud architecture is not a trivial feat of software engineering.
But for developers willing to put in the time, the benefits generated in the marketplace are clearly worth the effort:
• A broader range of service/software models, including both traditional and SaaS models, from which customers can easily choose to meet their needs
• A more direct focus on the core mission and less worry about IT infrastructure and overhead
• And for cloud hosts, simplified management, reduced costs and complexity, and a faster response to changing business conditions
For developers and organizations alike, it's a win-win situation.
Ivar Sagemo is CEO of AIMS Innovation.