Over the years, IT systems management has evolved dramatically. What started as monitoring just the network infrastructure via ping/telnet/SNMP has transformed into monitoring and managing multi-tier, geographically distributed IT infrastructures and applications deployed in physical and virtual environments as well as private, public, and hybrid clouds.
Around a decade ago, Application Performance Management (APM) emerged as an independent category within systems management. APM enables IT teams to measure, and ultimately ensure, the availability, response time, and integrity of critical application services used by the business, i.e., the consumers of the IT application services. A couple of years later, another IT management technology called Business Service Management (BSM) emerged to align IT with business objectives.
Now, interest in BSM is resurging as companies strive to make their IT departments more responsive to their business needs. Simultaneously, APM has grown stronger in the last few years, encompassing a broader scope of IT management. This two-article series looks at the evolution of BSM and APM, the key drivers for both technologies, and how we're seeing them converge to fulfill the promise of aligning IT with business.
Enter BSM
In the last decade, IT teams were often left in the dark whenever a problem in the IT infrastructure led to the unavailability or poor responsiveness of an IT application service used by the organization's business process. The problem? Without mature IT processes and tools, IT teams rarely had any insight into how an infrastructure issue affected the business.
As a result, IT was often criticized for not being aligned with the needs of the business. This led to the coining of the term "business service," which was distinct from an IT service. A business service was defined as an IT service provided by the IT team to the business that had an intrinsic financial value associated with it.
Any impact to a business service always had a financial implication, and that financial impact was all the C-level executives cared about. This led to the pursuit of the lofty goal of identifying, measuring, and ensuring the availability and response time of business services, aka Business Service Management (BSM).
BSM dynamically linked business-focused IT services to the underlying IT infrastructure. It was what the CIOs and IT heads of the time wanted to hear, and marketers served up BSM to them as the holy grail of IT.
BSM promised:
- Alignment of IT and business: BSM was typically sold to C-level executives as "The Tool" - a magic pill that could automatically align IT with business. Numerous productivity figures and terms such as "time to gather business insights" were thrown around to justify BSM purchases.
- Faster time to resolve problems: BSM users were touted as being an order of magnitude faster at isolating and diagnosing problems than those not using it.
- Easier implementation: Not only could BSM improve IT productivity and business profitability, it was also supposed to be a breeze to set up and automatically configure.
- Better TCO and ROI: IT operations would be able to reactively and proactively determine where they should be spending their time to best impact the business. The cost savings in faster troubleshooting and increased business profitability would justify the investment in BSM.
- Power and control for business owners: Business owners were promised visibility into what was happening and control over how it could be fixed.
BSM Oversold and Underdelivered
As organizations gradually bought and deployed BSM products, they realized that those products required a lot of manual effort and complex procedures to work. BSM was not the plug-and-play solution that was originally promoted.
BSM was a term coined to fill the gap between business needs and IT capabilities. When the business failed to see value in BSM, its promises fell flat.
So why didn't BSM live up to expectations? Probably because:
- BSM did not truly reflect the financial impact of IT on business.
- BSM did not have automated, real-time updates to reflect the current state of IT. As IT changed, BSM systems either carried stale data or required manual status updates.
- There was no easy, automated way to capture all the dependencies of a business process on the underlying IT components. Capturing such details was complex, error-prone, and labor-intensive (see the sketch after this list).
- BSM was probably ahead of its time. Technologies that BSM depended on, such as automated discovery, dependency mapping, and end-user monitoring, were not sufficiently mature at the time.
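To make the dependency-mapping problem concrete, here is a minimal sketch (all names and the data model are hypothetical, not from any particular BSM product) of the kind of service-to-infrastructure map a BSM tool had to maintain. A business service is available only if every IT component in its dependency chain is healthy:

```python
# Minimal sketch of a BSM-style dependency map (hypothetical model):
# a business service is "up" only if all of its transitive IT
# dependencies are healthy.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    healthy: bool = True
    depends_on: list["Component"] = field(default_factory=list)

    def is_available(self) -> bool:
        # Available only if this component and every dependency
        # beneath it are healthy.
        return self.healthy and all(d.is_available() for d in self.depends_on)

# Hand-built topology -- in early BSM deployments this map was
# assembled and updated manually, which is exactly where it broke down.
db = Component("orders-db")
app = Component("orders-app", depends_on=[db])
lb = Component("load-balancer", depends_on=[app])
order_entry = Component("Order Entry (business service)", depends_on=[lb])

db.healthy = False                 # a single infrastructure fault...
print(order_entry.is_available())  # ...takes the business service down: False
```

Even in this toy model, adding or moving a single component means editing the map by hand; at enterprise scale, that manual upkeep is what made early BSM implementations stale and fragile.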
As originally brought to market, BSM solutions failed to deliver the coveted alignment of IT and business. However, the initial failure did little to discourage organizations from pursuing their goal.
In the second article of this two-part series, we will take a look at the rising popularity of APM and the resurgence of BSM as companies continue to seek alignment.
Read Part 2 of this article: IT and Business Alignment: Has APM Evolved to Fulfill the Promise of BSM? Part 2