You ask a friend to "check" on your dog while you're away. Obliging, your friend goes to your house, rings the doorbell to listen for a bark, and then returns to their car. However, when you made the request, you really wanted your friend to go into the house for a bit, make sure there were no issues, and notify you immediately if something was wrong. A perfect case of a poorly negotiated SLA!
What Are SLAs and Why Do We Have Them?
A Service Level Agreement (SLA) is a contract between a service provider and a customer regarding the level of service that will be provided. SLAs benefit both parties: they define what is being purchased, as well as the roles and responsibilities for remediating any issues. A well-constructed SLA strengthens the customer relationship by bridging the gap between the vendor's services and the customer's expectations. With software services, websites and applications becoming increasingly complex, negotiating and adhering to SLAs is more important than ever.
What Do SLAs Typically Cover?
It is very important to keep the SLA simple, measurable and realistic. SLAs typically cover:
■ Description of overall services
■ Service performance metrics
■ Financial aspects of service delivery
■ Responsibilities of service provider and customer
■ Disaster recovery process
■ Review process and frequency of review
■ Termination of agreement process
The specific performance metrics used to manage compliance with service delivery are called Service Level Objectives (SLOs). In the context of web services, SLOs would cover availability, uptime and response time for the service; possibly accessibility by geography; and problem resolution metrics such as mean time to answer and/or mean time to repair.
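As a rough illustration of how these resolution metrics reduce to simple arithmetic (the formulas below are generic sketches, not any particular vendor's definitions):

```python
def availability_pct(uptime_hours: float, downtime_hours: float) -> float:
    """Availability as the fraction of total time the service was usable."""
    return 100.0 * uptime_hours / (uptime_hours + downtime_hours)

def mttr_hours(total_repair_hours: float, incident_count: int) -> float:
    """Mean time to repair: average time from failure to restoration."""
    return total_repair_hours / incident_count

# Example: a 30-day month (720 hours) with 3 hours of total downtime
# spread across 3 incidents.
print(round(availability_pct(717.0, 3.0), 2))  # 99.58
print(mttr_hours(3.0, 3))                      # 1.0 hour per incident
```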
Is a service really available if the customer cannot use it? A well-constructed SLA should include a unit of measurement that defines availability in terms of the customer's critical business process, and not just the availability of the server's URL/URI or the login process.
To put our doorbell analogy in a web services context: under a poorly negotiated SLA, "ringing the doorbell" amounts to looking for the 200 OK from the server. The 200 code, like the dog's bark, only tells you that someone is home, not the actual condition, i.e. the health, of the service. Checking a website or authenticating without validating the business process you rely on exposes you to downtime without financial leverage.
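To make that concrete, here is a minimal sketch of the difference between a "doorbell" check and a business-process check. The marker string and page text are hypothetical, not taken from any specific product:

```python
def is_healthy(status: int, body: str, marker: str) -> bool:
    """A 200 status alone is just the dog's bark: the server answered.
    Require evidence that the business process actually ran by also
    checking for content that only a healthy page would render."""
    return status == 200 and marker in body

# An error page can still come back as 200 OK ("someone is home"):
print(is_healthy(200, "Oops, something went wrong", "Your campaigns"))  # False
# A healthy response both answers and shows the business content:
print(is_healthy(200, "Welcome back. Your campaigns: 12 active", "Your campaigns"))  # True
```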
Step One: Measure What You Have
What can you, the service provider, do to get the most out of SLAs? Let's say you are providing a marketing automation system to an enterprise that will run its global web activities on your system. You have promised them 95% availability and suitable performance from the US east and west coasts, the UK, Germany and India.
Before you commit to an exact performance target, you should have measured what you have now. You need to baseline the performance of your service in order to understand what you can offer. There is no sense promising 95% availability in India if your system is typically only available 80% of the time there. On the other hand, undercommitting can lead to lost business opportunities and lost revenue. You can use your SLA as a competitive advantage only if you know what you can and cannot deliver. Baselining performance will help you commit not too much, not too little, but just right!
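One simple way to turn a baseline into a commitment, sketched here with a nearest-rank percentile purely as an illustration (the sample response times are invented), is to measure response times over a representative period and set the SLO target near a high percentile, with some headroom:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile of a list of measurements."""
    s = sorted(samples)
    k = math.ceil(p / 100.0 * len(s)) - 1
    return s[max(k, 0)]

# Hypothetical response times (ms) gathered while baselining from India:
baseline_ms = [850, 920, 990, 1040, 1100, 1180, 1250, 1400, 1900, 2600]
print(percentile(baseline_ms, 90))  # 1900
```

Committing to, say, a 2-second response objective then leaves headroom above the measured 90th percentile instead of being a guess.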
Using a synthetic performance monitoring tool, you can baseline your services. For example, let's say you want to measure the performance of a user login transaction from the UK during business hours. You can record this multi-step user transaction and use that script to create a monitor. Next, you can create an SLA for that monitor by setting the desired response time and availability objectives. A quality synthetic tool will not only check whether the service is up and running but also measure response times and functional correctness from its global monitoring nodes, assuring SLA compliance by comparing actual performance with the SLA objectives.
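The comparison such a tool performs can be sketched as follows. The 95% availability and 2-second response objectives come from the scenario above; the `Sample` structure is an assumption for illustration, not any product's data model:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    location: str
    success: bool       # did the scripted login transaction complete?
    response_ms: float  # end-to-end transaction time

def sla_report(samples, availability_slo=95.0, response_slo_ms=2000.0):
    """Compare measured performance against the SLA objectives."""
    up = [s for s in samples if s.success]
    availability = 100.0 * len(up) / len(samples)
    within = [s for s in up if s.response_ms <= response_slo_ms]
    return {
        "availability_pct": round(availability, 2),
        "availability_met": availability >= availability_slo,
        "response_within_slo_pct":
            round(100.0 * len(within) / len(up), 2) if up else 0.0,
    }

samples = [
    Sample("UK", True, 1200.0),
    Sample("UK", True, 1900.0),
    Sample("UK", True, 2400.0),
    Sample("UK", False, 0.0),
]
print(sla_report(samples))
```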
By observing your monitors in real time, as well as from the SLA summary, you get a realistic and complete picture of your performance.
Step Two: Include What Applies to Your Customer, Exclude the Rest
If your agreement states that you will provide a certain level of service for the east coast, west coast, UK, Germany and India, don't provide data regarding the Netherlands and Africa. You also need to account for your own operational time: clearly describe your maintenance windows and/or upgrades. When building the service level agreement, keep in mind the operating periods as well as both ongoing and one-time events.
Customers are getting used to the multi-tenant nature of service providers, so be open to SLA negotiations; however, calculate the cost associated with customization and make sure it aligns with your aggregate business interest in that customer. The customer, too, can often be found over- or under-demanding. Baselining the customer's performance requirements will lead to more realistic SLAs and a win-win situation for both parties.
Step Three: Monitor Aggressively
In order to make realistic availability and performance goals and keep them, you have to take enough measurements so that a single failure doesn't skew the overall results.
I want to talk a little bit about the law of large numbers, a principle of probability and statistics. It states that as a sample size grows, the sample mean gets closer and closer to the mean of the whole population.
This is important context for monitoring and setting SLAs. Suppose you run an availability test from 5 locations once, and one of those tests fails: your measured availability drops to 80 percent. If instead you run tests from 10 locations every 5 minutes for an hour, that is 120 tests, and if 1 fails your availability is still above 99%. Less aggressive monitoring leaves you vulnerable to an SLA violation over a brief outage.
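The arithmetic is worth redoing explicitly. Ten locations probed every 5 minutes over an hour means 12 runs of 10 tests each, i.e. 120 samples:

```python
def measured_availability(total_tests: int, failures: int) -> float:
    """Availability as seen by the monitoring system itself."""
    return 100.0 * (total_tests - failures) / total_tests

# Sparse monitoring: one run from 5 locations; a single failure craters the number.
print(measured_availability(5, 1))               # 80.0
# Aggressive monitoring: 10 locations x 12 runs per hour = 120 tests.
print(round(measured_availability(120, 1), 2))   # 99.17
```

The service misbehaved the same way in both cases; only the sample size changed the reported number.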
In conclusion, service level agreements are valuable for both you and your customers. These three steps will help you see SLAs as an opportunity rather than a restriction:
■ Make the right agreement based on baseline performance
■ Measure the correct things with the correct frequency
■ Take enough measurements to smooth out variability
John Lucania is Senior Sales Engineer at SmartBear Software.