How Do I Assess the Quality of My External IT Supplier?

Coen Meerbeek

Given the extent to which companies are contracting out their IT organization to other parties, outsourcing appears to be making a comeback. Migrating your IT infrastructure and its management to the cloud or to another party is once again a hot topic.

In the outsourcing process you lay down your criteria for the quality to be delivered by the other party. You have to, because otherwise the supplier will rest on his laurels, which is the last thing you want. So the criteria are in place, but who is going to monitor them, and how transparent are the figures?

An interesting subject to take a look at today.

As I mentioned above, the concept of outsourcing has changed a fair bit since the emergence of the cloud. We now have different levels of outsourcing.

1. The IT infrastructure runs at the supplier's premises but you remain responsible for the architecture, implementation and management. You also have the applications managed by your own people. (IaaS)

2. The IT infrastructure runs at the supplier's premises and the supplier is responsible for the architecture, implementation and management. You have the applications managed by your own people. (PaaS)

3. Both the IT infrastructure and the applications are the responsibility of the supplier. It makes no difference how everything has been designed. (SaaS)

The monitoring process is different for each of the three variants. What I regard as most important here is:

■ Have all of the variants assessed by an independent party or by your own organization. Under no circumstances should this be done by the supplier.

■ Carry out all assessments from the end-user's perspective. The best measure of the supplier's quality is the performance and availability that you, as the buyer, actually receive.

We can use the following solutions for the three variants. I've based this not only on pure SLA monitoring but also on the four data sets: wire, machine, agent and synthetic.

Variant 1 – Infrastructure-as-a-Service

The supplier is solely responsible for providing the hardware; what runs on it is up to you. You will usually be responsible for everything from the virtualization level onwards.

In that case, the best way to monitor the supplier's service is to place the monitoring at virtualization level. Splunk, for instance, has a superb VMware app that provides you with all the information you need. I can well imagine that the supplier will not always be willing to allow this, because it forces him to be transparent about the service he provides. An alternative is to have him pass the monitoring data he generates on to your own Splunk implementation, so that you can draw up the right reports yourself.
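To make that concrete, here is a minimal sketch of pulling virtualization-level figures out of Splunk with the official Python SDK (splunk-sdk) and checking them against a threshold. The host, credentials, index, sourcetype, field name and the 5% CPU-ready budget are all assumptions for the example, not guaranteed names from the VMware app; adapt them to your own environment.

```python
# Minimal sketch: read virtualization-level metrics from Splunk via the
# official Python SDK. Index, sourcetype, field name and threshold are
# assumptions for illustration; adjust them to your own deployment.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.com",  # hypothetical Splunk host
    port=8089,
    username="admin",
    password="changeme",
)

# Hypothetical search: average CPU-ready percentage per ESXi host, last 24h.
query = (
    "search index=vmware sourcetype=vmware:perf earliest=-24h "
    "| stats avg(cpu_ready_pct) AS avg_ready BY host"
)

for item in results.ResultsReader(service.jobs.oneshot(query)):
    if isinstance(item, dict):  # skip diagnostic Message objects
        avg_ready = float(item["avg_ready"])
        status = "OK" if avg_ready < 5.0 else "check SLA"  # assumed 5% budget
        print(f"{item['host']}: {avg_ready:.2f}% CPU ready ({status})")
```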

Monitoring from the infrastructure level upwards is entirely your responsibility, so you can decide for yourself which tool to use to cover the four data sets. A synthetic solution remains desirable, but it can't be used to hold the supplier to account: he'll always argue that there are other links between what he provides and the end-user, and that he is not responsible for those.

Variant 2 – Platform-as-a-Service

In this variant, the supplier is responsible up to OS level. From that point onwards you are the owner who implements and manages the applications. In some cases the OS is also shielded and you are only able to implement the applications.

The supplier's service directly affects the performance and availability of your applications, so it's advisable to implement a synthetic monitor. You can supplement this with a check on the availability of the OS, if you have access at that level. A simple ping monitor will suffice here, as in the sketch below.
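Such a ping monitor can be as small as the following sketch, which shells out to the system ping command and logs an UP/DOWN verdict. The hostname and timings are placeholders, the -c/-W flags are Linux-specific, and you'd run it from cron or your scheduler of choice.

```python
# Minimal sketch of a ping monitor for OS-level availability.
# Hostname, count and timeout are placeholders; schedule it (e.g. via
# cron) and keep the output so you can report on availability later.
import subprocess
from datetime import datetime, timezone

HOST = "app-server.example.com"  # hypothetical managed host

def ping(host: str, count: int = 3, timeout_s: int = 2) -> bool:
    """Return True if the host answers ICMP echo requests."""
    proc = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), host],
        capture_output=True,
    )
    return proc.returncode == 0

if __name__ == "__main__":
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp} {HOST} {'UP' if ping(HOST) else 'DOWN'}")
```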

You can supplement the synthetic data with agent or machine data in order to cover all of the data sets. Wire data is an attractive option at the application level, but not at infrastructure level: there it requires detailed knowledge of how the infrastructure is set up, and that's precisely what you wanted to outsource.

Variant 3 – Software-as-a-Service

The supplier arranges everything. You are only the user of the application that you want to buy. The supplier will usually have published his own SLA, but how transparent is it?

The best bet here is to choose a synthetic solution yourself and have it assessed independently. Use these figures to check the quality of the service and to confront the supplier whenever the results deviate from what was agreed.
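As an illustration, a minimal synthetic check along the lines below, run at a fixed interval from the end-user side, gives you your own availability and response-time figures to set against the published SLA. The URL, targets and sample cadence are assumptions for the example.

```python
# Minimal sketch of a synthetic SaaS check: measure availability and
# response time from the outside and compare them with the SLA targets.
# URL, targets and the number of samples are assumptions for illustration.
import time
import requests

URL = "https://saas.example.com/login"  # hypothetical SaaS endpoint
SLA_AVAILABILITY_PCT = 99.5             # assumed contractual availability
SLA_RESPONSE_S = 2.0                    # assumed response-time target

def probe() -> tuple[bool, float]:
    """One synthetic request: did it succeed, and how long did it take?"""
    start = time.monotonic()
    try:
        ok = requests.get(URL, timeout=10).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.monotonic() - start

# In practice you'd take one sample per interval and store it; ten in a
# row keeps the sketch self-contained.
samples = [probe() for _ in range(10)]
availability = 100.0 * sum(ok for ok, _ in samples) / len(samples)
slow = sum(1 for ok, t in samples if ok and t > SLA_RESPONSE_S)

print(f"measured availability: {availability:.2f}% (SLA {SLA_AVAILABILITY_PCT}%)")
print(f"samples over the {SLA_RESPONSE_S}s target: {slow}/{len(samples)}")
```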

To conclude ...

Outsourcing is based on trust, but for many companies IT is a matter of life or death: its importance can hardly be overstated. You want to avoid a situation where finger-pointing starts between you and the supplier when faults occur. Make sure that you also give careful thought to quality monitoring while the outsourcing is being implemented.

How have you experienced this as a customer? How do you monitor your outsourcing contracts, and what are your experiences with outsourcing parties? I'd like to hear them; I can learn from them too.

Coen Meerbeek is an Online Performance Consultant at Blue Factory Internet.
