6 Principles for Choosing the Right IT Management Software
April 21, 2015

Coen Meerbeek
Blue Factory Internet


Choosing the right IT management software is sometimes like looking for a needle in a haystack. There's so much to choose from, it all seems to do the same thing, and every product claims to be fantastic.

But things aren't always what they seem. In a world that's changing faster than ever, virtualization and commodity hardware make it extremely difficult for your organization to choose the right tools. To point you in the right direction, I have set out 6 basic rules below. I hope they'll be useful to you.

1. Start from the beginning

Don't assume that the tools you've used in the past will still work.

Many well-established companies complain that parties such as Google and Facebook innovate much faster, have fewer faults and manage with fewer people and lower costs because they're not weighed down by legacy. It's true that dragging legacy systems along costs time and money, but why carry that burden any longer than you have to? The same goes for IT management software: if your organization innovates at the application level, you have to innovate in your tooling as well. Don't assume that the parties who were already around when you started still have the best solutions.

Challenge the dinosaurs.

2. Choose freemium, opt for self-installation and only test in production

There are several clear trends in IT management software:

■ It must be possible to try out software free of charge, even if the free version has limited features. Even with a limited feature set you can gain a clear impression of the software.

■ You have to be able to install the software yourself, without calling in a professional services organization. This is the best way of judging whether the tools are easy to use and manage, which is a crucial aspect. It also shortens the time to ROI and lowers the TCO.

■ And this is actually the most important point: make sure that you test in production before buying. Nothing is worse than discovering that the tools work well in the acceptance environment but create so much overhead in production that they are unusable. Testing in production saves a lot of money and frustration!

3. Be prepared for virtualization

Virtualization is an unstoppable trend in organizations, and your management software has to keep pace. The implications are bigger than they look: a lot of legacy software can't read the right counters in a virtualized environment, or simply can't cope with environments that are scaled up and down with usage.
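To make that scaling problem concrete, here's a minimal sketch of a poller that rediscovers the instance pool on every cycle instead of working from a fixed host list. The list_running_instances and fetch_cpu_percent functions are hypothetical stand-ins for whatever inventory and metrics APIs your own platform exposes.

```python
# Minimal sketch: monitoring an auto-scaled environment.
# list_running_instances() and fetch_cpu_percent() are hypothetical
# stand-ins for whatever inventory and metrics APIs your platform offers.
import time


def list_running_instances() -> list[str]:
    # Hypothetical inventory call: returns whichever instances exist right now.
    # In a scaled environment this answer changes from one poll to the next.
    return ["web-1", "web-2", "web-7"]


def fetch_cpu_percent(instance_id: str) -> float:
    # Hypothetical per-instance metric fetch.
    return 42.0


def poll_once() -> dict[str, float]:
    # Rediscover the fleet on every cycle; a fixed host list would miss
    # instances added by scale-up and keep alerting on ones already scaled down.
    return {iid: fetch_cpu_percent(iid) for iid in list_running_instances()}


if __name__ == "__main__":
    for _ in range(3):          # three polling cycles as a demo
        print(poll_once())
        time.sleep(1)
```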

4. Performance = latency or response time, not the use of resources

The most important KPI in the toolset of today and the future is performance, but measured in terms of latency or response time, from the end user all the way to the back end.

Performance used to be measured in terms of resource usage, such as CPU utilization, but those days are behind us. In a virtualized environment those figures are often inaccurate, and it's very hard to say what they mean for the end user. Probably nothing.
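Measuring from the user's side can be as simple as timing the full round trip of a request. Here's a minimal sketch; the URL is just a placeholder for one of your own application's endpoints.

```python
# Minimal sketch: measure response time as the user experiences it,
# instead of sampling CPU or memory on the servers behind it.
# The URL is a placeholder for your own application's endpoint.
import time
import urllib.request


def measure_response_time(url: str) -> float:
    """Return the full round-trip time of one request, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # include transfer of the body in the measurement
    return time.perf_counter() - start


if __name__ == "__main__":
    elapsed = measure_response_time("https://example.com/")
    print(f"response time: {elapsed * 1000:.0f} ms")
```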

5. Be sure to have 100% coverage, not 95%

The 80/20 rule doesn't apply here. The right tool has to cover the entire application landscape. It's important to map out every aspect of the chain, both horizontally and vertically. That doesn't mean that you have to measure everything all the time, but you do need to have access to the right information at the right times.
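As a rough illustration of what mapping the chain can look like (all component names here are hypothetical), you can treat the landscape as a dependency graph and check which hops have no monitoring attached at all:

```python
# Rough sketch: the application landscape as a dependency graph,
# with a check for hops that no tool is watching.
# All component names are hypothetical examples.
MONITORED = {"cdn", "web", "app", "database"}

CHAIN = {                      # horizontal: who calls whom
    "browser": ["cdn"],
    "cdn": ["web"],
    "web": ["app"],
    "app": ["database", "payment-gateway"],  # vertical: down to external services
}


def coverage_gaps(chain: dict[str, list[str]], monitored: set[str]) -> set[str]:
    """Return every component in the chain that no tool is watching."""
    components = set(chain) | {dep for deps in chain.values() for dep in deps}
    return components - monitored


if __name__ == "__main__":
    print("not covered:", coverage_gaps(CHAIN, MONITORED))
    # the gaps here are 'browser' and 'payment-gateway': the 5% that bites you
```

The point is not the ten lines of code, but the discipline: every component in the chain is either covered or consciously left out.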

6. Data must be real time, predictable and complete

Fortunately most legacy tools are real time and complete, but by no means all of them are predictable.

"Real time" speaks for itself. Nothing is achieved if the required data isn't available until hours after the incident. Things move so fast these days that it only takes an hour before the whole country knows you've got a problem, which could harm your image.

"Complete" follows on seamlessly from this. The tool is not up to the job if it takes extra actions to get the information you need. Integrations between several tools are crucial in the software society. Correlating from several sources is vital to everyone's ability to make the right decisions.

"Predictable" is perhaps the most interesting aspect. It takes a lot of work to set up signals to alert you to incidents as soon as possible, and this is often based on settings that were agreed years ago, but who's to say that this is realistic? Who knows what constitutes normal behavior in a virtualized environment? Nobody, which is why it's of paramount importance that the tool you choose learns for itself what normal behavior is. That's how you optimize the ability to predict. Of course, this will have to be constantly adapted, since what was normal last week won't necessary be normal today.

Coen Meerbeek is an Online Performance Consultant at Blue Factory Internet.
