Choosing the right IT management software is sometimes like looking for a needle in a haystack. There's so much to choose from, it all seems to do the same thing, and every product claims to be fantastic.
But things aren't always what they seem. In a world that's changing faster than ever, virtualization and commodity hardware make it extremely difficult for your organization to choose the right tools. To point you in the right direction, I have set out 6 basic rules below. I hope they'll be useful to you.
1. Start from the beginning
Don't assume that the tools you've used in the past will still work.
Many well-established companies complain that parties such as Google and Facebook innovate much faster, have fewer faults, and manage with fewer people at lower cost because they're not weighed down by legacy. It's true that dragging legacy systems along costs time and money, but why should you be the one left carrying that burden? The same goes for IT management software. If your organization innovates on the application side, you also have to innovate here. Don't assume that the vendors who were already around when you started still have the best solutions.
Challenge the dinosaurs.
2. Choose freemium, opt for self-installation and only test in production
There are a number of perceivable trends in IT management software:
■ It must be possible to try out software free of charge, even if the free version has limited features. Even with a limited feature set you can gain a clear impression of the software.
■ You have to be able to install the software yourself, without calling in a professional services organization. This is the best way of judging whether the tools are easy to use and manage, and that is a crucial aspect. It also dramatically shortens the time to ROI and lowers the TCO.
■ And this is actually the most important point: make sure that you test in production before buying. Nothing is worse than discovering that the tools work well in the acceptance environment but create so much overhead in production that they are unusable. Testing in production saves a lot of money and frustration!
3. Be prepared for virtualization
Virtualization is an unstoppable trend in organizations, and your software has to keep pace. The implications are significant: a lot of legacy software is unable to read the right counters, or is simply incapable of dealing with environments that are scaled up or down according to usage.
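To make this concrete, here's a minimal Python sketch of one "right counter" for a virtualized environment: CPU steal time, the cycles the hypervisor handed to other guests. It assumes a Linux guest exposing /proc/stat; a real tool would collect this continuously across all hosts, but it shows why in-guest CPU figures alone can be misleading.

import time

def read_cpu_fields():
    # Aggregate CPU counters from the first line of /proc/stat:
    # user nice system idle iowait irq softirq steal guest guest_nice
    with open("/proc/stat") as f:
        parts = f.readline().split()
    return [int(x) for x in parts[1:]]

def steal_percentage(interval=5.0):
    # Share of CPU time the hypervisor "stole" for other guests over the interval.
    before = read_cpu_fields()
    time.sleep(interval)
    after = read_cpu_fields()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas) or 1
    steal = deltas[7] if len(deltas) > 7 else 0
    return 100.0 * steal / total

if __name__ == "__main__":
    print("CPU steal over the last 5 seconds: %.1f%%" % steal_percentage())

If that number is high while the guest's own CPU usage looks modest, a tool that only reads in-guest counters will never tell you why the application feels slow.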
4. Performance = latency or response time, not resource usage
The most important KPI in the toolset of today and the future is performance, but measured in terms of latency or response time. This should be measured from the end-user to the back end.
Performance used to be measured in terms of resource usage, such as CPU utilization, but those days are behind us. In a virtualized environment those figures are often inaccurate, and it's very hard to determine what they actually say about the end-user experience. Probably nothing.
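As a minimal illustration of measuring the KPI that actually matters, the sketch below times complete round trips from the client side. The URL is a placeholder, and a real APM solution would measure this per business transaction, from the browser all the way to the back end, but the principle is the same: clock what the user experiences.

import time
import urllib.request

def measure_response_time(url, samples=5):
    # Time complete request/response round trips as seen from the client side.
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()  # include the time needed to receive the full body
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings), max(timings)

if __name__ == "__main__":
    best, avg, worst = measure_response_time("https://example.com/")  # hypothetical endpoint
    print("response time: min %.0f ms, avg %.0f ms, max %.0f ms"
          % (best * 1000, avg * 1000, worst * 1000))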
5. Be sure to have 100% coverage, not 95%
The 80/20 rule doesn't apply here. The right tool has to cover the entire application landscape. It's important to map out every aspect of the chain, both horizontally and vertically. That doesn't mean that you have to measure everything all the time, but you do need to have access to the right information at the right times.
6. Data must be real time, predictable and complete
Fortunately most legacy tools are real time and complete, but by no means all of them are predictable.
"Real time" speaks for itself. Nothing is achieved if the required data isn't available until hours after the incident. Things move so fast these days that it only takes an hour before the whole country knows you've got a problem, which could harm your image.
"Complete" follows on seamlessly from this. The tool is not up to the job if it takes extra actions to get the information you need. Integrations between several tools are crucial in the software society. Correlating from several sources is vital to everyone's ability to make the right decisions.
"Predictable" is perhaps the most interesting aspect. It takes a lot of work to set up signals to alert you to incidents as soon as possible, and this is often based on settings that were agreed years ago, but who's to say that this is realistic? Who knows what constitutes normal behavior in a virtualized environment? Nobody, which is why it's of paramount importance that the tool you choose learns for itself what normal behavior is. That's how you optimize the ability to predict. Of course, this will have to be constantly adapted, since what was normal last week won't necessary be normal today.
Coen Meerbeek is an Online Performance Consultant at Blue Factory Internet.