Choosing the right IT management software can feel like looking for a needle in a haystack. There's a huge amount to choose from, it all seems to do the same thing, and every product claims to be fantastic.
But things aren't always what they seem. In a world that's changing faster than ever, virtualization and commodity hardware make it extremely difficult for your organization to choose the right tools. To point you in the right direction, I've set out six basic rules below. I hope they'll be useful to you.
1. Start from the beginning
Don't assume that the tools you've used in the past will still work.
Many well-established companies complain that parties such as Google and Facebook innovate much faster, suffer fewer faults and manage with fewer people at lower cost because they're not weighed down by legacy. It's true that dragging legacy systems along costs time and money, but why should you be the one left carrying that burden? The same goes for IT management software. If your organization innovates at the application level, you have to innovate in your tooling as well. Don't assume that the parties who were already around when you started still have the best solutions.
Challenge the dinosaurs.
2. Choose freemium, opt for self-installation and only test in production
There are a number of clear trends in IT management software:
■ It must be possible to try out the software free of charge, even if the free version has a limited feature set. Even a limited version lets you form a clear impression of the software.
■ You have to be able to install the software yourself, without calling in a professional services organization. This is the best way to judge whether the tools are easy to use and manage, and that is a crucial aspect. It also significantly shortens the time to ROI and lowers the TCO.
■ And this is actually the most important point: make sure that you test in production before buying. Nothing is worse than discovering that the tools work well in the acceptance environment but create so much overhead in production that they are unusable. Testing in production saves a lot of money and frustration!
3. Be prepared for virtualization
Virtualization is an unstoppable trend in organizations, and your software has to keep pace. This has far-reaching implications. A lot of legacy software can't read the right counters in virtualized environments, or is simply incapable of dealing with environments that scale up and down with demand.
4. Performance = latency or response time, not resource usage
The most important KPI in the toolset of today and the future is performance, measured in terms of latency or response time, from the end user all the way to the back end.
Performance used to be measured in terms of resource usage, such as CPU utilization, but those days are behind us. In a virtualized environment those figures are often inaccurate, and it's very hard to determine what they say about the end user's experience. Probably nothing.
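To make this concrete, here is a minimal sketch of measuring from the user's vantage point: it times a whole request round trip instead of reading a CPU counter. It assumes Python with the requests library; the URL and threshold are hypothetical examples, not recommendations.

```python
# Measure performance as end-to-end response time, not resource usage.
# The endpoint and SLA threshold below are hypothetical examples.
import time
import requests

URL = "https://example.com/app/login"  # hypothetical endpoint
THRESHOLD_SECONDS = 2.0                # hypothetical response-time SLA

start = time.perf_counter()
response = requests.get(URL, timeout=10)
elapsed = time.perf_counter() - start

print(f"HTTP {response.status_code} in {elapsed:.3f}s")
if elapsed > THRESHOLD_SECONDS:
    print("Response time exceeds the agreed threshold -- investigate.")
```

The point is the vantage point: because the timer wraps the entire round trip, everything between the end user and the back end is included in the measurement.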
5. Be sure to have 100% coverage, not 95%
The 80/20 rule doesn't apply here. The right tool has to cover the entire application landscape. It's important to map out every aspect of the chain, both horizontally and vertically. That doesn't mean that you have to measure everything all the time, but you do need to have access to the right information at the right times.
6. Data must be real time, predictive and complete
Fortunately, most legacy tools are real time and complete, but by no means all of them are predictive.
"Real time" speaks for itself. Nothing is gained if the required data only becomes available hours after an incident. Things move so fast these days that it takes barely an hour for the whole country to know you've got a problem, and that can harm your image.
"Complete" follows on seamlessly from this. A tool is not up to the job if extra actions are needed to get at the information you require. Integrations between tools are crucial in today's software-driven world, and correlating data from several sources is vital if everyone is to make the right decisions.
"Predictive" is perhaps the most interesting aspect. It takes a lot of work to set up alerts that flag incidents as early as possible, and the thresholds are often based on settings agreed years ago; who's to say they're still realistic? Who knows what constitutes normal behavior in a virtualized environment? Nobody, which is why it's of paramount importance that the tool you choose learns for itself what normal behavior is. That's how you optimize the ability to predict. Of course, this baseline has to be adapted constantly, since what was normal last week won't necessarily be normal today.
Coen Meerbeek is an Online Performance Consultant at Blue Factory Internet.