Solutions for Minimizing Server Downtime
April 11, 2018

Chris Adams
Park Place Technologies


As we've seen, hardware is at the root of a large proportion of data center outages, and the costs and consequences are often exacerbated when VMs are affected. The best answer, therefore, is for IT pros to get back to basics.

Start with Part 1: Complacency Kills Uptime in Virtualized Environments

Just as drivers wearing seatbelts should still use turn signals (even though many don't), data center managers should continue to take the usual precautions to protect against equipment-related outages. Put simply:

Attend to the hardware

In the rush to implement the latest technologies, don't overlook the fundamentals, such as routine server maintenance, UPS tests and upgrades, and facility checks for hotspots, air flow problems, and other issues.

Integrate monitoring and response

Only about half of IT organizations rely on their monitoring tool or ticketing system to activate a response team. This is a lost opportunity to accelerate break/fix, as is the failure to take advantage of newer AI-driven hardware monitoring technologies, which are becoming highly accessible.
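As a rough illustration of that integration, the sketch below shows one way a monitoring webhook could be wired to a ticketing system so that a qualifying hardware alert opens a ticket and activates the response team automatically. The endpoint URL, payload fields, and assignment group name are hypothetical placeholders rather than the API of any specific monitoring or ticketing product; adapt them to the tools your environment actually runs.

```python
"""
Minimal sketch: receive a monitoring alert via webhook and open a
ticket so the hardware response team is engaged automatically.

The ticketing URL, token, and payload fields are hypothetical
placeholders -- substitute the webhook format and ticketing API of
the products you actually use.
"""
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

TICKET_API_URL = "https://ticketing.example.com/api/tickets"  # hypothetical endpoint
TICKET_API_TOKEN = os.environ.get("TICKET_API_TOKEN", "")     # hypothetical credential


class AlertHandler(BaseHTTPRequestHandler):
    """Handles JSON alerts POSTed by the monitoring tool."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length) or b"{}")

        # Only unresolved, hardware-related alerts should open a ticket.
        if alert.get("category") == "hardware" and alert.get("status") != "resolved":
            self.open_ticket(alert)

        self.send_response(204)
        self.end_headers()

    def open_ticket(self, alert):
        """Create a high-priority ticket assigned to the response team."""
        payload = {
            "title": f"Hardware alert: {alert.get('device', 'unknown device')}",
            "description": alert.get("message", ""),
            "priority": "high",
            "assignment_group": "hardware-response",  # hypothetical group name
        }
        requests.post(
            TICKET_API_URL,
            json=payload,
            headers={"Authorization": f"Bearer {TICKET_API_TOKEN}"},
            timeout=10,
        )


if __name__ == "__main__":
    # Listen for monitoring webhooks on port 8080.
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

Even a thin integration like this removes the manual hand-off between an alert firing and a ticket being assigned, which is where break/fix time is most often lost.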

Have parts on standby

It's no good to go searching for spares after a hardware failure occurs. Spare parts should be on site for mission-critical systems, or available for quick delivery in other cases.

Invest in expertise

Having the right people with the right skills is essential. Unfortunately, today's tight IT labor market is making it difficult to find and afford talent. Data center managers should consider whether they have the budget to build comprehensive engineering capabilities in-house or whether they are better off sourcing that expertise from a partner.

It can be hard to manage these tasks in addition to the many responsibilities that have been piled on data center personnel over the past decade. In many cases, the easiest and most affordable option is to hand off the bulk of the hardware "to do" list to a third-party provider specializing in IT support. That way, someone else addresses the risk associated with hardware through 24/7 monitoring, spares management, and immediate Level 3 support while the business gets back to business.

Chris Adams is President and COO of Park Place Technologies