Have you ever asked a branding pro what 6.5 seconds means in the minds of their consumers, or what 2 seconds means to a gaming company?
How about your operations teams? Why is it acceptable to spend hundreds of thousands of dollars, if not millions, on a week's or even a year's worth of integration or fire-fighting by your best engineers, on things that should be automated, more intuitive or just plain simpler to take on? Just because they're working hard doesn't mean you aren't wasting valuable assets on the opportunity cost of time that could otherwise be spent on more innovative and revenue-generating activities.
I’ve compiled a list of ten factors in IT operations that help speed up time to value:
1. Automation

Automation is probably the most obvious time saver for IT operations. In its simplest form, it removes the need for individuals to log in and out of server after server, performing maintenance or checking the performance and operations of each IT asset under their control.
However, the IT operations automation opportunity begins even earlier, with the ability to apply automation in the form of templates toward the intelligent discovery of all virtual and physical assets. This is then followed by the automated monitoring of those IT assets and their health; automated runbook and remediation actions; and the automated workflows of order/entry systems, along with the orchestration necessary to allow automated provisioning, chargeback and billing of those systems. The list of possibilities goes on.
Suffice to say, the Holy Grail of automation is to eliminate manual interventions, and have IT administrators become the policy brokers for business services and performance thresholds.
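As a minimal illustration of that Holy Grail — administrators setting policy thresholds while the checks themselves run unattended — here is a hedged sketch. The host names, the simulated CPU readings, and the 80% threshold are illustrative assumptions, not any specific product's API:

```python
# Sketch: automated health checks against a policy threshold,
# replacing manual per-server log-ins. In a real deployment the
# check would query an agent, SNMP, or SSH; here it is simulated.

def check_cpu(host):
    # Stand-in for a real per-host query; returns CPU % in use.
    simulated = {"web-01": 42.0, "db-01": 91.5, "app-01": 67.3}
    return simulated[host]

def run_health_checks(hosts, cpu_threshold=80.0):
    """Return the subset of hosts breaching the policy threshold."""
    breaches = []
    for host in hosts:
        usage = check_cpu(host)
        if usage > cpu_threshold:
            breaches.append((host, usage))
    return breaches

if __name__ == "__main__":
    for host, usage in run_health_checks(["web-01", "db-01", "app-01"]):
        print(f"ALERT {host}: CPU at {usage}% exceeds policy")
```

The administrator's only manual act here is choosing `cpu_threshold` — the policy — while the loop does the rounds that would otherwise consume an engineer's day.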
2. The right people
Following on from the previous point, there are a number of new considerations for IT staff as organizations move toward cloud computing and its affiliated levels of automation. Contrary to popular IT belief, automation is less about cutting jobs and more about critical decisions that need protagonists in the form of existing IT staff.
The biggest hindrance to this change is cultural. Are your people ready to elevate themselves into value creators for their consumers? This changing role toward service brokerage requires a level of intelligent decision-making that will only increase as business analytics grows in popularity.
Staff will need to answer questions such as: “Which of my current manual processes can be automated, and what are the dependencies when I do so?” Similarly, questions around change and process management, and around current and future IT spend on both infrastructure and human resource projections, will need to be answered.
3. The right data in the right format
Making operations staff as effective as possible at decision-making requires having the right data in the right format, and the right tools to take action against that data.
Business analytics is one of the fastest growing – and most hyped – applications being deployed in the cloud. But IT infrastructure, which is playing an increasingly important role in driving up the top line and business productivity, has data that can help organizations make smart business decisions to boost the bottom line too. This is what we are calling “Operational Business Intelligence” (OBI).
Focusing on what you are able to accomplish with business IT assets helps you make smarter decisions around capacity needs, human resource needs, future spending and your ability to increase business productivity. The ability to collect and correlate data from the many layers within the same infrastructure is critical to eliminating false positives and negatives in IT operations – or, worse still, failing to receive advance warning of potential risks.
However, correlating massive volumes of data from back-end systems is no easy task given the number of protocols the average IT infrastructure uses, and the correlation can get complicated quickly. Hence the simplicity and intuitiveness with which the data is manipulated and presented is equally important – meaning, for example, having an HTML5 front end that can deliver data analytics to any personal device of choice.
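To make the correlation point concrete, here is a hedged sketch of cross-layer event correlation. The event tuples, the layer names, and the rule that two independent layers must agree before an incident is raised are all illustrative assumptions, not a description of any real monitoring product:

```python
# Sketch: correlating events from multiple infrastructure layers so
# that a one-off blip in a single layer does not raise a false
# positive, while cross-layer agreement surfaces a real incident.

from collections import defaultdict

def correlate(events, window=60):
    """Group (timestamp, layer, resource) events into time windows;
    flag only resources reported by more than one layer in the
    same window."""
    by_resource = defaultdict(set)
    for ts, layer, resource in events:
        by_resource[(resource, ts // window)].add(layer)
    return sorted({res for (res, _), layers in by_resource.items()
                   if len(layers) > 1})

events = [
    (10, "storage", "vol-7"),    # single-layer blip: suppressed
    (15, "app", "svc-pay"),
    (20, "network", "svc-pay"),  # two layers agree: real incident
]
print(correlate(events))  # ['svc-pay']
```

The design choice worth noting is that suppression and detection come from the same pass over the data — which is exactly why a single correlated data set beats per-silo tools that each alert in isolation.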
4. Creating consistency across teams
Many IT groups rely on a patchwork of open source, commercial and home-grown tools to support their multi-tenant environment, which inhibits their ability to deliver consistent services, add new services and onboard users quickly. Their engineers spend too much time administering tools, coding integration scripts, creating reports and troubleshooting. Cloud and virtualization technologies only complicate this process, and all too often the operations manager is left cobbling together inefficient point-products never intended for multi-tenant environments.
Even within experienced IT operations teams, having to find the proverbial needle means jumping from reference architecture to reference architecture, storage tool to application tool, cloud platform to server closet – generally wasting time looking for the unknown. Or worse still, never proactively looking, but rather reacting to problems.
There has never been a more crucial time to have a centralized, holistic management system – one that creates consistency across technology silos from a performance and fault perspective, allowing different organizations to be consistent in the way they view the problem service, rather than diagnosing symptoms as the problem itself.
5. Proactive monitoring and risk assessment
IT decision makers are no longer content with simply knowing the up/down status of IT assets; they are more concerned with the vulnerability, the performance, and the potential risks driven by any combination of factors influencing workloads. Having early-warning systems, and the ability to be proactive rather than reactive in IT management, should remain a core objective – one that is becoming increasingly difficult to meet with legacy tools.
Similarly, while the world watches instances being spun up globally, we are simultaneously seeing an explosion of structured, semi-structured and unstructured data across the business. The storage, management and manipulation of this data is going to be a core influencer of virtual infrastructure management going forward.
Operations teams are all too often reacting to change requests or problems in the field, rather than understanding ahead of time – from a capacity projection or risk-probability metric – how to mitigate potential threats to their working time: time your best engineers spend fixing issues instead of innovating.
6. Asset/tool repurposing for new revenue streams
There is exponential value in having your best engineers do more innovative work and create new business services, rather than fighting fires. But is that an easy transition for those with a hardened operational mindset? Contrary to popular belief, operations tools can be the bridge to achieving commercial leaps forward too.
A multitenant, centralized management system helps with the creation and provisioning of new value-added services, so that product managers are not left wondering where to even start introducing a new service, but have confidence that they can move quickly to new business applications as the cloud intended them to do.
Examples include the ability to provision monitoring automatically at the services layer instead of the infrastructure level; to offer advanced visibility into topologies; to leverage a highly extensible API into third-party solutions as well as your back-end systems, rather than draining professional services engagements; and to use management systems in conjunction with CRM systems for opportunistic upsells, in advance of resource constraints being hit.
7. The value of APIs in future-proofing
There is confusion in the market thanks to the markitecture-peddler mentality manifesting itself through the variety of reference architectures and disparate tools popping up with every announcement on the web. I foresee a consolidation of reference architectures and common APIs, driven at a higher control layer that becomes useful to decision makers, and more appropriate to differentiated workloads and service-level expectations.
Relying on an open API should allow for relatively easy integration not only with Web portals, but also with all the systems involved in service delivery and reporting, including third-party provisioning, billing, orchestration managers and other systems.
The longer-term benefit of leveraging extensible RESTful APIs lies largely in the simplicity of the framework; attached to a centralized management system, it creates a long-term pivot point from which to attach the more rapidly changing apps, infrastructure and back-end systems that require time and effort to get right.
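A minimal sketch of what that pivot point looks like in practice: a thin client wrapped around one RESTful management API, which portals, billing, and orchestration systems can all share. The base URL, endpoint path, and bearer-token header here are hypothetical assumptions for illustration, not any vendor's actual API:

```python
# Sketch: a thin client over a hypothetical RESTful management API.
# One authenticated entry point fronts many back-end integrations.

import json
import urllib.request

class OpsClient:
    def __init__(self, base_url, token):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def build_request(self, path, payload=None):
        """Assemble an authenticated request; callers pass the result
        to urllib.request.urlopen() to actually execute it."""
        data = json.dumps(payload).encode() if payload else None
        return urllib.request.Request(
            f"{self.base_url}{path}",
            data=data,
            headers={"Authorization": f"Bearer {self.token}",
                     "Content-Type": "application/json"},
            method="POST" if data else "GET",
        )

client = OpsClient("https://ops.example.com/api/v1", "TOKEN")
req = client.build_request("/tenants/acme/alerts")
print(req.full_url)  # https://ops.example.com/api/v1/tenants/acme/alerts
```

Because every downstream system goes through the same small surface, swapping out a billing or orchestration back end touches the integration once, at the pivot, rather than in every portal.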
8. Stronger security and resiliency
It goes without saying that if you are investing in infrastructure appropriate to the audience you are serving (be it an internal SMB organization or a highly regulated, multi-departmental government entity), you should demand an equally appropriate level of security (logical or physical) and related resiliency. What’s often missed, however, is that you need the same assurance and high availability from your IT tools.
We’ve heard plenty of stories from customers who bought a phenomenal hypervisor and its associated management portal, with all the bells and whistles of intuitiveness built in, only to have the interface prove flaky to the point where it created more work and greater time wastage for their operations teams. That’s why it’s important to go with the smarter designs of specialists in their respective areas, and to hold vendors accountable even for the peripheral tools that are not core to their central products, since those are the ones that can negate all the benefits of the flagship product if not smartly integrated.
On the flip side, having 22 different tools ensures that the chain is only as strong as its weakest link. The truth is that there is an acceptable range of convergence among management toolsets, with greater consistency among those that leverage a single code base.
9. Outsourcing to external cloud platforms

Consider outsourcing to external cloud platforms. The move by enterprises and service providers alike is an inevitable one, due to the economies of scale at work among cloud service providers, and because so many employees, whether allowed to or not, are already using cloud platforms for personal and professional datasets. To that end, it’s best to be in control of the movement of that data rather than find out too late about the rogue IT department that has been using an insecure cloud location for years.
Not all cloud service providers are created equal – they have made different investments and offer a variety of services, ranging from the datacenter through to the application on demand, with and without significant disaster recovery plans. Notwithstanding the highly applicable truism that you get what you pay for, the kind of company you are will also dictate the right cloud offering for your needs. A consistent, correlated set of metrics across both on-premise and cloud infrastructures helps speed troubleshooting, simplify and optimize management workflows, and minimize integration work to increase efficiency and reduce costs.
But starting with a key set of performance indicators also facilitates proactive service-level management, so that both service provider and customer can ensure service-level expectations are being met.
10. Multitenant views

Multitenancy in operations management tools offers invaluable opportunities to feed personalized, contextual views. For example, multitenancy capabilities can offer your operations team one view of high-performing infrastructure, the VP of sales another, an executive a third, and yet a different downstream view to the end consumers or constituents of the service – allowing visibility and control off a single platform, without the incessant need to be bogged down offering feedback, data points, views and risk assessments to those users.
Our stats show that 64% of operations teams believe they will need a new management tool – driven specifically by the fact that they serve a broad set of users in a multitenancy setting, each with different visibility and control needs, which, if facilitated at the tool level, can alleviate the time drain on operations teams.
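The per-audience views described above can be sketched as a simple projection of one shared data set. The roles, field names, and metrics here are illustrative assumptions — real multitenant tools layer authentication and tenancy isolation on top of this idea:

```python
# Sketch: one multitenant metrics record rendered as different
# views per audience, off a single platform and single data set.

VIEW_FIELDS = {
    "operations": {"host", "cpu", "errors", "risk"},
    "executive": {"uptime", "cost"},
    "tenant": {"uptime", "errors"},
}

def render_view(role, record):
    """Project a full metrics record down to what the role may see."""
    allowed = VIEW_FIELDS[role]
    return {k: v for k, v in record.items() if k in allowed}

record = {"host": "web-01", "cpu": 42.0, "errors": 0,
          "risk": "low", "uptime": 99.95, "cost": 1200}
print(render_view("executive", record))  # {'uptime': 99.95, 'cost': 1200}
```

Because every audience reads from the same record, the operations team stops hand-assembling reports for each constituency — the view definitions do it for them.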