Unifying IT silos and decision makers across an ever more complex application/infrastructure landscape is making the age-old requirements for discovery and inventory both more relevant and more challenging than ever. It may sound like a blast from the past, as some of us remember how rich, dynamic and accurate topologies began to provide a foundation for event management in the 80s and 90s. Back then, having a map of what was "out there" was a prerequisite for managing availability and change.
In parallel, getting asset data out of spreadsheets has been a slower process, at least based on EMA research ("EMA Research: Optimizing IT for Financial Performance," September 2016), and it remains something of a tug of war.
And finally, understanding exactly how and where applications sit across the infrastructure, often called application dependency mapping, has become a rich area of innovation, which is the good news. But it can also present IT stakeholders with 16 flavors of what, to the casual eye, might appear to be the same thing, which is the bad news.
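One way to picture what dependency mapping tools do under the hood: infer a graph of which components depend on which others from observed connections. A minimal sketch, with hypothetical host names and connection observations standing in for real flow records or agent telemetry:

```python
from collections import defaultdict

def build_dependency_graph(connections):
    """Build an adjacency map from observed (client, server) connection pairs."""
    graph = defaultdict(set)
    for client, server in connections:
        graph[client].add(server)
    return graph

def downstream(graph, node, seen=None):
    """Return every component a node depends on, directly or transitively."""
    if seen is None:
        seen = set()
    for dep in graph.get(node, ()):
        if dep not in seen:
            seen.add(dep)
            downstream(graph, dep, seen)
    return seen

# Hypothetical observations, e.g. from flow records or agent telemetry
observed = [
    ("web-01", "app-01"),
    ("app-01", "db-01"),
    ("app-01", "cache-01"),
]
graph = build_dependency_graph(observed)
print(sorted(downstream(graph, "web-01")))  # ['app-01', 'cache-01', 'db-01']
```

The transitive walk is what makes the map useful for questions like "if db-01 goes down for a change window, which front ends feel it?" Real products layer naming, deduplication, and decay of stale edges on top of this core idea.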
On August 8, EMA will deliver a webinar on what's really going on today in the areas related to discovery and inventory, along with some recommendations on how to take charge of "discovering what's out there" and optimize the process.
In this blog I'd like to share just a few highlights.
An Inventory and Discovery Tool by Any Other Name
Discovery and inventory investments can come in many different packages to address many different needs. EMA has documented as many as 50 different inventory/discovery sources in use in a single IT organization.
Some are more focused on inventory per se — capturing asset-related data across the entire application infrastructure. Others are more focused on discovery in the traditional IP management sense, or else with many advances that embrace private and public cloud, application/infrastructure relevance, and increasingly even containers and microservices.
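Discovery in the traditional IP management sense often comes down to sweeping an address range and probing for listeners. A heavily simplified sketch of that idea, assuming plain TCP connect probes (real tools also lean on SNMP, WMI, ARP tables, flow data, and more):

```python
import socket
from ipaddress import ip_network

def probe(host, port, timeout=0.5):
    """Return True if a TCP listener answers on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(cidr, ports=(22, 80, 443)):
    """Yield (address, port) pairs where something answered on the subnet."""
    for addr in ip_network(cidr).hosts():
        for port in ports:
            if probe(str(addr), port):
                yield str(addr), port

# hosts() enumerates the usable addresses in a subnet, e.g. for a /30:
print([str(a) for a in ip_network("192.0.2.0/30").hosts()])  # ['192.0.2.1', '192.0.2.2']
```

Even this toy version hints at why discovery sources multiply: every probe method sees a different slice of reality, and cloud, container, and software-defined resources may not answer any of them.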
The world of software-defined everything carries its own levels of awareness and may at times seem like nirvana. But of course, virtually every IT organization lives in a mix of infrastructure and application realms.
Trying to unify insights across the following use cases for discovery and inventory is still, almost universally, a work in progress. The list, by the way, is far from complete.
■ Asset management and audits - represent not one but a whole host of inventory-related insights that are all too often neither current nor complete. This is an area where, sadly, spreadsheets still abound in many environments.
■ CMDB/CMS - depends on both good inventory and good discovery capabilities. Too often, as we see in our own consulting practices, the dream of creating an effective configuration management system is pursued without regard to currency, relevance and data population.
■ Effective analytics - whether used for application/infrastructure availability and performance or for other use cases, depend in almost all cases on effective discovery, and in a growing number of cases on dependency mapping for contextual decision making.
■ Change management - won't work well without knowing exactly what's out there to change, what its dependencies are, and, potentially, what its usage-related and asset-related vulnerabilities are.
■ Release management/DevOps - conjures up images of a "brave new world" that all too often lacks cohesive insight across all the parties involved, especially as development tries to coordinate with operations and vice versa.
■ Capacity planning - like change management, won't work without deep and current insight into the application infrastructure, its interdependencies, and its usage and asset-related realities.
■ Assimilating cloud resources - has become a market in its own right, with many vendors specializing in telling you "what's going on" in cloud consumption, cost, and infrastructure vulnerabilities. All of this is usually done in partnership with the cloud providers, such as AWS and Azure.
■ Security and compliance concerns - reflect a growing need for accurate, timely and relevant insights across the application/infrastructure landscape. However, according to EMA research ("EMA Research: Integrating Security with Operations, Development and ITSM in the Age of Cloud and Agile," Spring 2017), these "timely insights" typically bounce back and forth between discovery/inventory tools shared with operations (in some cases ten or more) and security's own private suite (on average, seven inventory and discovery tools used purely by security).
Benefits and Closing Thoughts
The list above presents obvious challenges once you take seriously the need not only to do each of these things well, but also to pull the pieces together, so that change management isn't at war with performance, capacity management is aware of asset realities and costs, and security and compliance can be effectively integrated into virtually every item listed above.
A partial list of benefits of well-reconciled inventory and discovery data includes:
■ Improved service availability and performance
■ Improved lifecycle optimization for IT (HW/SW) assets
■ Improved capacity optimization and planning
■ Improved efficiencies in change management
■ Improved capabilities for assimilating cloud resources
■ Improved dialog with business stakeholders
■ Improved operational efficiencies overall
■ Keeping up with security when new vulnerabilities are discovered
■ Lifecycle planning of application services for cost and value
■ Improved visibility of the business value contribution of IT
("Best Practices for Optimizing IT with ITAM Big Data," EMA, July 2015)
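Much of the payoff above hinges on reconciling records that different tools hold for the same asset. A toy illustration of one common tactic, assuming each source exports records with a serial number and a last-seen timestamp (the field names and sample data here are hypothetical): normalize the key, then keep the freshest record per asset.

```python
def normalize_serial(serial):
    """Normalize a serial number so 'abc-123' and 'ABC 123' match."""
    return serial.replace("-", "").replace(" ", "").upper()

def reconcile(sources):
    """Merge asset records from multiple tools, keeping the most recently
    seen record for each normalized serial number."""
    merged = {}
    for records in sources:
        for rec in records:
            key = normalize_serial(rec["serial"])
            if key not in merged or rec["last_seen"] > merged[key]["last_seen"]:
                merged[key] = rec
    return merged

# Hypothetical exports from two tools describing the same server
asset_tool = [{"serial": "abc-123", "owner": "finance", "last_seen": "2017-05-01"}]
discovery  = [{"serial": "ABC123",  "ip": "10.0.0.5",   "last_seen": "2017-07-15"}]
merged = reconcile([asset_tool, discovery])
print(merged["ABC123"]["ip"])  # 10.0.0.5
```

Note that "keep the freshest record" discards the asset tool's ownership field; a production reconciliation would merge fields attribute by attribute. But the keying problem, deciding when two records describe the same thing, is the crux, and it is exactly where those 50 overlapping sources collide.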
Of course, getting there is half the fun, and more than half the challenge. So please tune in on August 8 for more insights into the challenges, benefits and best practices of unifying data awareness of "what's out there," along with real-world examples of both failure and success.