Unifying IT silos and decision makers across an ever more complex application/infrastructure landscape is making the age-old requirements for discovery and inventory both more relevant than ever and more challenging. It may sound like a blast from the past — some of us remember how rich, dynamic and accurate topologies began to provide a foundation for event management in the 80s and 90s. Back then, having a map of what was "out there" was a prerequisite for managing availability and change.
In parallel, getting asset data out of spreadsheets has been a somewhat slower process, at least based on EMA research ("EMA Research: Optimizing IT for Financial Performance," September 2016), and it's still something of a tug of war.
And finally, understanding exactly how and where applications sit across the infrastructure, often called application dependency mapping, has become a rich area of innovation, which is the good news. But it can also present IT stakeholders with 16 flavors of what to the casual eye might appear to be the same thing, which is the bad news.
On August 8, EMA will be delivering a webinar on what's really going on today in the areas related to discovery and inventory, along with some recommendations on how to take charge of "discovering what's out there" and optimize the process.
In this blog I'd like to share just a few highlights.
An Inventory and Discovery Tool by Any Other Name
Discovery and inventory investments can come in many different packages to address many different needs. EMA has documented as many as 50 different inventory/discovery sources in use in a single IT organization.
Some are more focused on inventory per se — capturing asset-related data across the entire application infrastructure. Others are more focused on discovery in the traditional IP management sense, or else with many advances that embrace private and public cloud, application/infrastructure relevance, and increasingly even containers and microservices.
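To make "discovery in the traditional IP management sense" concrete, here is a minimal, purely illustrative sketch of the simplest form of it: sweeping an address range and probing for responsive hosts. This is not any vendor's implementation — real discovery tools combine ICMP, SNMP, ARP caches, agent check-ins, and cloud/provider APIs rather than a single TCP probe — but it shows the basic shape of the task.

```python
# Illustrative only: a toy IP-range sweep that probes one TCP port
# to see which addresses respond. Real discovery products use ICMP,
# SNMP, ARP tables, agents, and cloud APIs, not just one port.
import ipaddress
import socket

def sweep(cidr, port=22, timeout=0.2):
    """Return the addresses in `cidr` that accept a TCP connection on `port`."""
    live = []
    for host in ipaddress.ip_network(cidr).hosts():
        try:
            # create_connection succeeds only if something is listening there
            with socket.create_connection((str(host), port), timeout=timeout):
                live.append(str(host))
        except OSError:
            pass  # closed, filtered, or no host at this address
    return live
```

Even this toy version hints at why discovery sources disagree: the answer depends on when you sweep, which probes you use, and which network segments you can reach.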
The world of software-defined everything carries its own levels of awareness and may at times seem like a nirvana. But of course, virtually no IT organization lives in anything other than a mix of infrastructure and application realms.
Trying to unify insights across the following use cases for discovery and inventory is still, universally, a work in progress. The list is, by the way, far from complete.
■ Asset management and audits: represent not one but a whole host of inventory-related insights that all too often are neither current nor complete. This is an area where, sadly, spreadsheets still abound in many environments.
■ CMDB/CMS: depends on both good inventory and good discovery capabilities. Too often, as we see in our own consulting practices, the dream of creating an effective configuration management system is pursued without regard to currency, relevance and data population.
■ Effective analytics: whether used for application/infrastructure availability and performance or for other use cases, analytics depends in almost all cases on effective discovery, and in a growing number of cases on dependency mapping for contextual decision making.
■ Change management: won't work well without knowing exactly what's out there to change, what its dependencies are, and, potentially, what its usage-related and asset-related vulnerabilities are.
■ Release management/DevOps: fires up images of a "brave new world" that all too often lacks cohesive insights across all the parties involved, especially as development tries to coordinate with operations and vice versa.
■ Capacity planning: like change management, won't work without deep and current insights into the application infrastructure and its interdependencies, as well as usage- and asset-related insights.
■ Assimilating cloud resources: has become a market in its own right, with many vendors specializing in telling you "what's going on" in cloud consumption, cost, and infrastructure vulnerabilities. All of this is usually done in partnership with the cloud providers, such as AWS and Azure.
■ Security and compliance concerns: reflect a growing need for accurate, timely and relevant insights across the application/infrastructure. However, according to EMA research ("EMA Research: Integrating Security with Operations, Development and ITSM in the Age of Cloud and Agile," Spring 2017), these "timely insights" typically bounce back and forth between discovery/inventory tools shared with operations (in some cases ten or more) and security's own private suite (on average, seven inventory and discovery tools used purely by security).
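The common thread across all of these use cases is reconciliation: merging what many tools each claim is "out there" into one coherent inventory. As a hedged sketch of that core problem (all field names, tool names, and records below are invented for illustration, not drawn from EMA data or any product), consider merging per-tool asset lists on a normalized key and surfacing field-level disagreements for review:

```python
# Hypothetical sketch: reconciling asset records from multiple
# discovery/inventory tools. Tool names, fields, and data are invented.

def normalize_key(record):
    # Prefer a stable hardware identifier; fall back to hostname.
    serial = record.get("serial", "").strip().upper()
    return serial or record.get("hostname", "").strip().lower()

def reconcile(sources):
    """Merge per-tool asset lists into one inventory.

    sources: dict mapping tool name -> list of asset records (dicts).
    Returns (merged records keyed by asset, list of field-level conflicts).
    """
    merged, conflicts = {}, []
    for tool, records in sources.items():
        for rec in records:
            key = normalize_key(rec)
            if not key:
                continue  # unidentifiable record; park it for manual review
            entry = merged.setdefault(key, {"seen_by": []})
            entry["seen_by"].append(tool)
            for field, value in rec.items():
                # Values are kept raw, so even formatting differences
                # between tools surface as conflicts to be adjudicated.
                if field in entry and entry[field] != value:
                    conflicts.append((key, field, entry[field], value))
                else:
                    entry.setdefault(field, value)
    return merged, conflicts

sources = {
    "agent_scan": [{"serial": "ab123", "hostname": "web01", "os": "RHEL 8"}],
    "cmdb_export": [{"serial": "AB123 ", "hostname": "web01", "os": "RHEL 7"}],
    "spreadsheet": [{"hostname": "db02", "owner": "finance"}],
}
merged, conflicts = reconcile(sources)
```

Note what even this toy version exposes: the same server reported by two tools with a stale OS value, and a spreadsheet-only asset no scanner has ever seen — exactly the currency and completeness gaps the use cases above keep running into.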
Benefits and Closing Thoughts
The list above presents obvious challenges once you begin to take seriously the need not only to do each of the above well, but also to pull the pieces together, so that change management isn't at war with performance, capacity management is aware of asset realities and costs, and security and compliance can be effectively integrated into virtually every option listed above.
A partial list of the benefits of well-reconciled inventory and discovery data includes:
■ Improved service availability and performance
■ Improved lifecycle optimization for IT (HW/SW) assets
■ Improved capacity optimization and planning
■ Improved efficiencies in change management
■ Improved capabilities for assimilating cloud resources
■ Improved dialog with business stakeholders
■ Improved operational efficiencies overall
■ Improved ability to keep up with security as new vulnerabilities are discovered
■ Improved lifecycle planning of application services for cost and value
■ Improved visibility of the business value contribution of IT
("Best Practices for Optimizing IT with ITAM Big Data," EMA, July 2015)
Of course, getting there is half the fun, and more than half the challenge. So please tune in on August 8 for more insights into challenges, benefits and best practices in unifying data awareness of "what's out there," along with real-world examples of both failure and success.