Navigating IT Chaos: Why the Challenges of Discovery and Inventory Are More Relevant Than Ever

Dennis Drogseth

Unifying IT silos and decision makers across an ever more complex application/infrastructure landscape is making the age-old requirements for discovery and inventory both more relevant and more challenging than ever. It may sound like a blast from the past, as some of us remember how rich, dynamic and accurate topologies began to provide a foundation for event management in the 80s and 90s. Back then, having a map of what was "out there" was a prerequisite for managing availability and change.

In parallel, getting asset data out of spreadsheets has been a bit of a slower process, at least based on EMA research ("EMA Research: Optimizing IT for Financial Performance," September 2016), and it's still something of a tug of war.

And finally, understanding exactly how and where applications sit across the infrastructure, often called application dependency mapping, has become a rich area of innovation, which is the good news. But it can also present IT stakeholders with 16 flavors of what, to the casual eye, might appear to be the same thing — which is the bad news.
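For readers newer to the term, the sketch below shows, in a deliberately simplified and hypothetical form, what one such flavor boils down to: observe which components talk to which, then turn those observations into a dependency graph that other tools can reason over. The service names and connections are invented for illustration; real products derive them from flow data, agents, tracing or orchestration APIs.

```python
# Hypothetical illustration of one "flavor" of application dependency mapping:
# turn observed client-to-server connections into a dependency graph.
from collections import defaultdict

# Invented observations; real tools collect these from flows, agents or traces.
observed_connections = [
    ("web-frontend", "checkout-api"),
    ("checkout-api", "orders-db"),
    ("checkout-api", "payment-gateway"),
    ("web-frontend", "session-cache"),
]

depends_on = defaultdict(set)
for client, server in observed_connections:
    depends_on[client].add(server)

def downstream(service, graph, seen=None):
    """Everything `service` depends on, directly or indirectly."""
    seen = set() if seen is None else seen
    for dep in graph.get(service, ()):
        if dep not in seen:
            seen.add(dep)
            downstream(dep, graph, seen)
    return seen

print(sorted(downstream("web-frontend", depends_on)))
# -> ['checkout-api', 'orders-db', 'payment-gateway', 'session-cache']
```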

On August 8, EMA will be delivering a webinar on what's really going on today in the areas related to discovery and inventory, along with some recommendations on how to take charge of "discovering what's out there" and optimize the process.

In this blog I'd like to share just a few highlights.

An Inventory and Discovery Tool by Any Other Name

Discovery and inventory investments can come in many different packages to address many different needs. EMA has documented as many as 50 different inventory/discovery sources in use in a single IT organization.

Some are more focused on inventory per se — capturing asset-related data across the entire application infrastructure. Others are more focused on discovery, whether in the traditional IP management sense or with newer advances that embrace private and public cloud, application/infrastructure relevance, and increasingly even containers and microservices.
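To make the traditional IP-oriented end of that spectrum concrete, here is a deliberately tiny, purely illustrative sketch (not drawn from any particular vendor's product) of range-based discovery: probe an address range and record what answers. The subnet is a placeholder, and real tools layer SNMP, WMI, ARP caches, flow data and agents on top of this kind of probe.

```python
# Purely illustrative sketch of traditional IP-range discovery: ping every
# address in a subnet and record which hosts answer. The subnet is a
# placeholder; production tools add SNMP, WMI, ARP, flow data and agents.
import ipaddress
import subprocess

def ping_sweep(cidr):
    live = []
    for addr in ipaddress.ip_network(cidr).hosts():
        # One ICMP echo request, one-second timeout (Linux-style ping flags).
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(addr)],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            live.append(str(addr))
    return live

if __name__ == "__main__":
    print(ping_sweep("192.168.1.0/24"))
```

Even a toy sweep like this produces the beginnings of an inventory record; the hard part, as the use cases below illustrate, is reconciling it with everything else that claims to know what's out there.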

The world of software-defined everything carries its own levels of awareness and may at times seem like nirvana. But of course, virtually every IT organization lives in a mix of infrastructure and application realms, software-defined and otherwise.

Trying to unify insights across the following use cases for discovery and inventory is still, universally, a work in progress. The list, by the way, is far from complete.

Asset management and audits - represent not one but a whole host of inventory-related insights that all too often are neither current nor complete. This is an area where, sadly, spreadsheets still abound in many environments.

CMDB/CMS - depend on both good inventory and good discovery capabilities. Too often, as we see in our own consulting practices, the dream of creating an effective configuration management system is pursued without regard to currency, relevance and data population. (A minimal reconciliation sketch appears after this list.)

Effective analytics - whether used for application/infrastructure availability and performance or for other use cases, analytics almost always depends on effective discovery and, in a growing number of cases, on dependency mapping for contextual decision making.

Change management - won't work well without knowing exactly what's out there to change, what its dependencies are and, potentially, what its usage-related and asset-related vulnerabilities are.

Release management/DevOps - conjures images of a "brave new world" that all too often lacks cohesive insights shared across all the parties involved, especially as development tries to coordinate with operations and vice versa.

Capacity planning - like change management, won't work without deep and current insight into the application infrastructure and its interdependencies, as well as into usage and asset-related data.

Assimilating cloud resources - has become a market in its own right, with many vendors specializing in telling you "what's going on" in cloud consumption, cost, and infrastructure vulnerabilities. All of this is usually done in partnership with the cloud providers, such as AWS and Azure.

Security and compliance concerns - reflect a growing need for accurate, timely and relevant insights across the application/infrastructure landscape. However, according to EMA research ("EMA Research: Integrating Security with Operations, Development and ITSM in the Age of Cloud and Agile," Spring 2017), these "timely insights" typically bounce back and forth between discovery/inventory tools shared with operations (in some cases ten or more) and security's own private suite (an average of seven inventory and discovery tools used purely by security).
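As promised in the CMDB/CMS item above, here is a minimal, hypothetical sketch of what reconciling inventory and discovery data can look like: spreadsheet-style asset records and discovery records are merged into candidate configuration items keyed on serial number, exposing both "ghost" assets that never show up on the wire and unknown devices that were never booked. The field names and the matching key are assumptions for illustration, not any specific CMDB schema.

```python
# Hypothetical reconciliation sketch: merge asset records (what the books say
# exists) with discovery records (what the network says is there) into
# candidate CIs keyed on serial number. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CandidateCI:
    serial: str
    asset: dict = field(default_factory=dict)       # from ITAM / spreadsheets
    discovered: dict = field(default_factory=dict)  # from discovery tooling

def reconcile(asset_rows, discovery_rows):
    cis = {}
    for row in asset_rows:
        cis.setdefault(row["serial"], CandidateCI(row["serial"])).asset = row
    for row in discovery_rows:
        cis.setdefault(row["serial"], CandidateCI(row["serial"])).discovered = row
    ghosts = [ci for ci in cis.values() if not ci.discovered]   # booked, never seen
    unknowns = [ci for ci in cis.values() if not ci.asset]      # seen, never booked
    return cis, ghosts, unknowns

# Tiny example: one "ghost" asset and one unknown device.
assets = [{"serial": "A100", "owner": "Finance"}, {"serial": "A200", "owner": "HR"}]
discovery = [{"serial": "A200", "ip": "10.0.0.7"}, {"serial": "A300", "ip": "10.0.0.9"}]
cis, ghosts, unknowns = reconcile(assets, discovery)
print(len(ghosts), len(unknowns))  # -> 1 1
```

Scaling that idea to dozens of sources, weaker matching keys than a serial number, and constantly changing cloud and container resources is exactly where the challenges, and the benefits, discussed next come from.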

Benefits and Closing Thoughts

The list above presents obvious challenges once you take seriously the need not only to do each of these use cases well, but also to pull the pieces together, so that change management isn't at war with performance, capacity management is aware of asset realities and costs, and security and compliance can be effectively integrated into virtually every item listed above.

A partial list of the benefits of well-reconciled inventory and discovery data includes:

■ Improved service availability and performance

■ Improved lifecycle optimization for IT (HW/SW) assets

■ Improved capacity optimization and planning

■ Improved efficiencies in change management

■ Improved capabilities for assimilating cloud resources

■ Improved dialog with business stakeholders

■ Improved operational efficiencies overall

■ Keeping up with security when new vulnerabilities are discovered

■ Lifecycle planning of application services for cost and value

■ Improved visibility of the business value contribution of IT

("Best Practices for Optimizing IT with ITAM Big Data," EMA, July 2015)

Of course, getting there is half the fun, and more than half the challenge. So please tune in on August 8 for more insight into the challenges, benefits and best practices of unifying awareness of "what's out there," along with real-world examples of both failure and success.
