
Service-Centric Mapping for Agile Software Deployments

Tom Molfetto

If you've spent any time at all on industry websites, you've probably seen a couple of statistics cited in support of service-centric mapping: 80% of unplanned business service downtime is caused by change (and is therefore a failure of IT change control), and more than 50% of unplanned downtime is the result of human error. There's no getting around those numbers. And in today's landscape, some issues can hit the business even harder than downtime, such as security vulnerabilities, especially where personal information or sensitive data is at stake.

These statistics become especially important in organizations and teams that operate in an agile development environment, where fast, regular iterations of the core product (the product that distinguishes the company in the marketplace and is essential on the path to market leadership) are released to introduce new features or otherwise improve performance.

In this type of environment, where the development team is generally rolling out new builds monthly, and often more frequently to address minor issues, problems arise in the gap between the development and operations teams. Fast, short iterations of the core service require rapid and sometimes automated deployment procedures from IT teams, who are being pressed to keep the production environment stable, functional and performing optimally. And as development continues to push new builds toward production, IT is often introducing new components into the infrastructure to support various aspects of the core product.

DevOps is a software development method that stresses communication, collaboration and integration between software developers and IT operations professionals, with the aim of helping an organization rapidly produce software products and services. It combines and accounts for product development, technology operations and quality assurance. In short, DevOps seeks to bridge the gap between development and operations teams in a way that makes the operation more efficient and gives the end user of the core service a better experience.

To promote DevOps harmony, the operations team responsible for the production environment should manage from the perspective of the business services powered by the IT infrastructure, rather than managing infrastructure components with no sense of the services they underlie.

With the advent of virtualization, organizations that rely on agile development to stay ahead of the competition risk a fragmented, disorganized infrastructure with VMs spread across the environment. It can be difficult to keep track of the physical location of each VM, many of which power mission-critical pieces of the core product. In this sort of environment, an up-to-date, service-centric map that makes sense of the increasingly complex IT infrastructure behind many of today's most demanding applications is critical.
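At its simplest, a service-centric map is a dependency graph: business services depend on VMs, and VMs run on physical hosts. The sketch below (all service, VM and host names are invented for illustration) shows how such a map turns a component-level event, a host failure, into a service-level answer:

```python
# Minimal sketch of a service-centric map. Business services are modeled as
# dependencies on VMs, and VMs are tracked to the physical hosts they run on.
# All names here are hypothetical, not from any particular product.

# Business service -> the VMs it depends on
service_to_vms = {
    "online-checkout": ["vm-web-01", "vm-app-03", "vm-db-02"],
    "order-tracking":  ["vm-app-03", "vm-db-02"],
    "reporting":       ["vm-etl-01", "vm-db-02"],
}

# VM -> the physical host it currently runs on (must be kept current
# as VMs migrate, which is exactly why the map needs to be up to date)
vm_to_host = {
    "vm-web-01": "host-a",
    "vm-app-03": "host-b",
    "vm-db-02":  "host-b",
    "vm-etl-01": "host-c",
}

def services_impacted_by_host(host):
    """The service-centric question: if this host fails or is taken down
    for a change, which business services are affected?"""
    vms_on_host = {vm for vm, h in vm_to_host.items() if h == host}
    return {
        svc for svc, vms in service_to_vms.items()
        if vms_on_host.intersection(vms)
    }

print(sorted(services_impacted_by_host("host-b")))
# ['online-checkout', 'order-tracking', 'reporting']
```

The same lookup supports change control: before touching host-b, operations can see that all three services share it through vm-db-02, whereas a silo-centric view would show only a host and some VMs with no business context.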

In a landscape where a single business service may span dozens of distinct virtualized IT components, IT organizations must shift from the traditional approach of managing disparate technology silos to managing the business services running in the data center. Otherwise, IT is not supporting the business optimally.

Tom Molfetto is Marketing Director for Neebula.

