Gartner: Major Organizations Need to Grow Performance Management Skills

Agile infrastructures demand that infrastructure and operations (I&O) managers acquire the performance management skills present only in Web-scale IT, according to Gartner, Inc.

While major organizations will need to maintain their conventional capacity-planning skills and tools, they will also need to regularly re-evaluate the tools available and make a deliberate effort to acquire and grow the capacity and performance management skills that are rarely evident outside the Web-scale IT community.

"By 2016, the availability of capacity and performance management skills for horizontally scaled architectures will be a major constraint or risk to growth for 80 percent of major businesses," says Ian Head, research director at Gartner. "To take advantage of Web-scale IT approaches to capacity and performance management, IT architects need to fully embrace stateless application architectures and horizontally scaling infrastructure architectures."

Adding central processing units (CPUs), memory and storage to a monolithic server has been the traditional, vertical way of scaling up applications, and capacity planning has traditionally been developed to forecast the requirements of this vertical approach. However, vertical architectures have limited scalability and are unsuitable for hyperscaling. For service capacity to expand seamlessly to extremely large scales, different approaches are required.

"Organizations managing such services need the ability to rapidly assign and de-assign resources to each service, as well as the ability to scale linearly and continuously as more resources are added or removed," says Head. "They also need high levels of resiliency that will allow sections of the infrastructure to fail without bringing the services down. Designing the underlying infrastructure for horizontal scalability, high degrees of fault-tolerance and rapid, incremental change is a key prerequisite for effective Web-scale operations."

To achieve the overall goal of an infrastructure in which services can consume capacity on an as-needed basis, the Web-scale IT capacity planning function may be divided between two teams. The application or product team develops the applications, monitors the consumption of its services across locations and user subsets, and requests and allocates infrastructure resources based on policy-driven utilization triggers. The infrastructure team ensures that the overall shared physical limits do not constrain the performance of the individual services being continuously developed by the product teams.
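For illustration only, the Python sketch below shows what a product team's policy-driven utilization trigger might look like: a declared utilization band and a step size that determine when instances are requested or released. The class and function names, thresholds and scaling step are assumptions made for this example, not details from the Gartner research.

```python
# Minimal sketch (illustrative, not from the Gartner research) of a
# policy-driven utilization trigger a product team might own.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    scale_out_above: float   # add instances when utilization exceeds this
    scale_in_below: float    # remove instances when utilization falls below this
    step: int                # instances to add or remove per evaluation

def evaluate(policy: ScalingPolicy, utilization: float, instances: int) -> int:
    """Return the desired instance count for one evaluation cycle."""
    if utilization > policy.scale_out_above:
        return instances + policy.step
    if utilization < policy.scale_in_below and instances > policy.step:
        return instances - policy.step
    return instances

if __name__ == "__main__":
    policy = ScalingPolicy(scale_out_above=0.70, scale_in_below=0.30, step=2)
    # A real product team would feed monitoring data here; 0.82 is a stand-in.
    print(evaluate(policy, utilization=0.82, instances=10))  # -> 12
```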

Services constructed in this way are better equipped to scale geographically and to span multiple data centers with limited impact on user performance. Extending service availability to new locations still carries additional burdens and requirements, but horizontally scaling application and infrastructure architectures generally handle geographic growth better, because additional capacity can be drawn on by different applications as required and as load patterns shift.

Demand Shaping

Where demand is potentially large-scale but forecasts or demand history are limited, IT leaders need to develop demand-shaping techniques to provide acceptable performance. Demand shaping enables finite infrastructure resources to maintain their vital always-on character with acceptable, if not consistent, performance across the entire user base. Gartner predicts that through 2017, 25 percent of enterprises will use demand shaping for capacity planning and management, a significant increase from less than one percent in 2014.
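To make the idea concrete, the following Python sketch shows one simple demand-shaping technique: priority-based admission control that sheds deferrable work first as utilization climbs, so finite resources keep serving critical traffic. The priority classes and utilization thresholds are illustrative assumptions, not Gartner guidance.

```python
# Illustrative sketch of one demand-shaping technique: shedding or
# deferring low-priority work when utilization is high. Thresholds and
# the request model are assumptions for this example only.
from enum import IntEnum

class Priority(IntEnum):
    BATCH = 0      # deferrable background work
    STANDARD = 1   # normal user traffic
    CRITICAL = 2   # always-on, never shed

def admit(priority: Priority, utilization: float) -> bool:
    """Decide whether to admit a request given current utilization."""
    if utilization < 0.70:
        return True                                 # plenty of headroom
    if utilization < 0.90:
        return priority >= Priority.STANDARD        # shed batch work first
    return priority == Priority.CRITICAL            # near saturation: critical only

if __name__ == "__main__":
    for p in Priority:
        print(p.name, admit(p, utilization=0.85))
```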

When deploying large-scale services, infrastructure and operations leaders need to become proficient in operational analytics tools and big data capabilities, rather than relying solely on traditional capacity planning tools.

"Traditional capacity-planning tools enable I&O organizations to gather data from various sources, including monitoring tools and, possibly, business demand forecasts, then produce trending, utilization and investment forecast information, taking into account several different scenarios," says Head. "The different architectures and the huge scale of the Web-scale IT organizations make traditional, highly focused tools of limited utility. Demand shaping requires more and different functionality than current off-the-shelf tools provide."

Although different Web-scale organizations grow and adapt their techniques to their specific requirements, a common theme is the extensive use of large volumes of operational data. Even though the infrastructures are large, the horizontal design enables clear visualization and understanding of constraints and dependencies, such that these may be managed as the environment, loads and demands change.

In general, in-memory computing and deep analytics tools are used to extract the required information from a combination of the infrastructure monitoring tools and the instrumentation built into the applications. The resulting analytical information is used to facilitate proactive, real-time and near-real-time actions to allocate resources and manage potential bottlenecks. Similar functionality is also used to model the impact of moving workloads and to simulate the effects of potential infrastructure and application changes.
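As a simplified illustration of this kind of proactive analysis, the Python sketch below fits a linear trend to recent utilization samples and estimates how long before a resource reaches a saturation threshold. The sample data, the 85 percent threshold and the use of a least-squares fit are assumptions for the example; production Web-scale analytics are far more sophisticated.

```python
# Illustrative sketch only: extrapolate a linear trend over recent
# utilization samples to estimate when a resource will saturate, so
# action can be taken proactively. Data and threshold are made up.
from typing import List, Optional
import numpy as np

def hours_until_saturation(samples: List[float], threshold: float = 0.85) -> Optional[float]:
    """Fit a least-squares line to hourly utilization samples and return the
    estimated hours until the threshold is crossed, or None if flat/falling."""
    t = np.arange(len(samples), dtype=float)
    slope, intercept = np.polyfit(t, np.array(samples), deg=1)
    if slope <= 0:
        return None
    crossing = (threshold - intercept) / slope       # time index of the crossing
    return max(crossing - (len(samples) - 1), 0.0)   # hours beyond the last sample

if __name__ == "__main__":
    hourly_cpu = [0.52, 0.55, 0.58, 0.60, 0.64, 0.67]  # stand-in monitoring data
    print(hours_until_saturation(hourly_cpu))
```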

The outcome is that Web-scale enterprises have developed a set of tools, practices and capabilities that enable real-time demand shaping. These operational skills and tools are unique to each Web-scale organization and so are not yet available in most end-user enterprises.

"Much of the art of achieving always-on, scalable, rapidly changing, high-performance services is a consequence of the advanced use of homegrown and customized analytics and tools by the application and infrastructure teams," Head concludes. "These are used to shape demand in real time and to produce forward-looking capacity and investment plans to an acceptable degree of accuracy."

Related Links:

Gartner analysts will take a deeper look at the outlook for IT operations trends at the Gartner IT Infrastructure & Operations Management Summit, taking place June 2-3 in Berlin and June 9-11 in Orlando, FL.
