To Manage Today's Data Surge, CIOs Need Mix of Automation and AISecOps
October 11, 2022

Andreas Grabner
Dynatrace


IT teams need modern technologies to extract valuable insights from the oceans of data businesses collect today, according to a new report. The report found that 71% of the 1,303 chief information officers (CIOs) and other IT decision makers surveyed say the colossal amount of data generated by cloud-native technology stacks is now beyond human ability to manage.


To keep pace with all the data from complex cloud-native architectures, organizations need more sophisticated solutions to power operations and security, said those surveyed.

Core to these new technologies is automation supported by AISecOps, a methodology that brings artificial intelligence to operations and security tasks and teams. The findings of the survey, conducted by Coleman Parkes and commissioned by Dynatrace, were published in the 2022 Global CIO Report.

Without a more automated approach to IT operations, 59% of CIOs surveyed say their teams could soon become overloaded by the increasing complexity of their technology stack. Perhaps more concerning is that 93% of CIOs say AIOps (or AI for IT operations) and automation are increasingly vital to helping ease the shortage of skilled IT, development, and security professionals and reducing the risk of teams becoming burned out by the complexity of modern cloud and development environments.

Glut of Data and Lack of Effective Tools Create Numerous Problems

The era of big data has created scores of opportunities, but it also has posed nearly as many challenges. The hyperconnected environments created using multicloud strategies, Kubernetes and serverless architectures enable organizations to accelerate the building of customized and innovative new architectures. These new environments, however, have become increasingly distributed and complex.

Meanwhile, IT and application managers must cobble together a host of legacy technologies to monitor and maintain visibility into performance and availability. The survey found that CIOs say their teams use an average of 10 monitoring tools across their technology stacks, yet have observability across just 9% of their environment.

This ad hoc approach makes it more difficult to deliver the best-performing and most secure software applications. The reasons are simple: each tool requires a different skill set to interpret its data, each organizes and visualizes metrics in its own way, and each provides visibility into just one layer of the stack, creating data silos.

More Complex Environments Are Costly, Take Toll on Workers

When it comes to cost, 45% of CIOs say it's too expensive to manage the large volume of observability and security data using existing analytics solutions. As a result, respondents say they keep only what is most critical.

The mounting complexity and hassle involved in maintaining operations also takes a toll on employees; 64% of CIOs say it has become harder to attract and retain enough skilled IT operations and DevOps professionals.

Another problem is that log analytics — traditionally the source from which to unlock insights from data and optimize software performance and security — too often can't scale to address the torrent of observability and security data generated by today's technology stacks.

Automation and AISecOps Are the Answer

IT teams need a more automated approach to operations and security, combined with AISecOps. Achieving this effectively requires an end-to-end observability and application security platform with the ability to capture data in context and provide AI-powered, advanced analytics with high performance, cost-effectiveness and limitless scale.
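To make "data in context" concrete, the sketch below shows one way a service might emit telemetry with shared context using the open-source OpenTelemetry SDK, so traces, logs, and metrics can later be correlated by an analytics engine. The service name, collector endpoint, and order-handling function are illustrative placeholders, not a prescription for any particular platform.

```python
# Minimal sketch: capturing telemetry "in context" with the OpenTelemetry Python SDK.
# The service name, OTLP endpoint, and business logic below are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Resource metadata travels with every signal, which is what lets an analytics
# layer stitch traces, logs, and metrics back together per service.
resource = Resource.create({"service.name": "checkout-service"})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def place_order(order_id: str) -> None:
    # Everything recorded inside this span shares the same trace context, so
    # downstream analytics can tie events back to the originating request.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic here ...
        span.add_event("order_persisted")

place_order("A-1001")
```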

Data warehouses, too, have become outdated. Strategies built around a data lakehouse with powerful processing capabilities at the core will drive greater innovation and efficiency. This kind of strategy harnesses petabytes of data at the speed needed to turn raw information into precise and actionable answers that drive AISecOps automation.
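As a rough illustration of the lakehouse idea, the sketch below assumes log records have already landed as Parquet files and uses DuckDB as the query engine; the file path, column names, and engine choice are all assumptions for the example. The point is that raw telemetry can be scanned and aggregated in place, without first being loaded into a rigid warehouse schema.

```python
# Rough sketch of querying observability data laid out lakehouse-style.
# Assumes log records are stored as Parquet files; the glob path and column
# names (service_name, level, ts) are illustrative, not prescriptive.
import duckdb

con = duckdb.connect()

# Aggregate per-service error counts over the last hour directly from the raw
# files -- no upfront load into a warehouse schema is required.
rows = con.execute(
    """
    SELECT service_name,
           count(*) FILTER (WHERE level = 'ERROR') AS error_count,
           count(*)                                AS total_count
    FROM read_parquet('telemetry-lake/logs/*.parquet')
    WHERE ts >= now() - INTERVAL 1 HOUR
    GROUP BY service_name
    ORDER BY error_count DESC
    """
).fetchall()

for service, errors, total in rows:
    print(f"{service}: {errors}/{total} errors in the last hour")
```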

Another benefit of this model is that it frees skilled DevOps teams from arduous, routine manual tasks, enabling them to work on more strategic, innovation-driving projects.

According to the survey, CIOs estimate that their teams spend 40% of their time just "keeping the lights on," and that these valuable hours could be saved through automation.
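As a sketch of the kind of routine work automation can absorb, the example below wires an alert webhook to a rolling restart of a Kubernetes deployment. The route, alert payload fields, and deployment names are hypothetical; the Kubernetes calls come from the official Python client. In practice, only well-understood, low-risk failure modes would be automated this way, with everything else still routed to a human.

```python
# Hedged sketch: auto-remediating a known, low-risk failure mode so teams spend
# less time "keeping the lights on." Webhook route, payload fields, and
# deployment/namespace names are hypothetical.
from datetime import datetime, timezone

from flask import Flask, request, jsonify
from kubernetes import client, config

app = Flask(__name__)
config.load_kube_config()  # use config.load_incluster_config() when running in a pod
apps_api = client.AppsV1Api()

def rollout_restart(namespace: str, deployment: str) -> None:
    # Patching the pod-template annotation triggers a rolling restart,
    # the same mechanism `kubectl rollout restart` uses.
    patch = {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "kubectl.kubernetes.io/restartedAt":
                            datetime.now(timezone.utc).isoformat()
                    }
                }
            }
        }
    }
    apps_api.patch_namespaced_deployment(deployment, namespace, patch)

@app.route("/alert", methods=["POST"])
def handle_alert():
    alert = request.get_json(force=True)
    # Only act on one known, low-risk failure mode; everything else stays manual.
    if alert.get("problem") == "unhealthy_pods":
        rollout_restart(alert["namespace"], alert["deployment"])
        return jsonify({"action": "rollout_restart"}), 200
    return jsonify({"action": "none"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```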

Organizations suffering from these issues should seek out an all-in-one platform that provides observability, application security and AIOps. With this strategy, leaders can provide their teams with an easy-to-use, automated, and unified approach that delivers precise answers and exceptional digital experiences at scale.

Andreas Grabner is a DevOps Activist at Dynatrace
