The State of AI Development and Operations in 2019
October 01, 2019

Mark Coleman
Dotscience


The use of AI is booming across the modern enterprise. In fact, according to Gartner's 2019 CIO Survey, the number of enterprises implementing AI grew 270% in the past four years and tripled in the past year. However, many enterprises will be unable to realize the full potential of their initiatives until they find more efficient means of tracking data, code, models and metrics across the entire AI lifecycle.

To better understand the AI maturity of businesses, Dotscience surveyed 500 industry professionals for its inaugural State of Development and Operations of AI Applications 2019 report.

Research findings indicate that although enterprises are dedicating significant time and resources to their AI deployments, many data science and ML teams lack the tools they need to collaborate on, build and deploy AI models efficiently.

AI Goes Mainstream

AI has moved beyond the experimentation stage and is now seen as a critical and impactful function for many businesses. Enterprises are becoming increasingly reliant on AI to deliver greater operational efficiency, streamline complex business processes, and support cost control and profit potential. This is evidenced by the survey results, which indicate that the top three drivers of AI adoption are efficiency gains (47%), growth initiatives (46%) and digital transformation (44%). Furthermore, over 88% of respondents at organizations where AI is in production indicated that AI has been impactful or highly impactful to their company's competitive advantage. The rapid growth of AI's value and influence is also reflected in the large investments organizations are making in it: nearly a third of respondents (30%) are budgeting between $1 million and $10 million for AI tools, platforms and services.

Unfortunately, it's not all rainbows and sunshine in the world of enterprise AI. The study also found that despite this level of financial commitment, data science and ML teams continue to experience issues, including duplicating their work (33%), rewriting models after team members leave (28%), struggling to justify the value of their projects to the wider business (27%), and contending with slow, unpredictable AI projects (25%).

Manual Tools and Processes

Despite providing an impactful competitive advantage for enterprises, AI deployments today are largely slow and inefficient. The manual tools and processes most teams rely on to operationalize ML and AI don't support the scaling and governance that many AI initiatives demand.

The top two ways that ML engineers and data scientists collaborate with each other are by using a manually updated shared spreadsheet for metrics (44%) and sitting in the same office and working closely together (38%). These methods of collaboration ultimately disrupt efficiency and limit AI's potential. Machine learning has many moving parts, and teams require version control for their training and test data, their code and their environment, as well as metrics and hyperparameters in order to collaborate efficiently. Survey findings show that over 35% of organizations don't use any version control for their training and test data. However, of those who don't currently have any version control, over 60% would like to.
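
A lightweight alternative to the manually updated spreadsheet is to log each training run's hyperparameters and metrics to a file that lives alongside the code in version control. The sketch below is a minimal illustration of that idea in Python; the runs.jsonl file name and the log_run helper are hypothetical, not part of the survey or any particular tool.

```python
import json
import time
from pathlib import Path

# Hypothetical shared run log, committed to the team's repository alongside the code.
RUNS_FILE = Path("runs.jsonl")

def log_run(hyperparameters: dict, metrics: dict) -> None:
    """Append one training run's settings and results as a single JSON line."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "hyperparameters": hyperparameters,
        "metrics": metrics,
    }
    with RUNS_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a run so teammates can compare results without a shared spreadsheet.
log_run(
    hyperparameters={"learning_rate": 0.001, "batch_size": 64, "epochs": 10},
    metrics={"val_accuracy": 0.91, "val_loss": 0.27},
)
```

Because the log is an append-only text file committed with the code, teammates can diff, review and compare runs the same way they review code changes.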

These limitations are compounded by the fact that nearly 90% of respondents either manually track model provenance (a complete record of all the steps taken to create an AI model) or do not track provenance at all. And of those that manually track model provenance, more than half (52%) do their tracking in a spreadsheet or wiki, a cumbersome and error-prone approach.
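
For contrast, here is a minimal sketch of what automated provenance capture could look like: before training, record content hashes of the training and test data, the current code commit and the runtime environment, and store that record next to the resulting model. The function names and fields are illustrative assumptions, not a description of any specific product.

```python
import hashlib
import platform
import subprocess
import sys

def file_sha256(path: str) -> str:
    """Content hash of a data file, so the exact training data can be identified later."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def provenance_record(train_path: str, test_path: str, hyperparameters: dict) -> dict:
    """Collect the inputs that produced a model: data hashes, code version, environment."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True
    ).stdout.strip()
    return {
        "train_data_sha256": file_sha256(train_path),
        "test_data_sha256": file_sha256(test_path),
        "code_commit": commit,
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),
        "hyperparameters": hyperparameters,
    }

# Example usage (paths and parameters are placeholders):
# record = provenance_record("data/train.csv", "data/test.csv", {"learning_rate": 0.001})
# print(record)
```

Captured automatically at training time, a record like this answers the provenance question without anyone having to remember to update a wiki page.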

Challenges in Scaling AI Initiatives

Despite significant investment in AI, many companies are still struggling to stabilize and scale their AI initiatives. The manual tools and processes many teams use for AI model development are insufficient and do not support the scaling and governance required.

While 63% of businesses reported spending between $500,000 and $10 million on their AI efforts, 61% of respondents continue to experience a variety of operational challenges. Notably, 64% of organizations deploying AI said it takes between 7 and 18 months to get their AI workloads from idea into production, illustrating the slow, unpredictable nature of AI projects today. For nearly another 20%, the anticipated timeline to production is 19 months or more.

DevOps Like It's 1999

The challenges data science and ML teams face today are reminiscent of those facing software engineers in the late 1990s. Then came DevOps, which transformed the way software engineers deliver applications by making it possible to collaborate, test and deliver software continuously.

In ML and AI projects, collaboration is even more challenging than in traditional software engineering. Standard software development tools focus on versions or commits of code, whereas ML has many more moving parts: teams require version control for their training and test data, their code and their environment, as well as the metrics and hyperparameters of each training run.

While ML and AI are understood as powerful technologies with the potential to reinvent the global economy, operationalizing AI remains a major hurdle for many organizations. To simplify, accelerate and control every stage of the AI model lifecycle, the same DevOps-like principles of collaboration, fast feedback and continuous delivery should be applied to AI. Only then can enterprises realize the full potential of their AI deployments across the organization.
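
As one illustration of what "fast feedback and continuous delivery" can mean for models, a pipeline step can evaluate each candidate model and promote it to production only if it clears an agreed quality bar, failing the build otherwise. The sketch below is a hypothetical example; the threshold value and the stubbed evaluate/promote steps are assumptions to be replaced with a team's real evaluation and deployment code.

```python
import sys

# Illustrative quality bar a candidate model must clear before being promoted.
ACCURACY_THRESHOLD = 0.90

def evaluate_candidate() -> float:
    """Placeholder for the real evaluation step; returns a fixed score for illustration."""
    return 0.93

def promote_candidate() -> None:
    """Placeholder for the real deployment step, e.g. pushing the model to a registry."""
    print("Candidate promoted to production.")

def main() -> int:
    accuracy = evaluate_candidate()
    if accuracy < ACCURACY_THRESHOLD:
        # A non-zero exit code fails the pipeline, giving the team fast feedback.
        print(f"Candidate rejected: accuracy {accuracy:.3f} < {ACCURACY_THRESHOLD}")
        return 1
    promote_candidate()
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run as the final step of a CI pipeline, a failed gate rejects the candidate automatically, mirroring how failing tests block a software release.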

Mark Coleman is VP of Product and Marketing at Dotscience.