
Debunking Common Myths About Operationalizing AI

Alan Young
InRule

Is your company trying to use artificial intelligence (AI) for business purposes like sales and marketing, finance or customer experience?

If not, why not?

If so, has it struggled to start AI projects and get them to work effectively?

Chances are, you're being held back by one or more operational misperceptions that are causing an overwhelming majority of AI projects to fail. To better understand AI's challenges, InRule Technology tapped Forrester Consulting to explore some common myths about operationalizing AI and suggest ways enterprises can overcome their AI challenges.

The report found that companies believe operationalizing AI can generate real value — helping them gain insights about customers and markets and improve business outcomes. They're just having trouble making it happen; operational silos, data strategy challenges, and a lack of resources are standing in their way.

One commonly held myth suggests that there aren't enough use cases to convince leadership to make AI a priority. It turns out many companies have the opposite problem: at least three quarters of AI decision-makers report having either a manageable number of use cases or more than they can manage. That share is likely to grow, since more than two thirds of decision-makers expect their AI and machine learning use cases to increase at least slightly over the next 18 to 24 months.

There's also a wide variety of use cases being exercised across business functions. The most popular involve generating insights into competitors, markets and customer behavior. Others include projects focused on innovation, automation, security and business efficiencies.

A second myth: AI projects are hard to implement because you can't find enough data scientists with doctorate degrees in statistics. Good data scientists are important, but the truth is, you don't need PhDs to start operationalizing AI, and you don't need one to work with most of the machine learning modeling tools on the market today. The real challenge is connecting data scientists to the rest of the ecosystem. Internal silos ranked as one of the top three collaboration challenges firms face, keeping data programmers, gatherers, interpreters and users from communicating with each other. The fact that one in four organizations has a culture that does not encourage data democratization makes the problem worse.

Data is clearly a prerequisite for AI projects, but the notion that you need enormous volumes of data managed by massive data systems is a myth. Regardless of the volume of data available, it's the quality that really matters. Data quality ranked second highest among the top challenges firms encounter when using AI technologies. If your data quality is poor, decisions will suffer, and that will likely hurt customer experience and the corporate bottom line.
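
To make "quality over quantity" concrete, here is a minimal sketch of the kind of lightweight quality gate a team might run before training a model. The column names, checks and sample data are illustrative assumptions, not anything prescribed in the report:

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Return a few simple data-quality signals for a training dataset."""
    return {
        "rows": len(df),
        "missing_required_columns": [c for c in required_columns if c not in df.columns],
        "duplicate_rows": int(df.duplicated().sum()),
        "null_fraction_by_column": df.isna().mean().round(3).to_dict(),
    }

if __name__ == "__main__":
    # Illustrative data with deliberate quality problems (duplicates, missing values).
    df = pd.DataFrame({
        "customer_id": [1, 1, 2, 3],
        "age": [34, 34, None, 29],
        "churned": [0, 0, 1, None],
    })
    print(basic_quality_report(df, required_columns=["customer_id", "age", "churned"]))
```

Even a simple report like this surfaces the gaps and duplicates that quietly degrade model decisions long before anyone questions the volume of data.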

Another myth: AI learns by itself, so you can set it and forget it. This is where a lot of AI projects fail to live up to expectations. AI models need to be nurtured and continually monitored to make informed predictions and recommendations. While 71% of AI decision-makers routinely monitor and retrain models, a surprisingly high 28% build and train models and then leave them alone, creating an incorrect, negative perception about the effectiveness of AI. The most successful AI adopters build models with data feedback loops so they can be continuously retrained. AIOps, for example, can enhance IT processes within an enterprise: it enables real-time, continuous data acquisition, and the resulting outcome data feeds model updates and insights as part of an ongoing feedback loop.
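
As a rough illustration of such a feedback loop, the sketch below uses scikit-learn to score a deployed model against recent labeled outcomes and retrain it when accuracy degrades. The 0.80 accuracy floor, the function name and the synthetic data are assumptions for illustration only, not the report's methodology:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.80  # illustrative retraining trigger

def monitor_and_maybe_retrain(model, X_recent, y_recent):
    """Score the live model on recent labeled outcomes; retrain if it has drifted."""
    current_accuracy = accuracy_score(y_recent, model.predict(X_recent))
    if current_accuracy < ACCURACY_FLOOR:
        # In practice, retraining would draw on a fuller, curated dataset.
        model.fit(X_recent, y_recent)
    return model, current_accuracy

if __name__ == "__main__":
    # Synthetic stand-in for historical training data plus newly observed outcomes.
    X, y = make_classification(n_samples=2000, random_state=0)
    X_train, X_recent, y_train, y_recent = train_test_split(X, y, test_size=0.5, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    model, acc = monitor_and_maybe_retrain(model, X_recent, y_recent)
    print(f"accuracy on recent outcomes: {acc:.3f}")
```

The point is not the specific threshold but the loop itself: outcome data flows back in, the model's performance is measured against it, and retraining happens as a routine operation rather than an afterthought.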

What can organizations do to better operationalize their AI?

An important starting point is sharp decision-making. Machine learning algorithms need case-relevant context and decision logic to be successfully operationalized. Decision platforms that incorporate machine learning, human decision logic, and other decisioning technologies and techniques can help scale AI projects, turning them into an integral part of your business strategy. AIOps anchors machine learning, decision automation, digital processes and advanced analytics to automate repetitive tasks and improve their governance, freeing teams to focus on new mission-critical problems with higher ROI. The result is faster, more effective project delivery and higher-impact business outcomes. Forrester data shows that more than two thirds of all enterprises are currently implementing AI and that nearly all will be doing so by 2025. Getting up to speed on AI will pay dividends in the future.
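
As a hedged sketch of what combining machine learning output with human decision logic can look like, the example below layers explicit business rules over a hypothetical model risk score. The thresholds, field names and outcomes are invented for illustration and are not drawn from any particular decision platform:

```python
def decide_credit_limit_increase(risk_score: float, months_as_customer: int,
                                 has_recent_delinquency: bool) -> str:
    """Combine an ML risk score with human-authored decision logic."""
    # Hard business rules take precedence over the model's score.
    if has_recent_delinquency:
        return "decline"
    if months_as_customer < 6:
        return "refer_to_analyst"
    # Model-driven decision within the rule-approved population.
    if risk_score < 0.2:
        return "approve"
    if risk_score < 0.5:
        return "approve_with_lower_limit"
    return "decline"

if __name__ == "__main__":
    print(decide_credit_limit_increase(risk_score=0.15, months_as_customer=24,
                                       has_recent_delinquency=False))
```

Keeping the rules explicit and the model's contribution bounded is one way decision platforms give business stakeholders the context and governance that raw model output lacks.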

Alan Young is Chief Product Officer at InRule.

