Even Artificial Intelligence Is Only as Good as Its Users

Khadim Batti
Whatfix

Artificial intelligence (AI) has saturated the conversation around technology as compelling new tools like ChatGPT produce headlines every day. Enterprise leaders have correctly identified the potential of AI — and its many tributary technologies — to generate new efficiencies at scale, particularly in the cloud era. But as we now know, these technologies are rarely plug-and-play, for reasons both technical and human. As they introduce AI into the workplace, IT leaders, CIOs, and other executives will need to address both dynamics to derive full value from their technology investments across departments, from sales and marketing to R&D.

Focus on User Digital Experience

The value of modern technology is realized at scale. As advanced technologies move into everyday operations, an emerging barrier (and, consequently, a differentiator) is how easily and efficiently users can interact with the tools they're given. A powerful tool that lags in adoption among half the workforce cannot achieve its full potential value. This makes training essential. But the modern technology environment moves quickly, outpacing traditional training methods, so training itself must evolve, leveraging the very technologies it teaches.

Organizations that achieve high rates of technology adoption, usage, and efficiency across their workforce will be far better positioned to generate full returns on their technology investments. This means focusing on users just as much as on the technology itself. Organizations should take advantage of every training resource at their disposal — product demos, walkthroughs, and other materials — to help users adapt to new technology. The core obstacle, however, is the rapid rate of technological change: the cadence of updates and new tools is difficult for users to keep up with, and the resulting lack of adoption holds companies back from realizing the full value of their investments.

Crucially, traditional training methods alone cannot keep pace with today's fast-moving technology landscape. This is where AI can help: by automating user guidance through a software layer that acts across business apps, an organization can reduce the friction of learning a new tool and thereby increase adoption. On the back end, AI can automate tasks or deliver real-time guidance to produce a smoother, more efficient user experience. By making business apps easier to use and adopt, organizations derive greater value from their existing technology suite while lowering the barrier to introducing new tools. Combined with more traditional elements — workshops, on-demand informational content, and feedback mechanisms — AI used in this way creates a virtuous cycle.
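At its simplest, an in-app guidance layer of the kind described here maps a user's current location in an app to a contextual tip. The sketch below is purely illustrative — the app names, screen names, and tips are invented for the example and do not reflect any particular product:

```python
# Minimal sketch of an in-app guidance layer: map the screen a user is on
# to a short walkthrough tip. All apps, screens, and tips are hypothetical.
GUIDANCE = {
    ("crm", "new_lead_form"): "Tip: use 'Import from email' to autofill lead details.",
    ("crm", "pipeline_view"): "Tip: drag a deal card between columns to update its stage.",
    ("hr_portal", "expense_report"): "Tip: attach receipts before submitting.",
}

def guidance_for(app: str, screen: str) -> str:
    """Return contextual help for the user's current location, if any."""
    return GUIDANCE.get((app, screen), "")

print(guidance_for("crm", "pipeline_view"))
```

A production guidance layer would of course derive these mappings from richer context (user role, prior behavior, app state) rather than a static table, but the shape of the problem — context in, guidance out — is the same.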

The other side of this is monitoring software efficiency. In modern organizations, data fuels decision making, and AI should be no different. Leaders can't expect to introduce solutions — even automated ones — and automatically receive maximum return on their investment. Especially at scale, digital tools are only as good as how well they're used. Leaders must be able to identify bottlenecks and adapt quickly to increase efficiency over time. That means developing KPIs that correspond to business goals and tracking them so that strategy can be adjusted in an informed way, on both the business side and the internal technology side.
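As a rough illustration of what such adoption KPIs might look like, the sketch below computes a seat-based adoption rate and per-feature usage counts from hypothetical license lists and usage logs. The metric definitions and data shapes are assumptions for the example, not a prescribed standard:

```python
from collections import Counter

def adoption_rate(active_users: set[str], licensed_users: set[str]) -> float:
    """Share of licensed seats that saw any activity in the period."""
    if not licensed_users:
        return 0.0
    return len(active_users & licensed_users) / len(licensed_users)

def feature_usage(events: list[tuple[str, str]]) -> Counter:
    """Count events per feature from (user, feature) usage logs."""
    return Counter(feature for _, feature in events)

# Hypothetical data: four licensed seats, two of which generated events.
licensed = {"ana", "ben", "cho", "dev"}
events = [("ana", "reports"), ("ben", "reports"), ("ana", "export")]
active = {user for user, _ in events}

print(adoption_rate(active, licensed))          # 0.5
print(feature_usage(events).most_common(1))     # [('reports', 2)]
```

Even a simple pair of numbers like these — seats actually in use, and which features carry the load — gives leaders something concrete to track against business goals.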

Build the Infrastructure to Support AI

Digital transformation in general, and AI adoption in particular, requires at its core a technical and organizational infrastructure that supports continuous change over time. In the cloud era, change management must be a permanent feature of the company's strategic outlook rather than a transition plan with an end date. Technology, as a primary differentiator across industries, must be central to any change management strategy. For AI, this means building teams with the skills and expertise to manage deployment across business units. Software engineers are an essential part of any AI team — they have the technical capabilities to enable deployments and integrate them into operations. They should also help make the workings of any particular AI program visible and intelligible to all relevant stakeholders, especially the C-suite.

It's important to note that AI is not best used as a catch-all solution to apply broadly and blindly everywhere it might fit. In the avalanche of AI headlines concerning every industry under the sun, it can be easy to forget this. AI is best used to achieve specific tasks. Organizations must clearly identify the purpose of each AI deployment and have a reliable means to track its progress in relation to those goals. The team should include representatives from product management and design to ensure that any AI project aligns with overall business objectives. 

Additionally, organizations must ensure that stakeholders clearly understand the inputs and outputs of any program, as well as how they relate to one another so that teams can make informed decisions about strategic adjustments. AI outputs depend on the specificity of their inputs, so teams must be trained on how to formulate these inputs in an efficient way, a process called "prompt engineering." Some AI solutions can also learn these inputs as employees deploy them and autofill them in context moving forward, creating a positive feedback loop to remove friction from the process over time. 
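The "learning inputs" idea above can be sketched as a small helper that remembers how a user last filled a prompt template for a given task and prefills those fields the next time. The class, template, and field names are hypothetical, invented for this example:

```python
class PromptAssistant:
    """Sketch: remember a user's last inputs per task and prefill them later."""

    def __init__(self) -> None:
        self._last_inputs: dict[str, dict[str, str]] = {}

    def build(self, task: str, template: str, **inputs: str) -> str:
        # Start from whatever the user supplied last time for this task,
        # then overlay the new values — repeated fields need no retyping.
        merged = {**self._last_inputs.get(task, {}), **inputs}
        self._last_inputs[task] = merged
        return template.format(**merged)

assistant = PromptAssistant()
template = "Write a {tone} follow-up email to {prospect} about {product}."
first = assistant.build("sales_email", template,
                        tone="friendly", prospect="Acme", product="Widget Pro")
# Next time, only the field that changed is supplied; the rest is prefilled.
second = assistant.build("sales_email", template, prospect="Globex")
print(second)
```

Each reuse requires less typing than the last, which is exactly the friction-reducing feedback loop the article describes.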

Artificial intelligence represents a structural shift in how we use technology — organizations must reflect that by establishing dedicated systems and structures to integrate the technology and manage its evolution over time. At the same time, the organizations that are able to achieve the best return on AI investments will clearly understand its capabilities and limitations and establish mechanisms to ensure AI projects are contributing positively to overall business goals.

Unlocking the Potential of Your Existing Workforce

AI is here to stay, and it represents a massive change in how people and businesses relate to technology. As tools like generative AI grow more sophisticated, they will appear in more areas of everyday life: chatbots, customer service, IT service management, and beyond. In sales, AI helps employees conduct prospect research and draft personalized email scripts on the front end while streamlining the CRM user experience on the back end. In R&D, it helps researchers filter monumental data lakes into actionable knowledge. The true benefit of AI tools lies in the efficiencies they unlock within the existing workforce. Employees in a structure focused on continuous transformation will develop, through their natural workflows, the competencies and skills to supervise AI as an everyday function. By focusing on user digital experience as much as on the technologies themselves, organizations can generate maximum return on their investments while developing the capacity to evolve in tandem with the innovations they used to chase.

Khadim Batti is Co-founder and CEO of Whatfix
