The 3 Questions Every Product Leader Should Ask When Evaluating a New AI Tool

Ranjan Goel
VP of Product
LogicMonitor

All eyes are on the value AI can provide to enterprises. Whether it's simplifying developers' lives, forecasting business decisions more accurately, or empowering teams to do more with less, AI has already become deeply integrated into businesses. However, it is still too early to evaluate its impact using traditional methods. Here's how engineering and IT leaders can make educated decisions despite the ambiguity.

1. Does my current team have the technical ability to implement this?

Even the most advanced technology won't deliver its full potential if it isn't implemented and maintained properly. Leaders must ask:

Can my existing team do this? Can we train them to implement AI in a timely manner?

Or will we need to hire additional staff?

None of the answers to these questions spells disaster for implementing AI, but they do help create a clearer picture of what's possible for your specific team. Given how quickly AI is evolving, most organizations will likely need to upskill or reskill. Whether through training or hiring, implementation needs to be feasible.

2. Am I willing to implement this at its current stage?

AI is full of promises — some near-term, some further off. When evaluating AI vendors, it's important to recognize that the technology's current capabilities may continue to evolve rapidly. If the current proof of concept meets most of your needs, great!

Decision makers should also evaluate whether the AI tool provider they're considering is open to working closely with them to iterate on the tool. Most AI tools are not yet mature enough to cover every potential use case out of the box.

3. So you want to move forward. How do you justify the investment?

Think of the ROI of AI as falling into two categories: business benefits and financial benefits.

Most AI tools today offer value in terms of business benefits, such as improved customer experience, enhanced employee productivity, and faster rollouts of new features or products. Businesses using AI can differentiate themselves from competitors as more innovative in their products and service offerings.

The other category is financial benefits, which, in addition to the above, will undoubtedly catch the attention of the C-suite and board of directors. These include factors like improved top-line growth or margins. Quantifying solid financial benefits from AI tools is becoming feasible, especially for domain-specific AI applications in areas like IT operations, healthcare, or retail. This is an area where a partnership between the AI tool vendor and the decision-maker can greatly improve the quality of the ROI calculation by accounting for key use cases.
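To make the financial side concrete, a back-of-the-envelope ROI calculation can be sketched in a few lines. This is a minimal illustration, not a method from the article: every figure below is a hypothetical placeholder, and the point is the structure (annualized benefits weighed against one-time implementation plus recurring run costs) rather than the numbers.

```python
# Hypothetical back-of-the-envelope ROI sketch for evaluating an AI tool.
# All dollar figures are illustrative placeholders, not real benchmarks.

def simple_roi(annual_benefit: float, implementation_cost: float,
               annual_run_cost: float, years: int = 3) -> float:
    """Return net ROI as a ratio over the evaluation horizon."""
    total_benefit = annual_benefit * years
    total_cost = implementation_cost + annual_run_cost * years
    return (total_benefit - total_cost) / total_cost

# Example: $400k/yr in quantified benefits (e.g., reduced MTTR,
# productivity gains), a $250k one-time implementation, and
# $100k/yr in licensing and maintenance.
roi = simple_roi(400_000, 250_000, 100_000, years=3)
print(f"3-year ROI: {roi:.0%}")  # prints "3-year ROI: 118%"
```

Even a crude model like this forces the vendor conversation the article recommends: the vendor can help validate the benefit assumptions for your key use cases, while the decision-maker owns the cost side.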

It's rarely one person's responsibility to ask and answer all these questions. These considerations should involve the broader team and be viewed holistically. Some tools that are still in their infancy may be worth the risk if they check many of the other boxes. A more significant monetary investment could be the right choice if the technology addresses a critical need for your team that otherwise couldn't be met. Ask these questions, and reevaluate often.

Ranjan Goel is VP of Product at LogicMonitor

