
New Relic announced a machine learning operations (MLOps) capability that allows engineering teams to monitor applications built with OpenAI’s GPT Series APIs.
With just two lines of code, engineering teams can monitor OpenAI completion queries while tracking performance and cost metrics in real time, all in a single New Relic view. The new integration lets New Relic ingest raw OpenAI data, helping companies leverage emerging AI technologies like OpenAI's ChatGPT to accelerate innovation and business goals while balancing cost considerations.
This integration expands New Relic's catalog of supported data and extends New Relic's reach to a wider audience of developers. Engineers can quickly deploy the OpenAI quickstart from New Relic Instant Observability and access this capability for free, with no credit card required and minimal setup, by signing up for a forever-free New Relic account.
“This is an exciting time for companies who are embracing GPT and building modern applications with Generative AI,” said New Relic Chief Growth Officer and GM of Observability Manav Khurana. “Observability is a game changer when it comes to helping companies extract value from GPT. We are making it so that any engineer using GPT APIs can easily monitor their cost and performance with easy set-up and at no cost. This aligns with our mission to put the power of observability into the hands of every engineer.”
The new capability allows engineers to:
- Get started for free: The out-of-the-box GPT monitoring solution in New Relic Instant Observability is the first of its kind, and it is included at no additional cost for New Relic full platform users.
- Install easily: With just two lines of code, users can import the monitor module from the nr_openai_monitor library and automatically generate a dashboard that displays key GPT performance metrics.
- Monitor cost: Costs for OpenAI's Davinci model can add up quickly, making it difficult to operate at scale. New Relic provides engineering teams with real-time cost tracking of their GPT usage.
- Optimize performance: New Relic gives engineering teams insight into the average response time and other key performance metrics around GPT requests, allowing engineers to optimize usage and ensure the best possible response times.
- Analyze prompts and responses: New Relic provides valuable information about the usage, speed, and effectiveness of GPT to help engineering teams achieve better results from their ML models.
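The exact API of the nr_openai_monitor library is not detailed in the announcement, so the sketch below does not attempt to reproduce it. Instead, it illustrates the underlying pattern such a monitoring integration relies on: wrapping each completion call to record response time, token usage, and estimated cost. The `fake_completion` stub, the `record_completion_metrics` decorator, and the per-token price are all hypothetical stand-ins for illustration; real OpenAI pricing varies by model and changes over time.

```python
import time

# Placeholder per-1K-token price for cost estimation; real OpenAI
# pricing differs by model and over time -- this is an assumption.
PRICE_PER_1K_TOKENS = 0.02

def record_completion_metrics(completion_fn):
    """Wrap a completion call and capture the kind of metrics
    (latency, token usage, estimated cost) a GPT monitoring
    dashboard would surface."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        response = completion_fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        tokens = response["usage"]["total_tokens"]
        metrics = {
            "response_time_ms": round(elapsed_ms, 2),
            "total_tokens": tokens,
            "estimated_cost_usd": round(
                tokens / 1000 * PRICE_PER_1K_TOKENS, 6),
        }
        return response, metrics
    return wrapper

@record_completion_metrics
def fake_completion(prompt):
    # Hypothetical stand-in for a real OpenAI completion request;
    # it returns a usage block shaped like the API's response.
    return {"choices": [{"text": "Hello!"}],
            "usage": {"total_tokens": 12}}

response, metrics = fake_completion("Say hi")
```

In a real deployment the wrapped call would be the actual OpenAI client request, and the collected metrics would be forwarded to the observability backend rather than returned to the caller.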
The OpenAI GPT integration with New Relic is included at no additional cost to New Relic full platform users. New Relic supports all current OpenAI GPT versions including the recently released GPT-4.
The Latest
In live financial environments, capital markets software cannot pause for rebuilds. New capabilities are introduced as stacked technology layers to meet evolving demands while systems remain active, data keeps moving, and controls stay intact. AI is no exception, and its opportunities are significant: accelerated decision cycles, compressed manual workflows, and more effective operations across complex environments. The constraint isn't the models themselves, but the architectural environments they enter ...
As with most digital transformation shifts, organizations often prioritize productivity and leave security and observability struggling to keep pace. This usually translates into both the mass implementation of new technology and fragmented monitoring and observability (M&O) tooling. In the era of AI and varied cloud architecture, a disparate observability function can be dangerous: IT teams lack a complete picture of their IT environment, making it harder to diagnose issues and slowing mean time to resolve (MTTR). In fact, according to recent data from the SolarWinds State of Monitoring & Observability Report, 77% of IT personnel said the lack of visibility across their on-prem and cloud architecture was an issue ...
In MEAN TIME TO INSIGHT Episode 23, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses the NetOps labor shortage ...
Technology management is evolving, and in turn, so is the scope of FinOps. The FinOps Foundation recently updated their mission statement from "advancing the people who manage the value of cloud" to "advancing the people who manage the value of technology." This seemingly small change solidifies a larger evolution: FinOps practitioners have organically expanded to be focused on more than just cloud cost optimization. Today, FinOps teams are largely — and quickly — expanding their job descriptions, evolving into a critical function for managing the full value of technology ...
Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...
Nearly every conversation about AI eventually circles back to compute. GPUs dominate the headlines while cloud platforms compete for workloads and model benchmarks drive investment decisions. But underneath that noise, a quieter infrastructure challenge is taking shape. The real bottleneck in enterprise AI is not processing power, it is the ability to store, manage and retrieve the relentless volumes of data that AI systems generate, consume and multiply ...
The 2026 Observability Survey from Grafana Labs paints a vivid picture of an industry maturing fast, where AI is welcomed with careful conditions, SaaS economics are reshaping spending decisions, complexity remains a defining challenge, and open standards continue to underpin it all ...
The observability industry has an evolving relationship with AI. We're not skeptics, but it's clear that trust in AI must be earned ... In Grafana Labs' annual Observability Survey, 92% said they see real value in AI surfacing anomalies before they cause downtime. Another 91% endorsed AI for forecasting and root cause analysis. So while the demand is there, customers need it to be trustworthy, as the survey also found that the practitioners most enthusiastic about AI are also the most insistent on explainability ...
In the modern enterprise, the conversation around AI has moved past skepticism toward a stage of active adoption. According to our 2026 State of IT Trends Report: The Human Side of Autonomous AI, nearly 90% of IT professionals view AI as a net positive, and this optimism is well-founded. We are seeing agentic AI move beyond simple automation to actively streamlining complex data insights and eliminating the manual toil that has long hindered innovation. However, as we integrate these autonomous agents into our ecosystems, the fundamental DNA of the IT role is evolving ...
AI workloads require an enormous amount of computing power ... What's also becoming abundantly clear is just how quickly AI's computing needs are leading to enterprise systems failure. According to Cockroach Labs' State of AI Infrastructure 2026 report, enterprise systems are much closer to failure than their organizations realize. The report ... suggests AI scale could cause widespread failures in as little as one year — making it a clear risk for business performance and reliability.