
Q&A: HP Talks About App Development and DevOps - Part 2

Pete Goldin
APMdigest

In Part 2 of APMdigest's exclusive interview, John Jeremiah, Technology Evangelist for HP's Software Research Group, talks about DevOps.

Start with Part 1 of the interview

APM: Do you feel that developers and testers are being held more accountable for application quality today? How is their role changing?

JJ: Developers and testers are taking on more and more accountability for both speed and quality. As we've discussed, if defective code gets into a new software product or update, it becomes much more costly and time-consuming to rectify down the line. It's like one bad ingredient in a sandwich: the more ingredients you add, the more laborious and painful it becomes to take the sandwich apart.

The goal of smaller, more focused releases is to improve both speed and quality. Because development is faster and releases are smaller, it becomes easier to test and to fix bugs.

Faster feedback is a key to both speed and quality – if a bug is quickly found, the developer knows what to fix, as opposed to finding a bug that was created six months ago. Not only is it easier to fix, it's also possible to prevent a spiral of issues based on that one bad line of code.

APM: What about the "Ops" side of DevOps – how is the Ops role changing? What new demands do they face?

JJ: It's not all about Dev and Test. In fact, automating the delivery of an app change — code and infrastructure — is a critical part of DevOps. The explosion of containerization and infrastructure as code is having a real impact on the definition of "Ops." I see the Ops role evolving to where they provide consistent frameworks or patterns of infrastructure for DevOps teams to use — shifting from actually doing the provisioning to providing "standard" and "supported" packages for Dev teams.

IT Ops teams also contribute to application quality in a "shift right" way, so to speak. There is a wealth of information in production data that can be fed back to developers to help them prioritize areas for improvement – for example, what are the most common click-through paths on a website, or where exactly in a site are end users abandoning shopping carts or transactions?
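A metric like cart abandonment can be computed directly from production event logs. Here is a minimal sketch in Python; the event log, session IDs, and event names (`add_to_cart`, `checkout`) are made up for illustration, not drawn from any specific analytics product:

```python
# Hypothetical production event log: (session_id, event) pairs
events = [
    ("s1", "view_product"), ("s1", "add_to_cart"), ("s1", "checkout"),
    ("s2", "view_product"), ("s2", "add_to_cart"),
    ("s3", "view_product"), ("s3", "add_to_cart"),
    ("s4", "view_product"),
]

def cart_abandonment_rate(events):
    """Fraction of sessions that added to cart but never checked out."""
    carted = {sid for sid, ev in events if ev == "add_to_cart"}
    checked_out = {sid for sid, ev in events if ev == "checkout"}
    if not carted:
        return 0.0
    return len(carted - checked_out) / len(carted)

print(f"abandonment rate: {cart_abandonment_rate(events):.0%}")  # → 67%
```

Fed back to the Dev side, a number like this tells developers exactly which flow to prioritize, which is the "shift right" feedback loop described above.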

APM: How do you predict that DevOps will evolve?

JJ: DevOps will evolve from an emerging movement into common practice across many enterprises. The collaborative nature and shared responsibilities of DevOps will continue to blur rigid role definitions, and we will see traditional silo mentalities increasingly fade away.

Developers are acting more like testers; IT Ops teams are feeding crucial information back to developers to assist in the development process; and developers are architecting applications — based on this feedback — to be more resource-efficient, essentially thinking and behaving like IT ops teams. Everyone is united and focused on application roll-out speed and quality, which includes functional and end-user performance quality, as well as resource-efficiency — yet another ingredient of a quality app.

Visionary business leaders will take advantage of DevOps speed, and will create disruptive offerings in many industries, further accelerating the adoption of DevOps.

APM: What tools are essential to enable DevOps?

JJ: To achieve velocity combined with quality, DevOps teams need automation tools that let them eliminate manual, error-prone tasks and radically increase testing coverage: earlier in the lifecycle, and more realistically and comprehensively in terms of network environments, end-user devices, and geographies.

DevOps teams need visibility and insight into how their application is delivering value, so we're also seeing an increased need for advanced data analytics capabilities, which can identify trends within the wealth of production data being generated.

APM: Where does APM fit into DevOps?

JJ: Application Performance Management (APM) is a critical success factor in DevOps. There is no point rolling out the most feature-rich application if, for example, it performs poorly for end users (they'll simply abandon it) or it becomes a major IT resource drain.

Before DevOps, you often had situations where a poorly performing app was discovered, developers would promptly point their finger at IT, and vice versa. In a DevOps team, everyone owns application performance and is responsible for success. APM systems give the team a true, unbiased view of how an application is performing.

Today, these systems are often combined with analytics that let DevOps teams identify the root cause of performance issues — whether code- or IT-related — in minutes, rather than days. In this way, APM helps to eliminate finger-pointing and guessing.
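To make the "minutes, rather than days" point concrete, here is a minimal sketch of the kind of check such analytics automate: flagging a release whose 95th-percentile response time has regressed against a baseline. The samples and the 50% threshold are hypothetical, not from any particular APM product:

```python
import math

def p95(samples_ms):
    """95th-percentile response time, nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank index (1-based)
    return ordered[rank - 1]

# Hypothetical response-time samples (ms) from a baseline and a new release
baseline = [120, 130, 110, 125, 140, 115, 135, 128, 122, 118]
release = [150, 310, 160, 145, 290, 155, 300, 148, 152, 280]

THRESHOLD = 1.5  # assumed policy: flag if p95 grew by more than 50%
regressed = p95(release) > THRESHOLD * p95(baseline)
print("regression detected" if regressed else "within budget")
# → regression detected
```

A real APM system would of course correlate this signal with code changes and infrastructure metrics to isolate the root cause, rather than just raising the flag.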

APM can also be used to proactively anticipate the end-user performance impact of new features and functionality, helping DevOps teams determine whether these possible additions are worth it.

APM: With all the emphasis on testing automation lately, there is a theory that testing will go away as a discipline — "DevOps" will become "NoOps." Do you envision this ever happening?

JJ: In a word – no. It's a myth and a misconception that DevOps leads to reduced testing. In fact, the opposite is true. A DevOps team is committed to keeping its code base ALWAYS ready for production. That means every change is tested, and defects are not logged to be fixed later but are fixed immediately. The team commits to keeping the build green and ready to go, ready to pass acceptance tests. The key to achieving this is automation tooling that lets them provision, tweak, and de-provision testing resources quickly and easily, so they can focus more of their time on actual testing.

Read Part 3, the final installment of the interview, where John Jeremiah, Technology Evangelist for HP's Software Research Group, outlines the future of application development.
