Q&A: HP Talks About App Development and DevOps - Part 2

Pete Goldin
Editor and Publisher
APMdigest

In Part 2 of APMdigest's exclusive interview, John Jeremiah, Technology Evangelist for HP's Software Research Group, talks about DevOps.

Start with Part 1 of the interview

APM: Do you feel that developers and testers are being held more accountable for application quality today? How is their role changing?

JJ: Developers and testers are taking greater and greater accountability for both speed and quality. As we've discussed, if defective code gets into a new software product or update, it becomes much more costly and time-consuming to rectify down the line. It's like one bad ingredient in a sandwich: the more ingredients that are added, the more laborious and painful it becomes to take the sandwich apart.

The goal of having smaller, more focused releases is to improve both speed and quality. Because development is faster and releases are smaller, it becomes easier to test and fix bugs.

Faster feedback is a key to both speed and quality – if a bug is quickly found, the developer knows what to fix, as opposed to finding a bug that was created six months ago. Not only is it easier to fix, it's also possible to prevent a spiral of issues based on that one bad line of code.

APM: What about the "Ops" side of DevOps – how is the Ops role changing? What new demands do they face?

JJ: It's not all about Dev and Test. In fact, automating the delivery — code and infrastructure — of an app change is a critical part of DevOps. The explosion of containerization and infrastructure as code is having a real impact on the definition of "Ops". I see their role evolving to where they provide consistent frameworks or patterns of infrastructure for DevOps teams to utilize — shifting from actually doing the provisioning, to providing "standard" and "supported" packages for Dev teams.

IT Ops teams also contribute to application quality in a "shift right" way, so to speak. There is a wealth of information in production data that can be fed back to developers to help them prioritize areas for improvement – for example, what are the most common click-through paths on a website, or where exactly in a site are end users abandoning shopping carts or transactions?
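As a rough illustration of that feedback loop, here is a minimal sketch in Python. The session data, page names, and conversion step are all made up for the example (a real pipeline would pull them from production web logs or an APM tool), but it shows how the most common click-through paths and the drop-off points can be surfaced for developers.

```python
from collections import Counter

# Hypothetical per-session click-through paths, standing in for data pulled
# from production web logs; each tuple is the ordered list of pages visited.
sessions = [
    ("home", "search", "product", "cart", "checkout", "confirm"),
    ("home", "product", "cart"),                # abandoned at the cart
    ("home", "search", "product", "cart"),      # abandoned at the cart
    ("home", "search", "search", "product"),    # never reached the cart
]

CONVERSION_PAGE = "confirm"  # assumed final step of a completed transaction

# Most common full paths, and the last page seen in sessions that never converted.
path_counts = Counter(sessions)
drop_off_points = Counter(s[-1] for s in sessions if CONVERSION_PAGE not in s)

print("Most common click-through paths:")
for path, n in path_counts.most_common(3):
    print(f"  {' -> '.join(path)}  ({n} sessions)")

print("Where users drop off before converting:")
for page, n in drop_off_points.most_common():
    print(f"  {page}: {n} abandoned sessions")
```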

APM: How do you predict that DevOps will evolve?

JJ: DevOps will evolve from an emerging movement into common practice across many enterprises. The collaborative nature and shared responsibilities of DevOps will continue to blur rigid role definitions, and we will see traditional silo mentalities increasingly fade away.

Developers are acting more like testers; IT Ops teams are feeding crucial information back to developers to assist in the development process; and developers are architecting applications — based on this feedback — to be more resource-efficient, essentially thinking and behaving like IT Ops teams. Everyone is united and focused on application roll-out speed and quality, which includes functional and end-user performance quality, as well as resource efficiency — yet another ingredient of a quality app.

Visionary business leaders will take advantage of DevOps speed, and will create disruptive offerings in many industries, further accelerating the adoption of DevOps.

APM: What tools are essential to enable DevOps?

JJ: To achieve velocity combined with quality, DevOps teams need automation tools that enable them to eliminate manual, error-prone tasks and radically increase testing coverage – both earlier in the lifecycle and more realistically and comprehensively, in terms of network environments, end-user devices, and geographies.
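One way to picture that broader, earlier coverage is a single automated check run across several device and network profiles. The sketch below uses pytest; the profiles, the measure_page_load stand-in, and the three-second budget are hypothetical placeholders for whatever a real test harness would actually drive.

```python
import pytest

# Hypothetical device/network profiles mapped to simulated page-load times
# (seconds); a real harness would drive browsers, emulators, or remote agents.
PROFILES = {
    "desktop-fiber": 0.4,
    "mobile-4g": 1.2,
    "mobile-3g": 2.8,
}

LOAD_TIME_BUDGET_SECONDS = 3.0  # assumed performance budget


def measure_page_load(profile: str) -> float:
    # Placeholder for a real measurement against a test environment.
    return PROFILES[profile]


# The same check runs automatically for every profile on every change.
@pytest.mark.parametrize("profile", sorted(PROFILES))
def test_page_load_within_budget(profile):
    assert measure_page_load(profile) <= LOAD_TIME_BUDGET_SECONDS
```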

DevOps teams need visibility and insight into how their application is delivering value, so we're also seeing an increased need for advanced data analytics capabilities, which can identify trends within the wealth of production data being generated.

APM: Where does APM fit into DevOps?

JJ: Application Performance Management (APM) is a critical success factor in DevOps. There is no point rolling out the most feature-rich application if it performs poorly for end users, who will simply abandon it, or if it turns out to be a major drain on IT resources.

Before DevOps, you often had situations where a poorly performing app was discovered, and developers would then promptly point their finger at IT, and vice versa. In a DevOps team, everyone owns application performance and is responsible for success. Hence, APM systems give DevOps teams a true, unbiased view of how an application is performing.

Today, these systems are often combined with analytics that let DevOps teams identify the root cause of performance issues — whether code- or IT-related — in minutes, rather than days. In this way, APM helps to eliminate finger-pointing and guessing.
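A minimal sketch of that kind of release-over-release analysis, assuming per-endpoint response-time samples collected by an APM agent before and after a deployment; the endpoints, numbers, and 1.5x threshold are purely illustrative, not drawn from any particular APM product:

```python
import statistics

# Hypothetical response times (ms) per endpoint, sampled before and after a release.
baseline = {"/search": [120, 135, 128], "/checkout": [210, 205, 220]}
current  = {"/search": [125, 130, 140], "/checkout": [480, 510, 495]}

REGRESSION_THRESHOLD = 1.5  # flag endpoints whose median latency grew 50% or more

for endpoint, before in baseline.items():
    after = current[endpoint]
    ratio = statistics.median(after) / statistics.median(before)
    if ratio >= REGRESSION_THRESHOLD:
        print(f"{endpoint}: median latency up {ratio:.1f}x since the release, likely regression")
```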

APM also can be used to proactively anticipate the end-user performance impact of new features and functionalities, which can help DevOps teams determine if these possible additions are worth it.

APM: With all the emphasis on testing automation lately, there is a theory that testing will go away as a discipline — "DevOps" will become "NoOps." Do you envision this ever happening?

JJ: In a word – No. It's a myth, a misunderstanding, and a misconception that DevOps leads to reduced testing. In fact, the opposite is true. A DevOps team is committed to keeping its code base ALWAYS ready for production. That means every change is tested, and defects are not logged to be fixed later – they are fixed immediately. The team commits to keeping the build green and ready to go, ready to pass acceptance tests. The key to achieving this is automation tooling that lets the team provision, tweak, and de-provision testing resources quickly and easily, so they can focus more of their time on actual testing.
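As a small sketch of that provision-test-de-provision pattern: the context manager below is hypothetical (a real version would call a cloud, container, or lab-management API), but it shows the shape of the automation, with the environment always torn down once the tests finish, even if they fail.

```python
from contextlib import contextmanager


@contextmanager
def test_environment(name: str):
    # Hypothetical provisioning step: in practice, spin up containers or VMs here.
    print(f"provisioning {name} ...")
    env = {"name": name, "url": f"https://{name}.test.internal"}  # placeholder details
    try:
        yield env
    finally:
        # Always de-provision, even if the tests raised an exception.
        print(f"de-provisioning {name} ...")


with test_environment("checkout-regression") as env:
    print(f"running acceptance tests against {env['url']}")
```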

Read Part 3, the final installment of the interview, where John Jeremiah, Technology Evangelist for HP's Software Research Group, outlines the future of application development.
