Need A Change? Newer Isn't Necessarily Better
March 02, 2022

Rebecca Dilthey
Rocket Software


We all love new, shiny objects. When the washing machine dies, most of us don't run to an appliance store that sells used models or parts to repair the one we have. We drop the money on a new one with all the bells and whistles.

The same goes for enterprise technology. When a legacy system starts to fail, our eyes tend to widen as we evaluate all the fancy toys on the market. The hardware and software industry runs on novelty: yesterday's innovation is tomorrow's doorstop.

While it may not seem as glamorous as the newest, most disruptive technology, your existing systems can often be updated to deliver the performance your organization needs, saving your business the money and time associated with "rip and replace" projects.

During the pandemic, IT organizations scaled back on higher-risk new investments and looked at how they could get more out of the systems they already had. In a post-COVID world, this trend hasn't changed, even as IT spending returns to pre-pandemic levels. Even with increased investment in projects to better support hybrid work environments, as well as hot new trends like hybrid cloud, organizations are realizing they can save valuable time and money by modernizing existing technology rather than throwing capital at the next big technology trend.

It's only natural that IT leaders would consider replacing systems they believe are outdated, especially if there is a perception that those systems cannot natively support a need of the business. In fact, that's often the first order of business for new CIOs when they walk into an IT organization. What is often overlooked, however, is the value of the solutions already in place: the ones their teams are comfortable with, and that don't cost hundreds of thousands or even millions of dollars in a rip and replace project. These systems are often fully capable of meeting the business's needs and enabling innovation and experimentation, especially if they are kept up to date.

What was that about millions of dollars?

The numbers are not insignificant. A rip and replace strategy carries many costs that aren't captured when the project is initially scoped. Projects of this size often need full-time managers, a role frequently delegated to consulting firms. Then there is the opportunity cost of employees devoting time and effort to the project instead of their day-to-day jobs. And there is the considerable risk inherent in any re-platforming project. Some companies have made the huge investment to replatform, only to discover at the end of the project that certain applications are so central to how the business operates, in essence the hub of all operations, that they are too risky to touch. Millions of dollars and years of effort, essentially for nothing.

Fans of rip and replace often counter that change needs to be made for operational reasons, but the data doesn't support it. While the product lines of many large systems are several decades old, the hardware and operating systems are updated every year. Somehow, though, that gets lost when we think about these older systems.

If you drive a Ford Focus, you know it has evolved dramatically from the Ford Model T. So why is there a perception that these machines are just like their ancestors?

In fact, not only do these systems offer a lower total cost of ownership along with the security and transaction management capabilities unique to mainframe and midrange platforms, but today's developers and programmers can also use their favorite open-source languages and tools, newer technologies like AI and ML, and more.

Additionally, there are software tools that enable developers without RPG or COBOL skills to cost-effectively create an "innovation layer" that makes it easy and efficient to modernize and automate applications and workflows running on these systems.
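To make that idea a little more concrete, here is a minimal sketch of what such an innovation layer can look like: a small REST facade that calls a legacy program exposed on the host as a stored procedure, so new front ends can reach it without rewriting the underlying RPG or COBOL logic. This is an illustrative assumption, not a reference to any specific vendor tool; the libraries (Flask, pyodbc), the connection string, and the LEGACYLIB.GET_ORDER_STATUS procedure are all hypothetical placeholders.

# Hypothetical "innovation layer" sketch: a REST facade over a legacy program
# that has been exposed as a stored procedure on the heritage system.
# All names below (DSN, library, procedure, fields) are placeholders.

import pyodbc                      # generic ODBC access to the host database
from flask import Flask, jsonify   # lightweight HTTP layer in front of the host

app = Flask(__name__)

# Connection details are placeholders; real values come from your environment.
CONN_STR = "DSN=MYIBMI;UID=APIUSER;PWD=********"

@app.route("/orders/<order_id>/status")
def order_status(order_id: str):
    # The legacy program is wrapped as a stored procedure, so modern tooling
    # can call it without touching the original RPG/COBOL code.
    conn = pyodbc.connect(CONN_STR)
    try:
        cur = conn.cursor()
        cur.execute("CALL LEGACYLIB.GET_ORDER_STATUS(?)", order_id)
        row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        return jsonify({"error": "order not found"}), 404
    return jsonify({"order_id": order_id, "status": row[0]})

if __name__ == "__main__":
    app.run(port=8080)

The point of a layer like this is that the legacy application keeps doing the transaction processing it already does well, while web, mobile, and automation teams work against a modern interface they understand.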

Businesses are coming to the realization that it's more valuable to update the existing tech stack on a heritage system, and upgrade to the latest OS, than to rip and replace it. After all, at the end of the day, two factors matter most to IT leaders: do my infrastructure and the tools I deliver to the business support the business strategy and goals, and can I provide that support in a cost-effective way? If investing in existing systems instead of replatforming gives IT the best of both worlds, the quest for something brand new might not be the best option.

Rebecca Dilthey is a Product Marketing Director at Rocket Software
