
The Changing World of Application Delivery (and why it pays to have choices)

Chris Marks
Parallels

Getting applications into the hands of those who need them quickly and securely has long been the goal of a branch of IT often referred to as End User Computing (EUC).

Over recent years, the way applications (and data) are delivered to these "users" has changed noticeably. Organizations have far more choices available to them now, with more to come. Committing long-term to a single approach often leads companies down dead ends that are difficult (and expensive) to back out of. Retaining the flexibility to deliver different applications in different ways has served many well.

But how did we get here?

Where are we going?

Is this all too complicated?

Let's examine how we arrived at our current situation.

In the Beginning: Pre-1990s

Okay, so it's not the beginning, but it's early enough to establish a baseline for application delivery.

Back then, most desktop applications were installed locally and ran directly on the user's device. They could use whatever local resources they could grab, but they also needed fast access to shared data, a need that drove the meteoric rise of Network Attached Storage (NAS).

Meanwhile, a new "fad" was emerging called the Internet. This allowed people to become more mobile, and application requirements began to change. Rather than arriving at work, using applications, and leaving, use cases emerged where an application was needed outside the corporate network. This presented a range of challenges and opportunities.

The big challenge? Maintenance and performance. 

IT staff were tasked with travelling significant distances between offices to locally update and maintain each device. Additionally, while the Internet was gaining popularity, it was still no match for the local network, which could deliver a huge 10 Mbps at the time. Applications designed for near-instant data access simply couldn't function properly over slow or remote connections. This caused some big headaches, particularly for organizations with workforces dispersed across multiple offices. Data needed to be replicated, and this required expensive physical cabling in many cases.
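To put those numbers in perspective, a rough back-of-the-envelope comparison helps. The 10 Mbps figure comes from the text above; the 56 kbps modem speed and the 50 MB file size are illustrative assumptions about a typical remote link and dataset of the era:

```python
# Back-of-the-envelope transfer times for a shared 50 MB dataset.
# 10 Mbps is the LAN figure from the text; 56 kbps is an assumed
# dial-up modem speed, and 50 MB an illustrative file size.
lan_bps = 10_000_000
modem_bps = 56_000
size_bits = 50 * 1024 * 1024 * 8   # 50 MB expressed in bits

lan_seconds = size_bits / lan_bps
modem_seconds = size_bits / modem_bps

print(f"LAN:   {lan_seconds:.0f} seconds")          # ~42 seconds
print(f"Modem: {modem_seconds / 3600:.1f} hours")   # ~2.1 hours
```

Data that felt instantly available on the office LAN took hours over a remote link, which is why data-hungry applications simply broke once users left the building.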

Seeds of Change: The 1990s

In the 1990s, a group of former IBM engineers, including Ed Iacobucci, saw an opportunity and seized it, forming a company called Citrix. Their WinFrame product (and Citrix Multiuser, a multi-user operating system, before it) kept applications close to the data, maintained by centralized teams, while the people using the applications were sent images of the changing screen. In return, their mouse movements and keystrokes were sent back to the application. In doing this, the company was able to squeeze more than one user into a single instance of a server operating system.

Because the data wasn't moving anywhere, much less network traffic was generated. This meant that the client computer could connect over much poorer connections and/or more people could access the applications simultaneously without causing a networking bottleneck.
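The mechanism can be sketched in a few lines of Python. This is an illustrative toy, not any real remoting protocol: the "app" and its data live on the server, and only small input events and screen deltas cross the wire.

```python
# Minimal simulation of application remoting. The application state
# stays server-side; the client sends keystrokes and receives only
# a small rendering of what changed. All names are illustrative.

class RemoteApp:
    """Server-side app: holds the data and renders its own screen."""
    def __init__(self):
        self.text = ""

    def handle_input(self, keystroke):
        self.text += keystroke              # state changes on the server

    def screen_delta(self):
        # In a real system this would be a compressed image of the
        # changed screen region; here it's just the updated text.
        return f"[screen shows: {self.text}]"

def remoting_session(keystrokes):
    """One round trip per event: tiny event up, small delta down."""
    app = RemoteApp()
    client_display = None
    for key in keystrokes:                  # client -> server: input
        app.handle_input(key)
        client_display = app.screen_delta() # server -> client: delta
    return client_display

print(remoting_session(list("hi")))
```

Whatever the size of the underlying dataset, the traffic is only events and screen updates, which is why this worked over links far too slow for the data itself.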

This "Application Remoting" approach had several advantages, particularly in terms of data security and IT operational management, as the applications and data could remain in one place. Large organizations could make significant savings by operating in this way, and as a result, Citrix saw rapid growth, along with the Server-Based Computing market.

Over time, as with any buoyant market, new players entered the space. In 1997, Microsoft licensed the technology from Citrix and went on to release the first "Terminal Server" edition of Windows NT 4, delivering this approach directly from the operating system.

Installed applications were in the majority, but delivering remote applications in this way was gaining some significant market share, and other players soon emerged as well:

  • 2X Software (later Parallels RAS) launched in 2004, simplifying Terminal Services delivery and reducing the need for deep Citrix expertise.
  • VMware entered in 2006 with VDM (later Horizon), introducing Virtual Desktop Infrastructure (VDI), where each user got their own virtual desktop instead of sharing an OS. VDI required more infrastructure but solved performance issues caused by "noisy neighbours" on multi-user systems.

The Age of Enlightenment (or Not): The 1990s to 2000s

During this same era, application delivery went through an experimental phase filled with "Next Big Things" (NBTs). Each new technology promised to replace everything before it and become the new standard for delivering applications. None succeeded completely, but many organizations tried to move everything to one of these delivery models, and some still have applications delivered this way today.

Some of those NBTs included (in no particular order):

  • Java Applets
  • Adobe Flash
  • Silverlight
  • Microsoft Universal Windows Platform (UWP)

Some common traits of these NBTs:

  • Vendor lock-in: proprietary approaches that trapped users.
  • Plugin dependencies: increasingly blocked by browsers for security reasons.
  • Poor mobile support: they didn't anticipate the mobile revolution.
  • Heavy resource use: too demanding for typical systems.

But it wasn't all bad news: this period set the stage for the standards that would transform the industry. The rise of HTML5, along with JavaScript and CSS3, marked a significant turning point, making web applications platform-independent, browser-native, and universally accessible.

The Rise of SaaS: Late 1990s to 2010s

As we move through the late 1990s and early 2000s, a new model emerged, partially driven by the massive growth in Internet usage and the widespread adoption of web browsers. That model was Software as a Service (SaaS).

Companies like Salesforce went all-in with their "no software" mantra, pioneering browser-based applications. As Amazon Web Services (AWS) and other cloud providers entered the market, it became far easier for software companies to grow from a small scale to a global delivery model. SaaS was particularly appealing to software vendors: investors value predictable recurring monthly revenue more highly than perpetual licensing, with its one-off sales and ongoing maintenance obligations. Since then, the growth of SaaS has accelerated rapidly, and many software providers now offer exclusively SaaS-based services.

Down to Earth with a Bump – The 2010s

However, this rush toward the newest NBT once again led a number of organizations to attempt to deliver 100% of their workloads via SaaS, despite both financial and technical barriers to doing so. "Cloud-first" became a mantra, and many businesses repeated the mistakes discussed earlier:

  • Vendor lock-in resurfaced. Data ownership and exit strategies were often unclear.
  • Performance confusion grew: users blamed slow browsers for sluggish apps, unaware that each tab was effectively its own application.
  • Security myths spread. Many assumed browser-based apps were more secure, when in reality some vendors had simply wrapped existing clients in a web shell, sending the same data as before, just via a browser tab.

Securing access to SaaS applications and controlling the data that arrives on the client device were problems already solved for installed apps by Server-Based Computing and VDI (collectively part of End User Computing, or EUC). The challenge recurred as SaaS services increasingly replaced installed applications.

Dawn of a New Reality: The Now

Key to our story is that it's unusual for any technology to become completely dominant, to the exclusion of all others, in a short timeframe. There are examples, but set against the changes of the last 30 years, they are few. Most businesses cannot easily transition everything from A to B. That's improving over time, but we're not quite there yet.

That said, the shift over time is toward more applications being delivered via SaaS and fewer applications being delivered via installed apps and/or using SBC/VDI or another EUC technology.

Most businesses, though, sit somewhere between these two extremes. That may not seem neat and tidy, but it's perfectly okay: businesses should implement change at their own pace, not just because "everybody's doing it."

The history of application delivery shows that flexibility is what endures. Whether through local apps, remote delivery, or SaaS, organizations benefit most when they retain control over how they adapt.

These approaches work side by side, because you shouldn't be forced into changes your business doesn't need.

Chris Marks is Principal Outbound Product Manager at Parallels

