
The Changing World of Application Delivery (and why it pays to have choices)

Chris Marks
Parallels

Getting applications into the hands of those who need them quickly and securely has long been the goal of a branch of IT often referred to as End User Computing (EUC).

Over recent years, the way applications (and data) have been delivered to these "users" has changed noticeably. Organizations have many more choices available to them now, and there will be more to come. Committing to a single approach for the long term often leads companies down dead ends that are very difficult (and expensive) to back out of. Retaining the flexibility to deliver different applications in different ways has served many well.

But how did we get here?

Where are we going?

Is this all too complicated?

Let's examine how we arrived at our current situation.

In the Beginning: Pre-1990s

Okay, so it's not the beginning, but it's early enough to establish a baseline for application delivery.

Back then, most desktop applications were locally installed and ran directly on the user's device. They had access to all the local resources they could grab, but they also needed fast access to shared data, a need that drove the meteoric rise of Network Attached Storage (NAS).

Meanwhile, a new "fad" was emerging called the Internet. This allowed people to become more mobile, and application requirements began to change. Rather than arriving at work, using applications, and leaving, use cases emerged where an application was needed outside the corporate network. This presented a range of challenges and opportunities.

The big challenge? Maintenance and performance. 

IT staff were tasked with travelling significant distances between offices to locally update and maintain each device. Additionally, while the Internet was gaining popularity, it was still no match for the local network, which could deliver a huge 10 Mbps at the time. Applications designed for near-instant data access simply couldn't function properly over slow or remote connections. This caused some big headaches, particularly for organizations with workforces dispersed across multiple offices. Data needed to be replicated, and this required expensive physical cabling in many cases.

Seeds of Change: The 1990s

In the 1990s, a group of former IBM engineers led by Ed Iacobucci saw an opportunity in multi-user operating systems and seized it, forming a company called Citrix. Their WinFrame product (and MultiUser before it) let applications stay close to the data and be maintained by centralized teams, while the people using them were sent images of the changing screen; in return, mouse movements and keystrokes were sent back to the application. In doing this, Citrix was able to squeeze more than one user into a single instance of a server operating system.

Because the data wasn't moving anywhere, much less network traffic was generated. This meant that the client computer could connect over much poorer connections and/or more people could access the applications simultaneously without causing a networking bottleneck.
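To make that traffic asymmetry concrete, here is a minimal, purely illustrative Python sketch (not any vendor's actual remoting protocol). The 200 MB dataset figure, the screen-region size, and the zlib compression step are assumptions chosen only to show the shape of the comparison: per interaction, a remoted client exchanges a tiny input event and a compressed screen update, rather than pulling the shared data down to the device.

    # Toy comparison: bytes moved per interaction when remoting an application
    # versus copying its shared data to a locally installed client.
    # All sizes below are illustrative assumptions, not measured figures.
    import json
    import zlib

    DATASET_BYTES = 200 * 1024 * 1024            # assume 200 MB of shared data

    # Upstream: the client sends only input events (keystrokes, mouse moves).
    keystroke = json.dumps({"type": "key", "code": "Enter"}).encode()

    # Downstream: the server sends only the screen region that changed,
    # compressed before it crosses the wire (mostly-flat regions shrink well).
    dirty_region = bytes(640 * 480 * 3)          # one 640x480 24-bit region
    screen_update = zlib.compress(dirty_region)

    per_interaction = len(keystroke) + len(screen_update)
    print(f"Remoting, per interaction: {per_interaction:,} bytes")
    print(f"Local install, first data sync: {DATASET_BYTES:,} bytes")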

This "Application Remoting" approach had several advantages, particularly in terms of data security and IT operational management, as the applications and data could remain in one place. Large organizations could make significant savings by operating in this way, and as a result, Citrix saw rapid growth, along with the Server-Based Computing market.

Over time, as with any buoyant market, new players entered the space. In 1997, Microsoft licensed the technology from Citrix and built the first "Terminal Server" edition of Windows NT 4, allowing it to deliver this approach directly from the operating system.

Installed applications remained the majority, but remote application delivery was gaining significant market share, and other players soon emerged:

  • 2X Software (later Parallels RAS) launched in 2004, simplifying Terminal Services delivery and reducing the need for deep Citrix expertise.
  • VMware entered in 2006 with VDM (later Horizon), introducing Virtual Desktop Infrastructure (VDI), where each user got their own virtual desktop instead of sharing an OS. VDI required more infrastructure but solved performance issues caused by "noisy neighbours" on multi-user systems.

The Age of Enlightenment (or Not): The 1990s to 2000s

During this same era, application delivery went through an experimental phase filled with "Next Big Things" (NBTs). Each new technology promised to replace everything else on the market and become the new standard for delivering applications. None succeeded completely, but many organizations tried to move everything to the latest delivery model, and some still have applications delivered with these technologies today.

Some of those NBTs included (in no particular order):

  • Java Applets
  • Adobe Flash
  • Silverlight
  • Microsoft Universal Windows Platform (UWP)

Some common traits of these NBTs:

  • Vendor lock-in: proprietary approaches that trapped users.
  • Plugin dependencies: increasingly blocked by browsers for security reasons.
  • Poor mobile support: they didn't anticipate the mobile revolution.
  • Heavy resource use: too demanding for typical systems.

But it wasn't all bad news: this period set the stage for the emergence of standards that would transform the industry. The rise of HTML5, along with JavaScript and CSS3, marked a significant turning point. These standards made web applications platform-independent, browser-native, and universally accessible.

The Rise of SaaS: Late 1990s to 2010s

Through the late 1990s and early 2000s, a new model emerged, driven in part by the massive growth in Internet usage and the widespread adoption of web browsers: Software as a Service (SaaS).

Companies like Salesforce went all-in with their "no software" mantra, pioneering browser-based applications. As Amazon Web Services (AWS) and other cloud providers entered the market, they made it easier for software companies to scale from small deployments to a global delivery model. The SaaS model was particularly appealing to software vendors, both because of how investors recognize recurring revenue and because monthly subscriptions hold an advantage over perpetual licensing, which carries ongoing maintenance obligations. Since then, the growth of SaaS has accelerated rapidly, leading many software providers to offer exclusively SaaS-based services.

Down to Earth with a Bump: The 2010s

However, the rush toward the latest NBT once again led a number of organizations to attempt to deliver 100% of their workloads via SaaS, despite both financial and technical barriers to doing so. "Cloud-first" became a phrase many were chanting, and many businesses repeated the mistakes discussed earlier:

  • Vendor lock-in resurfaced. Data ownership and exit strategies were often unclear.
  • Performance confusion grew. Users blamed slow browsers for sluggish apps, unaware that each tab was effectively its own application.
  • Security myths spread. Many assumed browser-based apps were more secure, when in reality some vendors had simply wrapped existing clients in a web shell, sending the same data as before, just via a browser tab.

Securing access to applications and controlling the data that arrives on the client device had already been solved for installed apps with Server-Based Computing and VDI (collectively called End User Computing, or EUC). Now the same challenge recurred as SaaS services increasingly replaced installed applications.

Dawn of a New Reality: The Now

Key to our story is that it's unusual for one technology to become completely dominant, to the exclusion of all others, in a short timeframe. There are examples, but set against the significant changes of the last 30 years, they are few. Most businesses cannot easily transition everything from A to B. That is improving over time, but we're not quite there yet.

That said, the shift over time is toward more applications being delivered via SaaS and fewer applications being delivered via installed apps and/or using SBC/VDI or another EUC technology.

Most businesses, though, are somewhere between these two extremes — and that's totally fine. The realization that this is perfectly okay may not seem neat and tidy, but it's essential for businesses to implement change at their own pace, not just because "everybody's doing it."

The history of application delivery shows that flexibility is what endures. Whether through local apps, remote delivery, or SaaS, organizations benefit most when they retain control over how they adapt.

These approaches work side by side, because you shouldn't be forced into changes your business doesn't need.

Chris Marks is Principal Outbound Product Manager at Parallels
