The Migration to Serverless Has Begun - Is Your Network Ready?
August 15, 2018

Tal Rom
Alcide

In 2014, AWS introduced Lambda and, with it, the serverless architecture model. Since then, many other cloud providers have developed serverless options. Today, container-based, fully-managed players also share this space with the serverless cloud providers.

What’s behind this rapid growth? Serverless is extremely useful for a growing number of applications, including cloud job automation, serving IoT devices from the edge to the cloud, building backends for single-page applications (SPAs) and image compression.


According to a recent survey, 82 percent of respondents in 2018, compared to 45 percent in 2017, reported using serverless at work, suggesting that serverless is definitely here to stay.

As with any new technology, there are also challenges and barriers impacting mainstream adoption. Taking a deeper look at both the benefits and the challenges of serverless can help network operators decide whether it is right for them and whether the potential benefits outweigh the concerns around network visibility and complexity.

Weighing the Pros and Cons of a Serverless Architecture

Cloud-hosted serverless functions provide immediate value by eliminating much of the overhead associated with managing actual infrastructure, enabling efficient utilization of the underlying resources and delivering significant operational cost savings. This benefits developers, who can build with confidence in their language of choice, including Python, JavaScript, Go, Java, C# and more.
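To make the model concrete, here is a minimal sketch of a cloud-hosted serverless function in Python, using the AWS Lambda handler convention as one example. The handler name and response shape are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a serverless function: the platform provisions, scales
# and patches the runtime; the developer ships only this handler.
import json


def handler(event, context):
    """Entry point invoked by the platform for each event or request."""
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

There is no server, container image or scaling policy in the developer's hands here, which is precisely where the cost savings and the loss of infrastructure control both come from.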

Conversely, with serverless, all of the infrastructure control is in the hands of the cloud provider. This results in operational challenges and network visibility blind spots. Compared to the simplicity of containers, virtual machine (VM) or bare-metal architectures, serverless also complicates the network organization and security controls.

Barriers to Mainstream Adoption

As we previously discussed, adoption of serverless is growing due to its inherent benefits, but it has not yet become fully mainstream because of some of its limitations. Network operators must understand these barriers and vulnerabilities if they plan on reaping the benefits while maintaining a safe and secure serverless solution:

Function Runtime Restrictions
In the few years since its introduction, serverless runtime restrictions have emerged that slow down the process of building new applications or migrating existing ones. To create new workflows or adjust existing ones in a serverless environment, significant warm-up (cold-start) time is needed for each individual change, across every function hosted in the complex cloud network.
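One common way teams soften this warm-up cost, sketched below in Python under the AWS Lambda handler convention, is to initialize expensive clients once at module load so that warm invocations of the same container reuse them. The DynamoDB table and field names are hypothetical.

```python
# Sketch: pay initialization cost once per container, not once per request.
import boto3

# Created during the cold start; reused by every warm invocation afterward.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name


def handler(event, context):
    # Warm invocations skip client setup and go straight to the lookup.
    response = table.get_item(Key={"order_id": event["order_id"]})
    return response.get("Item", {})
```

Anything created inside the handler body, by contrast, is paid for on every single invocation.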

Self-Regulated Application Organization
For self-regulated applications or microservices, migrating to serverless comes with its own set of challenges. Because functions hold no state between invocations, these applications typically rely on managed or as-a-service databases to store data across requests, deploying caches like Redis or object storage like S3. With application state spread across a variety of caches and stores, network visibility declines and complexity increases.
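The Python sketch below illustrates the pattern: a stateless function keeps data across requests by delegating state to object storage, here S3 via boto3. The bucket and key names are hypothetical, and the read-modify-write is not concurrency-safe; this is a sketch, not a production design.

```python
# Sketch: the function itself keeps nothing; all state lives in S3.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-state"        # hypothetical bucket
KEY = "counters/visits.json"   # hypothetical object key


def handler(event, context):
    # Load the previous state from object storage.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY)
        state = json.loads(obj["Body"].read())
    except s3.exceptions.NoSuchKey:
        state = {"visits": 0}

    state["visits"] += 1

    # Write the updated state back for the next invocation to pick up.
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=json.dumps(state).encode())
    return state
```

Every such managed service is one more network path to observe and secure, which is where the visibility decline comes from.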

Ephemeral Functions
Although the burden of patching and maintaining infrastructure is relieved by implementing cloud-hosted serverless functions, the constantly shifting nature of each individual serverless function makes it extremely difficult for developers to establish controls around sensitive data that is always on the move.

These network and visibility challenges not only slow down and complicate operations, they also introduce a number of significant security concerns.

Serverless Security Concerns and Considerations

The main difference between traditional architectures and serverless is that functions rely heavily on non-web, event-based communications and networking channels. Running on public clouds, these event-based channels challenge the implementation of comprehensive security controls that can detect threats and enforce network policies effectively. Monitoring these new, complex environments requires security tools that understand microservices, scale horizontally, and coexist with the existing security stack.
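As a minimal illustration of what "non-web, event-based" means in practice, the Python sketch below handles the S3 object-created notification shape that AWS Lambda delivers to a triggered function. There is no HTTP request, client IP or header for traditional network controls to inspect, only the event source and the function's identity.

```python
# Sketch: an event-driven invocation rather than an inbound web request.
import urllib.parse


def handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Security tooling must reason about event sources and identities,
        # not packets and ports.
        print(f"object created: s3://{bucket}/{key}")
        processed += 1
    return {"processed": processed}
```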

Before making the decision to go serverless, operations teams and developers should understand their current network security policies, including the following (a sketch of centralized secret consumption follows the list):

■ Unification around secret consumption

■ Service-to-service authentication and authorization between first and third parties

■ Function workflows and access whitelisting

■ Observability

■ Network security monitoring

■ Access policies for the network and for data
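On the first point, here is a minimal Python sketch of unified secret consumption: the function pulls its credentials from a central secrets service at cold start rather than embedding them in code or configuration. The secret name and its fields are hypothetical.

```python
# Sketch: secrets are consumed from one central service, rotated centrally,
# and never baked into the function package.
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetched once per container during the cold start.
_db_secret = json.loads(
    secrets.get_secret_value(SecretId="prod/orders/db")["SecretString"]
)


def handler(event, context):
    # The function consumes the secret; it never stores or manages it.
    user = _db_secret["username"]
    return {"connected_as": user}
```

Rotation then happens in one place, and every function picks up the new value on its next cold start instead of requiring a redeploy.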

Function-based, serverless workloads are constantly evolving, which makes them harder to exploit, but it is still important to have a clear picture of the current state of your network security before moving toward a more fluid and complex computing solution.

Is your Network Ready for Serverless Adoption?

Still in relative infancy, the adoption of serverless architecture continues to grow as companies realize its benefits. Given the limitations outlined in this blog, how do you know if you are ready to implement a serverless framework in your network?

Before jumping headfirst into serverless, operations teams must understand the visibility blind spots, operational challenges and potential security threats these complex solutions introduce. At the same time, cloud providers must continue to innovate and improve their standards, operations and security measures before serverless adoption can occur seamlessly on community-driven frameworks built on Kubernetes.

If, after weighing the pros and cons, you decide that the potential benefits of going serverless outweigh the potential risks, understanding the capabilities and challenges associated with each platform provider is key to adopting a solution that works for your complex architecture.

Tal Rom is VP R&D at Alcide