In 2014, AWS Lambda introduced serverless architecture. Since then, many other cloud providers have developed serverless options. Today, container-based, fully-managed players also share this space with the serverless cloud providers.
What’s behind this rapid growth? Serverless is extremely useful for a growing number of applications, including cloud job automation, serving IoT devices from the edge to the cloud, building backends for single-page applications (SPAs), and image compression.
According to a recent survey, 82 percent of respondents reported using serverless at work in 2018, up from 45 percent in 2017, suggesting that serverless is definitely here to stay.
As with any new technology, there are also challenges and barriers that are impacting mainstream adoption. Taking a deeper look at both the benefits and challenges of serverless can help network operators decide if it’s right for them and if the potential benefits outweigh the concerns related to network visibility and complexity.
Weighing the Pros and Cons of a Serverless Architecture
With serverless, however, all of the infrastructure control is in the hands of the cloud provider. This results in operational challenges and network visibility blind spots. Compared to container, virtual machine (VM), or bare-metal architectures, serverless also complicates network organization and security controls.
Barriers to Mainstream Adoption
Adoption of serverless is growing due to its inherent benefits, but it has not yet become fully mainstream because of some of its limitations. Network operators must understand these barriers and vulnerabilities if they plan to reap the benefits while maintaining a safe and secure serverless solution:
Function Runtime Restrictions
In the few years since serverless was introduced, its runtime restrictions have emerged as a drag on building new applications and migrating existing ones. To create or adjust workflows in a serverless environment, each individual change incurs warm-up (cold-start) time across every function hosted in the cloud network.
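The warm-up cost above is why developers typically hoist expensive setup out of the request path. The sketch below illustrates the pattern in generic Python; the handler signature and setup work are hypothetical, not any specific provider's API.

```python
# Work done at module scope runs once per cold start; warm invocations
# in the same container reuse it instead of repeating the setup.
_INIT_COUNT = 0

def _load_dependencies():
    """Stand-in for expensive setup: SDK clients, DB connections, config."""
    global _INIT_COUNT
    _INIT_COUNT += 1
    return {"db": "connected"}

_DEPS = _load_dependencies()  # cold-start cost is paid here, once per container

def handler(event, context=None):
    # Warm invocations skip _load_dependencies() entirely.
    return {"status": "ok", "inits": _INIT_COUNT, "db": _DEPS["db"]}

# Simulate one cold start followed by warm invocations in the same container:
results = [handler({"n": i}) for i in range(3)]
```

Every function in a workflow pays this cold-start cost independently, which is why small changes across many functions add up.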
Self-Regulated Application Organization
For self-regulated applications or microservices, migrating to serverless comes with its own set of challenges. They typically rely on managed or as-a-service databases to store data across requests, deploying caches such as Redis or object storage such as S3. With application state scattered across these services, network visibility declines and complexity increases.
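Because a serverless function cannot keep state in local variables between requests, every piece of cross-request data moves through an external store. A minimal sketch of that pattern, using an in-memory dict as a stand-in for Redis or S3 (the key format is a hypothetical convention, not a library API):

```python
# The dict stands in for a managed store; in production this would be
# a Redis or S3 client call, which is where the visibility gap opens up.
store = {}

def count_request(store, user_id):
    """Increment a per-user counter in the external store and return it."""
    key = f"requests:{user_id}"
    store[key] = store.get(key, 0) + 1
    return store[key]

# Each call models an independent invocation that shares only the store:
first = count_request(store, "alice")
second = count_request(store, "alice")
```

Multiply this by every cache, queue, and bucket an application touches, and the amount of sensitive data in transit between functions and stores grows quickly.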
Although the burden of patching and maintaining infrastructures is relieved by implementing cloud-hosted serverless functions, the constantly shifting nature of each individual serverless function makes it extremely difficult for developers to establish controls around sensitive data that is always on the move.
These network and visibility challenges not only slow down and complicate operations, they also introduce a number of significant security concerns.
Serverless Security Concerns and Considerations
The main difference between traditional architectures and serverless is that functions rely heavily on non-web, event-based communications and networking channels. Running on public clouds, these event-based communications and channels challenge the implementation of comprehensive security controls that can detect threats and enforce network policies effectively. For serverless functions, new security tools that understand microservices, scale horizontally, and coexist in the existing security stack are required to monitor and scale these new, complex environments.
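To make the non-web nature of these triggers concrete, here is a sketch of a handler consuming an event payload rather than an HTTP request. The record shape mimics the AWS S3 event notification structure; the sample values are invented.

```python
# An event-driven trigger: there is no client IP, session, or URL for
# monitoring tools to inspect, only an event source and a payload.
sample_event = {
    "Records": [{
        "eventSource": "aws:s3",
        "s3": {"bucket": {"name": "uploads"}, "object": {"key": "img/cat.png"}},
    }]
}

def handler(event, context=None):
    # Security tooling must parse and understand each event shape to
    # reconstruct who triggered what, and with which data.
    seen = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        seen.append((record["eventSource"], s3["bucket"]["name"], s3["object"]["key"]))
    return seen

triggers = handler(sample_event)
```

Traditional perimeter tools built around web traffic see none of this; that is the visibility gap the new class of security tools has to fill.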
Before making the decision to go serverless, operations and developers should understand their current network security policies including:
■ Unification around secret consumption
■ Service-to-service authentication and authorization between first and third parties
■ Function workflows and access whitelisting
■ Network security monitoring
■ Access policies to the network and access policies to data
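The function-workflow and whitelisting items above boil down to deny-by-default policy checks between functions. A minimal sketch, with hypothetical function names standing in for real workloads:

```python
# Each function declares which callers it accepts; anything not listed
# is denied. In practice the caller identity would come from
# service-to-service authentication, not a plain string.
ALLOWED_CALLERS = {
    "resize-image": {"upload-handler"},
    "send-receipt": {"checkout", "refund"},
}

def is_call_allowed(caller, callee):
    """Only explicitly whitelisted caller/callee pairs are permitted."""
    return caller in ALLOWED_CALLERS.get(callee, set())
```

The important property is the default: an unknown function-to-function path is rejected until someone deliberately adds it to the policy.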
Function-based, serverless workloads are constantly evolving, making them harder to exploit, but it is still important to have a strong pulse on the current state of your network security before moving towards a more fluid and complex computing solution.
Is Your Network Ready for Serverless Adoption?
Still in relative infancy, the adoption of serverless architecture continues to grow as companies realize its benefits. Given the limitations outlined in this blog, how do you know if you are ready to implement a serverless framework in your network?
Before jumping head first into serverless, operations teams must understand the visibility blind spots, operational challenges, and potential security threats these complex solutions introduce. At the same time, cloud providers must continue to improve their standards, operations, and security measures before serverless adoption can happen seamlessly on community-driven frameworks built on Kubernetes.
If you weigh the pros and cons and decide that the potential benefits of going serverless outweigh the risks, understanding the capabilities and limitations of each platform provider is key to adopting a solution that works for your architecture.