How do you eat an elephant? One bite at a time.
I'm guessing that whoever coined that phrase wasn't thinking about microservices, but the logic applies: microservices let you break a big, hard-to-manage monolithic application into smaller parts that are individually easier to manage.
Simplifying application management through microservices, however, can take significant effort. Managing the individual parts is easy; what you really want is to manage the application as a whole, ensuring that each part works well with the others, and that is the challenge with microservices. While they make some aspects of software development easier, they introduce new issues of their own.
Let's look at some of these, and how we can respond to them.
Within the microservices framework, there are many possible ways to structure your application. The book The Art of Scalability describes the Scale Cube, which models how you can scale an application along X, Y, and Z axes.

In the X axis approach, you clone the application and use a load balancer to divide work equally across the identical copies. This is simple in theory, but it doesn't hold up well as the application grows and becomes more complex. In the Y axis model, the application is decomposed into components by function, either verb-based (e.g. checkout) or noun-based (e.g. customer support), with each component handling only one type of workload. This is the most commonly used method: the application is broken down into a combination of verb-based and noun-based microservices. The Z axis model also runs identical copies of the code, but unlike the X axis, each copy is responsible for only a subset of the data. A routing layer sends each request to the right instance based on an attribute of the request, such as the customer it belongs to.
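As a toy illustration of Z-axis routing, a deterministic hash of a request attribute can pick the partition that owns it. This is a sketch only; the shard names and the choice of customer ID as the routing key are invented for the example.

```python
# Toy sketch of Z-axis (data-partitioned) routing: every shard runs the
# same code, but each request is routed to one shard based on a request
# attribute -- here, a customer ID. Shard names are made up for illustration.
import hashlib

SHARDS = ["shard-a", "shard-b", "shard-c"]

def route(customer_id: str) -> str:
    """Pick a shard deterministically from the customer ID."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same customer always lands on the same shard:
assert route("customer-42") == route("customer-42")
```

Because the hash is deterministic, a given customer's requests always land on the shard that holds that customer's data, with no shared lookup table needed.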
Whichever model you choose to follow, what's clear is that microservices bring an added layer of decision-making and planning. As your application hits the real world, you'll need to test which is the right model for you.
What you don't want to be doing is manually provisioning infrastructure to meet changes in load. This is where the autoscaling capabilities of cloud vendors can be especially useful. AWS Fargate is one of the newer AWS services that brings together two essential components of modern infrastructure: containers and automation. Fargate lets you specify the compute and memory resources you need, and from there it handles provisioning and scaling the underlying infrastructure to match.
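As an illustration, a Fargate workload declares its CPU and memory up front in an ECS task definition. The family, container name, image, and resource values below are made up for the sketch; your own values would differ.

```json
{
  "family": "checkout-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "checkout",
      "image": "example/checkout:latest",
      "essential": true
    }
  ]
}
```

With the resources declared this way, there are no EC2 instances for you to size or manage; Fargate allocates capacity to match the task definition.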
Kubernetes also has an autoscaling feature, the Horizontal Pod Autoscaler, which scales the number of pods in a deployment or ReplicaSet based on observed CPU utilization. Through its YAML manifest, you can also configure it to scale on other custom metrics.
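As a sketch, a Horizontal Pod Autoscaler manifest targeting CPU utilization might look like the following. The Deployment name, replica bounds, and threshold are illustrative, and the available apiVersion depends on your cluster version.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout              # the Deployment being scaled (assumed)
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Swapping the Resource metric for a Pods or External metric in the same manifest is how you scale on custom metrics instead of CPU.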
Apart from scaling, a related challenge with running microservices in production is networking. In traditional applications tied to fixed hardware, networking was simple: endpoints were static, requests followed straightforward paths, and latencies were easy to investigate because the monolith itself was much simpler. With microservices, the sheer number of components (services, containers, nodes, virtual firewalls, and more) makes networking very complex.
What's needed for microservices networking is a service mesh. This is a service-to-service communication model that takes into account all the complexity of a microservices application stack. The job of a service mesh is to transfer requests from instance A to instance B and ensure that the request is processed. This means it needs to handle service discovery, and be able to find services, even if they've been auto-restarted recently. It needs to handle network failures, automatically retry requests that have failed, and should provide load balancing that takes into account latencies across the entire network.
Linkerd is one of the most prominent service mesh tools. It provides dynamic routing rules across the network and can detect which instances perform faster than others. When requests don't succeed after a couple of retries, it fails them quickly to avoid burdening the system further. This fail-fast approach helps keep a large distributed system like a microservices application responsive, even at scale.
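The retry-then-fail-fast behavior described above can be sketched in a few lines. The function name, retry budget, and backoff values below are assumptions for illustration, not Linkerd's actual configuration.

```python
# Minimal sketch of the retry-then-fail-fast behavior a service mesh
# provides on behalf of callers. Values are illustrative only.
import time

def call_with_retries(request_fn, max_retries=2, backoff_s=0.01):
    """Retry a failing call a couple of times, then fail fast."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except ConnectionError as err:
            last_error = err
            # Exponential backoff between attempts.
            time.sleep(backoff_s * (2 ** attempt))
    # Give up quickly rather than queueing more work behind a sick instance.
    raise RuntimeError("failing fast after retries") from last_error
```

The point of the final raise is the fail-fast part: rather than retrying indefinitely, the caller gets a prompt error and the failing instance isn't buried under queued requests.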
Monitoring for Visibility
Monitoring microservices requires monitoring at multiple levels. You need to monitor the containers that make up each service, but you also need to monitor the services themselves for a realistic view of customer experience and application performance.
Monitoring is now a multi-tool affair, with each tool specializing in a particular type of monitoring. Metrics are collected and queried with a tool like Prometheus, which is ideal for time-series data. Closely integrated with Kubernetes, Prometheus is container-aware. Other tools, especially APM (application performance management) tools, also play an important role in microservices monitoring; to be effective for highly distributed microservices applications, however, these tools should use lighter instrumentation and cater to distributed architectures.
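Prometheus's container-awareness shows up in how it discovers scrape targets. A minimal sketch of a scrape configuration that uses Kubernetes service discovery might look like the following; the job name is illustrative.

```yaml
scrape_configs:
  - job_name: "microservices"    # illustrative job label
    scrape_interval: 15s
    kubernetes_sd_configs:
      - role: pod                # discover scrape targets from Kubernetes pods
```

Because targets are discovered from the cluster rather than listed statically, pods that are rescheduled or autoscaled are picked up automatically.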
Further, each client platform may need specialized tools to monitor them. Mobile apps and mobile OSes can't be monitored with the same tools used to monitor web apps, though there is some overlap. Specialized tools for mobile monitoring and error reporting are essential.
For logging, the open source Elasticsearch stack is a robust solution, but there are many capable commercial ones available as well. Logging gives much-needed context into errors and performance lags, and is indispensable to troubleshooting. In a microservices world where complexity is the name of the game, logging helps bring much needed clarity.
Finally, a distributed tracing API like OpenTracing helps track the lifecycle of a request as it passes through the network. It is vendor-neutral and easy to get started with, making it an attractive option for teams that need visibility into their distributed system without taking on too much added complexity.
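To make the idea concrete, here is a hand-rolled toy of the core tracing concepts (a trace ID shared across services, and per-hop spans that record parentage and timing). This is an illustration of the concepts only, not the OpenTracing API.

```python
# Toy sketch of distributed tracing: one trace ID follows a request
# across services, while each hop records its own span. Class and
# operation names are invented for the illustration.
import time
import uuid

class Span:
    def __init__(self, operation, trace_id=None, parent_id=None):
        self.operation = operation
        self.trace_id = trace_id or uuid.uuid4().hex  # shared by the whole request
        self.span_id = uuid.uuid4().hex               # unique to this hop
        self.parent_id = parent_id
        self.start = time.time()
        self.duration = None

    def child(self, operation):
        """Start a child span that keeps the same trace ID."""
        return Span(operation, trace_id=self.trace_id, parent_id=self.span_id)

    def finish(self):
        self.duration = time.time() - self.start

# One request crossing two services produces two spans with one trace ID:
root = Span("checkout")
downstream = root.child("charge-card")
downstream.finish()
root.finish()
assert downstream.trace_id == root.trace_id
```

A tracing backend stitches spans that share a trace ID into a single timeline, which is how you spot where in the call chain a request spent its time.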
Join the CA Technologies webcast, Actionable Insight in Massively Scalable Environments, Wednesday, March 21, 11:00 AM EST, to learn more.
Microservices architectures can make applications easier to manage, but only if you understand and address the novel management complexities that microservices introduce.
Fortunately, new approaches and tools provide robust solutions to these complex challenges.