Five Reasons Why Your Microservices Could Be Failing
June 27, 2018

Brien Posey
Fixate IO

Although microservices were once proclaimed as revolutionary, their merits have more recently been hotly debated. Some IT pros see microservices as a path to more agile application development, while others claim that microservices could kill application performance by introducing unnecessary complexity.

As usual, the truth lies somewhere in the middle. Microservices, like so many other things in IT, are just a tool. When used properly, microservices can yield tremendous benefits. When a poorly coded application is broken into microservices, however, performance and reliability tend to suffer.

Of course, poorly written code is not the only thing that can cause a company’s microservices initiatives to fail. Here are five more things to watch out for.

1. Network Latency

Over the last couple of years, I have read several blog posts stating that today’s networks are fast and reliable, and that network latency is a non-issue when it comes to the success of a transition to microservices. While there is a degree of truth in this statement, there are a few things to keep in mind.

First, network calls will never be as fast as calls to services that are running on the same server as the application that uses them.

Second, it is becoming increasingly common to host microservices on public clouds or on servers in remote locations. Any time that calls to a service have to traverse the Internet, there is a degree of uncertainty about performance.

Third, in the case of a single microservice, the impact of network latency is probably going to be negligible. As you begin to accumulate microservices, however, the network overhead becomes more of a factor. This is especially true for applications that have to make serial calls to microservices, rather than being able to call them in parallel.
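To make the serial-versus-parallel point concrete, here is a minimal Python sketch. The service names and the 50-millisecond latency figure are hypothetical stand-ins for real network calls, but the arithmetic holds: three serial calls cost roughly the sum of their latencies, while three parallel calls cost roughly the latency of one.

```python
import asyncio
import time

# Hypothetical microservice call: 50 ms of simulated network latency.
async def call_service(name: str) -> str:
    await asyncio.sleep(0.05)
    return f"{name} response"

async def serial() -> None:
    # Each call waits on the previous one, so latency accumulates.
    for name in ("inventory", "pricing", "shipping"):
        await call_service(name)

async def parallel() -> None:
    # Independent calls issued together: total time is roughly one call.
    await asyncio.gather(
        *(call_service(n) for n in ("inventory", "pricing", "shipping"))
    )

for fn in (serial, parallel):
    start = time.perf_counter()
    asyncio.run(fn())
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")  # ~0.15s vs. ~0.05s
```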

2. Fault Tolerance

A second reason why microservices can fail has to do with a lack of redundancy. Suppose for a moment that a dozen different applications all have a dependency upon a particular microservice. If that microservice were to drop offline, then all twelve of the applications that use it would end up failing. This seems like a really obvious thing, and yet I have seen companies deploy shared microservices on non-redundant infrastructure, all in the name of saving a few dollars. The bottom line is that if you use redundancy to protect applications against loss of availability, then similar measures should be used to protect the microservices that those applications depend on.
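Redundancy is usually provided by the platform itself (load balancers, replicated instances), but even a client-side failover sketch illustrates the principle. The endpoints below are hypothetical; the point is that a shared service should have more than one instance available so that a single outage does not take down every dependent application.

```python
import urllib.request
from urllib.error import URLError

# Hypothetical redundant instances of one shared microservice.
PRICING_ENDPOINTS = [
    "https://pricing-a.example.com/quote",
    "https://pricing-b.example.com/quote",
]

def call_with_failover(endpoints: list[str]) -> bytes:
    """Try each redundant instance in turn; fail only if all are down."""
    last_error: Exception | None = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=2) as response:
                return response.read()
        except URLError as err:
            last_error = err  # This instance is unreachable; try the next.
    raise RuntimeError("every instance of the shared service is down") from last_error
```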

3. Finger-pointing

Another reason why microservices adoptions sometimes fail has to do with infighting. Imagine a situation in which an application is broken up into a series of microservices, with small groups of engineers being dedicated to each. When a problem occurs with the core application, it becomes easy for these groups to point to microservices other than their own as being the source of the problem.

The problem, of course, is that these groups lack visibility into the service structure as a whole, so blame gets assigned without any real proof as to the source of the problem. While it may be impossible to ever completely prevent finger-pointing, comprehensive monitoring can go a long way toward making things better.
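One simple technique that takes the guesswork out of blame assignment is propagating a correlation ID through every downstream call, so that a single slow or failing request can be traced across team boundaries. The sketch below is a bare-bones illustration with hypothetical service names; in production, a proper tracing or APM tool would do this job rather than hand-rolled logging.

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("tracing")

def call_downstream(service: str, correlation_id: str) -> None:
    # Stand-in for a real network call to the named microservice.
    start = time.perf_counter()
    time.sleep(0.01)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Every service logs the same correlation ID, so one slow request
    # can be followed across team boundaries instead of argued about.
    log.info("cid=%s service=%s latency_ms=%.1f", correlation_id, service, elapsed_ms)

cid = str(uuid.uuid4())
for svc in ("auth", "catalog", "checkout"):
    call_downstream(svc, cid)
```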

4. Too Much of a Good Thing

Another reason why microservice adoption sometimes fails is because of what I like to call microservice sprawl. In other words, legacy applications are sliced and diced in a haphazard manner, breaking out anything that could conceivably be turned into a microservice.

The concept of microservice sprawl reminds me of a small application that I encountered many years ago. Every single application function, even something as simple as displaying a line of text, was broken into a subroutine. Because of the way that the application was written, the code ran very inefficiently, and it was almost impossible for a human to follow what the code was doing without the aid of various tools. What should have been a tiny, very simple application had a codebase that was completely convoluted, simply because too many subroutines had been used.

The same basic rule applies to microservices. Rather than turning every conceivable block of code into a microservice, focus on functions that write, retrieve, or update data.
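To illustrate the distinction, consider this hypothetical pair of functions. The first is a self-contained data operation and a reasonable service boundary; the second is trivial presentation logic that would only add latency and sprawl if placed behind a network hop.

```python
# A reasonable microservice candidate: a self-contained data operation
# with a clear input and output. (Hypothetical example; the in-memory
# dict stands in for a real datastore.)
ORDERS = {"cust-42": [{"order_id": 1, "total": 19.99}]}

def get_order_history(customer_id: str) -> list[dict]:
    return ORDERS.get(customer_id, [])

# A poor candidate: trivial presentation logic. Putting this behind a
# network call adds overhead without any benefit.
def format_greeting(name: str) -> str:
    return f"Hello, {name}!"
```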

5. Forgetting About Dependencies

One of the main advantages of microservices is that a single microservice can be used by multiple applications. Yet I have seen real-world situations in which someone forgot that a microservice was shared, and made a modification that broke the applications depending on it. Hence, you can never assume that no one else is using a microservice, or that it is safe to change it in a way that alters the required input or the expected output.
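Here is a hypothetical example of what a safe change looks like: new fields are added to the response, but nothing that existing callers parse is removed, renamed, or made newly required.

```python
# Hypothetical shared pricing service. This is the original response
# shape that a dozen applications already parse:
def get_price(sku: str) -> dict:
    return {"sku": sku, "price": 19.99}

# A safe evolution adds fields without touching existing ones, so any
# caller that only reads "sku" and "price" keeps working:
def get_price_v2(sku: str) -> dict:
    return {"sku": sku, "price": 19.99, "currency": "USD"}

# An unsafe change would rename "price" or make a new input parameter
# required; that is the kind of modification that silently breaks
# every dependent application.
```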

Lessons Learned

Lesson number one when it comes to the adoption of a microservices architecture is undoubtedly to write good code. Lesson number two is to use available monitoring tools to keep tabs on the calls to your microservices, and monitor the underlying infrastructure. Given the complexity that microservices introduce, monitoring is the only reliable means of efficiently detecting and resolving performance issues.

To learn more about monitoring microservices and containers, watch this on-demand webcast, The Essentials of Container Monitoring.

Brien Posey is a Fixate IO contributor, a freelance technical writer, and a 15-time Microsoft MVP.