
5 Tips for Getting the Most Value from Logs

Ishan Mukherjee
New Relic

Logs are one of the most useful tools for observability and application performance monitoring. However, getting the most mileage from logs requires paying careful attention to planning what data to collect, the best way to display it, and the proper context for log entries.

Logs provide a comprehensive view of events and errors that occur while software is running. A log monitoring solution ingests activity records generated by applications, services, and components of the operating system stack and writes them to text files so issues can be detected and resolved before they slow down the system or impact the user experience.

Configuring logs for an entire infrastructure and application stack can be overwhelming because of the sheer amount of data that is generated. Nearly every event that takes place in a system can generate a log entry, which means that modern application stacks may throw off millions or billions of events each day.

Collecting too much irrelevant information can cause log files to swell to huge proportions and make it difficult for humans or automated solutions to spot anomalies. Conversely, capturing too little information can cause important events to be missed.

Here are five best practices that will ensure you get the greatest value from log analytics.

1. Choose carefully what to log

Decide what information is most critical to understanding system performance and configure the logging solution accordingly. Collecting too many messages can drive up storage costs and make it difficult to identify relevant information when a problem occurs.

The data you gather should be relevant and useful. Some messages may not need to be captured at all. For example, success and redirect entries, which indicate that an operation was completed as planned, are usually not very useful in troubleshooting.
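Most logging frameworks let you enforce this kind of selectivity with level thresholds. The sketch below uses java.util.logging from the JDK; logging success and redirect entries at FINE is an illustrative choice, not a fixed convention:

import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class SelectiveLogging {
    private static final Logger LOG = Logger.getLogger(SelectiveLogging.class.getName());

    public static void main(String[] args) {
        // Raise the threshold so routine success/redirect entries (logged
        // here at FINE) are dropped, while warnings and errors still pass.
        LOG.setLevel(Level.WARNING);
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.WARNING);
        LOG.addHandler(handler);
        LOG.setUseParentHandlers(false);

        LOG.fine("GET /orders -> 200 OK");                    // filtered out
        LOG.fine("GET /old-path -> 301 redirect");            // filtered out
        LOG.warning("GET /orders -> 503 upstream timeout");   // kept
    }
}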

Seek input from everyone on the team to ensure that their needs are considered. Log information should provide the necessary details to understand issues and make decisions at every level of the operating and application stack. Capturing metadata is crucial to pinpointing events and root causes. For example, a message stating that an operation failed is less useful than one that states what operation was attempted and why it failed.
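To see the difference context makes, compare the two messages in this illustrative snippet (the chargeCard operation and its fields are hypothetical):

import java.util.logging.Logger;

public class ContextualLogging {
    private static final Logger LOG = Logger.getLogger(ContextualLogging.class.getName());

    static void chargeCard(String orderId, String reason) {
        // Vague: leaves the reader guessing what failed and why.
        LOG.warning("operation failed");

        // Better: names the operation, the entity involved, and the cause.
        LOG.warning(String.format(
                "chargeCard failed for order=%s reason=%s", orderId, reason));
    }

    public static void main(String[] args) {
        chargeCard("A-1042", "card expired");
    }
}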

Pay careful attention to sensitive information such as passwords, personal data, and business secrets. If you must capture this data, be sure your logging solution supports encryption. In many cases, you don't need to log this information at all.
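If sensitive values can reach your log pipeline, redacting them before they are written is a common safeguard. A minimal sketch, assuming a 16-digit card number is the only pattern to mask:

import java.util.regex.Pattern;

public final class LogRedactor {
    // Illustrative pattern: mask anything that looks like a 16-digit card number.
    private static final Pattern CARD = Pattern.compile("\\b\\d{16}\\b");

    public static String redact(String message) {
        return CARD.matcher(message).replaceAll("****REDACTED****");
    }

    public static void main(String[] args) {
        System.out.println(redact("payment failed for card 4111111111111111"));
        // -> payment failed for card ****REDACTED****
    }
}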

Be sure to include timestamp information for all log messages. The level of detail should be customized to the application, as some tasks require extremely precise time information while others may need no more than an hourly mark. It's best to apply whatever timestamp standard you choose across the entire stack so logs can be correlated with other telemetry data types such as metrics and events.
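A single convention, such as ISO 8601 in UTC with millisecond precision, makes that correlation straightforward. An illustrative sketch using java.time:

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class Timestamps {
    public static void main(String[] args) {
        // One ISO-8601 UTC format applied everywhere makes logs easy to
        // correlate with metrics and events from other sources.
        DateTimeFormatter fmt = DateTimeFormatter
                .ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")
                .withZone(ZoneOffset.UTC);
        System.out.println(fmt.format(Instant.now()) + " INFO order-service started");
    }
}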

2. Establish a baseline for comparison

Logs can help you understand your stack better, which is important for performance tuning as well as distinguishing between real problems and false alerts.

Your first step when adopting a log monitoring solution should be to establish a baseline of normal behavior that can be used to identify anomalies. Choose common scenarios that will help you determine which data points to monitor and use as a baseline. For example, application monitoring can detect if parts of an application are increasing their use of memory over time, which is a symptom of a memory leak, but only if you know what constitutes normal memory usage.
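A baseline can be as simple as a rolling average. The sketch below flags memory samples that exceed the recent mean by 50%; the window size and threshold are illustrative assumptions, not recommendations:

import java.util.ArrayDeque;
import java.util.Deque;

public class MemoryBaseline {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;

    MemoryBaseline(int size) { this.size = size; }

    // Returns true when a sample exceeds the rolling mean by 50%.
    boolean isAnomalous(double usedMb) {
        double mean = window.stream()
                .mapToDouble(Double::doubleValue).average().orElse(usedMb);
        if (window.size() == size) window.removeFirst();
        window.addLast(usedMb);
        return usedMb > mean * 1.5;
    }

    public static void main(String[] args) {
        MemoryBaseline baseline = new MemoryBaseline(10);
        double[] samples = {512, 520, 515, 518, 900}; // last sample is a spike
        for (double s : samples) {
            System.out.printf("%.0f MB anomalous=%b%n", s, baseline.isAnomalous(s));
        }
    }
}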

3. Choose messages that support decisions

Infrastructure tends to generate a large amount of log data, only some of which is likely to be useful to you. If your monitoring is confined to applications, you should determine which details relate most directly to the conditions you are looking for, such as slow performance or restarts, and focus on those metrics.

Log messages should provide specific information about errors. For example, a failed transaction should generate a message that includes a detailed description of the problem, the timestamp, the name of the file where the problem occurred, and the line number of the failed code.

Timestamp: 2023-04-11 14:37:05
Error: Exception caught in processOrder() method
Error Message: NullPointerException: Order object is null
Stack Trace:
    at com.example.OrderProcessor.processOrder(OrderProcessor.java:36)
    at com.example.Application.main(Application.java:22)

The example above tells us that the application encountered a NullPointerException while processing an order. The Order object is null, which caused the processOrder() method to throw an exception. This error occurred in the processOrder() method at line 36 of the OrderProcessor.java file. The Application.java file is the entry point to the application, and its main() method called the processOrder() method.

This message will make it easier to discover why the transaction failed and where in the code the problem occurred.
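One way a message like this might be produced with java.util.logging is to pass the caught exception to the logger, which emits the full stack trace. The class below is a simplified stand-in for the example above, not the actual code behind it:

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    void processOrder(Object order) {
        if (order == null) {
            // Throwing here produces the stack trace shown above.
            throw new NullPointerException("Order object is null");
        }
        // ... normal order handling would go here ...
    }

    public static void main(String[] args) {
        try {
            new OrderProcessor().processOrder(null);
        } catch (NullPointerException e) {
            // Passing the exception itself (not just e.getMessage()) makes
            // the handler emit the full stack trace with file names and
            // line numbers, as in the example message.
            LOG.log(Level.SEVERE, "Exception caught in processOrder() method", e);
        }
    }
}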

4. Keep log messages concise and relevant

While verbose messages may be helpful in diagnosis, they also drive up storage needs, make log searches more difficult, and increase debugging complexity.

When configuring log formats, capture only the information needed to debug an error. Chances are you don't need every detail about the operating environment. For example, a message regarding an application programming interface (API) failure probably doesn't need information about memory usage.
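Compare an overloaded message with a focused one in this hypothetical API-failure scenario (all field names are illustrative):

import java.util.logging.Logger;

public class ConciseLogging {
    private static final Logger LOG = Logger.getLogger(ConciseLogging.class.getName());

    public static void main(String[] args) {
        // Verbose: drags in environment details unrelated to the failed call.
        LOG.severe("POST /v1/orders failed status=502 heapUsedMb=812 cpuLoad=0.41 threads=64");

        // Concise: only what is needed to understand and debug the failure.
        LOG.severe("POST /v1/orders failed status=502 upstream=payments latencyMs=3005");
    }
}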

5. Make sure log messages are clear

You have a variety of logging formats to choose from, including JSON, Common Event Format, the NCSA Common Log Format, the W3C Extended Log File Format, and others. Each has its strengths and weaknesses, so make your selection based on your specific needs.

Whichever option you choose, avoid arcane or overly technical message formats that will only be decipherable by a few people. Emphasize consistency and clarity to ensure that logs are accessible to everyone who needs to see them now and in the future. Some log managers make it easy to customize log parsing rules but only if the underlying data is readable.

An example of an easily parsed format is:

2023-04-12 09:27:55 INFO [server] User "John" logged in from IP address 192.168.0.1.

This format is structured and consistent: it uses a standard date and time format, and each piece of information is separated by a consistent delimiter such as a space or a comma. That makes it easy for log monitoring software to read and process.
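Because the fields are consistently delimited, even a simple regular expression can split the line into its parts. This sketch is illustrative only and does not reflect any particular log manager's parsing-rule syntax:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogParser {
    // Groups: timestamp, level, source, free-text message.
    private static final Pattern LINE = Pattern.compile(
            "^(\\S+ \\S+) (\\w+) \\[(\\w+)] (.+)$");

    public static void main(String[] args) {
        String line = "2023-04-12 09:27:55 INFO [server] "
                + "User \"John\" logged in from IP address 192.168.0.1.";
        Matcher m = LINE.matcher(line);
        if (m.matches()) {
            System.out.println("timestamp=" + m.group(1));
            System.out.println("level=" + m.group(2));
            System.out.println("source=" + m.group(3));
            System.out.println("message=" + m.group(4));
        }
    }
}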

Following these five guidelines saves money, speeds error diagnosis, and makes logs an even more valuable asset in your observability toolkit.

Ishan Mukherjee is SVP of Growth at New Relic