Log Management for IT Ops: 5 Best Practices

Jim Frey

Log data may be many things, but one thing is for sure – it isn't sexy. In fact, in most cases, it's downright ugly, because there are really no standards out there for how log data should be structured. For decades, this fact has kept log data from being a practical source of information for anything beyond a few specific use cases, such as watching for important events (like system reboots or config changes), security monitoring (like firewall blockages), or deep troubleshooting.

Times have changed, and the most recent crop of log management vendors has taken advantage of the steady growth in processor capacity to overcome the complexity and scale challenges of harvesting and analyzing all of the log data that an IT infrastructure continuously throws off. Now there are practical ways to take advantage of the unique perspective and insights that log data can provide on a much broader basis.

In my last post, I shared some key findings from an EMA research report published last fall that dove into the ways in which log analytics is being used to support network operations. Building on that, following are five recommendations that EMA is making on how best to think about log data as part of an integrated management architecture and strategy:

1. Think twice before planning to store all log data

While most organizations are gathering log data for analysis on a continuous, ongoing basis, only a third are storing all log entries all the time. Interestingly, organizations that consider log data to be "strategic" are actually much less likely to store all log entries all the time than those who consider it "tactical". Strategic log users prefer instead to be more surgical, looking for specific types of logs or storing all log data only when certain trigger situations occur.
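A minimal sketch of that "surgical" approach: always retain errors, but switch to full retention for a window of entries whenever a trigger event (such as a reboot or config change) appears. The trigger keywords and window size here are illustrative assumptions, not figures from the EMA study.

```python
def retain(entries, triggers=("reboot", "config change"), window=5):
    """Yield only the entries worth storing long-term: errors always;
    after a trigger event, everything for the next `window` entries."""
    capture_left = 0
    for entry in entries:
        if any(t in entry.lower() for t in triggers):
            capture_left = window  # trigger seen: capture what follows
        if capture_left > 0:
            capture_left -= 1
            yield entry
        elif "ERROR" in entry:
            yield entry  # always retain errors

logs = [
    "INFO heartbeat ok",
    "ERROR disk full",
    "NOTICE config change applied",
    "INFO service restarted",
    "INFO heartbeat ok",
]
kept = list(retain(logs))
print(kept)  # routine heartbeats before the trigger are dropped
```

A real deployment would key off structured severity fields rather than substring matches, but the shape of the policy is the same.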

2. Consolidate your log analysis tools

We find that an overwhelming majority of organizations are either currently using one centralized log analysis system or are planning to consolidate the multiple tools that they have into a single system. This makes tremendous sense if you are trying to get the most out of your log data either in support of integrated operations or simply for better collaboration and cross-team sharing.
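As a concrete illustration of consolidation, each application can forward its logs to one shared collector rather than writing scattered local files. This sketch uses Python's standard `SysLogHandler`; the `localhost:514` address is a placeholder for your own aggregation endpoint, and most centralized tools accept syslog input.

```python
import logging
import logging.handlers

# Point this service's logger at a single central collector instead of
# a local file. Replace the address with your aggregation endpoint.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

logger = logging.getLogger("billing-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("invoice batch completed")  # delivered to the central system
```

Repeating this pattern across services is what makes cross-team sharing possible: every team searches the same store instead of trading file exports.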

3. Focus on fast and intuitive search capabilities

The number one challenge voiced with respect to analyzing log data is knowing what to look for. It's not surprising then that the most popular feature that IT pros look for in a log data analysis solution is fast search. The latest generation of tools has made quick and effective search a high priority, and if you don't have such capabilities in your current system, you should consider an upgrade or alternative.
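The technique behind fast search in these tools is essentially an inverted index: rather than scanning every line per query, grep-style, the system tokenizes entries once and answers keyword queries by set intersection. A toy version makes the idea concrete (the sample log lines are invented):

```python
from collections import defaultdict

def build_index(lines):
    """Map each token to the set of line numbers containing it."""
    index = defaultdict(set)
    for i, line in enumerate(lines):
        for token in line.lower().split():
            index[token].add(i)
    return index

def search(index, lines, *terms):
    """Return lines containing every search term (AND semantics)."""
    hits = set.intersection(*(index.get(t.lower(), set()) for t in terms))
    return [lines[i] for i in sorted(hits)]

logs = [
    "Jan 10 sshd[422]: failed password for root",
    "Jan 10 kernel: eth0 link up",
    "Jan 11 sshd[519]: failed password for admin",
]
index = build_index(logs)
print(search(index, logs, "failed", "root"))
```

Production systems add time-based partitioning, ranking, and field extraction on top, but the index-then-intersect core is why interactive search over huge volumes is feasible at all.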

4. Don't implement log data analysis as an island

Consistently, we find that organizations get the most value when log data collection and analysis is integrated with other data sets and analysis systems. This can be done either by having log collection/analysis tools incorporate non-log data themselves or by openly sharing log data with other management aggregation systems. Some of the strongest value is achieved by connecting the insights available from streaming log data with other performance monitoring measures to proactively recognize performance degradations and their root causes.
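The simplest form of that log-to-metrics connection is a timestamp join: flag log events that occur near a spike in a performance series, as root-cause candidates. The threshold and window below are illustrative assumptions, not values from any specific product.

```python
THRESHOLD_MS = 500  # latency considered a spike (assumed)
WINDOW_S = 60       # max distance (seconds) between event and spike (assumed)

def correlate(events, samples):
    """events: [(epoch_s, message)]; samples: [(epoch_s, latency_ms)].
    Return log events that fall within WINDOW_S of a latency spike."""
    spikes = [t for t, ms in samples if ms > THRESHOLD_MS]
    return [
        (t, msg) for t, msg in events
        if any(abs(t - s) <= WINDOW_S for s in spikes)
    ]

events = [(100, "cache restart"), (400, "nightly cron run")]
samples = [(90, 120), (130, 900), (410, 200)]
suspects = correlate(events, samples)
print(suspects)  # the cache restart sits near the 900 ms spike
```

Real integrations work on streams and richer schemas, but the payoff is the same: the metric tells you something degraded, and the nearby log event suggests why.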

5. Use log data to support BSM/ITSM

EMA has found a very high usage rate of network log data for higher level BSM and ITSM type initiatives, such as service quality monitoring, unified IT operations, and CMDB. Such usages were particularly high among those who consider log data to be strategic rather than tactical. So even though log data may be ugly, don't overlook its importance in supporting your highest level management objectives.

There were a couple of surprising dichotomies uncovered in the research study as well. For instance, the top reason people value log data is that they consider it cost-effective; however, the second-greatest challenge identified was the cost of tools. Another example involves just how effective log data is. The second-highest perceived value was faster time to resolution than with other data sources; however, the number one challenge was knowing what to look for.

Clearly there is great and growing value in collecting and analyzing log data for IT planning, operations, and security. And while there are still challenges to be faced, best practices are emerging to help everyone understand what to expect and how to get the most returns on investments into log data collection and analysis tools.
