The No-BS Guide to Logging - Part 1

A vendor-neutral checklist to help you get your log strategy straight
Sven Dummer


We all know log files. We all use log data. At a minimum, every admin and developer knows how to fire up tail -f and use an arsenal of command line tools to dig into a system's log files. But the days when those practices sufficed for operational troubleshooting are long gone. Today, you need a solid log strategy.
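For a refresher, that traditional approach looks something like this. It's a sketch against a throwaway log file; the path, timestamps, and messages are invented for illustration:

```shell
# Create a throwaway log file to dig into (entries are made up)
LOG=$(mktemp)
printf '%s\n' \
  '2024-05-01T10:00:00 INFO  service started' \
  '2024-05-01T10:00:05 ERROR connection refused' \
  '2024-05-01T10:00:07 INFO  retrying' > "$LOG"

# Follow new entries as they arrive (Ctrl-C to stop):
#   tail -f "$LOG"

# The usual command-line arsenal: filter, count, tally
grep 'ERROR' "$LOG"                         # show only error lines
grep -c 'ERROR' "$LOG"                      # count them
awk '{print $2}' "$LOG" | sort | uniq -c    # tally lines by severity
```

This works fine for one log file on one machine, which is exactly the assumption that no longer holds.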

Log data has become big data and is more relevant to your success than ever before. Not being able to manage it and make meaningful use of it can, in the worst case, kill your business.

You might have implemented good application monitoring, but that only tells you that something is happening, not why. The information needed to understand the why, and the ability to predict and prevent it, is in your log data. And log data has exploded in volume and complexity.

Why this explosion? The commoditization of cloud technology: one of the greatest paradigm shifts the tech industry has seen in recent years. Cloud services like Amazon's AWS, Microsoft's Azure, or Rackspace have made it affordable even for small- and medium-sized businesses to run complex applications on elastic virtual server farms. Containers running microservices are the next step in this move toward distributed and modularized systems.

The downside is that complexity is multiplied in those environments. Running tens or hundreds of machines with many different application components increases the risk that one of them will start malfunctioning.

To allow for troubleshooting, each of these many components typically (and hopefully) writes log data. Not only do you have to deal with a staggering number of large log files, but they're also scattered all over your network(s).

To make things a bit more interesting, some components, like VMs or containers, are ephemeral. They're launched on demand and take their log data with them when they're terminated. Maybe the root cause slowing down or crashing your web store was visible in exactly one of those lost log files.

If that's still not complex enough, add in that people mix different technologies – for example, hybrid clouds that keep some systems on-premises or in a colo. You might run containers inside of VMs or use a container to deploy a hypervisor. Or you might also need to collect data from mobile applications and IoT devices.

Your log management solution needs to be able to receive and aggregate the logs from all your systems and components and store them in one central, accessible place. Leave no log behind – including the ones from ephemeral systems.

Some log management solutions require installing agents to accomplish this, while others are agentless and rely on de-facto standards like syslog, which most systems already support for sending logs over the network. If you use agents, it's vital to make sure they are available for all your operating systems, devices, and other components. You'll also need a strategy to keep the agents updated and patched.
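As a sketch of the agentless route: on hosts running rsyslog, a single forwarding rule is often all it takes to ship everything to a central collector. The hostname and port below are placeholders, and your distribution's config layout may differ:

```
# /etc/rsyslog.d/50-forward.conf (example; hostname is a placeholder)
# Forward all facilities and severities to a central collector over TCP
*.* @@logs.example.com:514

# A single @ would forward over UDP instead:
#   *.* @logs.example.com:514
```

TCP (the `@@` form) is generally preferred over UDP here, since UDP will silently drop messages under load – the last thing you want from the data you'll need during an outage.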

Fortunately, there is a checklist of log management must-haves to help you choose and sustain the best practices for your data, which I'll be sharing in my next post.

Read The No-BS Guide to Logging - Part 2

Sven Dummer is Senior Director of Product Marketing at Loggly.
