The No-BS Guide to Logging - Part 1

A vendor-neutral checklist to help you get your log strategy straight
Sven Dummer

We all know log files. We all use log data. At a minimum, every admin and developer knows how to fire up tail -f and use an arsenal of command-line tools to dig into a system's log files. But the days when those practices sufficed for operational troubleshooting are long gone. Today, you need a solid log strategy.
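
To make the contrast concrete, here is roughly what that old-school, single-host workflow looks like (the file paths below are placeholders for whatever your own systems write):

    tail -f /var/log/syslog                        # follow new log entries as they arrive
    grep -i "error" /var/log/myapp/app.log         # hunt for errors in one service's log
    zgrep "timeout" /var/log/myapp/*.log.gz | less # search rotated, compressed logs

This works fine for one machine and one admin. It does not scale to dozens or hundreds of hosts, which is exactly the problem the rest of this post is about.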

Log data has become big data and is more relevant to your success than ever before. Not being able to manage it and make meaningful use of it can, in the worst case, kill your business.

You might have implemented good application monitoring, but that only tells you that something is happening, not why. The information needed to understand the why, and the ability to predict and prevent it, is in your log data. And log data has exploded in volume and complexity.

Why this explosion? The commoditization of cloud technology: one of the greatest paradigm shifts the tech industry has seen in recent years. Cloud services like Amazon's AWS, Microsoft's Azure, or Rackspace have made it affordable even for small- and medium-sized businesses to run complex applications on elastic virtual server farms. Containers running microservices are the next step in this move toward distributed and modularized systems.

The downside is that complexity is multiplied in those environments. Running tens or hundreds of machines with many different application components increases the risk that one of them will start malfunctioning.

To allow for troubleshooting, each of these many components typically (and hopefully) writes log data. Not only do you have to deal with a staggering number of large log files, but they're also scattered all over your network(s).

To make things a bit more interesting, some components, like VMs or containers, are ephemeral. They're launched on demand, and take their log data with them once they're terminated. Maybe the root cause slowing down or crashing your web store was visible in exactly one of those lost log files.
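
One common way to avoid that loss (an illustration assuming you run Docker, not a prescription): most container runtimes can ship a container's output to a remote endpoint instead of keeping it on the local host. Docker's built-in syslog logging driver, for example, forwards logs while the container runs, so they survive its termination. The collector address and image name below are placeholders.

    # Send this container's stdout/stderr to a remote syslog collector over TCP
    # instead of the default local json-file driver, so the logs outlive the container.
    docker run \
      --log-driver=syslog \
      --log-opt syslog-address=tcp://logs.example.com:514 \
      my-web-store:latest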

If that's still not complex enough, add in that people mix different technologies – for example, hybrid clouds that keep some systems on-premises or in a colo. You might run containers inside of VMs or use a container to deploy a hypervisor. You might also need to collect data from mobile applications and IoT devices.

Your log management solution needs to be able to receive and aggregate the logs from all your systems and components and store them in one central, accessible place. Leave no log behind – including the ones from ephemeral systems.

Some log management solutions require installing agents to accomplish this, while others are agentless and rely on de-facto standards like syslog, which countless systems already support for sending logs over the network. If you go with agents, it's vital to make sure they are available for all your operating systems, devices, and other components. You'll also need a strategy to keep the agents updated and patched.
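
As a sketch of the agentless route, assuming a Linux host running rsyslog (which most distributions ship by default), forwarding everything to a central collector takes a single line of configuration. The hostname below is a placeholder for your own aggregation endpoint.

    # Forward all local log messages to a central collector over TCP ("@@");
    # a single "@" would use UDP instead.
    echo '*.* @@logs.example.com:514' | sudo tee /etc/rsyslog.d/90-forward.conf
    sudo systemctl restart rsyslog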

Fortunately, there is a checklist of must-haves for log management to help you choose and sustain the best practices for your data, which I'll be sharing in my next post.

Read The No-BS Guide to Logging - Part 2

Sven Dummer is Senior Director of Product Marketing at Loggly.
