
Q&A: Nastel Talks About Smart Logging

Pete Goldin
APMdigest

In APMdigest's exclusive interview, Charley Rich, VP Marketing and Product Management at Nastel, talks about logging and application performance.

APM: How can logging be used for troubleshooting application misbehavior?

CR: Application developers can write log events to a log file describing attempted acquisition of system resources, resource shortages, current state, or errors. By reading and analyzing the log, one can determine, to some extent, the problems the application is experiencing.

There are also other logs, including those for the physical server, the application server, and the database. An expert can manually examine each of these logs and piece together a picture of what was happening when a problem was reported. But this is not for the faint of heart, and it often adds considerable effort to the troubleshooting process.

APM: Are standard logging facilities such as log4j and syslog insufficient for problem determination?

CR: Yes. Manually correlating the information an application logs with system and other logs can be quite laborious. In addition, many of these logs have multiple unrelated writers posting entries, so tracing the information pertaining to a specific application is not easy. It is essentially a signal-to-noise problem, with the extraneous information in the log acting as noise. All the sources write log events as they happen, so each entry may have no relation at all to the prior entry. Deciphering what is relevant to your task is hard.

Standard logging facilities such as Log4j and syslog do not help resolve this issue. They are not sufficiently structured for effective problem determination. The burden is on the developer to include enough detail in the log message for effective root cause analysis. They also do not help with relating application activity messages spread across multiple applications, multiple tiers, and multiple logs.

As a result, standard logging can be a burden on development because of the time it takes to correlate activities manually, and it is of limited help in reproducing production problems.
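
As an illustration, here is a minimal Log4j 2 sketch of the situation described above. The class and messages are hypothetical; the point is that nothing in these calls ties the entries to the entries written at the same time by other components or tiers:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical order-processing class. Each entry records only what the
// developer chose to put in the message, so nothing relates these lines to
// the database, app-server, or OS logs written at the same time.
public class OrderService {
    private static final Logger log = LogManager.getLogger(OrderService.class);

    public void reserveInventory(String orderId) {
        log.info("Reserving inventory");          // which order? which host?
        // ... call the inventory system ...
        log.warn("Inventory service slow to respond");
        log.error("Could not acquire a DB connection from the pool");
    }
}
```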

APM: What are “Smart Events”?

CR: Standard logging can be augmented to become “Smart Events”. Smart Events are members of a flow of events and can have location, timing, source, and correlation information embedded in them (such as IP address, server, GPS coordinates, and other geolocation data). Such augmentation combines an enhanced logging methodology with a simple programming model that records the relevant information needed for fast root cause analysis and for inter- and intra-log correlation.
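
The interview does not show Nastel's actual API, but the idea can be approximated with facilities that exist today. The sketch below uses Log4j 2's ThreadContext to embed a correlation ID, source server, and timing into every entry an activity writes; the class and field names are assumptions for illustration only:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

import java.net.InetAddress;
import java.util.UUID;

public class PaymentService {
    private static final Logger log = LogManager.getLogger(PaymentService.class);

    public void processPayment(String orderId) throws Exception {
        // Attach correlation, source, and location context so every entry
        // emitted on this thread can later be stitched into one flow.
        ThreadContext.put("correlationId", UUID.randomUUID().toString());
        ThreadContext.put("orderId", orderId);
        ThreadContext.put("server", InetAddress.getLocalHost().getHostName());
        long start = System.nanoTime();
        try {
            log.info("payment started");
            // ... call the payment gateway ...
            log.info("payment completed in {} ms",
                     (System.nanoTime() - start) / 1_000_000);
        } finally {
            ThreadContext.clearMap();
        }
    }
}
```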

APM: What happens when a Smart Event is created?

CR: Once a Smart Event has been created, the necessary context is provided for an analytics process to correlate these events into a more meaningful format that will make troubleshooting considerably easier.

APM: When are Smart Events most useful?

CR: Smart Events are most useful for applications that rely on logging facilities and need fast root cause diagnosis of performance problems and application misbehavior.

They are also very useful for applications running in cloud or mobile environments, where very little control exists over application behavior.

APM: How much effort is required to change an application to create Smart Events?

CR: Not a lot. The developer uses an enhanced programming model that generates Smart Events, in which context, time, and location are combined into a single concept. Instead of calling a logging framework directly and writing to an event log, the developer uses a simplified interface that supports the Smart Event methodology. It is important to note that correlation comes from the application's standpoint, not from the technology's point of view.
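
The interview does not describe that interface in detail, so the following is only a hypothetical sketch of what a "simplified interface" combining context, time, and location might look like; every name in it is invented for illustration:

```java
// Hypothetical facade; none of these names come from Nastel's product.
public interface ActivityLogger {

    /** Begins a named business activity. The returned handle carries the
     *  correlation id, start time, and location, so every event logged
     *  through it is contextualized without extra work by the developer. */
    Activity start(String activityName);

    interface Activity extends AutoCloseable {
        void event(String message);                   // a step within the activity
        void error(String message, Throwable cause);  // a failure within the activity
        @Override void close();                       // marks completion; elapsed time is recorded
    }
}
```

With such an interface, an activity could be wrapped in a try-with-resources block, for example try (ActivityLogger.Activity order = activityLogger.start("process-order")) { order.event("inventory reserved"); }, so correlation follows the application's unit of work rather than any particular piece of technology.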

APM: What happens when a developer is unable or not permitted to change existing applications?

CR: Smart Events can be used with existing applications that rely on log4j or other logging frameworks. A post-processor can be applied to the current logs, mining data from the existing set of logs and transforming it into Smart Events. Of course, the value of such a transformation will largely depend on the level of detail available in the log entries themselves.
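
As a rough sketch of that idea, the post-processor below parses a conventional "timestamp [thread] LEVEL logger - message" layout and emits the extracted fields. The layout, and what is done with the resulting events, are assumptions; real logs and the actual processing pipeline will differ:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Assumes a "timestamp [thread] LEVEL logger - message" log layout; the
// recoverable detail limits what the post-processor can reconstruct.
public class LogPostProcessor {
    private static final Pattern LINE = Pattern.compile(
        "^(\\S+ \\S+) \\[(\\S+)\\] (\\w+)\\s+(\\S+) - (.*)$");

    public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Path.of(args[0]))) {
            Matcher m = LINE.matcher(line);
            if (!m.matches()) continue;               // skip unparseable entries
            String timestamp = m.group(1);
            String thread    = m.group(2);
            String level     = m.group(3);
            String message   = m.group(5);
            // Emit a "smart event": here the extracted fields are simply
            // printed, but a real processor would hand them to analytics.
            System.out.printf("event time=%s thread=%s level=%s msg=%s%n",
                              timestamp, thread, level, message);
        }
    }
}
```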

APM: Are Smart Events still helpful if a developer is already using Log4j or one of the log file management tools such as Splunk or Loggly?

CR: Yes. If a user has these installed, the processed or consolidated log files can be used as the source without a need to add API calls to the applications.

APM: Does the smart logging approach track thread interdependencies?

CR: As activities are traced down to the thread level, it becomes even more important to know where that thread is executing and how that relates to the activity produced by other threads. As these multi-threaded applications execute in multiple locations, the complexity in using log files for debugging becomes far greater. Awareness of location, context and behavior would be very helpful.
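
For example, with Log4j 2's ThreadContext (a thread-local map), the correlation context set on one thread is invisible to worker threads unless it is copied across the hand-off. The sketch below shows that copy explicitly, using a made-up correlation ID:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadHandoff {
    private static final Logger log = LogManager.getLogger(ThreadHandoff.class);

    public static void main(String[] args) {
        ThreadContext.put("correlationId", "order-42");   // hypothetical id
        log.info("dispatching work");

        // ThreadContext is thread-local, so the map must be copied by hand
        // to relate the worker's entries back to the originating activity.
        Map<String, String> parentContext = ThreadContext.getImmutableContext();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> {
            ThreadContext.putAll(parentContext);
            log.info("worker step completed");             // now correlatable
            ThreadContext.clearMap();
        });
        pool.shutdown();
    }
}
```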

APM: Can smart logging capture elapsed time for activities?

CR: An auto-processing logging framework would mirror the way the application works and automatically time how long an activity takes from start to completion. It can also measure the timing in detail between, for example, event 1, event 2, and event 3. There is a lot of implicit value in this timing perspective, as well as in capturing the exceptions and errors associated with application activities.
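
As a simplified illustration of that timing model (not Nastel's implementation), the class below keeps the first timestamp seen for each activity and reports how long after it each subsequent step arrived:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: derive elapsed times by pairing each activity's
// start event with the later events that carry the same activity id.
public class ActivityTimer {
    private final Map<String, Long> startedAt = new HashMap<>();

    public void onEvent(String activityId, String step, long timestampMillis) {
        Long start = startedAt.putIfAbsent(activityId, timestampMillis);
        if (start != null) {
            System.out.printf("activity %s: %s after %d ms%n",
                              activityId, step, timestampMillis - start);
        }
    }

    public static void main(String[] args) {
        ActivityTimer timer = new ActivityTimer();
        timer.onEvent("order-42", "started",   1_000L);
        timer.onEvent("order-42", "validated", 1_250L);   // 250 ms after start
        timer.onEvent("order-42", "completed", 1_900L);   // 900 ms after start
    }
}
```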

APM: How does someone know if smart logging is right for their organization?

CR: They should evaluate their current logging process and framework and determine the delta between what they are doing and what has been described here. They should consider how the value described can be garnered across all their applications by evolving their logging.

ABOUT Charley Rich

Charley Rich, Vice President of Product Management and Marketing at Nastel, is a software product management professional who brings over 20 years of experience working with large-scale customers to meet their application and systems management requirements. Earlier in his career, he held positions in Worldwide Product Management at IBM, as Director of Product Management at EMC/SMARTS, and as Vice President of Field Marketing for eCommerce firm InterWorld. Rich is a sought-after speaker and a published author with a patent in the application management field.
