In APMdigest's exclusive interview, Charley Rich, VP Marketing and Product Management at Nastel, talks about logging and application performance.
APM: How can logging be used for troubleshooting application misbehavior?
CR: Application developers can write log events to a log file describing attempted acquisition of system resources, resource shortages, current state or errors. By reading and analyzing the log, one can determine, to some extent, the problems the application is experiencing.
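For instance, a developer might record resource acquisition and failures like this (a minimal Log4j 2 sketch; the class and messages are illustrative, not from any particular product):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class OrderService {
    private static final Logger log = LogManager.getLogger(OrderService.class);

    public void reserveConnection() {
        log.info("Requesting database connection from pool");
        try {
            // ... acquire the connection here ...
            log.info("Connection acquired");
        } catch (RuntimeException e) {
            // A resource shortage or other failure ends up in the log
            log.error("Connection acquisition failed", e);
            throw e;
        }
    }
}
```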
In addition, there are other logs, including those for the physical server, the application server and the database. An expert can manually examine each of these logs and string together a picture of what was happening when a problem was reported. But this is not for the faint of heart and often adds considerable trouble to the troubleshooting process.
APM: Are standard logging facilities such as Log4j and syslog insufficient for problem determination?
CR: Yes. Manually correlating the information an application logs with system and other logs can be quite laborious. In addition, many of these logs have multiple unrelated writers posting entries to them, so tracing the information pertaining to a specific application is not easy. It is essentially a signal-to-noise problem, with the extraneous information in the log acting as noise. All the sources write log events as they happen, and thus each entry may have no relation at all to the prior entry. Deciphering what is relevant to your task is hard.
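A minimal sketch of how this interleaving arises: two unrelated components share one log, so their entries land in arrival order (the classes and messages are illustrative):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Two unrelated writers post to the same log; adjacent entries in the
// resulting file need not have any relation to one another.
public class InterleavingDemo {
    private static final Logger log = LogManager.getLogger(InterleavingDemo.class);

    public static void main(String[] args) {
        Runnable billing = () -> {
            for (int i = 0; i < 3; i++) log.info("billing: processing invoice {}", i);
        };
        Runnable inventory = () -> {
            for (int i = 0; i < 3; i++) log.info("inventory: restock check {}", i);
        };
        new Thread(billing).start();   // both write as events happen,
        new Thread(inventory).start(); // so the log lines interleave
    }
}
```

Picking the billing entries out of such a shared log is exactly the signal-to-noise problem described above.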
Standard logging facilities such as Log4j and syslog do not help resolve this issue. They are not sufficiently structured to support effective problem determination; the burden is on the developer to include enough detail in each log message for effective root cause analysis. They also do not help with relating application activity messages spread across multiple applications, multiple tiers and multiple logs.
As a result, standard logging can be a burden on development because of the long time it takes to correlate activities manually, and it is of limited help in reproducing production problems.
APM: What are “Smart Events”?
CR: Standard logging can be augmented to become "Smart Events". Smart Events are members of a flow of events and can have location, timing, source and correlation information embedded in them (such as IP address, server, GPS coordinates, etc.). This augmentation combines an enhanced logging methodology with a simple programming model that records the relevant information needed for fast root cause analysis and for inter- and intra-log correlation.
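As an illustration only (this is a hypothetical data shape, not Nastel's actual API), a Smart Event might carry context like this:

```java
import java.time.Instant;
import java.util.UUID;

// Hypothetical shape of a "Smart Event": an ordinary log message
// enriched with location, timing, source and correlation context.
public final class SmartEvent {
    final String message;          // what a plain log entry would carry
    final Instant timestamp;       // precise event time
    final String sourceHost;       // server / IP where it occurred
    final String geoLocation;      // GPS / geo tag, if available
    final String application;      // logical source application
    final UUID correlationId;      // links related events across logs

    SmartEvent(String message, String sourceHost, String geoLocation,
               String application, UUID correlationId) {
        this.message = message;
        this.timestamp = Instant.now();
        this.sourceHost = sourceHost;
        this.geoLocation = geoLocation;
        this.application = application;
        this.correlationId = correlationId;
    }
}
```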
APM: What happens when a Smart Event is created?
CR: Once a Smart Event has been created, the necessary context is provided for an analytics process to correlate these events into a more meaningful format that will make troubleshooting considerably easier.
APM: When are Smart Events most useful?
CR: Smart Events are most useful for applications that require fast root cause diagnostics for performance problems and application misbehavior using logging facilities.
They are also very useful for applications running in cloud or mobile where very little control exists over application behavior.
APM: How much effort is required to change an application to create Smart Events?
CR: Not a lot. The developer must use an enhanced programming model that generates Smart Events, in which context, time and location are combined into a single concept. Instead of using logging frameworks directly and writing to an event log, the developer uses a simplified interface that supports the Smart Event methodology. It is important to note that correlation comes from the application's standpoint, not from the technology's point of view.
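A minimal sketch of what such a simplified interface could look like; the SmartLogger class and its methods are hypothetical stand-ins, not an actual product API:

```java
import java.time.Instant;
import java.util.UUID;

// Hypothetical simplified interface: the developer records named
// activities; timestamp, application and correlation id travel with each call.
class SmartLogger {
    private final String application;
    private SmartLogger(String application) { this.application = application; }
    static SmartLogger forApplication(String application) { return new SmartLogger(application); }

    void event(String activity, String step, UUID correlationId) {
        // A real framework would emit a structured Smart Event;
        // here we just print the enriched record.
        System.out.printf("%s app=%s activity=%s step=%s corr=%s%n",
                Instant.now(), application, activity, step, correlationId);
    }
}

public class PaymentFlow {
    public static void main(String[] args) {
        UUID corr = UUID.randomUUID();   // one id for the whole business activity
        SmartLogger tracker = SmartLogger.forApplication("payments");
        tracker.event("authorize-card", "start", corr);
        // ... call the payment gateway here ...
        tracker.event("authorize-card", "end", corr);
    }
}
```

Note how the correlation id belongs to the business activity, not to any one server or log file, which is what lets an analytics process stitch the flow back together.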
APM: What happens when a developer is unable or not permitted to change existing applications?
CR: Smart Events can be used with existing applications that rely on Log4j or other logging frameworks. A post-processor can be applied to their current logs, mining data from the existing set of logs and transforming it into Smart Events. Of course, the value of such a transformation will largely depend on the level of detail available in the log entries themselves.
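A rough sketch of such a post-processor, assuming a common "timestamp LEVEL [thread] message" layout (the pattern and enrichment are illustrative and would need adjusting to the actual appender format):

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical post-processor: mines an existing Log4j-style line and
// re-emits it as an enriched, correlatable record.
public class LogMiner {
    private static final Pattern LINE = Pattern.compile(
            "^(\\S+ \\S+) (\\w+) \\[([^\\]]+)\\] (.*)$");

    public static void main(String[] args) {
        String raw = "2018-06-04 10:15:01,123 ERROR [pool-1-thread-2] Connection acquisition failed";
        Matcher m = LINE.matcher(raw);
        if (m.matches()) {
            // Derive correlation where possible (here: per-thread), keep the rest
            UUID corr = UUID.nameUUIDFromBytes(m.group(3).getBytes(StandardCharsets.UTF_8));
            System.out.printf("time=%s level=%s thread=%s corr=%s msg=%s%n",
                    m.group(1), m.group(2), m.group(3), corr, m.group(4));
        }
    }
}
```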
APM: Are Smart Events still helpful if a developer is already using Log4j or one of the log file management tools such as Splunk or Loggly?
CR: Yes. If a user has these installed, the processed or consolidated log files can be used as the source without a need to add API calls to the applications.
APM: Does the smart logging approach track thread interdependencies?
CR: As activities are traced down to the thread level, it becomes even more important to know where each thread is executing and how its activity relates to the activity produced by other threads. As multi-threaded applications execute in multiple locations, the complexity of using log files for debugging becomes far greater, and awareness of location, context and behavior becomes very helpful.
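Absent a smart logging framework, one partial workaround with plain Log4j 2 is to stamp each thread's entries with a shared correlation id via the ThreadContext map; note the id still has to be handed over explicitly whenever work hops threads:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

// Relating entries across threads by hand: every thread working on the
// same request stamps its log entries with the same correlation id.
public class ThreadedWork {
    private static final Logger log = LogManager.getLogger(ThreadedWork.class);

    static void handle(String requestId) {
        Runnable worker = () -> {
            ThreadContext.put("corr", requestId);   // context is per-thread
            try {
                log.info("processing on worker thread");
            } finally {
                ThreadContext.remove("corr");
            }
        };
        new Thread(worker).start();
    }

    public static void main(String[] args) {
        handle("req-42");
    }
}
```

The appender's PatternLayout must include %X{corr} for the id to appear in the output, and nothing here captures where each thread is physically executing, which is the gap smart logging aims to close.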
APM: Can smart logging capture elapsed time for activities?
CR: A smart logging framework would mirror the way the application works and automatically time how long each activity takes from start to completion. It can also measure the timing in detail between, for example, event 1, event 2 and event 3. There is a lot of implicit value in this timing perspective, as well as in capturing the exceptions and errors associated with application activities.
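In spirit, the timing capture amounts to something like the following sketch (hand-rolled here for illustration; a smart logging framework would record these marks automatically):

```java
// Hypothetical timing capture: mark the start, intermediate steps and end
// of a named activity, then derive elapsed times rather than asking the
// developer to subtract logged timestamps by hand later.
public class ActivityTimer {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();   // event 1: activity begins
        Thread.sleep(25);                 // ... the work being measured ...
        long mid = System.nanoTime();     // event 2: intermediate step done
        Thread.sleep(10);
        long end = System.nanoTime();     // event 3: activity completes

        System.out.printf("step1=%dms step2=%dms total=%dms%n",
                (mid - start) / 1_000_000,
                (end - mid) / 1_000_000,
                (end - start) / 1_000_000);
    }
}
```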
APM: How does someone know if smart logging is right for their organization?
CR: They should evaluate their current logging process and framework and determine the delta between what they are doing and what has been described here. They should consider how the value described can be garnered across all their applications by evolving their logging.
ABOUT Charley Rich
Charley Rich, Vice President of Product Management and Marketing at Nastel, is a software product management professional who brings over 20 years of experience working with large-scale customers to meet their application and systems management requirements. Earlier in his career, he held positions in Worldwide Product Management at IBM, as Director of Product Management at EMC/SMARTS, and as Vice President of Field Marketing for the eCommerce firm InterWorld. Rich is a sought-after speaker and a published author, and holds a patent in the application management field.