NLog.StructuredLogging.Json

Structured logging for NLog using Json (formerly known as JsonFields)



Join the chat at https://gitter.im/justeat/NLog.StructuredLogging.Json

What

Structured logging with NLog. Generates log entries as JSON. These can then be sent to Kibana, e.g. via NXLog.

For each LogEventInfo message, one JSON object is rendered, with any parameters as properties.

What problems does this solve?

Structured logging

When logging without StructuredLogging.Json, the "Message" field is used to hold unstructured data, e.g.:

@LogType: nlog
Level: Warn
Message: Order 1234 resent to Partner 4567

When we want to query Kibana for all occurrences of this log message, we have to do partial string matching as the message is slightly different each time. When we want to query Kibana for all messages related to this order, we also have to do partial string matching on the message as the orderId is embedded in the message.

When logging with StructuredLogging.Json, the data is written as JSON with extra fields containing any data that you add to the log entry. So the log line written by NLog might be, e.g.:

{"TimeStamp":"2016-09-21T08:11:23.483Z","Level":"Info","LoggerName":"Acme.WebApp.OrderController",
"Message":"Order resent to partner","CallSite":"Acme.WebApp.OrderController.ResendOrder",
"OrderId":"1234","PartnerId":"4567",
"NewState":"Sent","SendDate":"2016-09-21T08:11:23.456Z"}

This is well formatted for sending to Kibana.

In Kibana you get:

@LogType: nlog
Level: Info
Message: Order resent to partner
OrderId: 1234
PartnerId: 4567
NewState: Sent

This makes it much easier to search Kibana for the exact message text and see all the times that this log statement was fired, across time. We can also very easily search for all the different log messages related to a particular orderId, partnerId, or any other fields that can be logged.

Simpler, more flexible logging configuration

No need for a custom nxlog configuration file, and no need to specify all the columns used.

How to get it

  1. Update the dependencies as below
  2. Install the NLog.StructuredLogging.Json package from NuGet
  3. Update your NLog config so you write out JSON with properties
  4. Add additional properties when you log

Update the dependencies


  • Ensure you have NLog version >= 4.3.0 (assembly version 4.0.0.0; remember to update any binding redirects)
  • Ensure you have Newtonsoft.Json version >= 9.0.1
Update-Package NLog
Update-Package Newtonsoft.Json

Install the NLog.StructuredLogging.Json package from NuGet


Make sure the DLL is copied to your output folder

Install-Package NLog.StructuredLogging.Json

Update your NLog config so you write out JSON with properties


NLog needs to write JSON using the structuredlogging.json layout renderer.
The structuredlogging.json layout renderer is declared in this project.
Any DLL whose name starts with NLog. is automatically loaded by NLog at runtime in your app.
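
A minimal NLog.config for this might look like the following sketch. The target name and file path are placeholders; `${structuredlogging.json}` is the layout renderer registered by this package:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- A file target that writes one JSON object per line -->
    <target name="jsonFile"
            xsi:type="File"
            fileName="C:\logs\app.log.json"
            encoding="utf-8"
            layout="${structuredlogging.json}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="jsonFile" />
  </rules>
</nlog>
```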

Usage

Use the log properties to add extra fields to the JSON. You can add any contextual data values here:

using NLog.StructuredLogging.Json;

...

logger.ExtendedInfo("Sending order", new { OrderId = 1234, RestaurantId = 4567 } );


logger.ExtendedWarn("Order resent", new { OrderId = 1234, CustomerId = 4567 } );

logger.ExtendedError("Could not contact customer", new { CustomerId = 1234, HttpStatusCode = 404 } );

logger.ExtendedException(ex, "Error sending order to Restaurant", new { OrderId = 1234, RestaurantId = 4567 } );

The last parameter can be a Dictionary of names and values, or an anonymous object. If an anonymous object is supplied, the property names and values on this object become field names and corresponding values as shown above.

Example of using a dictionary:

var logProperties = new Dictionary<string, object>
{
	{"orderId", 1234 },
	{"customerId", 3456 }
};

if (partner != null)
{
   logProperties.Add("partnerId", partner.Id);
}

logger.ExtendedInfo("Order received", logProperties);

Logging data from exceptions

If exceptions are logged with ExtendedException, then the name-value pairs in the exception's Data collection are recorded.

e.g. where we do:

var restaurant = _restaurantService.GetRestaurant(restaurantId);
if (restaurant == null)
{
	throw new RestaurantNotFoundException();
}

We can improve on this with:

var restaurant = _restaurantService.GetRestaurant(restaurantId);
if (restaurant == null)
{
	var ex = new RestaurantNotFoundException();
	ex.Data.Add("RestaurantId", restaurantId);
	throw ex;
}

This is useful where the exception is caught and logged by a global "catch-all" exception handler which will have no knowledge of the context in which the exception was thrown.

Use the exception's Data collection rather than adding properties to exception types to store values.

The best practices and pitfalls below also apply to exception data, as these values are serialised in the same way to the same destination.
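
To illustrate why Data works well here, this sketch shows how a catch-all handler can read the pairs back generically, without knowing the concrete exception type (the exception type and values are made up for the example):

```csharp
using System;
using System.Collections;

// Attach context to any exception via the standard Data collection.
var ex = new InvalidOperationException("Restaurant not found");
ex.Data.Add("RestaurantId", 42);

// A generic catch-all handler can enumerate the pairs without casting
// to a concrete exception type.
foreach (DictionaryEntry entry in ex.Data)
{
    Console.WriteLine($"{entry.Key}: {entry.Value}");
}
// prints "RestaurantId: 42"
```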

Logging inner exceptions

You do not need to explicitly log inner exceptions, or exceptions contained in an AggregateException. They are automatically logged in both cases. Each inner exception is logged as a separate log entry, so that the inner exceptions can be searched for all the usual fields such as ExceptionMessage or ExceptionType.

When an exception has one or more inner exceptions, some extra fields are logged: ExceptionIndex, ExceptionCount and ExceptionTag.

  • ExceptionCount: Tells you how many exceptions were logged together.
  • ExceptionIndex: This exception's index in the grouping.
  • ExceptionTag: A unique GUID that is generated and applied to all the exceptions in the group. Searching for this GUID should show you all the grouped exceptions and nothing else.

e.g. logging an exception with 2 inner exceptions might produce these log entries:

ExceptionMessage: "Outer message"
ExceptionType: "ArgumentException"
ExceptionIndex: 1
ExceptionCount: 3
ExceptionTag: "6fc5d910-3335-4eba-89fd-f9229e4a29b3"

ExceptionMessage: "Mid message"
ExceptionType: "ApplicationException"
ExceptionIndex: 2
ExceptionCount: 3
ExceptionTag: "6fc5d910-3335-4eba-89fd-f9229e4a29b3"

ExceptionMessage: "inner message"
ExceptionType: "NotImplementedException"
ExceptionIndex: 3
ExceptionCount: 3
ExceptionTag: "6fc5d910-3335-4eba-89fd-f9229e4a29b3"
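
Entries like the above could come from a nested exception such as this sketch (the types and messages are hypothetical, chosen to match the sample entries):

```csharp
using System;

// A three-deep exception chain. Passing the outer exception to
// ExtendedException would log one entry per exception in the chain,
// all sharing the same ExceptionTag, with ExceptionCount = 3.
var inner = new NotImplementedException("inner message");
var mid = new ApplicationException("Mid message", inner);
var outer = new ArgumentException("Outer message", mid);

Console.WriteLine(outer.InnerException?.InnerException?.Message);
// prints "inner message"
```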

Logging data from context

Properties are also read from the Mapped Diagnostic Logical Context.
This is an NLog class, and the data is stored on the logical call context and typed as a Dictionary<string, object>.

Add a value to the MDLC like this:

   MappedDiagnosticsLogicalContext.Set("ConversationId", conversationId);

This value will then be attached to all logging that happens afterwards in the same logical thread of execution, even after await statements that change the actual thread.
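
A typical pattern is to set the value at the start of a logical operation and remove it at the end. In this sketch, ProcessOrderAsync is a hypothetical method whose internal Extended* calls would all pick up the field:

```csharp
// Tag all logging in this logical flow with a ConversationId.
MappedDiagnosticsLogicalContext.Set("ConversationId", conversationId);
try
{
    // Every Extended* call made in here, on whatever thread the awaits
    // resume on, will include the ConversationId field.
    await ProcessOrderAsync(order);
}
finally
{
    MappedDiagnosticsLogicalContext.Remove("ConversationId");
}
```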

Logical scopes

To provide your logs with more logical context information, you can use the BeginScope extension method.

using(Logger.BeginScope("first scope description", firstScopeProps)) // first scope, Guid 5D646242-C5A3-4FA0-9A7A-779ED5EA56E2
{
  Logger.ExtendedInfo("scoped log", innerLogProps); // first message
  using(Logger.BeginScope("second scope description", secondScopeProps)) // second scope, Guid 74253AC8-11BB-4CBD-B68D-ED966DBDB478
  {
    Logger.ExtendedInfo("scoped log", null); // second message
    using(Logger.BeginScope("third scope description", thirdScopeProps)) // third scope, Guid 0721B91D-4764-4693-99D9-1AF4B63463A0
    {
      Logger.ExtendedInfo("scoped log", null); // third message
    }
  }
}

Each scope writes start and end log entries. In the properties we get something like this:

  • For first message:
  "Scope":"first scope description",
  "ScopeId":"5D646242-C5A3-4FA0-9A7A-779ED5EA56E2", // first scope GUID
  "ScopeIdTrace":"5D646242-C5A3-4FA0-9A7A-779ED5EA56E2",
  "ScopeNameTrace":"first scope description",
  // innerLogProps go here
  // firstScopeProps go here
  • For second message:
  "Scope":"second scope description",
  "ScopeId":"74253AC8-11BB-4CBD-B68D-ED966DBDB478", // second scope GUID
  "ScopeIdTrace":"5D646242-C5A3-4FA0-9A7A-779ED5EA56E2 -> 74253AC8-11BB-4CBD-B68D-ED966DBDB478", // first scope GUID -> second scope GUID
  "ScopeNameTrace":"first scope description -> second scope description",
  // firstScopeProps go here
  // secondScopeProps go here
  • For third message:
  "Scope":"third scope description",
  "ScopeId":"0721B91D-4764-4693-99D9-1AF4B63463A0", // third scope GUID
  "ScopeIdTrace":"5D646242-C5A3-4FA0-9A7A-779ED5EA56E2 -> 74253AC8-11BB-4CBD-B68D-ED966DBDB478 -> 0721B91D-4764-4693-99D9-1AF4B63463A0", // first scope GUID -> second scope GUID -> third scope GUID
  "ScopeNameTrace":"first scope description -> second scope description -> third scope description"
  // firstScopeProps go here
  // secondScopeProps go here
  // thirdScopeProps go here

Logging additional json properties

You can add context to your log events by using the JsonWithProperties layout instead of the structuredlogging.json layout renderer. The example below shows how you can log the machine name and component version with each log event.

<target name="MyTarget"
       xsi:type="file"
       fileName="MyFilePath"
       encoding="utf-8">
  <layout xsi:type="JsonWithProperties">    
    <property name="MachineName" layout="${machinename}" />
    <property name="ComponentVersion" layout="1.0.0.0" />
  </layout>
</target>

A log entry created using code similar to:

logger.ExtendedInfo("Order sent to partner", new { OrderId = 1234, RestaurantId = 4567 } );

might result in output similar to:

{"TimeStamp":"2016-09-21T08:11:23.483Z","Level":"Info","LoggerName":"Acme.WebApp.OrderController",
"Message":"Order sent to partner","CallSite":"Acme.WebApp.OrderController.ResendOrder",
"OrderId":"1234","RestaurantId":"4567",
"MachineName":"MyMachineName","ComponentVersion":"1.0.0.0"}

where we can see that the properties specified in the layout are appended to the properties that come from the log event.

Best practices

  • The message logged should be the same every time. It should be a constant string, not a string formatted to contain data values such as ids or quantities. Then it is easy to search for.
  • The message logged should be distinct i.e. not the same as the message produced by an unrelated log statement. Then searching for it does not match unrelated things as well.
  • The message should be a reasonable length i.e. longer than a word, but shorter than an essay.
  • The data should be simple values. Use field values of types e.g. string, int, decimal, DateTimeOffset, or enum types. StructuredLogging.Json does not log hierarchical data, just a flat list of key-value pairs. The values are serialised to string with some simple rules:
    • Nulls are serialised as empty strings.
    • DateTime values (and DateTime?, DateTimeOffset and DateTimeOffset?) are serialised to string in ISO8601 date and time format.
    • Everything else is just serialised with .ToString(). This won't do anything useful for your own types unless you override .ToString(). See the "Code Pitfalls" below.
  • The data fields should have consistent names and values. Much of the utility of Kibana is from collecting logs from multiple systems and searching across them. e.g. if one system logs request failures with a data field StatusCode: 404 and another system with HttpStatusCode: NotFound then it will be much harder to search and aggregate logging data across these systems.
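
The flattening rules above can be sketched as a small conversion function. This is a rough illustration of the behaviour described, not the library's actual implementation; the exact format string is an assumption (the library emits ISO8601 date-and-time strings):

```csharp
using System;
using System.Globalization;

// Illustration only: how each field value becomes a string.
static string ToLogString(object value)
{
    if (value == null) return string.Empty;                  // nulls -> empty string

    if (value is DateTime dt)                                // DateTime / DateTime?
        return dt.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture);
    if (value is DateTimeOffset dto)                         // DateTimeOffset / DateTimeOffset?
        return dto.UtcDateTime.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture);

    return value.ToString();                                 // everything else: ToString()
}

Console.WriteLine(ToLogString(1234)); // "1234"
Console.WriteLine(ToLogString(new DateTime(2016, 9, 21, 8, 11, 23, 483, DateTimeKind.Utc)));
// "2016-09-21T08:11:23.483Z"
```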

Code pitfalls

Reserved field names

Some attributes are generated automatically. Message is the text field, and others are added: CallSite, Level, LoggerName, Parameters, TimeStamp. If there is an exception, there are several other fields present as well, all starting with Exception.

When a name clashes, the value can sometimes be kept under a different name, e.g. with a data_ or ex_ prefix. If that also fails, the extra value must be discarded.

Don't do this:

_logger.ExtendedInfo("This text is the message", new { Message = someData, TimeStamp = DateTime.Now } );

Do not add a timestamp at all (this is done automatically), and find a name for the message data that does not clash, e.g.:

_logger.ExtendedInfo("This text is the message", new { QueueMessageData = someData } );

No format strings

Don't do this:

_logger.ExtendedWarn("Order {0} resent", new { OrderId = 1234 } );

The Extended* methods do not apply format strings, so the {0} is never filled in. Use a constant message and put the value in the properties instead.

No simple data values

Don't do this:

int orderId = 1234;
_logger.ExtendedWarn("Order resent", orderId);

as the last parameter needs to be an object with named properties on it, e.g. new { OrderId = orderId }.

No nested data values

Don't serialise complex objects such as domain objects or DTOs as values, e.g.:

var orderDetails = new OrderDetails
  {
    OrderId = 123,
    Time = DateTimeOffset.UtcNow.AddMinutes(45)
  };

// let's log the OrderDetails
_logger.ExtendedInfo("Order saved", new { OrderDetails = orderDetails });

The orderDetails object will be serialised with ToString(). Unless this method is overridden in the OrderDetails type declaration, it will not produce any useful output. And if it is overridden, we only get one key-value pair, when instead the various values such as OrderId are better logged in separate fields.

Only some log levels are supported

We support ExtendedException which uses LogLevel.Error, ExtendedError, ExtendedWarn, ExtendedInfo and ExtendedDebug. Other log levels could be added if need be, but we don't believe that fine-grained log levels add a lot of value.

The model where log messages are discarded immediately based on configuration, chiefly on log level, is one that we can leave behind. All messages of every level are sent to Kibana for later processing. Filtering is best done after the fact, when investigating an error. Log level is a field that can be searched or filtered on, but it is far from the only important one.

Field naming and special characters

When sending logs to the ELK stack, the field names are parsed, and some characters such as '.' have special meaning. So don't use them unless you know how they will be interpreted by the back end.

Contributors

Started for JustEat Technology by Alexander Williamson in 2015.

And then battle-tested in production with code and fixes from: Jaimal Chohan, Jeremy Clayden, Andy Garner, Kenny Hung, Henry Keen, Payman Labbaf, João Lebre, Peter Mounce, Simon Ness, Mykola Shestopal, Anthony Steele.