Send log events directly from Logback to Elasticsearch. Logs are delivered asynchronously (i.e. not on the main thread) and therefore do not block execution of the program. Note that the queue backlog is bounded, and messages can be lost if Elasticsearch is down and either the backlog queue is full or the producing program is trying to exit (the appender will retry up to a configured number of attempts, but will not block shutdown of the program beyond that). For long-lived programs this should not be a problem, as messages should eventually be delivered.
This software is dual-licensed under the EPL 1.0 and the LGPL 2.1, the same licenses as Logback itself.
This project is a fork of internetitem/logback-elasticsearch-appender, whose last commit dates from 2017. It was forked and detached to continue development; bug fixes and urgent pull requests have been merged into this project.
Include slf4j and logback as usual (depending on this library will not automatically pull them in).
In your pom.xml (or equivalent), add:
<dependency>
<groupId>com.agido</groupId>
<artifactId>logback-elasticsearch-appender</artifactId>
<version>3.0.8</version>
</dependency>
In your logback.xml:
<appender name="ELASTIC" class="com.agido.logback.elasticsearch.ElasticsearchAppender">
<url>http://yourserver/_bulk</url>
<index>logs-%date{yyyy-MM-dd}</index>
<type>tester</type>
<loggerName>es-logger</loggerName> <!-- optional -->
<errorLoggerName>es-error-logger</errorLoggerName> <!-- optional -->
<connectTimeout>30000</connectTimeout> <!-- optional (in ms, default 30000) -->
<errorsToStderr>false</errorsToStderr> <!-- optional (default false) -->
<includeCallerData>false</includeCallerData> <!-- optional (default false) -->
<logsToStderr>false</logsToStderr> <!-- optional (default false) -->
<maxQueueSize>104857600</maxQueueSize> <!-- optional (default 104857600) -->
<maxRetries>3</maxRetries> <!-- optional (default 3) -->
<readTimeout>30000</readTimeout> <!-- optional (in ms, default 30000) -->
<sleepTime>250</sleepTime> <!-- optional (in ms, default 250) -->
<rawJsonMessage>false</rawJsonMessage> <!-- optional (default false) -->
<includeMdc>false</includeMdc> <!-- optional (default false) -->
<maxMessageSize>100</maxMessageSize> <!-- optional (default -1) -->
<authentication class="com.agido.logback.elasticsearch.config.BasicAuthentication" /> <!-- optional -->
<objectSerialization>true</objectSerialization> <!-- optional (default false) -->
<keyPrefix>data.</keyPrefix> <!-- optional (default None) -->
<operation>index</operation> <!-- optional (supported: index, create, update, delete - default create) -->
<properties>
<!-- please note that <property> tags are also supported, esProperty was added for logback-1.3 compatibility -->
<esProperty>
<name>host</name>
<value>${HOSTNAME}</value>
<allowEmpty>false</allowEmpty>
</esProperty>
<esProperty>
<name>severity</name>
<value>%level</value>
</esProperty>
<esProperty>
<name>thread</name>
<value>%thread</value>
</esProperty>
<esProperty>
<name>stacktrace</name>
<value>%ex</value>
</esProperty>
<esProperty>
<name>logger</name>
<value>%logger</value>
</esProperty>
</properties>
<headers>
<header>
<name>Content-Type</name>
<value>application/json</value>
</header>
</headers>
</appender>
<root level="info">
<appender-ref ref="FILELOGGER" />
<appender-ref ref="ELASTIC" />
</root>
<logger name="es-error-logger" level="INFO" additivity="false">
<appender-ref ref="FILELOGGER" />
</logger>
<logger name="es-logger" level="INFO" additivity="false">
<appender name="ES_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- ... -->
<encoder>
<pattern>%msg</pattern> <!-- This pattern is important, otherwise it won't be the raw Elasticsearch format anymore -->
</encoder>
</appender>
</logger>
- `url` (required): The URL to your Elasticsearch bulk API endpoint
- `index` (required): Name of the index to publish to (populated using a PatternLayout, just like the individual properties; see below)
- `type` (optional): Elasticsearch `_type` field for records. Although this library does not require `type` to be populated, Elasticsearch may, unless the configured URL includes the type (i.e. `{index}/{type}/_bulk` as opposed to `/_bulk` or `/{index}/_bulk`). See the Elasticsearch Bulk API documentation for more information
- `sleepTime` (optional, default 250): Time (in ms) to sleep between attempts at delivering a message
- `maxRetries` (optional, default 3): Number of times to attempt retrying a message on failure. Note that subsequent log messages reset the retry count to 0. This value is important if your program is about to exit (i.e. it is not producing any more log lines) but is unable to deliver some messages to ES
- `connectTimeout` (optional, default 30000): Elasticsearch connect timeout (in ms)
- `readTimeout` (optional, default 30000): Elasticsearch read timeout (in ms)
- `includeCallerData` (optional, default false): If set to `true`, save the caller data (identical to the AsyncAppender's includeCallerData)
- `errorsToStderr` (optional, default false): If set to `true`, any errors in communicating with Elasticsearch will also be dumped to stderr (normally they are only reported to the internal Logback Status system, in order to prevent a feedback loop)
- `logsToStderr` (optional, default false): If set to `true`, dump the raw Elasticsearch messages to stderr
- `maxQueueSize` (optional, default 104,857,600 = 200MB): Maximum size (in characters) of the send buffer. After this point, logs will be dropped. This should only happen if Elasticsearch is down, but this is a self-protection mechanism to ensure that the logging system doesn't cause the main process to run out of memory. Note that this maximum is approximate; once the maximum is hit, no new logs will be accepted until it shrinks, but any logs already accepted for processing will still be added to the buffer
- `loggerName` (optional): If set, raw ES-formatted log data will be sent to this logger
- `errorLoggerName` (optional): If set, any internal errors or problems will be logged to this logger
- `rawJsonMessage` (optional, default false): If set to `true`, the log message is interpreted as a pre-formatted raw JSON message
- `includeMdc` (optional, default false): If set to `true`, all MDC values will be mapped to properties on the JSON payload
- `maxMessageSize` (optional, default -1): If set to a number greater than 0, truncate messages larger than this length, then append ".." to denote that the message was truncated
- `authentication` (optional): Adds the ability to send authentication headers (see below)
- `objectSerialization` (optional): Specifies whether to use POJO-to-JSON serialization
- `keyPrefix` (optional): Objects logged within a message will also be logged separately with this prefix added
- `operation` (optional, default create): Possible values are `index`, `create`, `update` and `delete`; see the Elasticsearch Bulk API documentation for more information
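For orientation, each log event is delivered through the Elasticsearch Bulk API as an action line followed by a document line. A `create` operation against the index pattern above might look roughly like the following (a sketch of the standard bulk request format, not the appender's exact output; the field names come from the example properties):

```json
{ "create" : { "_index" : "logs-2024-01-15" } }
{ "@timestamp" : "2024-01-15T10:00:00.000Z", "message" : "Application started", "host" : "myhost", "severity" : "INFO" }
```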
The fields `@timestamp` and `message` are always sent and cannot currently be configured. Additional fields can be sent by adding `<esProperty>` elements to the `<properties>` set.
- `name` (required): Key to be used in the log event
- `value` (required): Text string to be sent. Internally, the value is populated using a Logback PatternLayout, so all Conversion Words can be used (in addition to the standard static variable interpolations like `${HOSTNAME}`)
- `allowEmpty` (optional, default `false`): Normally, if the `value` results in a `null` or empty string, the field will not be sent. If `allowEmpty` is set to `true`, the field will be sent regardless
- `type` (optional, default `String`): Type of the field in the resulting JSON message. Possible values are `String`, `int`, `float` and `boolean`
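As an illustration, a numeric property using `type` and an always-sent property using `allowEmpty` could be configured like this (a sketch; `%relative` and `%X{...}` are standard Logback conversion words, and the property names are made up for the example):

```xml
<properties>
  <esProperty>
    <name>elapsedMillis</name>
    <value>%relative</value>      <!-- ms since application start -->
    <type>int</type>              <!-- emitted as a JSON number, not a string -->
  </esProperty>
  <esProperty>
    <name>requestId</name>
    <value>%X{requestId}</value>  <!-- MDC value; may be empty -->
    <allowEmpty>true</allowEmpty> <!-- send the field even when empty -->
  </esProperty>
</properties>
```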
If you configure logback using logback.groovy, this can be configured as follows:
import com.agido.logback.elasticsearch.ElasticsearchAppender
import com.agido.logback.elasticsearch.config.BasicAuthentication
import com.agido.logback.elasticsearch.config.HttpRequestHeaders
import com.agido.logback.elasticsearch.config.HttpRequestHeader
appender("ELASTIC", ElasticsearchAppender){
url = 'http://yourserver/_bulk'
index = 'logs-%date{yyyy-MM-dd}'
type = 'log'
rawJsonMessage = true
errorsToStderr = true
authentication = new BasicAuthentication()
def configHeaders = new HttpRequestHeaders()
configHeaders.addHeader(new HttpRequestHeader(name: 'Content-Type', value: 'text/plain'))
headers = configHeaders
}
root(INFO, ["ELASTIC"])
Authentication is a pluggable mechanism. You must specify the authentication class on the XML element itself. The currently supported classes are:
- `com.agido.logback.elasticsearch.config.BasicAuthentication` - Username and password are taken from the URL (i.e. `http://username:password@yourserver/_bulk`)
- `com.agido.logback.elasticsearch.config.AWSAuthentication` - Authenticate using the AWS SDK, for use with the Amazon Elasticsearch Service (note that you will also need to include `com.amazonaws:aws-java-sdk-core` as a dependency)
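For example, to switch the earlier appender configuration to AWS authentication, the `class` attribute on the `authentication` element is all that changes (a sketch; the endpoint URL is a placeholder, and the AWS SDK dependency noted above is still required):

```xml
<appender name="ELASTIC" class="com.agido.logback.elasticsearch.ElasticsearchAppender">
    <url>https://your-domain.es.amazonaws.com/_bulk</url>
    <index>logs-%date{yyyy-MM-dd}</index>
    <authentication class="com.agido.logback.elasticsearch.config.AWSAuthentication" />
</appender>
```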
Included is also an Elasticsearch appender for Logback Access. The configuration is almost identical, with the following two differences:
- The appender class name is `com.agido.logback.elasticsearch.ElasticsearchAccessAppender`
- The `value` for each `esProperty` uses the Logback Access conversion words.
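A minimal access appender using standard Logback Access conversion words (`%requestURI`, `%statusCode`) might look like this (a sketch; the URL, index name, and property names are illustrative):

```xml
<appender name="ELASTIC_ACCESS" class="com.agido.logback.elasticsearch.ElasticsearchAccessAppender">
    <url>http://yourserver/_bulk</url>
    <index>access-%date{yyyy-MM-dd}</index>
    <properties>
        <esProperty>
            <name>uri</name>
            <value>%requestURI</value>
        </esProperty>
        <esProperty>
            <name>status</name>
            <value>%statusCode</value>
        </esProperty>
    </properties>
</appender>
```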
To avoid a major and/or breaking API change, the old package name com.internetitem can also still be used.