This repository dives into five different logging patterns:
- Parse: Take the log files of your applications and extract the relevant pieces of information.
- Send: Add a log appender to send out your events directly without persisting them to a log file.
- Structure: Write your events in a structured file, which you can then centralize.
- Containerize: Keep track of short-lived containers and configure their logging correctly.
- Orchestrate: Stay on top of your logs even when services are short-lived and dynamically allocated on Kubernetes.
The slides for this talk are available on Speaker Deck.
To run the demo you will need:
- JDK 8+ and Gradle to run the Java code locally.
- Docker (and Docker Compose) to run all the required components of the Elastic Stack (Filebeat, Logstash, Elasticsearch, and Kibana) and the containerized Java application.
- Bring up the Elastic Stack: `docker-compose up --build`
- Rerun the Java application to generate more logs: `docker restart <ID of the Java app>`
- Remove the Elastic Stack and its volumes: `docker-compose down -v`
- Start the demo with `docker-compose up --build`.
- Look at the code: which pattern are we building with log statements here?
- Look at Management -> Index Management in Kibana.
- How many log events should we have? 40. But there are 42 entries instead; even though 42 would normally be the perfect number, here it is not.
- See the `_grokparsefailure` in the `tags` field. Enable the multiline rules in Filebeat (sketched below); it should reload automatically, and when you run the application again, it should only collect 40 events.
- Show that this works as expected now and drill down to the errors to see which emoji we are logging.
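The multiline rules could look roughly like this; a minimal sketch assuming the application's log lines start with a bracketed timestamp (input type and path are placeholders, the demo's actual Filebeat configuration may differ):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log  # placeholder path
    # Treat every line that does NOT start with "[" as a continuation of
    # the previous line, so multi-line stack traces stay in one event.
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after
```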
- Copy a log line and parse it with the Grok Debugger in Kibana, for example with the pattern `^\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}`. Show https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns to get started; we can copy the rest of the pattern from logstash.conf (see the sketch below).
- Point to https://github.com/elastic/ecs for the naming conventions.
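Put together, the grok filter in logstash.conf could look roughly like this; only the fragment above comes from the demo, while the `GREEDYDATA` capture and the `overwrite` are illustrative assumptions:

```
filter {
  grok {
    # Extract the timestamp and level; keep the rest as the message.
    match => {
      "message" => "^\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}%{GREEDYDATA:message}"
    }
    overwrite => [ "message" ]
  }
}
```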
- Show the Data Visualizer in Machine Learning by uploading the log file. The output is actually quite good already, but we are sticking to our manual rules for now.
- Find the log statements in Kibana's Discover view for the parse index.
- Show the pipeline and the other components in Kibana's Monitoring view.
- Create a vertical bar chart visualization on the `level` field. Further break it down by `session`.
- Show that the logs are missing from the first run, since no connection to Logstash had been established yet.
- Rerun the application and see that it works now. With that, we have already seen the main downside of this approach: since events are never persisted to a file, anything logged while Logstash is unreachable is lost.
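For context, such an appender could be wired up as follows; a sketch assuming the logstash-logback-encoder library and a hypothetical `localhost:5000` destination, not necessarily the demo's actual configuration:

```xml
<configuration>
  <!-- Send events straight to Logstash over TCP. Nothing is written to
       disk, so events are dropped while the connection is down. -->
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```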
- Finally, you would need to rename the fields in a Logstash filter to match ECS; for example, something like the sketch below.
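A minimal sketch with assumed source field names, mapped onto the ECS fields `log.level` and `log.logger`:

```
filter {
  mutate {
    # Assumed input field names; rename whatever the appender actually emits.
    rename => {
      "level"  => "log.level"
      "logger" => "log.logger"
    }
  }
}
```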
- Run the application and show the data in the structure index.
- Show the Logback configuration for JSON, since it is a little more complicated than the others; a sketch follows below.
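Such a JSON setup could look roughly like this; a sketch assuming the logstash-logback-encoder library and a placeholder file path, which may differ from the demo's actual configuration:

```xml
<configuration>
  <appender name="JSON_FILE" class="ch.qos.logback.core.FileAppender">
    <!-- Placeholder path; Filebeat would tail this file. -->
    <file>/var/log/app/structured.log</file>
    <!-- Write each event as a single JSON document per line. -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON_FILE"/>
  </root>
</configuration>
```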
- Show the metadata we are collecting now.
- Point to the ingest pipeline and show how everything is working.
- See why we needed the grok failure rule: it catches the startup error from sending to Logstash directly.
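The failure handling in such an ingest pipeline could look roughly like this; the pipeline name, pattern, and tag value are illustrative assumptions rather than the demo's exact pipeline:

```
PUT _ingest/pipeline/java-logs
{
  "description": "Parse the application's log events",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["^\\[%{TIMESTAMP_ISO8601:timestamp}\\]%{SPACE}%{LOGLEVEL:level}%{SPACE}%{GREEDYDATA:message}"],
        "on_failure": [
          { "append": { "field": "tags", "value": "_grokparsefailure" } }
        ]
      }
    }
  ]
}
```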
- Filter down to `container.name : "java_app"` and point out the hinting that stops the multiline statements from being broken up; a sketch of such hints follows below.
- Point out how you could break up the output into two indices, `docker-*` and `docker-java-*`.
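The hinting is typically done with container labels that Filebeat's hints-based autodiscover picks up; a sketch in docker-compose terms, reusing the multiline pattern from above (the demo's actual labels may differ):

```yaml
services:
  java_app:
    build: .  # placeholder build context
    labels:
      # Filebeat hints: keep lines that do not start with "[" attached to
      # the previous event, so stack traces survive as a single document.
      co.elastic.logs/multiline.pattern: '^\['
      co.elastic.logs/multiline.negate: 'true'
      co.elastic.logs/multiline.match: 'after'
```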
- Show the new Logs UI (adapt the pattern to match the right index).