Java JSON format not consistent with ECS
olwulff opened this issue · 4 comments
I've imported the ECS template into our cluster and ingested data using ecs-logging-java and ecs-logging-python. The latter formats JSON log statements so that they match the nested ECS structure, but the Java library does not.
Here is a snippet:
...
"log.level":"DEBUG",
...
"log.origin": {
  "file.line": 655,
  "file.name": "SpringApplication.java",
  "function": "logStartupProfileInfo"
},
This should look like:
...
"log": {
  "level": "DEBUG",
  "origin": {
    "file": {
      "line": 655,
      "name": "SpringApplication.java"
    },
    "function": "logStartupProfileInfo"
  }
},
...
Closing as duplicate of #51
Please let me know whether the dot_expander Filebeat processor would work in your case: elastic/beats#17021
We are fixing it within Logstash at the moment.
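For anyone who needs a similar workaround on the consuming side (whether in a Logstash filter, an ingest pipeline, or application code), the core of what the dot_expander processor does can be sketched in a few lines of plain Java. This is a minimal illustration, not the actual Elasticsearch implementation; the class and method names (`DotExpander`, `expandDots`) are hypothetical, and collisions between a scalar key and a dotted prefix (e.g. both "log" and "log.level") are not handled:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DotExpander {

    // Expand dotted keys like "log.origin.file.line" into nested maps:
    // {"log": {"origin": {"file": {"line": ...}}}}
    // Note: a key that is both a scalar and a prefix of another key
    // (e.g. "log" and "log.level") would cause a ClassCastException here.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> expandDots(Map<String, Object> flat) {
        Map<String, Object> root = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : flat.entrySet()) {
            String[] parts = entry.getKey().split("\\.");
            Map<String, Object> current = root;
            // Walk/create intermediate objects for all but the last segment.
            for (int i = 0; i < parts.length - 1; i++) {
                current = (Map<String, Object>) current.computeIfAbsent(
                        parts[i], k -> new LinkedHashMap<String, Object>());
            }
            current.put(parts[parts.length - 1], entry.getValue());
        }
        return root;
    }

    public static void main(String[] args) {
        Map<String, Object> flat = new LinkedHashMap<>();
        flat.put("log.level", "DEBUG");
        flat.put("log.origin.file.line", 655);
        flat.put("log.origin.file.name", "SpringApplication.java");
        flat.put("log.origin.function", "logStartupProfileInfo");
        System.out.println(expandDots(flat));
    }
}
```

Running this on the flattened snippet from above produces the nested structure the issue asks for, with "level", "origin", "file", etc. as proper JSON objects rather than dotted key names.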
Hello,
If we create an ECS Java object model from the ECS YAML file, as mentioned in https://github.com/elastic/ecs-logging-java/issues/38, then since the client would use a typed object hierarchy, there would no longer be a need to rely on dots to create the JSON structure.
Could that resolve the issue of https://github.com/elastic/ecs-logging-java/issues/51?
Closing this as it's not feasible to guarantee all fields are nested, especially when allowing user-defined custom attributes.
The typed object hierarchy could standardize the way the JSON is organized. Hence, wouldn't that simplify the problem and make a solution possible in the java-ecs-library?
While a typed object structure is a great way to add extra fields, it comes with overhead for the simple case where you just log a message.
Nesting all fields would also make the JSON less human-readable. It purposefully starts with the fields @timestamp, log.level, and message. If we were to nest all fields, the log.level field would be part of the log object that contains log.logger and other fields.