duration_μs cannot be handled by DataDog
I think a good way to solve this would be to sanitize all the metadata key names in the DataDog adapter; a PR would be welcome.
Hello @bvobart, @pirvudoru and @guzishiwo. Sorry for taking so long to reply. I pushed the simplest possible fix to main - replace `μs` with `us`. Most destinations do not support Unicode, so I feel there is no good reason to fight to keep it.
Please give it a try. I will push the release to hex in a few days.
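For illustration, a minimal sketch of that kind of key sanitization, assuming an atom or string metadata key; `sanitize_key/1` is a hypothetical helper, not logger_json's actual implementation:

```elixir
defmodule KeySanitizer do
  # Replaces the Unicode micro sign in a metadata key,
  # e.g. :duration_μs -> "duration_us".
  def sanitize_key(key) do
    key
    |> to_string()
    |> String.replace("μ", "u")
  end
end

# iex> KeySanitizer.sanitize_key(:duration_μs)
# "duration_us"
```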
Alright, that sounds like a good solution for the general case, @AndrewDryga, but for the Elastic formatter, `duration_us` is still not a valid ECS field, so it would still be best to map that to `event.duration`. I don't think Elastic and Kibana will crash or error when they encounter `duration_us`, but they might well ignore it or at least not do anything useful with it. I'll rebase my MR (#129) and reopen it to fix that.
I see DataDog has a similar predefined format for a duration field (they recommend just using `duration` with the value in nanoseconds), so a similar remap will need to happen for the DataDog formatter. The same goes for the Google Cloud formatter, though I'm not sure which field they use.
To be clear, I do think it's good to keep the generic duration field as `duration_us`, because that at least specifies which unit the duration is in. But the product-specific formatters will need to adhere to those products' log specs.
EDIT: see #132
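As a rough sketch of the per-destination remapping being discussed here (the ECS and DataDog field names and units are from their docs, but the function and its shape are purely illustrative, not logger_json's API):

```elixir
defmodule DurationRemap do
  # ECS stores event.duration in nanoseconds.
  def remap(:elastic, duration_us), do: {"event.duration", duration_us * 1_000}

  # DataDog recommends a top-level "duration" field in nanoseconds.
  def remap(:datadog, duration_us), do: {"duration", duration_us * 1_000}

  # Generic fallback: keep the unit-suffixed key.
  def remap(_formatter, duration_us), do: {"duration_us", duration_us}
end
```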
@bvobart I'm sorry for missing that, let's merge your fix.
TODO for myself:
- Remap duration to `HttpRequest.latency` for GCP (see the sketch below): https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#HttpRequest
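A hedged sketch of what that GCP remap could produce: per the linked LogEntry docs, `httpRequest.latency` is a Duration string in seconds (e.g. "3.5s"); the module and function names here are hypothetical:

```elixir
defmodule GcpLatency do
  # Converts a microsecond duration into GCP's Duration string format,
  # e.g. 3_500 -> "0.0035s".
  def to_latency(duration_us) do
    seconds = duration_us / 1_000_000
    :erlang.float_to_binary(seconds, [:compact, decimals: 6]) <> "s"
  end
end
```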
The fix will be released in 6.2.0.