Logging with absolute Unix epoch timestamp instead of relative time in QLOG files
birkandenizer opened this issue
According to the official QLOG main schema [1], timestamps do not have to use the UNIX epoch as their reference. It is suggested that a reference timestamp such as "time since connection start in ms" can be used instead. I believe this is how it is implemented in the JSON-SEQ-based sqlog files that quiche currently outputs.
I'm currently using quiche-client and quiche-server to generate network traces. However, for my use case I need the UNIX epoch timestamp of each event so that I can correlate it with other events. My questions are:

- Is it possible to get sqlog files with UNIX timestamps instead of relative timestamps that start from 0.0?
- Is it correct to assume that the creation time of a sqlog file corresponds to the first time a packet is received and saved, and therefore to use it as a reference point for converting the relative timestamps in the fields to absolute UNIX timestamps by adding the two together?
[1] https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-02#section-3.4.1
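For context, the conversion my second question describes is just an addition of the reference point and the relative offset. A minimal sketch in Rust, using a hypothetical helper and assuming the reference point is known in milliseconds since the UNIX epoch:

```rust
/// Hypothetical helper: convert a qlog relative timestamp (ms since the
/// trace's reference point) into an absolute UNIX-epoch timestamp, given
/// that reference point as ms since the epoch.
fn to_absolute_ms(reference_time_ms: u64, relative_ms: f64) -> f64 {
    reference_time_ms as f64 + relative_ms
}
```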
Hi! Thanks for asking, I think this all makes sense. Let me think on it some more over the weekend.
The draft also makes some additional suggestions relevant to my questions.
For my first question, the relative approach that is currently used should include a "reference_time" field in "common_fields" [1], whose value "is typically the first absolute timestamp". However, the currently output sqlog files do not include such a field. I do not know if this is intentional or an oversight.
For my second question, the currently output sqlog files contain a "time_offset" field [2] that indicates "how many milliseconds the starting time of the current trace should be offset". Since this "time_offset" is always 0.0, my understanding is that the creation time of the file is the same as the time the first event is logged. Would that be a correct assumption?
[1] https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-02#section-3.4.1
[2] https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema-02#section-3.3.1.1
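For illustration, a "common_fields" object carrying a "reference_time" as the draft describes might look something like this (the value is made up; only the field placement follows my reading of draft-02):

```json
{
  "common_fields": {
    "reference_time": 1553986553572
  }
}
```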
I thought on this some more and ended up creating an associated issue on the qlog spec itself - quicwg/qlog#198. We diverge from a strict interpretation of the spec.
Speaking specifically to quiche, internally we use monotonic non-decreasing clocks, which avoid a lot of the problems with time derived from clocks that might change during the lifetime of a connection. qlog logging starts when an application calls the set_qlog() method. At this point we mark the start time as measured against the monotonic clock, and then events are written using a relative duration since that time - this value will always be positive.
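The scheme described above can be sketched roughly like this (type and method names are illustrative, not quiche's actual internals):

```rust
use std::time::Instant;

// Illustrative sketch of the approach described above, not quiche's real
// internal types: mark a monotonic start time when qlog is enabled, then
// record each event as a relative duration since that start.
struct QlogClock {
    start: Instant, // monotonic reference taken when qlog is enabled
}

impl QlogClock {
    fn new() -> Self {
        QlogClock { start: Instant::now() }
    }

    /// Relative event time in milliseconds; because `Instant` is
    /// monotonic and non-decreasing, this is never negative.
    fn relative_ms(&self) -> f64 {
        self.start.elapsed().as_secs_f64() * 1000.0
    }
}
```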
When using quiche-client, the qlog starts quite early at https://github.com/cloudflare/quiche/blob/master/apps/src/client.rs#L178, before packets are sent. In quiche-server, the qlog only starts after we have accepted a connection https://github.com/cloudflare/quiche/blob/master/apps/src/bin/quiche-server.rs#L332 (which can mean that retry or version negotiation stages are not recorded and the packet numbers appear to start later than 1).
In both cases the qlog file is created shortly before being passed to quiche but there is no need for this to be the case so YMMV.
Given the current design, it isn't possible to simply "turn on" logging of absolute UNIX-time events in quiche. It might be feasible to add support for this, but it opens up a load of potential problems about how to deal with clock drift, which we don't have to worry about right now. If you wanted to try this yourself, you'd need to look at adding SystemTime support to the qlog streamer, e.g. on construction and everywhere that an event can be added.
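One way such SystemTime support could be sketched (purely illustrative, not a proposed quiche API, and it deliberately ignores the clock-drift problem just mentioned) is to capture a wall-clock anchor alongside the monotonic start and derive absolute times from it:

```rust
use std::time::{Instant, SystemTime, UNIX_EPOCH};

// Illustrative sketch only: pair a monotonic start with a wall-clock
// anchor taken at the same moment, then map relative event times to
// absolute UNIX-epoch times. Drift between the two clocks over the
// lifetime of a connection is ignored here.
struct AnchoredClock {
    start_mono: Instant,
    start_wall: SystemTime,
}

impl AnchoredClock {
    fn new() -> Self {
        AnchoredClock {
            start_mono: Instant::now(),
            start_wall: SystemTime::now(),
        }
    }

    /// Absolute event time in milliseconds since the UNIX epoch.
    fn absolute_ms(&self) -> f64 {
        let event_wall = self.start_wall + self.start_mono.elapsed();
        event_wall
            .duration_since(UNIX_EPOCH)
            .expect("system clock set before the UNIX epoch")
            .as_secs_f64()
            * 1000.0
    }
}
```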
Mapping between a monotonic clock and the system clock is tricky, which is why the "time_offset" field doesn't really seem practical right now either. You might be able to do some fuzzy matching with post-processing steps today, but those would also be subject to potential clock drift issues.
Closing due to inactivity. Please reopen if there is more information.