Expired events sometimes have the `:time` field set to a value of type `Ratio`
v0lat1le opened this issue · 2 comments
Describe the bug
Expired events sometimes have the `:time` field set to a value of type `Ratio`.
To Reproduce
Steps to reproduce the behavior:
- Print an expired event using the test config provided below
- Change nothing
- See error
Expected behavior
Expecting the `:time` field values to always be of type `long`.
Screenshots
Example output; note the timestamp is printed as a fraction:
INFO [2022-06-27 17:13:01,941] main - riemann.core - Hyperspace core online
{:host "test", :service "test", :state "expired", :time 828156992471/500}
{:host "test", :service "test", :state "expired", :time 1656313986943/1000}
Reproduction script:
(logging/init {:console true})
(instrumentation {:enabled? false})
(index)
(periodically-expire 1)
(defn index-event [core]
  (riemann.index/insert (:index core) (event {:host "test" :service "test" :ttl 1})))
(service! (riemann.service/scheduled-task-service ::index-event :index-event 2 1 index-event))
(streams prn)
Background (please complete the following information):
- OS: macOS
- Java/JVM version: 11
- Riemann version 0.3.6
Additional context
Ran into this when serialising events to a different format to be forwarded to Kafka.
This can easily be fixed in `riemann.core/reaper` with
(merge {:state "expired"
        :time (long (unix-time))}))]
But given that this shows up in a few other places, like instrumentation events, would it make sense to update `riemann.time/unix-time-real` to (quot (System/currentTimeMillis) 1000)?
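For context, the fraction-over-1000 timestamps in the output above are consistent with `unix-time-real` dividing `System/currentTimeMillis` by 1000 using `/`: in Clojure, `/` returns an exact `Ratio` whenever the division is not even, while `quot` always returns an integer. A minimal illustration (the literal timestamp is just an example value taken from the output above):

```clojure
;; Clojure's / keeps the result exact: a Ratio when the division isn't even,
;; a Long when it is.
(/ 1656313986943 1000)        ; => 1656313986943/1000 (a clojure.lang.Ratio)
(/ 1656313986000 1000)        ; => 1656313986 (a Long; the ratio reduces)

;; quot does integer division, so it always yields a long.
(quot 1656313986943 1000)     ; => 1656313986

;; Casting a Ratio to long also truncates toward zero.
(long (/ 1656313986943 1000)) ; => 1656313986
```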
I don't think this is a bug. Note that Clojure's Ratio is lazy in the sense that the actual division is not performed until the value is explicitly cast or converted. Take a look at these measurements; explicit conversion adds some overhead:
(require 'criterium.core)
;; no explicit conversion
(criterium.core/bench (doall (map #(/ 1234 %) (range 2 25))))
; Evaluation count : 26816220 in 60 samples of 446937 calls.
; Execution time mean : 2.232489 µs
; Execution time std-deviation : 29.268225 ns
; Execution time lower quantile : 2.217370 µs ( 2.5%)
; Execution time upper quantile : 2.318537 µs (97.5%)
; Overhead used : 7.253383 ns
;
; Found 4 outliers in 60 samples (6.6667 %)
; low-severe 1 (1.6667 %)
; low-mild 3 (5.0000 %)
; Variance from outliers : 1.6389 % Variance is slightly inflated by outliers
;; explicit conversion to long
(criterium.core/bench (doall (map #(long (/ 1234 %)) (range 2 25))))
; Evaluation count : 18773880 in 60 samples of 312898 calls.
; Execution time mean : 3.206310 µs
; Execution time std-deviation : 39.479470 ns
; Execution time lower quantile : 3.180998 µs ( 2.5%)
; Execution time upper quantile : 3.310231 µs (97.5%)
; Overhead used : 7.253383 ns
;
; Found 5 outliers in 60 samples (8.3333 %)
; low-severe 2 (3.3333 %)
; low-mild 3 (5.0000 %)
; Variance from outliers : 1.6389 % Variance is slightly inflated by outliers
In your case, I'd explicitly convert it to the desired format before you print it. The same is done for other destinations (e.g. InfluxDB): numbers are converted to the desired precision before they are saved. In between, they flow as a lazy, unevaluated number type.
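One way to sketch that per-destination conversion, assuming `riemann.streams/smap` is used to coerce `:time` just before the consuming child stream (the `prn` child is taken from the reproduction script above):

```clojure
(streams
  ;; Coerce :time to a long only for this particular consumer;
  ;; other streams still see the original (possibly Ratio) value.
  (smap #(update % :time long)
    prn))
```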
I was leaning towards updating `riemann.time/unix-time-real` to (quot (System/currentTimeMillis) 1000). The current behaviour is also inconsistent with `riemann.common.event`, which does (long (round (unix-time))).