Non-unique measurements are overwritten in Influx
LoSz opened this issue · 4 comments
Hello Brandawg93
thank you for creating this. I just noticed something and wanted to inform you.
InfluxDB treats points with the same measurement, tag set, and timestamp as duplicates, so if we don't send a unique tag value, only one measurement will be stored.
Let's take this example. (I added the `host_name` tag because I am running two Pi-holes.)
{'measurement': 'FTL', 'tags': {'host_name': 'npi02', 'type': 'A (IPv4)', 'status': 'localhost', 'domain': 'clients1.google.com', 'client': '10.10.10.105', 'forward': '1.0.0.1'}, 'time': '2020-02-15T16:03:18', 'fields': {'id': 298}}
{'measurement': 'FTL', 'tags': {'host_name': 'npi02', 'type': 'A (IPv4)', 'status': 'localhost', 'domain': 'clients1.google.com', 'client': '10.10.10.105', 'forward': '1.0.0.1'}, 'time': '2020-02-15T16:03:18', 'fields': {'id': 299}}
As you can see, the tag values are the same; unfortunately, the timestamp in the Pi-hole DB is not Unix time with nanoseconds.
If you check InfluxDB, you'll only find one measurement.
I guess the fix would be to send the id twice: once as a field and once as a tag.
For my use case with two Pi-hole instances, I think I'll try sending a tag like this:
`uniq_id = str(item[0]) + host_name`
Here is some information from Influx: https://v2.docs.influxdata.com/v2.0/write-data/best-practices/duplicate-points/
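A minimal sketch of that idea in Python (the helper name and the column layout of `item` are my assumptions, not from the script): combining the row id with the host name produces a tag value that differs per row and per instance, so same-timestamp points are no longer overwritten.

```python
def build_point(item, host_name):
    """Build an InfluxDB point from an FTL row.

    Assumes item[0] is the query id and item[1] is the (second-resolution)
    timestamp; adjust to the real column order of the FTL database.
    """
    return {
        "measurement": "FTL",
        "tags": {
            "host_name": host_name,
            # Unique across both Pi-hole instances, even for rows that
            # share the same one-second timestamp.
            "uniq_id": str(item[0]) + host_name,
        },
        "time": item[1],
        "fields": {"id": item[0]},
    }

p1 = build_point((298, "2020-02-15T16:03:18"), "npi02")
p2 = build_point((299, "2020-02-15T16:03:18"), "npi02")
print(p1["tags"]["uniq_id"])  # → 298npi02
print(p1["tags"]["uniq_id"] == p2["tags"]["uniq_id"])  # → False
```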
Thanks again for creating this!
This is good to know! So anything with the same timestamp is just overwriting the `id` field with the latest value? But if I add another tag called `uniq` with the same value as the `id`, the points should be stored separately?
I've added a new tag called `uniq` to help with duplicate data points. Thanks again for the heads up!
I'm not using the full Docker setup; however, I am using the sql_influx script for monitoring. Adding the `uniq` tag causes the cardinality to be too high. As a result, InfluxDB responds with a 400 when trying to write the measurement due to `max-values-per-tag`:
```
filling data...
Traceback (most recent call last):
  File "main.py", line 89, in <module>
    add_new_results(last)
  File "main.py", line 79, in add_new_results
    client.write_points(json_body)
  File "/home/pi/.local/lib/python3.7/site-packages/influxdb/client.py", line 599, in write_points
    consistency=consistency)
  File "/home/pi/.local/lib/python3.7/site-packages/influxdb/client.py", line 676, in _write_points
    protocol=protocol
  File "/home/pi/.local/lib/python3.7/site-packages/influxdb/client.py", line 410, in write
    headers=headers
  File "/home/pi/.local/lib/python3.7/site-packages/influxdb/client.py", line 369, in request
    raise InfluxDBClientError(err_msg, response.status_code)
influxdb.exceptions.InfluxDBClientError: 400: {"error":"partial write: max-values-per-tag limit exceeded (100000/100000): measurement=\"pihole-FTL\" tag=\"uniq\" value=\"240807\" dropped=1"}
```
I think instead of using the monotonic id as the `uniq` value, you should take the modulo by 1000 or something to make it sufficiently unique for the time granularity of 1 second without blowing up the cardinality. In other words, `"uniq": item[0] % 1000`. Happy to make that change.
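A quick sketch of that bounded tag (the helper name is hypothetical):

```python
def uniq_tag(query_id, modulus=1000):
    # The tag value only needs to differ between rows written with the
    # same one-second timestamp, not be globally unique, so wrapping the
    # monotonic id modulo 1000 caps the tag at 1000 distinct values,
    # well under InfluxDB's max-values-per-tag limit (default 100000).
    return query_id % modulus

print(uniq_tag(240807))  # → 807
```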
@sudowork Feel free to submit a PR with this change. I can test it from there.