monogon-dev/NetMeta

Unable to view any data

FalcoXYZ opened this issue · 10 comments

I'm unable to view any netflow information within Grafana. Checking the chi-netmeta-netmeta-0-0-0 logs, I can see the following error message, which repeats continuously. Not sure if this is a misconfiguration or a bug.

I'm using the latest version (master), default example config.

2023.05.01 14:06:18.830838 [ 252 ] {} <Debug> MemoryTracker: Peak memory usage: 64.25 MiB.
2023.05.01 14:06:18.832640 [ 252 ] {} <Error> void DB::StorageKafka::threadFunc(size_t): Poco::Exception. Code: 1000, e.code() = 0, Timeout, Stack trace (when copying this message, always include the lines below):

0. Poco::Net::SocketImpl::receiveBytes(void*, int, int) @ 0x17617c94 in /usr/bin/clickhouse
1. Poco::Net::HTTPSession::refill() @ 0x175fb99d in /usr/bin/clickhouse
2. Poco::Net::HTTPHeaderStreamBuf::readFromDevice(char*, long) @ 0x175f493d in /usr/bin/clickhouse
3. ? @ 0x175ea988 in /usr/bin/clickhouse
4. std::__1::basic_streambuf<char, std::__1::char_traits<char>>::uflow() @ 0x892e44a in /usr/bin/clickhouse
5. std::__1::basic_istream<char, std::__1::char_traits<char>>::get() @ 0x892fa59 in /usr/bin/clickhouse
6. Poco::Net::HTTPResponse::read(std::__1::basic_istream<char, std::__1::char_traits<char>>&) @ 0x175f9d4f in /usr/bin/clickhouse
7. Poco::Net::HTTPClientSession::receiveResponse(Poco::Net::HTTPResponse&) @ 0x175edf4d in /usr/bin/clickhouse
8. ? @ 0x10a03e22 in /usr/bin/clickhouse
9. ? @ 0x10a01bac in /usr/bin/clickhouse
10. ? @ 0x109ffea5 in /usr/bin/clickhouse
11. ? @ 0x109fcec2 in /usr/bin/clickhouse
12. ? @ 0x109fc2b8 in /usr/bin/clickhouse
13. ? @ 0x10a1ff0a in /usr/bin/clickhouse
14. DB::HTTPDictionarySource::loadAll() @ 0x10a1fac1 in /usr/bin/clickhouse
15. DB::IPAddressDictionary::loadData() @ 0x10a3ca4b in /usr/bin/clickhouse
16. DB::IPAddressDictionary::IPAddressDictionary(DB::StorageID const&, DB::DictionaryStructure const&, std::__1::shared_ptr<DB::IDictionarySource>, DB::ExternalLoadableLifetime, bool) @ 0x10a3c850 in /usr/bin/clickhouse
17. ? @ 0x10a624ad in /usr/bin/clickhouse
18. DB::DictionaryFactory::create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context const>, bool) const @ 0x123da10b in /usr/bin/clickhouse
19. DB::ExternalDictionariesLoader::create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) const @ 0x12fac5db in /usr/bin/clickhouse
20. DB::ExternalLoader::LoadingDispatcher::doLoading(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, unsigned long, bool, unsigned long, bool, std::__1::shared_ptr<DB::ThreadGroupStatus>) @ 0x12fb92ad in /usr/bin/clickhouse
21. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::ExternalLoader::LoadingDispatcher::*)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, unsigned long, bool, unsigned long, bool, std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ExternalLoader::LoadingDispatcher*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&, unsigned long&, bool&, unsigned long&, bool, std::__1::shared_ptr<DB::ThreadGroupStatus>>(void (DB::ExternalLoader::LoadingDispatcher::*&&)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, unsigned long, bool, unsigned long, bool, std::__1::shared_ptr<DB::ThreadGroupStatus>), DB::ExternalLoader::LoadingDispatcher*&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&, unsigned long&, bool&, unsigned long&, bool&&, std::__1::shared_ptr<DB::ThreadGroupStatus>&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) @ 0x12fbf450 in /usr/bin/clickhouse
22. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe196e6a in /usr/bin/clickhouse
23. ? @ 0xe19c521 in /usr/bin/clickhouse
24. ? @ 0x7f2af6121609 in ?
25. __clone @ 0x7f2af6046133 in ?
 (version 23.2.4.12 (official build))
2023.05.01 14:06:19.335310 [ 253 ] {} <Debug> StorageKafka (flows_queue): Started streaming to 1 attached views
2023.05.01 14:06:19.725129 [ 253 ] {} <Debug> StorageKafka (flows_queue): Pushing 0.00 rows to default.flows_queue (5a3e20e6-093a-4fd2-81bf-26c5d3504d6b) took 389 ms.
2023.05.01 14:06:19.726266 [ 253 ] {} <Debug> MemoryTracker: Peak memory usage: 151.34 KiB.
2023.05.01 14:06:20.148853 [ 229 ] {} <Debug> system.asynchronous_metric_log (308ada1b-9d7b-49e5-8cc4-6da5998709d3) (MergerMutator): Selected 2 parts from 202305_85519_86230_458 to 202305_86231_86231_0
2023.05.01 14:06:20.149090 [ 211 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86231_459} <Debug> MergeTask::PrepareStage: Merging 2 parts: from 202305_85519_86230_458 to 202305_86231_86231_0 into Wide with storage Full
2023.05.01 14:06:20.149243 [ 211 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86231_459} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2023.05.01 14:06:20.149293 [ 211 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86231_459} <Debug> MergeTreeSequentialSource: Reading 121 marks from part 202305_85519_86230_458, total 979216 rows starting from the beginning of the part
2023.05.01 14:06:20.149439 [ 211 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86231_459} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202305_86231_86231_0, total 1372 rows starting from the beginning of the part
2023.05.01 14:06:20.227309 [ 254 ] {} <Debug> StorageKafka (flows_queue): Started streaming to 1 attached views
2023.05.01 14:06:20.385757 [ 214 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86231_459} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 980588 rows, containing 4 columns (4 merged, 0 gathered) in 0.236685286 sec., 4143003.6339479084 rows/sec., 61.94 MiB/sec.
2023.05.01 14:06:20.390787 [ 214 ] {} <Debug> MemoryTracker: Peak memory usage to apply mutate/merge in 308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86231_459: 12.57 MiB.
2023.05.01 14:06:20.442126 [ 86 ] {} <Debug> DNSResolver: Updating DNS cache
2023.05.01 14:06:20.445636 [ 86 ] {} <Debug> DNSResolver: Updated DNS cache
2023.05.01 14:06:24.026861 [ 179 ] {} <Debug> system.asynchronous_metric_log (308ada1b-9d7b-49e5-8cc4-6da5998709d3): Removing 2 parts from filesystem (serially): Parts: [202305_85519_86162_390, 202305_86163_86163_0]
2023.05.01 14:06:24.032182 [ 179 ] {} <Debug> system.asynchronous_metric_log (308ada1b-9d7b-49e5-8cc4-6da5998709d3): Removing 2 parts from memory: Parts: [202305_85519_86162_390, 202305_86163_86163_0]
2023.05.01 14:06:25.719886 [ 182 ] {} <Debug> system.trace_log (c315da78-9e32-47eb-8c56-3433eb25ec6d): Removing 6 parts from filesystem (serially): Parts: [202305_72505_72545_8, 202305_72546_72546_0, 202305_72547_72547_0, 202305_72548_72548_0, 202305_72549_72549_0, 202305_72550_72550_0]
2023.05.01 14:06:25.724943 [ 182 ] {} <Debug> system.trace_log (c315da78-9e32-47eb-8c56-3433eb25ec6d): Removing 6 parts from memory: Parts: [202305_72505_72545_8, 202305_72546_72546_0, 202305_72547_72547_0, 202305_72548_72548_0, 202305_72549_72549_0, 202305_72550_72550_0]
2023.05.01 14:06:26.470279 [ 147 ] {} <Debug> system.trace_log (c315da78-9e32-47eb-8c56-3433eb25ec6d) (MergerMutator): Selected 4 parts from 202305_72505_72609_25 to 202305_72612_72612_0
2023.05.01 14:06:26.471081 [ 210 ] {c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26} <Debug> MergeTask::PrepareStage: Merging 4 parts: from 202305_72505_72609_25 to 202305_72612_72612_0 into Compact with storage Full
2023.05.01 14:06:26.471572 [ 210 ] {c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2023.05.01 14:06:26.478524 [ 210 ] {c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26} <Debug> MergeTreeSequentialSource: Reading 3 marks from part 202305_72505_72609_25, total 11626 rows starting from the beginning of the part
2023.05.01 14:06:26.478964 [ 210 ] {c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202305_72610_72610_0, total 6 rows starting from the beginning of the part
2023.05.01 14:06:26.479103 [ 210 ] {c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202305_72611_72611_0, total 6 rows starting from the beginning of the part
2023.05.01 14:06:26.479280 [ 210 ] {c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202305_72612_72612_0, total 596 rows starting from the beginning of the part
2023.05.01 14:06:26.498841 [ 216 ] {c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 12234 rows, containing 12 columns (12 merged, 0 gathered) in 0.027717648 sec., 441379.44171886443 rows/sec., 153.13 MiB/sec.
2023.05.01 14:06:26.502784 [ 216 ] {} <Debug> MemoryTracker: Peak memory usage to apply mutate/merge in c315da78-9e32-47eb-8c56-3433eb25ec6d::202305_72505_72612_26: 13.64 MiB.
2023.05.01 14:06:27.155194 [ 118 ] {} <Debug> system.asynchronous_metric_log (308ada1b-9d7b-49e5-8cc4-6da5998709d3) (MergerMutator): Selected 2 parts from 202305_85519_86231_459 to 202305_86232_86232_0
2023.05.01 14:06:27.155957 [ 219 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86232_460} <Debug> MergeTask::PrepareStage: Merging 2 parts: from 202305_85519_86231_459 to 202305_86232_86232_0 into Wide with storage Full
2023.05.01 14:06:27.160920 [ 219 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86232_460} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2023.05.01 14:06:27.162256 [ 219 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86232_460} <Debug> MergeTreeSequentialSource: Reading 121 marks from part 202305_85519_86231_459, total 980588 rows starting from the beginning of the part
2023.05.01 14:06:27.163481 [ 219 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86232_460} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202305_86232_86232_0, total 1372 rows starting from the beginning of the part
2023.05.01 14:06:27.408433 [ 212 ] {308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86232_460} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 981960 rows, containing 4 columns (4 merged, 0 gathered) in 0.252848184 sec., 3883595.224872171 rows/sec., 58.06 MiB/sec.
2023.05.01 14:06:27.414217 [ 212 ] {} <Debug> MemoryTracker: Peak memory usage to apply mutate/merge in 308ada1b-9d7b-49e5-8cc4-6da5998709d3::202305_85519_86232_460: 12.57 MiB.
2023.05.01 14:06:29.192782 [ 254 ] {} <Debug> MemoryTracker: Peak memory usage: 64.25 MiB.

It seems like it can't update the dictionary. Do you have a firewall blocking outgoing traffic?
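
One way to confirm that the problem is the external-dictionary load (and not the Kafka ingestion itself) is to filter the ClickHouse log for the dictionary-related error lines. A minimal sketch — the sample input below is taken from the log above; on a live pod you would read the actual server log instead (the path `/var/log/clickhouse-server/clickhouse-server.log` is an assumption and may differ in the NetMeta container):

```shell
# Write the two relevant lines from the log above into a sample file so
# the filter can be demonstrated without a running pod.
cat <<'EOF' > /tmp/clickhouse-sample.log
2023.05.01 14:06:18.832640 [ 252 ] {} <Error> void DB::StorageKafka::threadFunc(size_t): Poco::Exception. Code: 1000, e.code() = 0, Timeout, Stack trace (when copying this message, always include the lines below):
14. DB::HTTPDictionarySource::loadAll() @ 0x10a1fac1 in /usr/bin/clickhouse
EOF

# Keep only error lines and frames mentioning the HTTP dictionary source.
grep -E '<Error>|HTTPDictionarySource' /tmp/clickhouse-sample.log
```

If the `Timeout` errors always coincide with `HTTPDictionarySource::loadAll`, the failure is the outbound HTTP fetch of the dictionary data, which points at connectivity rather than the flow pipeline.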

We do have a firewall in place, but NetMeta is running locally. Could that still cause any issues?

NetMeta needs to be able to connect to its own loopback locally (as described in the README), as well as establish outbound connections to a CDN to download AS mappings and such.

The VM where it is running has no firewall; it's all default/stock Ubuntu. The hardware firewall is not blocking HTTP(S) traffic, and I don't see anything being blocked.

Hmm. What version of Ubuntu, exactly? Any other customizations done on the machine?

Even though everything should run on Ubuntu nowadays, there were some issues in the past. Can you check if all pods are running?

(kubectl get pod -A)
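
To spot unhealthy pods at a glance, the output of that command can be filtered for anything not in a Running or Completed state. A sketch using captured sample output rather than a live cluster — the pod names and statuses below are hypothetical; against a real cluster you would pipe `kubectl get pod -A` in directly:

```shell
# Hypothetical `kubectl get pod -A` output, embedded so the filter itself
# can be demonstrated without cluster access.
sample='NAMESPACE   NAME                        READY   STATUS             RESTARTS   AGE
default     chi-netmeta-netmeta-0-0-0   1/1     Running            0          2d
default     netmeta-ingest-0            0/1     CrashLoopBackOff   12         2d'

# Skip the header row, then print namespace, name, and status for any pod
# whose STATUS column is neither Running nor Completed.
echo "$sample" | awk 'NR > 1 && $4 != "Running" && $4 != "Completed" {print $1, $2, $4}'
```

Any pod surfaced this way (here the hypothetical `netmeta-ingest-0` in CrashLoopBackOff) is worth inspecting with `kubectl describe pod` and `kubectl logs` before digging further into ClickHouse itself.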

Did you get it to work? If not, feel free to contact me directly :)

Closed because of inactivity