About BPF, the calculation of budget1_byte
ShawnLeung87 opened this issue · 11 comments
Is the budget1_byte in a flow's BPF state decremented cumulatively, or is pkt_len subtracted from a fresh value each time? For example, with budget1_byte=1024, does the budget keep shrinking across packets, or does each packet start over from 1024 - pkt_len? Within one cap_expire_sec period, if the budget is recalculated for every packet of a flow, then every attack packet I send would be smaller than budget1_byte, so rejection could never be triggered, could it?
Alternatively, can I set an interval, count the occurrences of the matching packets, and reject the flow once the count reaches a threshold?
> Is the budget1_byte in a flow's BPF state decremented cumulatively, or is pkt_len subtracted from a fresh value each time? For example, with budget1_byte=1024, does the budget keep shrinking across packets, or does each packet start over from 1024 - pkt_len? Within one cap_expire_sec period, if the budget is recalculated for every packet of a flow, then every attack packet I send would be smaller than budget1_byte, so rejection could never be triggered, could it?
I'm assuming that you are referring to the code in bpf/grantedv2.h. Field budget1_byte is unconditionally decremented by pkt_len for every packet; see grantedv2_pkt_begin().

If a flow expires, a new policy decision is made, and budget1_byte will be set to this new decision. Often the new decision is the same as before, so this is equivalent to resetting the value of budget1_byte.
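For reference, the accounting in grantedv2_pkt_begin() boils down to logic like the sketch below. This is a paraphrase, not the verbatim Gatekeeper source; the state struct, signed budget type, and return convention are simplifying assumptions.

```c
#include <stdint.h>

/* Paraphrased sketch of the per-packet budget check discussed above;
 * the real implementation lives in bpf/grantedv2.h of the Gatekeeper
 * repository.
 */
struct flow_budget_state {
	int64_t budget1_byte;	/* Remaining byte budget; may go negative. */
};

/* Returns 0 to keep processing the packet, -1 to decline it. */
static inline int
budget_pkt_begin(struct flow_budget_state *state, uint32_t pkt_len)
{
	/* The budget is decremented unconditionally for every packet;
	 * it is only "reset" when the policy decision expires and a new
	 * decision (often with the same budget) replaces it.
	 */
	state->budget1_byte -= pkt_len;
	return state->budget1_byte < 0 ? -1 : 0;
}
```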
If an attacking flow always stays within the allocated budget, the flow won't be punished. But how do you know that such a flow is abusive using only the information available in its packets?
> Alternatively, can I set an interval, count the occurrences of the matching packets, and reject the flow once the count reaches a threshold?
Yes, you can implement a BPF to do exactly that.
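For illustration only, here is a minimal sketch of that idea: keep a packet counter and a window-start timestamp in the flow's state, and decline the flow once the counter passes a threshold within the interval. All names and thresholds below are hypothetical, not part of the Gatekeeper API.

```c
#include <stdint.h>

/* Hypothetical per-flow state; it fits in two of the eight uint64_t
 * slots of the BPF cookie (see struct gk_bpf_cookie later in this
 * thread).
 */
struct count_state {
	uint64_t window_start_ns;	/* Start of the current interval. */
	uint64_t pkt_count;		/* Packets seen in this interval. */
};

#define WINDOW_NS	1000000000ULL	/* 1-second counting interval. */
#define MAX_PKTS	50ULL		/* Reject the flow beyond this. */

/* Returns 0 to accept the packet, -1 to decline it.
 * @now is the current timestamp in nanoseconds.
 */
static inline int
count_and_decide(struct count_state *st, uint64_t now)
{
	if (now - st->window_start_ns >= WINDOW_NS) {
		/* The interval elapsed: open a new counting window. */
		st->window_start_ns = now;
		st->pkt_count = 0;
	}
	if (++st->pkt_count > MAX_PKTS)
		return -1;	/* Threshold reached: reject the flow. */
	return 0;
}
```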
When the flow reaches the next renewal request, packet rejection occurs for about 50 packets. Which BPF parameter affects this? Is it renewal_step_ms?
There's not much information on what you're testing to answer this question. Which BPF are you testing? What're the parameters? What's the expiration of the policy decision? How are you measuring that packet loss? What's the condition of the network during the test (e.g. attacks going on, network capacity vs traffic volume, packet drops on the path, etc.)? How are you producing the flow being measured? How have you narrowed down the issue to the renewal of the policy decision?
I have solved this problem; it was caused by wrong parameters in the BPF call I wrote.
Can't the BPF state hold more parameters? We need more parameters here to judge whether the number of TCP flags seen during the TCP three-way handshake is normal, so as to filter the flow. This kind of check is mainly aimed at slow TCP attacks: the stock web BPF and tcp-srv BPF in our test deployment do not defend against them. Even with a per-second packet-rate limit, slow TCP attacks cannot be blocked, because slow attacks stay close to the access frequency of normal service traffic. Packet-rate limiting has no effect on them; it is mainly effective against fast floods.
As I have explained here, adding more parameters to the BPFs associated with the flows may not be cheap to do.
If you had any amount of memory desired, what would you implement in the BPF you want? I'm asking because once I understand what you're trying to code, I may be able to guide you to get there through a different path.
You mention "Slow attacks are closer to the normal service access frequency." How can you differentiate these flows from regular flows? Have you tried threat intelligence such as the IP Reputation Feed from Team Cymru?
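To make that question concrete, one possible shape of such a check is sketched below: per-flow counters of handshake flags, and a verdict once the counts look abnormal. Everything here (the struct, the thresholds, the example rule) is hypothetical and only illustrates the kind of logic being discussed.

```c
#include <stdint.h>

/* Hypothetical per-flow handshake accounting; four 32-bit counters
 * occupy two uint64_t cookie slots.
 */
struct handshake_state {
	uint32_t syn_count;	/* SYNs seen from this flow. */
	uint32_t ack_count;	/* ACKs seen from this flow. */
	uint32_t fin_count;
	uint32_t rst_count;
};

/* TCP flag bits as laid out in the TCP header. */
#define TCP_FIN	0x01
#define TCP_SYN	0x02
#define TCP_RST	0x04
#define TCP_ACK	0x10

/* Returns 0 to accept the packet, -1 to decline the flow.
 * The thresholds are placeholders to be tuned for the service.
 */
static inline int
track_handshake(struct handshake_state *st, uint8_t tcp_flags)
{
	if (tcp_flags & TCP_SYN)
		st->syn_count++;
	if (tcp_flags & TCP_ACK)
		st->ack_count++;
	if (tcp_flags & TCP_FIN)
		st->fin_count++;
	if (tcp_flags & TCP_RST)
		st->rst_count++;

	/* Example rule: many SYNs with almost no ACKs suggests the
	 * handshake is never completed, i.e. the flow is abnormal.
	 */
	if (st->syn_count > 8 && st->ack_count < 2)
		return -1;
	return 0;
}
```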
Currently I have three Gatekeeper servers, each with 1.5TB of DDR4 memory. Is it possible to increase the BPF state parameters?
```c
struct gk_bpf_cookie {
	uint64_t mem[8];
};
```
1.5TB is certainly enough capacity to experiment with a larger BPF cookie. I wrote this patch to show the minimum amount of changes that you need to make to compile Gatekeeper with uint64_t mem[16];. You can try larger cookies as well. I recommend keeping uint64_t mem[X]; at a multiple of 8 entries to stay aligned with the cache lines of the processors.
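At its core the change is just enlarging the cookie array; a sketch of the struct after the patch (the patch itself also covers the build changes that depend on the cookie size):

```c
/* BPF cookie enlarged from 8 to 16 uint64_t slots (128 bytes).
 * Keeping the count a multiple of 8 slots (64 bytes) preserves
 * alignment with typical 64-byte processor cache lines.
 */
struct gk_bpf_cookie {
	uint64_t mem[16];
};
```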
Good luck!
For this patch, should both gatekeeper and grantor be updated?
Yes.