russelltg/srt-rs

Limit send buffer size

russelltg opened this issue · 4 comments

Like the receiver buffer, we need to have a configurable limit of how many packets (or bytes?) to have in the send buffer before dropping them on the floor.

Part of this is exposing the limit through the tokio interface by adding backpressure (not letting someone add a packet to the buffer when it's full)

Does the existing implementation apply backpressure? It might be better to drop packets off the front of the send buffer instead.

Edit: actually this ought to be configurable I suppose.
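To make the trade-off concrete, here is a minimal sketch of what a configurable overflow policy could look like. All of the names (`OverflowPolicy`, `SendBuffer`, `max_packets`) are hypothetical and not part of the srt-rs API; this is just one way to express "backpressure vs. drop-from-front" as a setting:

```rust
use std::collections::VecDeque;

/// Hypothetical policy for what happens when the send buffer is full.
#[derive(Clone, Copy, Debug, PartialEq)]
enum OverflowPolicy {
    /// Refuse the new packet; the caller must wait (backpressure).
    Backpressure,
    /// Evict the oldest packet to make room for the new one.
    DropFront,
}

/// Illustrative bounded send buffer; not the srt-rs implementation.
struct SendBuffer {
    packets: VecDeque<Vec<u8>>,
    max_packets: usize,
    policy: OverflowPolicy,
}

impl SendBuffer {
    /// Returns false if the packet was rejected due to backpressure.
    fn push(&mut self, packet: Vec<u8>) -> bool {
        if self.packets.len() >= self.max_packets {
            match self.policy {
                OverflowPolicy::Backpressure => return false,
                OverflowPolicy::DropFront => {
                    // Drop the oldest packet on the floor.
                    self.packets.pop_front();
                }
            }
        }
        self.packets.push_back(packet);
        true
    }
}
```

With `DropFront` the writer never blocks but old data is lost; with `Backpressure` nothing is lost but the writer must be able to wait, which is the part that has to surface through the tokio interface.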

Like the receiver buffer, we need to have a configurable limit of how many packets (or bytes?) to have in the send buffer before dropping them on the floor.

Hey, I've just been observing the project, so feel free to correct me if I'm wrong. But wouldn't this be decided by the SRT latency buffer? From the IETF RFC.

The SRT sender and receiver have buffers to store packets.

On the sender, latency is the time that SRT holds a packet to give it
a chance to be delivered successfully while maintaining the rate of
the sender at the receiver. If an acknowledgment (ACK) is missing or
late for more than the configured latency, the packet is dropped from
the sender buffer. A packet can be retransmitted as long as it
remains in the buffer for the duration of the latency window. On the
receiver, packets are delivered to an application from a buffer after
the latency interval has passed. This helps to recover from
potential packet losses. See Section 4.5, Section 4.6 for details.

Latency is a value, in milliseconds, that can cover the time to
transmit hundreds or even thousands of packets at high bitrate.
Latency can be thought of as a window that slides over time, during
which a number of activities take place, such as the reporting of
acknowledged packets (ACKs) (Section 4.8.1) and unacknowledged
packets (NAKs) (Section 4.8.2).

Latency is configured through the exchange of capabilities during the
extended handshake process between initiator and responder. The
Handshake Extension Message (Section 3.2.1.1) has TSBPD delay
information, in milliseconds, from the SRT receiver and sender. The
latency for a connection will be established as the maximum value of
latencies proposed by the initiator and responder.
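The sender-side rule quoted above (drop a packet once it has sat in the buffer longer than the latency window) could be sketched roughly like this. This is illustrative only, assuming a simple FIFO of timestamped packets; it is not srt-rs code:

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Hypothetical sender-buffer entry: when it was queued, plus its payload.
struct TimedPacket {
    sent: Instant,
    data: Vec<u8>,
}

/// Drop packets older than the latency window from the front of the
/// buffer, per the RFC's "too-late packet drop" description.
/// Returns how many packets were dropped.
fn drop_too_late(buf: &mut VecDeque<TimedPacket>, latency: Duration, now: Instant) -> usize {
    let mut dropped = 0;
    while let Some(front) = buf.front() {
        if now.duration_since(front.sent) > latency {
            buf.pop_front();
            dropped += 1;
        } else {
            // Buffer is ordered by send time, so the rest are still fresh.
            break;
        }
    }
    dropped
}
```

Note that this bounds how *long* a packet may live in the buffer, not how *many* packets (or bytes) the buffer may hold, which is the distinction the next comment draws.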


No, this is not a time limit; this is a size limit. The size limit combined with the time limit would indirectly serve to constrain max stream bandwidth. There could be a better way of enforcing this, though, and the buffer limit could even be derived from the LiveBandwidthMode settings, I suppose...
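The "size limit plus time limit bounds bandwidth" point can be made with back-of-the-envelope arithmetic: if the buffer holds at most N packets of P bytes each, and no packet lives longer than the latency window L, then sustained throughput can't exceed roughly N·P·8/L bits per second. A hypothetical helper (names are illustrative, not srt-rs API):

```rust
/// Rough upper bound on sustained throughput implied by a packet-count
/// limit combined with a latency (time) limit. Integer math, so the
/// result is floored; purely illustrative.
fn implied_max_bandwidth_bps(max_packets: u64, payload_bytes: u64, latency_ms: u64) -> u64 {
    // (bytes per latency window) * 8 bits, scaled from ms to seconds.
    max_packets * payload_bytes * 8 * 1000 / latency_ms
}
```

For example, 100 packets of 1000 bytes held at most 100 ms implies at most about 8 Mbps, which hints at how a buffer limit could be derived from a target bandwidth setting rather than configured independently.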

If we provide a send path that doesn't apply backpressure, it would need an interface other than Sink, as Sink is expected to support backpressure.

However, I think that's totally reasonable. From what I remember, the tokio UDP sockets have:

- async send
- async ready_to_send
- (non-async) try_send

We could have a similar interface, but the difficulty is that there's already a send method on Sink that waits until it's flushed, so we need to be careful that the non-flushing send is distinctly marked and the user doesn't accidentally use the flushing send, which is incredibly inefficient.
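A sketch of what a distinctly-marked, non-flushing, non-async send could look like, loosely mirroring the try_send shape mentioned above. `SendQueue`, `try_send`, and `BufferFull` are hypothetical names for illustration, not the srt-rs (or tokio) API:

```rust
use std::collections::VecDeque;

/// Hypothetical error returned when the bounded send buffer is full,
/// instead of blocking the caller.
#[derive(Debug, PartialEq)]
struct BufferFull;

/// Illustrative bounded queue behind a non-flushing send.
struct SendQueue {
    queue: VecDeque<Vec<u8>>,
    capacity: usize,
}

impl SendQueue {
    fn new(capacity: usize) -> Self {
        SendQueue { queue: VecDeque::new(), capacity }
    }

    /// Enqueue without flushing and without waiting: errors when full,
    /// so it cannot be confused with the flushing Sink::send.
    fn try_send(&mut self, data: &[u8]) -> Result<(), BufferFull> {
        if self.queue.len() >= self.capacity {
            return Err(BufferFull);
        }
        self.queue.push_back(data.to_vec());
        Ok(())
    }
}
```

Returning a `Result` makes the "buffer full" case explicit at the call site, which is one way to keep the cheap enqueue path visibly distinct from a flushing send.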