datarhei/gosrt

UnmarshalURL

Closed this issue · 10 comments

Hi,

it's more a question than an issue, but this can turn to an issue depending on the answer.

I have implemented an SRT server using gosrt.
It handles connect/publish/subscribe/... and acts as a listener.
It's based on the examples from the repo.

On the caller side I use ffmpeg, which pushes an SRT stream using an output URL like this:
srt://ip:port/?streamid=publish:app/[streamid']?auth=&latency=2000&recv_buffer_size=2097152&sndbuf=2097152

My question is whether gosrt internally parses the URL parameters and applies these values to the connection config.
I found that this job seems to be done by UnmarshalURL, but I can't see UnmarshalURL being called anywhere.

Could you please help me understand how gosrt, as a listener, applies the parameters passed by the caller, if that's done at all?

Thanks a lot!

Because you are using ffmpeg on the caller side, ffmpeg is doing the parsing of the URL. The server using gosrt will never see this URL.

When using ffmpeg you have to stick to the syntax that ffmpeg requires. This command will show you all the options: ffmpeg -h protocol=srt. With ffmpeg you can either use the URL-like syntax, in which case libsrt parses the URL, or use the ffmpeg options, in which case ffmpeg sets these options directly via the libsrt API.
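
On the gosrt side, the listener's own options therefore come from the Config you pass to srt.Listen in your code, not from anything in the caller's URL. (As far as I can see, UnmarshalURL is only a helper for filling a Config from such a URL yourself; nothing in the listener calls it automatically.) A minimal sketch, assuming DefaultConfig() and a ReceiverLatency field as in the current Config; check config.go of your version for the exact names:

    package main

    import (
        "log"
        "time"

        srt "github.com/datarhei/gosrt"
    )

    func main() {
        // The listener's options are set here in code; gosrt never parses the
        // caller's URL. DefaultConfig() and the ReceiverLatency field are
        // assumptions about the current gosrt Config.
        config := srt.DefaultConfig()
        config.ReceiverLatency = 500 * time.Millisecond

        ln, err := srt.Listen("srt", ":6001", config)
        if err != nil {
            log.Fatal(err)
        }
        defer ln.Close()

        // ... accept and handle connections as in the contrib server example.
    }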

Thanks for your quick answer!
So, for instance, using -latency N or &latency=N on the FFmpeg side will have the same effect, which is FFmpeg sending this value to the listener via the SRT protocol, right?

This is correct

Wonderful, could you please point me to the right gosrt source code file (and maybe line) where I can monitor the values sent by the caller? Thanks!

Is it for instance:

            if request.handshake.Version == 5 {
                    // The send TSBPD delay announced by the peer becomes our
                    // receive delay if it is larger than the local value.
                    if request.handshake.SRTHS.SendTSBPDDelay > recvTsbpdDelay {
                            recvTsbpdDelay = request.handshake.SRTHS.SendTSBPDDelay
                    }

                    // Likewise for the opposite direction.
                    if request.handshake.SRTHS.RecvTSBPDDelay > sendTsbpdDelay {
                            sendTsbpdDelay = request.handshake.SRTHS.RecvTSBPDDelay
                    }

                    // The stream ID announced in the handshake is copied into
                    // the connection's config.
                    request.config.StreamId = request.handshake.StreamId
            }
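
For my own debugging, instead of patching gosrt I guess I could also log what the caller announces at accept time. A rough sketch based on the contrib server example, assuming the accept callback's ConnRequest exposes RemoteAddr() and StreamId():

    package main

    import (
        "log"

        srt "github.com/datarhei/gosrt"
    )

    func main() {
        // Assumes the Listen/Accept API from the contrib server example.
        ln, err := srt.Listen("srt", ":6001", srt.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }
        defer ln.Close()

        for {
            conn, mode, err := ln.Accept(func(req srt.ConnRequest) srt.ConnType {
                // Print what the caller announced during the handshake.
                log.Printf("caller %s announced streamid %q", req.RemoteAddr(), req.StreamId())
                return srt.SUBSCRIBE
            })
            if err != nil {
                break // listener was closed
            }
            if mode == srt.REJECT {
                continue
            }

            _ = conn // hand the connection off to publish/subscribe handling here
        }
    }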

Edit: adding a print in the handshake code quoted above, I can indeed see:

--- handshake ---
version: 0x00000005
encryptionField: 0x0000
extensionField: 0x0005
initialPacketSequenceNumber: 0x60a96cc9
maxTransmissionUnitSize: 0x000005dc (1500)
maxFlowWindowSize: 0x00002000 (8192)
handshakeType: 0xffffffff (CONCLUSION)
srtSocketId: 0x177276a9
synCookie: 0x11c9cd7d
peerIP: ::
--- HSExt ---
srtVersion: 0x00010500
srtFlags:
TSBPDSND : true
TSBPDRCV : true
CRYPT : true
TLPKTDROP : true
PERIODICNAK : true
REXMITFLG : true
STREAM : false
PACKET_FILTER: true
recvTSBPDDelay: 0x0078 (120ms)
sendTSBPDDelay: 0x0000 (0ms)
--- /HSExt ---
--- SIDExt ---
streamId : xxxx
--- /SIDExt ---
--- /handshake ---

I'm sorry for all these questions; the root cause is that whatever latency I choose, I get a lot of packet loss when pushing from FFmpeg to gosrt.
The network conditions are very good, so I wonder what I'm missing.

Thanks again for your support

Keep in mind that with ffmpeg you have to specify the latency in microseconds. If you want to have a latency of 500ms, you have to use -latency 500000.

Yes, I am doing that.

SRT is based on UDP. Check that there are no firewalls or anything else between the sender and receiver that deprioritizes UDP packets.

I can send from my Mac to a Raspberry Pi running the contributed SRT server over WiFi on the local network without packet loss.

OK, thanks for these recommendations! On the FFmpeg side, is a high -latency the best thing to try in order to reduce the losses?

Yes, to avoid losses you have to increase the latency. This increases the buffer on the server side so that there is a bigger time window to wait for late or retransmitted packets.
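
To put the units side by side (the gosrt field name below is an assumption on my part; the negotiation itself happens in the handshake code you quoted above):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Target latency window of 500 ms.
        latency := 500 * time.Millisecond

        // ffmpeg's -latency option takes microseconds:
        fmt.Printf("ffmpeg: -latency %d\n", latency.Microseconds()) // prints 500000

        // On the gosrt listener the equivalent would be something like
        //   config.ReceiverLatency = latency
        // (field name assumed). The effective delay is still negotiated in the
        // handshake, where the larger of the two announced values wins.
    }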