Several VHOST-related problems on cluster setup
Hello everyone,
Let's see if someone can help me a little bit.
I'm trying to test the edge cluster configuration in backup mode, with originA, originB, and one edge server.
The idea was to send a continuous stream to originB (a placeholder) and, when the actual stream starts, send it to originA and cut the stream to originB.
But I'm facing some problems. One is this strange error about not being able to find the coworker when originB stops streaming; the edge keeps showing the placeholder instead of the new stream until I manually disconnect and reconnect the client from the edge server:
[2025-09-10 18:26:58.935][ERROR][115651][78tl8f4u][111] serve error code=1018(StConnect)(ST connect server failed) : service cycle : rtmp: stream service : discover coworkers, url=http://127.0.0.1:7091/api/v1/clusters?vhost=__defaultVhost__&ip=127.0.0.1&app=live&stream=livestream&coworker=127.0.0.1:7091 : http: post http://127.0.0.1:7091/api/v1/clusters?vhost=__defaultVhost__&ip=127.0.0.1&app=live&stream=livestream&coworker=127.0.0.1:7091, status=1730333424, res= : http: client post : http: connect server : http: tcp connect http 127.0.0.1:7091 to=30000ms, rto=30000ms : tcp: connect 127.0.0.1:7091 to=30000ms : connect to 127.0.0.1:7091
thread [115651][78tl8f4u]: do_cycle() [./src/app/srs_app_rtmp_conn.cpp:262][errno=111]
thread [115651][78tl8f4u]: service_cycle() [./src/app/srs_app_rtmp_conn.cpp:456][errno=111]
thread [115651][78tl8f4u]: playing() [./src/app/srs_app_rtmp_conn.cpp:730][errno=111]
thread [115651][78tl8f4u]: discover_co_workers() [./src/app/srs_app_http_hooks.cpp:490][errno=111]
thread [115651][78tl8f4u]: do_post() [./src/app/srs_app_http_hooks.cpp:627][errno=111]
thread [115651][78tl8f4u]: post() [./src/protocol/srs_protocol_http_client.cpp:328][errno=111]
thread [115651][78tl8f4u]: connect() [./src/protocol/srs_protocol_http_client.cpp:453][errno=111]
thread [115651][78tl8f4u]: connect() [./src/protocol/srs_protocol_st.cpp:697][errno=111]
thread [115651][78tl8f4u]: srs_tcp_connect() [./src/protocol/srs_protocol_st.cpp:217][errno=111](Connection refused)
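For reference, errno=111 is a plain TCP "connection refused" on 127.0.0.1:7091, so nothing was listening on the coworker's HTTP API port at that moment. The URL from the log can be probed directly to reproduce it (a sketch, assuming everything runs on the same host):

```bash
# Probe the coworker API exactly as SRS builds the URL in the error above.
# "Connection refused" here reproduces the errno=111 from the log.
curl -v "http://127.0.0.1:7091/api/v1/clusters?vhost=__defaultVhost__&ip=127.0.0.1&app=live&stream=livestream&coworker=127.0.0.1:7091"

# Sanity check that originA's HTTP API (configured on port 7091) is up at all.
curl "http://127.0.0.1:7091/api/v1/versions"
```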
Another problem I'm facing is with the vhosts. I need to have two vhosts, input.server and output.server, each one with different streams.
I send the placeholder stream via RTMP to rtmp://input.server:19351/live/livestream and the 'good one' to rtmp://input.server:19350/live/livestream.
When the 'good one' is received, I start some processing on it via a callback and send the result to rtmp://output.server:19350/live/livestream.
But when I try to play the output stream, I can only see the input one.
I know the stream is being received because the 'capture' JPG is created for both vhosts.
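In case it matters, the vhost can also be forced explicitly in the URLs as a query parameter, so the publish/play requests cannot silently fall back to __defaultVhost__ or hit the wrong vhost. This is only a sketch: result.mp4 is a placeholder input, and the ports are the ones from the description above.

```bash
# Publish the processed result explicitly to the output.server vhost on originA (port 19350).
ffmpeg -re -i result.mp4 -c copy -f flv "rtmp://127.0.0.1:19350/live?vhost=output.server/livestream"

# Play it back naming the vhost explicitly as well, directly from originA,
# so it cannot be confused with the input.server stream of the same name.
ffplay "rtmp://127.0.0.1:19350/live?vhost=output.server/livestream"
```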
This is my edge config:
`# Global Config
listen 1935;
max_connections 1000;
pid ./objs/edge.pid;
daemon off;
srs_log_tank console;
http_api {
enabled on;
listen 1985;
}
http_server {
enabled on;
listen 7080;
dir ./objs/nginx/html;
}
# Default VHOST config
vhost __defaultVhost__ {
cluster {
mode remote;
origin 127.0.0.1:19351;
debug_srs_upnode on;
}
# FLV Config
http_remux {
enabled on;
mount [vhost]/[app]/[stream].flv;
}
# Transcode Config (snapshot)
transcode {
enabled on;
ffmpeg ./objs/ffmpeg/bin/ffmpeg;
engine snapshot {
enabled on;
iformat flv;
vfilter {
vf fps=0.1;
}
vcodec png;
vparams {
vframes 1;
}
acodec an;
oformat image2;
output /mnt/images/[app]/orig-[stream].jpg;
}
}
}`
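For what it's worth, the edge's HTTP API (port 1985 in the config above) can be asked directly which streams and vhosts it is currently serving, which may help pin down why only the input stream shows up:

```bash
# List the streams the edge is serving, including which vhost each belongs to.
curl "http://127.0.0.1:1985/api/v1/streams" | python3 -m json.tool
```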
This is the OrigA config:
`listen 19350;
max_connections 1000;
daemon off;
srs_log_tank console;
pid ./objs/origin.cluster.serverA.pid;
http_api {
enabled on;
listen 7091;
}
# Default VHOST config
vhost __defaultVhost__ {
cluster {
mode local;
origin_cluster on;
coworkers 127.0.0.1:7092;
}
}
# Input vhost
vhost input.server {
cluster {
mode local;
origin_cluster on;
coworkers 127.0.0.1:7092;
}
# HTTP hooks
http_hooks {
enabled on;
on_publish http://127.0.0.1:4321/api/v1/srs/streams http://10.0.0.2:5000/stop;
on_unpublish http://127.0.0.1:4321/api/v1/srs/streams http://10.0.0.2:5000/start;
on_play http://127.0.0.1:4321/api/v1/srs/sessions;
on_stop http://127.0.0.1:4321/api/v1/srs/sessions;
}
# Global Low latency Configs
tcp_nodelay on;
min_latency on;
# GOP cache configuration (disabled for low latency)
gop_cache off;
queue_length 5;
# Publishing configuration
publish {
mr off;
mr_latency 100;
firstpkt_timeout 10000;
normal_timeout 3000;
}
# Playback settings (valid options only)
play {
gop_cache off;
queue_length 3;
mw_latency 50;
}
# WebRTC (optimized)
rtc {
enabled off;
rtmp_to_rtc off;
rtc_to_rtmp off;
bframe discard;
}
}
# Output VHOST
vhost output.server {
cluster {
mode local;
origin_cluster on;
coworkers 127.0.0.1:7092;
}
# Global low-latency settings
tcp_nodelay on;
min_latency on;
# GOP cache configuration (disabled for low latency)
gop_cache off;
queue_length 5;
# Publishing configuration
publish {
mr off;
mr_latency 100;
firstpkt_timeout 10000;
normal_timeout 3000;
}
# Playback settings (valid options only)
play {
gop_cache off;
queue_length 3;
mw_latency 50;
}
# WebRTC (optimized)
rtc {
enabled off;
rtmp_to_rtc off;
rtc_to_rtmp off;
bframe discard;
}
}`
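Besides the snapshot JPGs, originA's HTTP API on port 7091 also shows which vhost actually received each publish (ports and vhost names taken from the config above):

```bash
# Active streams on originA; each entry includes the vhost it belongs to.
curl "http://127.0.0.1:7091/api/v1/streams"

# The configured vhosts (__defaultVhost__, input.server, output.server) and their ids.
curl "http://127.0.0.1:7091/api/v1/vhosts"
```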
And this is the OrigB config:
`# Global config
listen 19351;
max_connections 1000;
daemon off;
srs_log_tank console;
pid ./objs/origin.cluster.serverB.pid;
http_api {
enabled on;
listen 7092;
}
# Default VHOST
vhost __defaultVhost__ {
cluster {
mode local;
origin_cluster on;
coworkers 127.0.0.1:7091;
}
}
# Input VHOST
vhost input.server {
cluster {
mode local;
origin_cluster on;
coworkers 127.0.0.1:7091;
}
http_hooks {
enabled on;
on_publish http://127.0.0.1:4321/api/v1/srs/streams http://10.0.0.2:5000/start;
on_unpublish http://127.0.0.1:4321/api/v1/srs/streams http://10.0.0.2:5000/stop;
on_play http://127.0.0.1:4321/api/v1/srs/sessions;
on_stop http://127.0.0.1:4321/api/v1/srs/sessions;
}
# Global low-latency settings
tcp_nodelay on;
min_latency on;
# GOP cache configuration (disabled for low latency)
gop_cache off;
queue_length 5;
# Publishing configuration
publish {
mr off;
mr_latency 100;
firstpkt_timeout 10000;
normal_timeout 3000;
}
# Playback configuration (valid options only)
play {
gop_cache off;
queue_length 3;
mw_latency 50;
}
# WebRTC (optimized)
rtc {
enabled off;
rtmp_to_rtc off;
rtc_to_rtmp off;
bframe discard;
}
}
# Output VHOST
vhost output.server {
cluster {
mode local;
origin_cluster on;
coworkers 127.0.0.1:7091;
}
# Global low-latency settings
tcp_nodelay on;
min_latency on;
# GOP cache settings (disabled for low latency)
gop_cache off;
queue_length 5;
# Publishing configuration
publish {
mr off;
mr_latency 100;
firstpkt_timeout 10000;
normal_timeout 3000;
}
# Playback settings (valid options only)
play {
gop_cache off;
queue_length 3;
mw_latency 50;
}
# WebRTC (optimized)
rtc {
enabled off;
rtmp_to_rtc off;
rtc_to_rtmp off;
bframe discard;
}
}`
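The hooks are also easy to check by hand: SRS only accepts a publish/play when the on_publish/on_play handler answers HTTP 200 with 0 (or {"code":0}); anything else rejects the client. The call below is only a reachability sketch with a dummy body, not the real callback payload SRS sends:

```bash
# Manually hit the hook endpoint both origins point at; it should respond 0 / {"code":0}.
curl -v -X POST -H "Content-Type: application/json" -d '{}' "http://127.0.0.1:4321/api/v1/srs/streams"
```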
Hope someone can help me a little bit.
Hi, please use the new proxy-based Origin Cluster instead of the legacy MESH-based cluster.
The legacy origin cluster (with `origin_cluster on` and `coworkers`) has been deprecated since SRS 7.0. The new Origin Cluster works with SRS 5+ and supports all protocols (RTMP/WebRTC/SRT/HLS/HTTP-FLV).
Documentation: https://ossrs.io/lts/en-us/docs/v7/doc/origin-cluster
Proxy server: https://github.com/ossrs/proxy-go
The new architecture is simpler, more scalable, and better maintained.