Please refer to the attached for the scenario I described above.
[Attachment: scenario diagram, 46 KB]
> Please refer to the attached for the scenario I described above.

To play the stream from an instance behind the LB, or to republish this stream to FB, you should in any case detect the direct instance IP using the backend, as we recommended above.
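For illustration, a minimal sketch of this backend lookup, assuming the instance list is known (in practice you would fetch it from the AWS API) and that each WCS instance exposes a REST endpoint for finding a published stream; the port 8081 and the /rest-api/stream/find path are assumptions to verify against the REST API docs of your WCS version.

```python
import requests

# Hypothetical list of the WCS instances currently in the auto-scaling
# group; in practice you would fetch this from the AWS API.
WCS_INSTANCES = ["10.0.1.10", "10.0.1.11"]

def find_publisher_instance(stream_name: str):
    """Return the direct IP of the instance the stream was published to."""
    for ip in WCS_INSTANCES:
        try:
            # Assumed WCS REST endpoint and port; verify against the
            # REST API docs of your WCS version.
            resp = requests.post(
                f"http://{ip}:8081/rest-api/stream/find",
                json={"name": stream_name},
                timeout=2,
            )
        except requests.RequestException:
            continue  # instance may be scaling down; try the next one
        # Assumed to answer 200 with a non-empty list when the stream
        # is published on this instance.
        if resp.ok and resp.json():
            return ip
    return None

# A player or the republishing ffmpeg job then connects to this IP
# directly, bypassing the load balancer.
print(find_publisher_instance("stream1"))
```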
Could you explain the above using a similar diagram?
> FYI, the reason we want to put an auto-scaling group of instances behind the LB is that ffmpeg (used to edit and then forward streams to FB etc.) uses a lot of CPU when launched. Due to the unpredictable usage of our service by our customers (at times there may be many concurrent streams running in parallel from different sources/customers, at other times none), we created an auto-scaling group to spin up more instances when needed and scale them down when usage is low.

Please clarify: how many publishers do you plan to handle simultaneously? If the bottleneck is the ffmpeg server, it may be easier to deploy one WCS server powerful enough to handle all the streamers (for example, c5.4xlarge); in this case a load balancer is not needed.
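For reference, the scale-on-CPU behaviour described in the quote maps onto an AWS target-tracking scaling policy; a minimal boto3 sketch, assuming a hypothetical group name wcs-origin-asg:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group name. Scale on average CPU so that new instances
# are added when running ffmpeg jobs drive CPU up, and removed when idle.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="wcs-origin-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Keep average CPU around 60%; tune to your ffmpeg load.
        "TargetValue": 60.0,
    },
)
```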
> However, as there is a long silent period, especially from 12am to 12pm (clients typically use our service from 12pm onwards), it does not make sense to deploy a big instance from the get-go.

In this case, use the scheme above: Origins are behind the LB, and one Edge is permanent (and powerful enough to collect all the streams, c5.4xlarge for example). You can also place the Edges behind a separate LB (not the same as the publishing LB) if you play the streams only as RTMP, because ffmpeg uses one port to connect.
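To make the "play only as RTMP" remark concrete, a sketch of such a republishing job driven from Python; the Edge IP, the live application name, the default RTMP port 1935, and the Facebook ingest URL/stream key are placeholders to replace with your own values.

```python
import subprocess

EDGE_IP = "10.0.2.20"     # permanent Edge (placeholder)
STREAM = "stream1"        # placeholder stream name
FB_KEY = "FB-STREAM-KEY"  # placeholder Facebook stream key

# Pull the stream from the Edge over RTMP (a single port, so an Edge LB
# can route it) and re-encode/push it to Facebook Live. The transcode
# (-c:v libx264) is what makes these jobs CPU-heavy.
subprocess.run([
    "ffmpeg",
    "-i", f"rtmp://{EDGE_IP}:1935/live/{STREAM}",
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "2500k",
    "-c:a", "aac", "-ar", "44100",
    "-f", "flv", f"rtmps://live-api-s.facebook.com:443/rtmp/{FB_KEY}",
], check=True)
```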
> Do I need a high-CPU instance as the Edge, since it will just serve as a WebRTC server to collect all the streams?

This depends on the published stream count: you will have as many subscribers on this server as streams published. Please read this article about one-to-one streaming testing. A server with 24 CPU cores and 80 GB RAM is enough to handle 200 720p 2500 kbps streams in this case.
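As a quick back-of-envelope check on that sizing (my arithmetic, not a figure from the testing article):

```python
# 200 streams at 2500 kbps means roughly 500 Mbit/s in (from the
# Origins) and 500 Mbit/s out (to the ffmpeg subscribers), which a
# 10 Gbit-class instance such as c5.4xlarge handles comfortably.
streams = 200
bitrate_kbps = 2500
mbps_per_direction = streams * bitrate_kbps / 1000
print(f"{mbps_per_direction:.0f} Mbit/s in and out")  # 500 Mbit/s
```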