Launching Flashphoner server in AWS Auto Scaling Group

Max

Administrator
Staff member
Please refer to the attached diagram for the scenario I described above.
To play the stream from an instance behind the LB, or to republish this stream to FB, you should in any case detect the direct instance IP using your backend, as we recommended above.
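For illustration, here is a minimal backend sketch of that IP detection, assuming the WCS instances run on EC2 with IMDSv1 metadata access enabled; the port and the endpoint itself are hypothetical, wire it into your own backend however you prefer. It simply returns the instance's public IP so the client can connect to that instance directly instead of through the LB.
```
# Minimal sketch: return this EC2 instance's public IP to the client.
# Assumes IMDSv1 metadata is reachable (IMDSv2-only instances need a token).
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

METADATA_URL = "http://169.254.169.254/latest/meta-data/public-ipv4"

def public_ip() -> str:
    # Works only from inside the EC2 instance itself
    with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
        return resp.read().decode()

class IpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(public_ip().encode())

if __name__ == "__main__":
    # Hypothetical port; expose it to the frontend as part of your backend
    HTTPServer(("0.0.0.0", 8081), IpHandler).serve_forever()
```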
Or, you may set up a CDN:
- place Origin servers behind load balancer 1 (to publish streams)
- place Edge servers behind load balancer 2 (to play streams via WebRTC or RTMP)
In this case, it does not matter which server the publisher or player connects to. But if you want to republish a stream to FB, you should arrange this using ffmpeg or some other third-party tool.
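As a rough sketch of that republishing step, assuming ffmpeg is installed on the republishing host; the Edge address, stream name and Facebook stream key below are placeholders, use the ingest URL and key Facebook shows in Live Producer:
```
# Pull one stream from an Edge server over RTMP and push it to Facebook Live.
# "-c copy" remuxes without re-encoding; add encoding options only if you
# actually need to edit/transcode the stream.
import subprocess

EDGE_RTMP_URL = "rtmp://edge.example.com:1935/live/stream1"               # placeholder
FB_INGEST_URL = "rtmps://live-api-s.facebook.com:443/rtmp/FB_STREAM_KEY"  # placeholder key

subprocess.run([
    "ffmpeg",
    "-i", EDGE_RTMP_URL,   # input: stream played from the Edge via RTMP
    "-c", "copy",          # no transcoding, just remux
    "-f", "flv",           # Facebook ingest expects FLV over RTMP(S)
    FB_INGEST_URL,
], check=True)
```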
 

jasonkc

Member
Could you explain the above using a similar diagram? FYI, the reason we want to put an auto-scaling group of instances behind an LB is that ffmpeg (used to edit and then forward streams to FB etc.) uses up a lot of CPU resources when launched. Due to the unpredictable usage of our services by our customers (at times there could be many concurrent streams running in parallel from different sources/customers, at other times there may not be any), we created an auto-scaling group to spin up more instances when needed and scale them down when usage is low.
 

Max

Administrator
Staff member
Could you explain the above using a similar diagram?
1631515413292.png

If you want to collect all the streams on one ffmpeg server before republishing to FB, you should place Origin servers behind the load balancer and pull the streams via RTMP from one Edge server powerful enough to serve all the published streams.
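For example, a collector on the ffmpeg server could look like the sketch below, assuming your backend supplies the list of published stream names; the Edge address and the output targets are placeholders. It pulls every stream from the single Edge over RTMP, one ffmpeg process per stream:
```
# Pull all published streams from one Edge server over RTMP,
# one ffmpeg process per stream.
import subprocess

EDGE = "rtmp://edge.example.com:1935/live"    # placeholder Edge address
stream_names = ["stream1", "stream2"]          # supplied by your backend

procs = [
    subprocess.Popen([
        "ffmpeg",
        "-i", f"{EDGE}/{name}",                # pull the stream via RTMP
        "-c", "copy",
        "-f", "flv",
        f"/tmp/{name}.flv",                    # or a republish target URL
    ])
    for name in stream_names
]
for p in procs:
    p.wait()
```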
FYI, the reason we want to put an auto-scaling group of instances behind an LB is that ffmpeg (used to edit and then forward streams to FB etc.) uses up a lot of CPU resources when launched. Due to the unpredictable usage of our services by our customers (at times there could be many concurrent streams running in parallel from different sources/customers, at other times there may not be any), we created an auto-scaling group to spin up more instances when needed and scale them down when usage is low.
Please clarify: how many publishers do you plan to handle simultaneously? If the bottleneck is the ffmpeg server, it may be easier to deploy one WCS server powerful enough to handle all the streamers (for example, c5.4xlarge); in this case a load balancer is not needed.
 

jasonkc

Member
Yes, the bottleneck is indeed ffmpeg, as it uses up a lot of CPU resources, especially when there are many concurrent sessions running (we offer SaaS that lets our clients live stream to multiple social platforms). However, as there is a long silent period, especially from 12 am to 12 pm (clients typically use our service from 12 pm onwards), it does not make sense to deploy a big instance from the get-go. That's the reason we plan to deploy an auto-scaling group behind an LB, so that instances are created according to demand.
 

Max

Administrator
Staff member
However, as there is a long silent period, especially from 12 am to 12 pm (clients typically use our service from 12 pm onwards), it does not make sense to deploy a big instance from the get-go.
In this case, use the scheme above: Origins are behind the LB, and one Edge is permanent (and powerful enough to collect all the streams, c5.4xlarge for example). You can also place Edges behind a separate LB (not the same as the publishing LB) if you play streams only via RTMP, because ffmpeg uses a single port to connect.
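If the Origin group behind the LB is an AWS Auto Scaling group, a target-tracking policy on CPU handles the spin-up and scale-down you described. A boto3 sketch, where the group name and target value are assumptions to adapt to your setup:
```
# Attach a target-tracking scaling policy to an existing Auto Scaling group
# so Origin instances are added when average CPU rises (ffmpeg load) and
# removed again when the group is idle. Assumes boto3 credentials are set up.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="wcs-origin-asg",      # hypothetical group name
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,   # keep average CPU around 60%
    },
)
```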
 

Max

Administrator
Staff member
Do I need a high-CPU instance as the Edge, since it will just serve as a WebRTC server to collect all streams?
This depends on the number of published streams: the server will have as many subscribers as there are published streams. Please read this article about one-to-one streaming testing. A server with 24 CPU cores and 80 GB RAM is enough to handle 200 streams at 720p 2500 kbps in this case.
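As a back-of-the-envelope check of that figure, assuming one subscriber per published stream as in the one-to-one test:
```
# Rough traffic estimate for the permanent Edge at the quoted load.
streams = 200
bitrate_kbps = 2500

ingress_mbps = streams * bitrate_kbps / 1000
egress_mbps = ingress_mbps      # each stream is pulled once (e.g. by ffmpeg)

print(f"ingress: {ingress_mbps:.0f} Mbps, egress: {egress_mbps:.0f} Mbps")
# -> ingress: 500 Mbps, egress: 500 Mbps
```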
 