HLS output different resolution than WebRTC input

Jet-Stream

New Member
Hi,

we are currently testing the Flashphoner server with a trial license, but we are seeing some behavior that seems strange to us. So I was wondering if there are limitations when using the trial license vs. the official license.

We are trying to ingest a WebRTC stream with a resolution of 1280x720 (HD ready). We have changed the 'streamer' demo page so that it should produce an HD ready WebRTC stream. The preview looks like 16:9, which seems OK; I cannot verify that the resolution is actually HD because the preview windows are small. We are not using the Flashphoner player due to specific requirements, so we are currently testing by entering the HLS URL directly in Safari on the iPhone.

But when we watch it back on the iPhone, the resolution is 640x480. That is smaller than the ingest, and the aspect ratio is now 4:3 instead of 16:9. We would expect the HLS output to also be 1280x720, but it is not. We tried: https://forum.flashphoner.com/threads/choose-resolution-on-player.10816/#post-11857 . Is this caused by the fact that the trial license adds an overlay image and audio?

Secondly, I do not see any way to specify the streaming bandwidth. Normally this can be set in the SDP so that the browser stays at the bitrate we selected. We would like a WebRTC ingest of 1 Mbps, but I can't see how to enforce this. I know it is possible by altering the SDP.

And a third issue is that we need to publish a canvas stream WITH audio through the Flashphoner server. We have tried https://github.com/flashphoner/flashphoner_client/issues/9 but that only passes through the canvas video, and it still asks for / selects a microphone in the browser. We have tried to create a new MediaStream object with a video track from the canvas and then added the audio track:

Code:
// MediaStream carrying the canvas video track, captured at 30 FPS
var newMediaSource = canvas.captureStream(30);
// Add the audio track produced by the Web Audio mixer (a MediaStreamAudioDestinationNode)
newMediaSource.addTrack(audiomixer.destination.stream.getTracks()[0]);
So we have a MediaStream object, but passing that to the publish function of Flashphoner gives errors about duplicate audio values in the SDP. When we leave out the audio track it works, but with a microphone. However, we need to be able to add a custom audio track rather than the one from the mic. Is this possible, or will it be possible in the future?
We need this because we are developing a web-based audio and video mixer whose output is then sent through WebRTC to a media server that turns it into HLS. Therefore we need to be able to pass a custom MediaStream object to the publish function.
Or is there some low-level way to bypass this?

I hope you are able to answer the following questions:
  1. Is the HLS output altered due to trial license?
  2. Should the HLS output resolution be the same as the WebRTC input resolution?
  3. How can we specify the min and max bandwidth?
  4. Will the Web SDK support custom MediaStream objects?
Kind regards
Joshua
 

Max

Administrator
Staff member
Hello
Is the HLS output altered due to trial license?
No. The trial license may add a video or audio watermark.
It does not affect other features.
Should the HLS output resolution be the same as the WebRTC input resolution?
Yes. It should be the same.
By default, the HLS stream resolution is set to 640x480.
You have to set 0x0 to use the source resolution:
Code:
hls_player_width=0
hls_player_height=0
in WCS_HOME/conf/flashphoner.properties
How can we specify the min and max bandwidth?
You can change the SDP as described here:
https://docs.flashphoner.com/display/WCS5EN/Managing camera and microphone#Managingcameraandmicrophone-RisingupthebitrateofvideostreampublishedinChromebrowser
Example:
Code:
a=fmtp:$1 $2;x-google-max-bitrate=7000;x-google-min-bitrate=3000
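As an illustration, a minimal client-side sketch of such an SDP rewrite (the function name is just an example; in practice you would restrict the replacement to your video codec's payload type):
Code:
// Append Chrome bitrate hints (in kbps) to the fmtp lines of the offer SDP
// before it is handed to the peer connection.
function forceBitrate(sdp, minKbps, maxKbps) {
    return sdp.replace(/(a=fmtp:\d+ [^\r\n]+)/g,
        '$1;x-google-max-bitrate=' + maxKbps + ';x-google-min-bitrate=' + minKbps);
}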
Additionally, you can set the bitrate range on the server side:
Code:
webrtc_cc_min_bitrate=1000000
webrtc_cc_max_bitrate=10000000
in WCS_HOME/conf/flashphoner.properties
Will the Web SDK support custom MediaStream objects?
We have Web SDK 3.0 on our roadmap, which will support MediaStream and MediaSource objects directly.
For now we'll check if we can capture sound from canvas.
I will let you know once we have any news. Case WCS-1638.
 

Jet-Stream

New Member
Hi Max,

thanks for the fast answer. Unfortunately, it does not work as expected.

When putting
Code:
hls_player_width=0
hls_player_height=0
in the flashphoner.properties, the HLS output stops working; nothing comes out at all. So that change breaks the HLS output. We then forced it to 1280x720 with:
Code:
hls_player_width=1280
hls_player_height=720
And that is working. But I think this is a bug, because now we are forcing the resolution server-side. We are using the latest version of the Flashphoner software.

All other settings do not seem to be needed as it stands now. The TS chunks are 2 seconds and key frames also appear every 2 seconds, so the only change for us was the resolution. But we will try them and see what they do and whether we need them.

Bandwidth control we can fix by creating an SDP modification function that changes those values; we can build this in our own code.

Also, can I subscribe to get updates on case WCS-1638? And do you have an ETA for when this will be built and released? For us this is rather important, as we are creating an audio and video mixer in the browser.
 

Max

Administrator
Staff member
Hello
And can I subscribe to get updates of the case WCS-1638?
You can subscribe to this forum thread. We will post updates related to WCS-1638 right here.
And do you have some ETA when this will be made and released?
On the forum-based support level we can't provide any ETA.
However, if this feature is simple, there is a chance it will be implemented next week.
in the flashphoner.properties the HLS output is not working
We'll check it. Also try these other settings:
Code:
hls_player_width=0
hls_player_height=0
hls_auto_start=true
periodic_fir_request=true
All other settings do not seem to be needed as it stands now. The TS chunks are 2 seconds and key frames also appear every 2 seconds, so the only change for us was the resolution. But we will try them and see what they do and whether we need them.
You have 2-second chunks because of transcoding.
Transcoder setting:
Code:
video_encoder_h264_gop=60
So if you have a stable 30 FPS on the source stream, you get 60 / 30 = 2-second chunks.
If you disable transcoding, your chunks will be about 5 seconds according to the periodic_fir_request=true and periodic_fir_request_interval=5000 settings.
 

Jet-Stream

New Member
Hmm, not sure what I had tested before, but I got the stream working with a custom MediaStream including audio. This is what we do now:
Code:
var constraints = {
    audio: false,           // do not capture the microphone
    video: false,           // do not capture the camera
    customStream: source    // publish our own MediaStream instead
};
where the source stream is created like this:
Code:
// Capture the mixer's output canvas as a video MediaStream at the configured FPS
var newMediaSource = livestream.broadcast[0].captureStream(livestream.video.fps);
// Add the audio track from the Web Audio mixer's destination node
newMediaSource.addTrack(livestream.mixer.audio.destination.stream.getTracks()[0]);
So, magically, it is working for us :)
 

Max

Administrator
Staff member
Yes, this works for us too. The key change is in the constraints:
Code:
audio: false,
video: false
Both should be set to false to use a custom stream.
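For reference, a minimal end-to-end sketch along these lines. The server URL, stream name, element IDs and the mixCanvas/audioCtx names are placeholders for your own application; customStream is the constraint discussed in this thread:
Code:
// Sketch only: placeholder names, your own mixing logic goes where noted.
Flashphoner.init({});
var SESSION_STATUS = Flashphoner.constants.SESSION_STATUS;

var mixCanvas = document.getElementById("mixCanvas");        // canvas you draw the video mix on
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var audioMixer = audioCtx.createMediaStreamDestination();    // route your mixed audio into this node

// One MediaStream with the canvas video track plus the mixed audio track
var source = mixCanvas.captureStream(30);
source.addTrack(audioMixer.stream.getAudioTracks()[0]);

Flashphoner.createSession({urlServer: "wss://your-wcs-server:8443"})
    .on(SESSION_STATUS.ESTABLISHED, function (session) {
        session.createStream({
            name: "mixedStream",
            display: document.getElementById("localVideo"),
            constraints: {audio: false, video: false, customStream: source}
        }).publish();
    });
The important part is that audio and video stay false, so the SDK does not request the camera or microphone and publishes the customStream tracks instead.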
 