Jet-Stream
New Member
Hi,
We are currently testing Flashphoner server with a trial license, but we are seeing some behavior that looks strange to us. So I was wondering whether there are limitations when using the trial license vs the official license.
We are trying to ingest a WebRTC stream with a resolution of 1280x720 (HD ready). We have changed the 'streamer' demo page so that it should produce an HD-ready WebRTC stream. The preview looks 16:9, which seems right, but I cannot verify the actual resolution because the preview windows are small. We are not using the Flashphoner player due to special requirements, so we are currently testing by entering the HLS URL directly in Safari on an iPhone.
But when we then watch the stream on the iPhone, the resolution is 640x480. That is smaller than the ingest, and the aspect ratio is now 4:3 instead of 16:9. We would expect the HLS output to also be 1280x720, but it is not. We tried https://forum.flashphoner.com/threads/choose-resolution-on-player.10816/#post-11857 . Is this caused by the fact that the trial license adds an overlay image and audio?
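For reference, this is roughly how we request the 720p capture on the modified demo page. It is only a sketch: `session`, `localVideo` and `streamName` stand in for what the demo page already sets up, and we are assuming `createStream` passes the constraints through to getUserMedia.

```javascript
// Sketch: request a 720p capture explicitly via the constraints object.
// 'session', 'localVideo' and 'streamName' are assumed to already exist
// (as on the demo page); the guard lets the snippet run outside a browser too.
var constraints = {
    audio: true,
    video: { width: 1280, height: 720 }
};

if (typeof session !== 'undefined') {
    session.createStream({
        name: streamName,
        display: localVideo,
        constraints: constraints
    }).publish();
}
```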
Secondly, I do not see any way to specify the streaming bandwidth. Normally this can be set in the SDP so that the browser stays at the bitrate we selected. We would like a WebRTC ingest of 1 Mbps, but I cannot see how to enforce this. I know it is possible by altering the SDP.
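To illustrate, this is the kind of SDP munging we mean: a sketch that inserts a `b=AS` bandwidth line into the video m-section of an offer before it is sent. The helper itself is plain SDP string editing; whether the WebSDK exposes a hook point where this could be applied is exactly what we are asking.

```javascript
// Sketch: force a target video bitrate by inserting a "b=AS:<kbps>" line
// into the video m-section of the SDP (RFC 4566 places b= after the c= line).
// This assumes each m-section carries its own c= line, as typical WebRTC SDP does.
function setVideoBitrate(sdp, kbps) {
    let inVideo = false;
    const out = [];
    for (const line of sdp.split('\r\n')) {
        // Track which media section we are in.
        if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
        out.push(line);
        // Right after the video section's connection line, add the bandwidth line.
        if (inVideo && line.startsWith('c=')) out.push('b=AS:' + kbps);
    }
    return out.join('\r\n');
}
```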
And a third issue: we need to publish a canvas stream WITH audio through Flashphoner server. We have tried https://github.com/flashphoner/flashphoner_client/issues/9 , but that only passes through the canvas video, and the browser is still selecting/asking for a mic. We have tried to create a new MediaStream object with the video track from the canvas and then added the audio track:
Code:
var newMediaSource = canvas.captureStream(30); //new MediaStream();
newMediaSource.addTrack(audiomixer.destination.stream.getTracks()[0]);
So we have a MediaStream object, but passing it to Flashphoner's publish function gives errors: duplicate audio values in the SDP. When we leave out the audio track, it works with a mic, but we need to be able to add a custom audio track and not the one from the mic. Is this possible, or will this be possible in the future?
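For completeness, this is the pipeline we are aiming for, sketched with the Web Audio API. The mixer graph itself is elided; `mixBus` just stands in for its output, and `canvasEl` is an existing canvas element (browser-only APIs throughout).

```javascript
// Sketch of the intended pipeline (browser-only APIs). The real mixer graph
// would be connected into 'mixBus'; here it is left out for brevity.
function buildMixedStream(canvasEl, audioCtx) {
    // A MediaStreamAudioDestinationNode exposes a MediaStream that carries
    // whatever audio is routed into it.
    const mixBus = audioCtx.createMediaStreamDestination();

    // captureStream() yields a MediaStream with a single 30 fps video track.
    const stream = canvasEl.captureStream(30);

    // Attach the mixed audio track so the stream carries both media kinds.
    stream.addTrack(mixBus.stream.getAudioTracks()[0]);
    return stream;
}
```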
We need this because we are developing a web-based audio and video mixer whose output is then sent through WebRTC to a media server that can produce HLS from it. Therefore we need to be able to pass a custom MediaStream object to the publish function.
Or is there some low-level way to bypass this?
I hope you are able to answer the following questions:
- Is the HLS output altered by the trial license?
- Should the HLS output resolution be the same as the WebRTC input resolution?
- How can we specify the min and max bandwidth?
- Will the WebSDK support custom MediaStream objects?
Joshua