Recording Issues

AlanM

Member
Issue
Recording files don't split at the correct time.

Environment
Server: c5.xlarge
WCS: 5.2.267-018743542d11cabedb2fdc2dd34e1fc68e676158
Publisher: Demo - Several Stream Recording (15 count)

flashphoner.properties
Code:
record_rotation=300
stream_record_policy_template={streamName}_{startTime}_{endTime}
wcs-core.properties
Code:
### JVM OPTIONS ###
-Xmx6g
records
upload_2019-8-1_16-47-46.png


Logs
Emailed (report_2019-08-01-21-50-15.tar.gz)
 

Max

Administrator
Staff member
Good day.
We raised internal ticket WCS-2207 to reproduce the issue.
On your side, please repeat the test with client debug logs enabled:
Code:
client_log_level=DEBUG
and collect a traffic dump during the test:
Code:
tcpdump -i any -s 4096 -B 10240 -w log.pcap
Then send us the report, including the traffic dump (or a link to the report if it is larger than 20 MB). It will help us investigate this issue.
 

Max

Administrator
Staff member
Good day.
The server calculates recording fragment duration from stream frame timestamps. So a recording file's duration can be stretched if the channel bandwidth is lower than the minimum publishing bitrate, if there is packet loss, or if the stream has few key frames.
We have checked the logs and the dump. Your channel seems good enough, without packet loss, but the publisher's browser sends key frames unevenly and rarely. Please try setting the following option
Code:
periodic_fir_request=true
to request key frames periodically from publisher browsers. This should help solve the problem.
 

AlanM

Member
We will test that option.

Is there any way to configure recordings so they are not affected by packet loss or bandwidth limitations? We want the files split based on absolute time, not on frame time.
 

AlanM

Member
Just tested the periodic_fir_request option and it still appears to be broken. We can send in logs if needed.
 

Max

Administrator
Staff member
Good day.
We've tried to reproduce the problem and performed a series of load tests.
The "Several stream recording" test is a synthetic load test that puts a huge load on the publishing client, so key frames are sent only about once per minute even when the server issues a periodic request. The server, in turn, waits for a key frame before closing the current fragment and starting to write the next one.
Please try recording streams published simultaneously from several different sources (PCs), rather than publishing all streams from one source.
As for splitting by absolute time: we can implement cutting the recording file by time, but then the next file would start with a P-frame rather than an I-frame, so players like VLC would not be able to play it. You would have to implement your own player that concatenates the stream from the recording files for playback.
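As an illustration of such player-side concatenation, one possible approach is to build an ffmpeg concat-demuxer playlist of the rotated fragments. This is only a sketch under assumptions: file names follow the {streamName}_{startTime}_{endTime} template from flashphoner.properties, and the helper name is hypothetical.

```python
import os

def build_concat_list(fragment_dir, stream_name, list_path):
    """Write an ffmpeg concat-demuxer playlist of one stream's MP4 fragments.

    Assumes fragment names follow {streamName}_{startTime}_{endTime}.mp4 with
    zero-padded timestamps, so lexicographic order matches recording order.
    """
    fragments = sorted(
        f for f in os.listdir(fragment_dir)
        if f.startswith(stream_name + "_") and f.endswith(".mp4")
    )
    with open(list_path, "w") as out:
        for name in fragments:
            # concat-demuxer syntax: one "file '<path>'" line per fragment
            out.write("file '%s'\n" % os.path.join(fragment_dir, name))
    return fragments
```

The playlist can then be joined without re-encoding, e.g. `ffmpeg -f concat -safe 0 -i fragments.txt -c copy joined.mp4`.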
 

AlanM

Member
It would be preferable to have a hard-set length. The main issue we have is that the metadata doesn't match the actual length of the file's content, so playback is sped up.

As you can see below, we have the file set to split at 5 minutes. The file actually gets split at around 5:02, but the metadata always says 5:00, which causes the video to play slightly sped up. We either need the file to be exactly 5 minutes, or at least to have the metadata duration match the actual length.

upload_2019-8-22_14-16-39.png
 

Max

Administrator
Staff member
Good day.
It would be preferable to have a hard set length.
If we implement this setting, you will have to implement your own player that concatenates all the fragments to play them correctly, because each fragment will start with a P-frame rather than an I-frame, so most players, like VLC, will not be able to play it.
So before we implement this feature, please try the following approach to normalize the key frame interval in recordings:
1) Publish a WebRTC stream
2) Start a stream transcoder via the REST API
Code:
/rest-api/transcoder/startup
{
  "uri": "transcoder://tcode1",
  "remoteStreamName": "stream",
  "localStreamName": "streamT",
  "encoder": {
    "keyFrameInterval": 30,
    "fps": 30
  }
}
3) Find the transcoder to get its mediaSessionId
Code:
/rest-api/transcoder/find
{
  "remoteStreamName": "stream"
}
This query returns
Code:
[
  {
    "localMediaSessionId": "42a92132-bcd1-4436-a96f-3fec36b32b37",
    "localStreamName": "streamT",
    "remoteStreamName": "stream",
    "uri": "transcoder://tcode1",
    "status": "PROCESSED_LOCAL",
    ...
  }
]
4) Start recording the transcoded stream
Code:
/rest-api/recorder/startup
{
  "mediaSessionId": "42a92132-bcd1-4436-a96f-3fec36b32b37",
  "config": {
    "fileTemplate": "{streamName}-{startTime}-{endTime}",
    "rotation": "300"
  }
}
In this case the recording workflow is more complex, but the key frame interval in the stream recording will be one per second, so file splitting should not diverge from the metadata, and it is not necessary to implement your own player to play the fragments.
Note that stream transcoding requires more CPU resources (roughly 3 streams at 480p per CPU core).
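For scripting these steps, the request bodies above can be built programmatically before POSTing them to the WCS REST API. A minimal sketch; the server URL, port, and helper names here are assumptions for illustration, not part of the WCS API itself:

```python
# Assumed WCS REST entry point; adjust host/port to your server.
WCS_REST = "http://localhost:8081/rest-api"

def transcoder_startup_body(remote_stream, local_stream):
    # Step 2: re-encode the published stream with one key frame per second
    # (keyFrameInterval 30 at 30 fps); POST to WCS_REST + "/transcoder/startup".
    return {
        "uri": "transcoder://tcode1",
        "remoteStreamName": remote_stream,
        "localStreamName": local_stream,
        "encoder": {"keyFrameInterval": 30, "fps": 30},
    }

def recorder_startup_body(media_session_id, rotation_seconds=300):
    # Step 4: record the transcoded stream, rotating files every 5 minutes;
    # POST to WCS_REST + "/recorder/startup".
    return {
        "mediaSessionId": media_session_id,
        "config": {
            "fileTemplate": "{streamName}-{startTime}-{endTime}",
            "rotation": str(rotation_seconds),
        },
    }
```

The mediaSessionId passed to recorder_startup_body is the localMediaSessionId returned by the transcoder find query in step 3.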
 

Max

Administrator
Staff member
The file actually gets split around 5:02, but the metadata is always 5:00, which causes the video to play slightly sped up.
This looks like a bug, so we will fix it (internal ticket WCS-2207). But key frame interval normalization should also help as a workaround.
 

Max

Administrator
Staff member
The file actually gets split around 5:02, but the metadata is always 5:00, which causes the video to play slightly sped up.
The issue has not been reproduced. Please send examples of such recordings (with metadata that does not correspond to the actual length).
 

AlanM

Member
We tried using the REST API for recording, but after about a minute we get this on the transcoded stream, and it stops, along with the recording.

Code:
  "info" : "Stopped by session disconnect",
 

Max

Administrator
Staff member
The transcoder is terminated after a timeout if the transcoded stream has no subscribers.
The timeout can be configured with this setting in WCS_HOME/conf/flashphoner.properties:
Code:
transcoder_agent_activity_timer_timeout=60000
The default timeout is 60000 milliseconds. Increase the value as required and restart WCS to apply.
 

Max

Administrator
Staff member
The {duration} template for stream recording names was added in WCS v. 5.2.346. For example:
Code:
stream_record_policy_template={streamName}-{duration}
It is applicable to MP4 recordings only. The duration value is taken from the mvhd (movie header) atom of the recording metadata.
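To check whether a recording's metadata duration matches its actual content, the mvhd atom can be read directly from the file. A minimal sketch assuming a standard MP4 box layout (64-bit extended box sizes are not handled); the function names are hypothetical:

```python
import struct

def _boxes(buf, start, end):
    # Iterate MP4 boxes in buf[start:end]; yields (type, body_start, box_end).
    pos = start
    while pos + 8 <= end:
        size, box_type = struct.unpack(">I4s", buf[pos:pos + 8])
        if size < 8:  # extended/invalid sizes not handled in this sketch
            break
        yield box_type, pos + 8, pos + size
        pos += size

def mvhd_duration(data):
    """Return movie duration in seconds from moov/mvhd, or None if absent."""
    for box_type, body_start, box_end in _boxes(data, 0, len(data)):
        if box_type == b"moov":
            for child, cstart, cend in _boxes(data, body_start, box_end):
                if child == b"mvhd":
                    version = data[cstart]
                    if version == 1:  # 64-bit creation/modification/duration
                        timescale, duration = struct.unpack(
                            ">IQ", data[cstart + 20:cstart + 32])
                    else:             # version 0: all 32-bit fields
                        timescale, duration = struct.unpack(
                            ">II", data[cstart + 12:cstart + 20])
                    return duration / timescale
    return None
```

Comparing this value with the wall-clock fragment length (endTime minus startTime from the file name) shows whether metadata and content diverge.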
 