Enable camera after app starts (CallKit flow)

andrew.n

Member
@Max regarding CallKit integration on iOS: right now, after reporting the incoming call to CallKit, we use the following setup:
FPWCSApi2MediaConstraints(audio: true, videoWidth: videoWidth, videoHeight: videoHeight)
and call the broadcastURLStream?.muteVideo() method.
After the user opens the app, we call unmuteVideo() on the broadcastURLStream. Even so, on the mobile device I can see both views (playing and broadcasting), but on the web I don't see the input stream from the phone.
Do I have to change the constraints after the app is active by setting audio: true, video: true?
 

Max

Administrator
Staff member
Good day.
Do I have to change the constraints after the app is active by setting audio: true, video: true?
You cannot change the constraints if the stream is already publishing.
Please clarify your flow:
1. Report the incoming call to CallKit SDK
2. Create a stream object
3. Publish the stream
4. Mute video
5. User opens the application
6. Unmute video
Is that right? If yes, this is the wrong flow. You should publish the stream after the user opens the application, not before.
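A minimal sketch of that corrected order, assuming the same FPWCSApi2 Swift bridge as in your snippets (FPWCSApi2StreamOptions, createStream, publish; signatures may differ slightly in your SDK version). Publishing is deferred until UIKit reports the app became active:
Code:
import UIKit

// Hypothetical helper: report the call to CallKit right away,
// but publish only once the app is active.
final class DeferredPublisher {
    private var observer: NSObjectProtocol?

    // `session` is assumed to be an established FPWCSApi2Session
    func publishWhenActive(session: FPWCSApi2Session, streamName: String) {
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            guard let self = self else { return }
            if let observer = self.observer {
                NotificationCenter.default.removeObserver(observer)
                self.observer = nil
            }
            do {
                let options = FPWCSApi2StreamOptions()
                options.name = streamName
                // Full constraints from the start, no mute/unmute workaround
                options.constraints = FPWCSApi2MediaConstraints(audio: true, video: true)
                let stream = try session.createStream(options)
                try stream.publish()
            } catch {
                NSLog("Publish failed: \(error.localizedDescription)")
            }
        }
    }
}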
Also, please check the publishing stream metrics on the server side to make sure the stream is publishing correctly: Receiving common stream information
Set the following server parameter to request regular keyframes from the client:
Code:
periodic_fir_request=true
 

andrew.n

Member
@Max I did some debugging and managed to find the possible issue.
When the app is already running, everything works perfectly. But when the app is closed and the device is locked, the function "func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {" is not called.

I added some device debug logs to trace the flow when the phone is locked; more about how to do this is described here:
You are going to need this to debug the flow and see what NSLogs you get from the app.

So, back to our topic: these are the logs I get when the app is already running - see Group 1.png - I added multiple prints in different places.
In Group 2.png you can see the flow when the device is locked.

As you can see, the didFinishLaunchingWithOptions log shows up in both cases, but the one I'm waiting for, didActivate audioSession, is never called.
Related to this topic:

I tried to move the setup of the audio session:
Code:
// Needed so that didActivate AudioSession fires on app start
private func setupAudioSession() {
    DispatchQueue.global().sync {
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.mixWithOthers)
            try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            try AVAudioSession.sharedInstance().setMode(AVAudioSession.Mode.voiceChat)
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            // Errors are silently ignored here
        }
    }
}

func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    NSLog("======== func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {")
    setupAudioSession()
    VideoCallInteractor.shared.publish()
}
I even tried (which makes sense to me) the following:
Code:
func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    //other stuff
    setupAudioSession()
    action.fulfill(withDateConnected: Date())
}

func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    NSLog("======== func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {")
    //        setupAudioSession()
    VideoCallInteractor.shared.publish()
}
But still no luck...

In both variants, publish() is called from func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession), but I have the same issue.
I will keep you updated if I find a workaround; let me know if you have any feedback.
 


wyvasi

Member
Hello Max, the issue we have is that we don't receive any audio on the server side; publishing always fails with:
Code:
"info" : "Failed by RTP activity",
"description" : "Video RTP activity",
These are the constraints, but on the server side I always see "hasVideo": true, and always width: 0, height: 0, no matter what constraints Andrei uses.

Code:
let constraints = FPWCSApi2MediaConstraints(audio: true, video: false)
let stream = try! session.createStream(options)
We would like to start with audio: true, video: true (and mute the video after publishing, but this didn't work). Just so you know, we are publishing from CallKit, so there is no video available, only audio.
 

andrew.n

Member
@Max the first problem was solved after I commented out a few lines of code:
Do you have any idea why this can happen? I don't understand how those 4 parameters can affect the execution.
Code:
func reportIncomingCall(uuid: UUID, handle: String, user: User, completion: ((NSError?) -> Void)? = nil) {
    _user = user
    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: handle)
    update.hasVideo = true
    //        update.supportsDTMF = false
    //        update.supportsHolding = false
    //        update.supportsGrouping = false
    //        update.supportsUngrouping = false
    setupAudioSession()
    provider.reportNewIncomingCall(with: uuid, update: update) { error in
        completion?(error as NSError?)
    }
}

......

private func setupAudioSession() {
    DispatchQueue.global().sync {
        do {
            try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.mixWithOthers)
            try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            try AVAudioSession.sharedInstance().setMode(AVAudioSession.Mode.voiceChat)
            try AVAudioSession.sharedInstance().setActive(true)
        } catch let error {
            NSLog("======== audio session error:")
            NSLog(error.localizedDescription)
        }
    }
}
Right now, even if the phone is locked (and the app killed) or unlocked (with the app in the foreground), the same lines of code are executed. The only difference is that when the phone is locked, I can hear @wyvasi but he can't hear me.

Note: when the device is unlocked, everything works perfectly. This happens only when the device is locked.
 

Max

Administrator
Staff member
"info" : "Failed by RTP activity", "description" : "Video RTP activity",
This means no video traffic is sent from the device.
Do you have any idea why this can happen? I don't understand how those 4 parameters can affect the execution.
From here https://developer.apple.com/documentation/callkit/cxcallupdate:
[Screenshot of the CXCallUpdate property documentation]

All those parameters relate to SIP calls, but you have no SIP party in the call, so setting them may break things.
Please also make sure you have set supportsVideo = true in the provider configuration:
Code:
    /// The app's provider configuration, representing its CallKit capabilities
    static var providerConfiguration: CXProviderConfiguration {
        let localizedName = NSLocalizedString("CallKitDemo", comment: "Call Kit Demo for WCS")
        let providerConfiguration = CXProviderConfiguration(localizedName: localizedName)

        // This should be set to true to support video 
        providerConfiguration.supportsVideo = true
        providerConfiguration.maximumCallGroups = 1
        providerConfiguration.maximumCallsPerCallGroup = 1

        providerConfiguration.supportedHandleTypes = [.phoneNumber]

        if let iconMaskImage = UIImage(named: "IconMask") {
            providerConfiguration.iconTemplateImageData = iconMaskImage.pngData()
        }

        providerConfiguration.ringtoneSound = "Ringtone.caf"

        return providerConfiguration
    }
The only difference is that when the phone is locked, I can hear @wyvasi but he can't hear me.
Is this issue (no audio from the application when the phone is locked) reproducible in the CallKitDemo example?
 

andrew.n

Member
@Max Yes, I have supportsVideo = true; I also have includesCallsInRecents = true, but I don't think this can affect it.
We couldn't run the CallKitDemo example yet because we have to prepare the certificates and so on... It might take a little while to reproduce the issue in CallKitDemo. I have to discuss this with @wyvasi.

Later edit
If I have the following configuration:
Code:
func reportIncomingCall(uuid: UUID, handle: String, user: User, completion: ((NSError?) -> Void)? = nil) {
    _user = user
    let update = CXCallUpdate()
    update.remoteHandle = CXHandle(type: .generic, value: handle)
    update.hasVideo = false
    setupAudioSession()
    provider.reportNewIncomingCall(with: uuid, update: update) { error in
        completion?(error as NSError?)
    }
}
Code:
...
let constraints = FPWCSApi2MediaConstraints(audio: true, video: false)
options.constraints = constraints
let stream = try! session.createStream(options)

stream.on(.fpwcsStreamStatusPublishing) { [weak self] (stream) in
...
Code:
static var providerConfiguration: CXProviderConfiguration {
    let providerConfiguration = CXProviderConfiguration()
    providerConfiguration.supportsVideo = false
//        providerConfiguration.includesCallsInRecents = true
    providerConfiguration.maximumCallGroups = 1
    providerConfiguration.maximumCallsPerCallGroup = 1
    providerConfiguration.supportedHandleTypes = [.phoneNumber]
    providerConfiguration.iconTemplateImageData = #imageLiteral(resourceName: "PlaceholderLogo").pngData()
    providerConfiguration.ringtoneSound = "call.mp3"
    return providerConfiguration
}
When @wyvasi calls me, we can hear each other even when the phone is locked. Of course, when we open the app, the video is not active.

After that, if I just change the following (video: true):
Code:
...
let constraints = FPWCSApi2MediaConstraints(audio: true, video: true)
options.constraints = constraints
let stream = try! session.createStream(options)

stream.on(.fpwcsStreamStatusPublishing) { [weak self] (stream) in
...
We can't hear each other anymore.

Is there any way to continue with video: false everywhere and, after the app starts, change the constraints to enable the video?
 

Max

Administrator
Staff member
Is there any way to continue with video: false everywhere and, after the app starts, change the constraints to enable the video?
When a stream is published with video: false, it has an audio track only. You should stop the WebRTC stream and publish it again with audio: true, video: true to add a video track.
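For example, a minimal sketch of that stop-and-republish step, assuming the same FPWCSApi2 Swift bridge as in the snippets above (the helper name restartWithVideo is hypothetical, and the stop()/publish() signatures may differ per SDK version):
Code:
// Hypothetical helper: replace the audio-only stream with an
// audio+video stream once the app is in the foreground.
func restartWithVideo(session: FPWCSApi2Session,
                      audioOnlyStream: FPWCSApi2Stream,
                      streamName: String) throws -> FPWCSApi2Stream {
    // Stop the audio-only publication first
    try audioOnlyStream.stop()

    // Publish a new stream with both tracks
    let options = FPWCSApi2StreamOptions()
    options.name = streamName
    options.constraints = FPWCSApi2MediaConstraints(audio: true, video: true)
    let newStream = try session.createStream(options)
    try newStream.publish()
    return newStream
}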
Another way is to publish an additional video-only stream after the app starts (and it seems like a suitable workaround for your case).
There is also an SFU simulcast feature which allows adding and removing audio and video tracks on the fly, but it works only in a browser or in Electron, and there are no plans yet to implement it in the native SDKs.
 

wyvasi

Member
Did you try publishing with audio: true, video: true from the background? No audio/video data is received in this case.
I don't want to publish with audio: true, video: false, but it is the only option that works, and I don't want to restart the stream when the phone is unlocked.
Can you find a way to stream with audio: true, video: true even if there is no video access and add it later?
 

Max

Administrator
Staff member
Can you find a way to stream with audio: true, video: true even if there is no video access and add it later?
Seems like there is no camera access in the background. So if you want to start audio in the background and then video in the foreground, the only way to do it looks like the following (a minimal sketch follows the list):
1. In the background, publish an audio-only stream a_stream with constraints audio: true, video: false
2. When the application is in the foreground, publish an additional video-only stream v_stream with constraints audio: false, video: true
3. The other party plays both a_stream and v_stream
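A sketch of this two-stream workaround, with the same FPWCSApi2 assumptions as above (a_stream and v_stream are just placeholder names):
Code:
// 1. In background (e.g. from the CallKit answer flow): audio only
func publishAudioOnly(session: FPWCSApi2Session) throws -> FPWCSApi2Stream {
    let options = FPWCSApi2StreamOptions()
    options.name = "a_stream"
    options.constraints = FPWCSApi2MediaConstraints(audio: true, video: false)
    let stream = try session.createStream(options)
    try stream.publish()
    return stream
}

// 2. In foreground (e.g. on UIApplication.didBecomeActiveNotification): video only
func publishVideoOnly(session: FPWCSApi2Session) throws -> FPWCSApi2Stream {
    let options = FPWCSApi2StreamOptions()
    options.name = "v_stream"
    options.constraints = FPWCSApi2MediaConstraints(audio: false, video: true)
    let stream = try session.createStream(options)
    try stream.publish()
    return stream
}

// 3. The other party subscribes to both "a_stream" and "v_stream"
//    and renders them together.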
 

andrew.n

Member
@Max should we use the same FPWCSApi2Session object for both the video stream and the audio stream, or should I have two different session objects?
 