Can we integrate your SDK into a WebRTC session?
When our SDK is integrated into an application, it takes exclusive control of the camera the moment the application activates it.
Because the SDK holds the camera session to process the frames needed to extract the user's vital signs, the camera cannot be used by any parallel process (such as WebRTC) at the same time.
Is there any way to use your SDK in a WebRTC app?
Yes, there are two ways to integrate our SDK into a native WebRTC application.
A waiting room BEFORE the session begins:
Before the WebRTC session begins, the patient enters a "waiting room" and takes a measurement using our SDK.
Once the vital signs have been extracted, you can start the WebRTC session and share the results with the doctor via a dedicated API.
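The waiting-room flow above can be sketched as a simple sequence. This is a minimal sketch, not the real SDK API: the names `sdk.measure()`, `resultsApi.share()`, and `startWebRtcSession()` are placeholders for the SDK measurement call, the dedicated results API, and your own WebRTC setup code.

```javascript
// Sketch of the waiting-room scenario. All dependency names are
// assumptions: replace them with the real SDK and API calls.
async function waitingRoomFlow({ sdk, resultsApi, startWebRtcSession }) {
  // 1. The SDK owns the camera while the patient measures in the waiting room.
  const vitals = await sdk.measure();

  // 2. Share the extracted vital signs with the doctor via the dedicated API.
  await resultsApi.share(vitals);

  // 3. The camera is now free, so the WebRTC session can start.
  return startWebRtcSession();
}
```

The key point is the ordering: the measurement fully completes and the SDK releases the camera before WebRTC ever requests it, so the two never compete for the device.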
Closing the video stream while the WebRTC session is active:
This scenario has never been tested and might not work!
WebRTC gives the application control over the audio and video inputs.
This means that while the WebRTC session is running, you can shut down the video input on demand and then start measuring the patient's vital signs using our SDK.
When the measurement is finished, you can transfer the results via a dedicated API, deactivate our SDK, and re-enable the video for the WebRTC session.
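The in-call scenario above can be sketched with standard WebRTC APIs (`MediaStreamTrack.stop()`, `getUserMedia()`, `RTCRtpSender.replaceTrack()`). Like the scenario itself, this sketch is untested against the SDK: the `sdk` object with `start()`, `waitForResults()`, and `stop()` is an invented stand-in for the real SDK API, and dependencies are passed in rather than taken from globals.

```javascript
// Sketch of pausing WebRTC video, measuring with the SDK, then resuming.
// `sdk` is a hypothetical stand-in for the vital-signs SDK.
async function measureDuringCall({ peerConnection, localStream, mediaDevices, sdk }) {
  // 1. Stop the outgoing video track so the camera is released for the SDK.
  const videoTrack = localStream.getVideoTracks()[0];
  videoTrack.stop();

  // 2. The SDK now owns the camera; run the measurement to completion.
  await sdk.start();
  const results = await sdk.waitForResults();
  await sdk.stop();

  // 3. Re-acquire the camera and swap the fresh track into the live connection.
  const newStream = await mediaDevices.getUserMedia({ video: true });
  const newTrack = newStream.getVideoTracks()[0];
  const sender = peerConnection
    .getSenders()
    .find((s) => s.track && s.track.kind === 'video');
  await sender.replaceTrack(newTrack);

  // 4. Forward `results` to the dedicated results API from here.
  return results;
}
```

`replaceTrack()` swaps the video source without renegotiating the session, which is why the call can survive the camera handover; whether the remote side tolerates the video gap gracefully is exactly the untested part.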