Can we integrate Binah.ai's SDK on a WebRTC session?
When integrated into an application, our SDK takes exclusive control of the device's camera the moment the application activates the SDK.
Because the SDK needs the camera to process the frames required to extract the user's vital signs, the camera cannot be used by any parallel process (such as WebRTC) while a measurement is running.
Is there any way to use Binah.ai's SDK within a WebRTC app?
Yes, there are two ways to integrate Binah.ai's SDK within a WebRTC native application.
A waiting room before a session begins:
Before the WebRTC session begins, the patient enters a "waiting room" and takes a measurement using our SDK.
Once the vital signs are extracted, the SDK releases the camera; you can then start the WebRTC session and share the results with the doctor via a dedicated API.
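The waiting-room flow above can be sketched as a simple sequence that respects the camera's single-owner constraint. This is a minimal illustration, not real SDK code: `measureVitals`, `startWebRtcSession`, and the camera "lock" are hypothetical placeholders standing in for the Binah.ai SDK calls and your WebRTC setup.

```javascript
// Model the camera as a resource with at most one owner at a time,
// matching the constraint described above.
const camera = { owner: null };

function acquireCamera(owner) {
  if (camera.owner !== null) {
    throw new Error(`Camera already in use by ${camera.owner}`);
  }
  camera.owner = owner;
}

function releaseCamera(owner) {
  if (camera.owner === owner) camera.owner = null;
}

// Hypothetical stand-in for a Binah.ai measurement in the waiting room.
async function measureVitals() {
  acquireCamera("binah-sdk");        // SDK takes exclusive camera control
  const vitals = { heartRate: 72 };  // placeholder measurement result
  releaseCamera("binah-sdk");        // SDK releases the camera when done
  return vitals;
}

// Hypothetical stand-in for starting the WebRTC call afterwards.
async function startWebRtcSession() {
  acquireCamera("webrtc");           // only now can WebRTC own the camera
  return { connected: true };
}

async function waitingRoomFlow() {
  const vitals = await measureVitals();        // 1. measure before the call
  const session = await startWebRtcSession();  // 2. then start WebRTC
  return { vitals, session };                  // 3. share vitals via your API
}
```

The key point the sketch captures is ordering: the measurement fully completes and releases the camera before the WebRTC session ever requests it, so the two never contend for the device.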
Closing the video stream while the WebRTC session is active:
Note: this scenario is not officially supported and must be validated by the client.
The WebRTC protocol controls the audio and video inputs independently.
This means that once the WebRTC session has started, you can shut down the video input on demand and then measure the patient's vital signs using Binah.ai's SDK.
When the measurement is finished, you can share the results via a dedicated API, deactivate Binah.ai's SDK, and re-enable video for the WebRTC session.
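The mid-session flow can be sketched the same way. Again, this is an illustrative model only: `VideoChannel` and `measureVitals` are hypothetical placeholders, and in a real application disabling video would mean stopping the WebRTC video track or capturer so the SDK can open the camera.

```javascript
// Hypothetical model of the WebRTC video input, which can be toggled
// on and off while the session itself stays alive.
class VideoChannel {
  constructor() { this.active = true; }   // session starts with video on
  disableVideo() { this.active = false; } // frees the camera for the SDK
  enableVideo() { this.active = true; }   // hands the camera back to WebRTC
}

// Hypothetical stand-in for a Binah.ai measurement; it refuses to run
// while WebRTC still holds the camera.
async function measureVitals(channel) {
  if (channel.active) {
    throw new Error("Camera still held by WebRTC; disable video first");
  }
  return { heartRate: 72 };               // placeholder measurement result
}

async function midSessionMeasurement(channel) {
  channel.disableVideo();                      // 1. shut down the video input
  const vitals = await measureVitals(channel); // 2. run the SDK measurement
  // 3. share `vitals` via your dedicated API here (omitted)
  channel.enableVideo();                       // 4. re-enable video for the call
  return vitals;
}
```

Since this scenario is not officially supported, the client should validate on their target platforms that releasing the video input actually frees the camera quickly enough for the SDK to acquire it.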