Can we integrate your SDK in a WebRTC session?
When integrated into an application, our SDK takes control of the camera the moment the application activates it.
Because the SDK needs exclusive access to the camera session to process frames and extract the user's vital signs, the camera cannot be used by a parallel process (such as WebRTC) at the same time.
Is there any way to use your SDK in a WebRTC app?
Yes, there are two ways to integrate our SDK into a native WebRTC application.
A waiting room BEFORE the session begins:
Before the WebRTC session begins, the patient enters a waiting room and measures himself using our SDK.
Once the measurement finishes and the vital signs are available, you can start the WebRTC session and deliver the results to the doctor via a dedicated API.
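The waiting-room flow can be sketched as follows. This is a minimal illustration, not the SDK's real API: `vitalSignsSdk`, `VitalSigns`, `startWebRtcSession`, and `sendResultsToDoctor` are all hypothetical names standing in for your actual SDK and application code.

```typescript
// Hypothetical shape of a measurement result; the real SDK defines its own.
interface VitalSigns {
  heartRate: number;
}

// Stubbed SDK object: in a real app this would own the camera and run a
// measurement session. Here it just returns a placeholder result.
const vitalSignsSdk = {
  async measure(): Promise<VitalSigns> {
    return { heartRate: 72 }; // placeholder value for illustration
  },
};

async function waitingRoomFlow(
  startWebRtcSession: () => Promise<void>,
  sendResultsToDoctor: (v: VitalSigns) => Promise<void>,
): Promise<VitalSigns> {
  // 1. The SDK has exclusive use of the camera while the patient measures
  //    in the waiting room; WebRTC has not started yet.
  const results = await vitalSignsSdk.measure();

  // 2. The measurement is done and the SDK has released the camera,
  //    so the WebRTC session can now start safely.
  await startWebRtcSession();

  // 3. Deliver the results to the doctor via the dedicated API.
  await sendResultsToDoctor(results);
  return results;
}
```

The key point the sketch captures is ordering: the WebRTC session is only started after the SDK has finished measuring and released the camera.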
Closing the video stream while the WebRTC session is in progress:
This scenario has never been tested and might not work!
The WebRTC protocol controls the audio and video inputs.
This means that once the WebRTC session has started, you can shut off the video input on demand and then use our SDK to measure the patient's vital signs.
When the results are available, you can transfer them via a dedicated API, deactivate our SDK, and re-enable the video for the WebRTC session.
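The mid-session toggle described above can be sketched like this. The `VideoInput` interface is an illustrative stand-in for the WebRTC video track (in a browser this roughly corresponds to setting `MediaStreamTrack.enabled`); `measureWithSdk` and `sendResultsToDoctor` are hypothetical placeholders for the real SDK and API calls. As noted above, this flow is untested.

```typescript
// Hypothetical abstraction over the WebRTC video input.
interface VideoInput {
  enabled: boolean;
}

interface VitalSigns {
  heartRate: number;
}

async function measureMidSession(
  videoInput: VideoInput,
  measureWithSdk: () => Promise<VitalSigns>,
  sendResultsToDoctor: (v: VitalSigns) => Promise<void>,
): Promise<void> {
  // 1. Shut off the WebRTC video input so the camera is free for the SDK.
  videoInput.enabled = false;
  try {
    // 2. Measure with the SDK while video is off (audio keeps flowing).
    const results = await measureWithSdk();

    // 3. Transfer the results via the dedicated API.
    await sendResultsToDoctor(results);
  } finally {
    // 4. Re-enable the video for the WebRTC session, even if the
    //    measurement failed, so the call does not stay video-less.
    videoInput.enabled = true;
  }
}
```

The `try`/`finally` is a deliberate choice: whatever happens during the measurement, the video input is restored so the WebRTC call can continue.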