Virtual Background
Virtual backgrounds have become a staple of video conferencing. They let us replace our natural background with an image or a video, and we can also upload our own custom images to use as the background.
Dependencies
Add the dependencies for the Mediapipe Android libraries to the module’s app-level gradle file, which is usually app/build.gradle:
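For example (the version number here is illustrative; use the latest release from Maven Central):

```groovy
dependencies {
    // MediaPipe Tasks vision library, which includes the Image Segmenter.
    implementation 'com.google.mediapipe:tasks-vision:0.10.14'
}
```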
Common WebRTC terms you should know
- VideoFrame: Holds the buffer of a frame captured by the camera device, in I420 format.
- VideoSink: Receives VideoFrames through its onFrame() callback; it is used to send processed frames back to the WebRTC native source.
- VideoSource: Reads the camera device, produces VideoFrames, and delivers them to VideoSinks.
- VideoProcessor: An interface provided by WebRTC for modifying the VideoFrames produced by a VideoSource (see the sketch after this list).
- MediaStream: A WebRTC API that provides support for streaming audio and video data. It consists of zero or more MediaStreamTrack objects, each representing an individual audio or video track.
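Putting these together: WebRTC lets us attach a VideoProcessor to a VideoSource via setVideoProcessor(), after which every captured frame is routed through it. A minimal pass-through sketch (the class name is ours; the virtual-background work of the later sections plugs into onFrameCaptured()):

```kotlin
import org.webrtc.VideoFrame
import org.webrtc.VideoProcessor
import org.webrtc.VideoSink
import org.webrtc.VideoSource

// A pass-through processor: WebRTC hands it every captured frame, and it
// forwards each frame untouched to the sink it was given.
class VirtualBackgroundProcessor : VideoProcessor {
    private var sink: VideoSink? = null

    override fun setSink(sink: VideoSink?) {
        this.sink = sink
    }

    override fun onCapturerStarted(success: Boolean) {}
    override fun onCapturerStopped() {}

    override fun onFrameCaptured(frame: VideoFrame) {
        sink?.onFrame(frame) // forward unmodified (for now)
    }
}

// Attach it to the VideoSource created from your capturer:
fun attach(videoSource: VideoSource) {
    videoSource.setVideoProcessor(VirtualBackgroundProcessor())
}
```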
Apply in the code
Idea of virtual background in WebRTC
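The plan: attach a VideoProcessor to the VideoSource so every captured VideoFrame passes through our code first. For each frame we run MediaPipe's Image Segmenter to obtain a mask separating the person from their surroundings, draw the person over the chosen background image on a canvas, and hand the composited frame to the VideoSink so remote participants receive the modified video. The following subsections walk through each step.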
Getting the VideoFrame from WebRTC
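Inside the processor's onFrameCaptured() callback we receive each frame. A sketch of this step; frameToBitmap() is a hypothetical helper that converts the I420 planes to an ARGB Bitmap (e.g. via YuvImage or libyuv):

```kotlin
// Inside VirtualBackgroundProcessor from the earlier sketch:
override fun onFrameCaptured(frame: VideoFrame) {
    // Normalize whatever buffer the capturer produced (texture- or
    // byte-based) into an I420 buffer readable on the CPU.
    val i420: VideoFrame.I420Buffer = frame.buffer.toI420() ?: return
    val rotation = frame.rotation        // degrees; needed to draw upright
    val timestampNs = frame.timestampNs  // reuse when rebuilding the frame

    // frameToBitmap() is a hypothetical helper that converts the Y/U/V
    // planes to an ARGB Bitmap.
    val bitmap = frameToBitmap(i420)
    i420.release()

    // Hand off to the segmenter (next step).
    segmentFrame(bitmap, rotation, timestampNs)
}
```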
Initialize Mediapipe Image Segmenter
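A sketch using the MediaPipe Tasks API, assuming the selfie_segmenter.tflite model file is bundled in the app's assets. LIVE_STREAM mode segments frames asynchronously and delivers each result to a listener instead of returning it from the call:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.google.mediapipe.framework.image.BitmapImageBuilder
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.imagesegmenter.ImageSegmenter

lateinit var imageSegmenter: ImageSegmenter

fun createSegmenter(context: Context) {
    val options = ImageSegmenter.ImageSegmenterOptions.builder()
        .setBaseOptions(
            BaseOptions.builder()
                // Assumption: the model sits under app/src/main/assets.
                .setModelAssetPath("selfie_segmenter.tflite")
                .build()
        )
        .setRunningMode(RunningMode.LIVE_STREAM)
        .setOutputCategoryMask(true)
        .setOutputConfidenceMasks(false)
        .setResultListener(::handleSegmentationResult) // defined in the next step
        .build()
    imageSegmenter = ImageSegmenter.createFromOptions(context, options)
}

// Called from onFrameCaptured(); rotation is kept only so the outgoing
// frame can be rebuilt with it later.
fun segmentFrame(bitmap: Bitmap, rotation: Int, timestampNs: Long) {
    // segmentAsync() expects a timestamp in milliseconds.
    imageSegmenter.segmentAsync(BitmapImageBuilder(bitmap).build(), timestampNs / 1_000_000)
}
```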
Handle Person Mask from Mediapipe
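When the listener fires, the result carries the mask as an MPImage. A sketch that turns the category mask into a white-and-transparent stencil Bitmap; note that the person/background category assignment below is an assumption, so check the model card of the model you ship:

```kotlin
import android.graphics.Bitmap
import android.graphics.Color
import com.google.mediapipe.framework.image.ByteBufferExtractor
import com.google.mediapipe.framework.image.MPImage
import com.google.mediapipe.tasks.vision.imagesegmenter.ImageSegmenterResult

fun handleSegmentationResult(result: ImageSegmenterResult, inputImage: MPImage) {
    val mask: MPImage = result.categoryMask().get()
    val buffer = ByteBufferExtractor.extract(mask)
    val width = mask.width
    val height = mask.height

    val pixels = IntArray(width * height)
    for (i in pixels.indices) {
        // Assumption: category 0 is background, so any nonzero value
        // marks a person pixel. Verify against the model card.
        val isPerson = buffer.get(i).toInt() != 0
        pixels[i] = if (isPerson) Color.WHITE else Color.TRANSPARENT
    }
    val maskBitmap =
        Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888)

    // maskBitmap is the alpha stencil used for compositing (next step).
}
```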
Draw segmented and background on canvas
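A sketch of the compositing step using Android's Canvas and a PorterDuff SRC_IN transfer mode: the mask is drawn first, then the camera frame is drawn with SRC_IN so only the person's pixels survive, and that layer goes on top of the background image:

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.PorterDuff
import android.graphics.PorterDuffXfermode
import android.graphics.Rect

fun composite(frame: Bitmap, personMask: Bitmap, background: Bitmap): Bitmap {
    val w = frame.width
    val h = frame.height
    val out = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(out)

    // 1. Replacement background, scaled to fill the frame.
    canvas.drawBitmap(background, null, Rect(0, 0, w, h), null)

    // 2. Cut the person out of the camera frame: draw the mask first, then
    //    the frame with SRC_IN, keeping pixels only where the mask is opaque.
    val person = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888)
    val personCanvas = Canvas(person)
    personCanvas.drawBitmap(personMask, null, Rect(0, 0, w, h), null)
    val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        xfermode = PorterDuffXfermode(PorterDuff.Mode.SRC_IN)
    }
    personCanvas.drawBitmap(frame, 0f, 0f, paint)

    // 3. Person layer over the background.
    canvas.drawBitmap(person, 0f, 0f, null)
    return out
}
```

Finally, the composited Bitmap has to travel back into WebRTC: convert it to an I420 buffer (for example via JavaI420Buffer.allocate() plus an ARGB-to-YUV conversion), wrap it in a new VideoFrame with the original rotation and timestamp, and deliver it with sink.onFrame().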
Task benchmarks
Here are the task benchmarks for the whole pipeline based on the pre-trained SelfieSegmenter model. The latency figures are averages measured on a Pixel 6 using CPU / GPU.
| Model Name | CPU Latency | GPU Latency |
| --- | --- | --- |
| SelfieSegmenter (square) | 33.46 ms | 35.15 ms |