medusa.recon.mpipe.mpipe
Module with a wrapper around a Mediapipe face mesh model [1] that can be used in Medusa.
Module Contents
- class medusa.recon.mpipe.mpipe.Mediapipe(static_image_mode=False, det_threshold=0.1, device=DEVICE, lm_space='world', **kwargs)[source]
A Mediapipe face mesh reconstruction model.
- Parameters:
static_image_mode (bool) – Whether to treat the input images as unrelated static images rather than as a sequence of related frames (as in a video)
det_threshold (float) – Minimum detection confidence threshold (set to 0.1 by default, because higher values produce many false negatives)
device (str) – Either ‘cuda’ (GPU) or ‘cpu’
**kwargs (dict) – Extra keyword arguments passed to the initialization of FaceMesh (see the example below)
- model
The actual Mediapipe model object
- Type:
mediapipe.solutions.face_mesh.FaceMesh
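For example, the detection threshold and extra FaceMesh keyword arguments can be set when constructing the model. A minimal sketch, assuming that standard FaceMesh arguments such as max_num_faces are simply forwarded untouched via **kwargs:
>>> from medusa.recon.mpipe.mpipe import Mediapipe
>>> # max_num_faces is a regular FaceMesh keyword argument, assumed here to be
>>> # passed through to the FaceMesh initializer via **kwargs
>>> model = Mediapipe(static_image_mode=True, det_threshold=0.1, device='cpu', max_num_faces=1)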
- forward(imgs)[source]
Performs reconstruction of the face as a list of landmarks (vertices).
- Parameters:
imgs (np.ndarray) – A 4D (b x w x h x 3) numpy array representing a batch of RGB images
- Returns:
out – A dictionary with two keys: "v", the reconstructed vertices (468 in total), and "mat", a 4x4 numpy array representing the local-to-world matrix
- Return type:
dict
Notes
This implementation returns 468 vertices instead of the original 478, because the last 10 vertices (representing the irises) are not present in the canonical model.
Examples
To reconstruct an example, simply call the Mediapipe object:
>>> from medusa.data import get_example_image
>>> model = Mediapipe()
>>> img = get_example_image()
>>> out = model(img)  # reconstruct!
>>> out['v'].shape  # vertices
(1, 468, 3)
>>> out['mat'].shape  # local-to-world matrix
(1, 4, 4)
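If the returned vertices are not already in world space (the default lm_space is 'world'), the 4x4 local-to-world matrix can be applied via homogeneous coordinates. A generic sketch, not specific to Medusa's internals; apply_local_to_world is a hypothetical helper:
>>> import numpy as np
>>> def apply_local_to_world(v, mat):
...     """Map an (N, 3) vertex array through a 4x4 local-to-world matrix."""
...     v_h = np.column_stack([v, np.ones(len(v))])  # homogeneous coordinates, (N, 4)
...     return (v_h @ mat.T)[:, :3]                  # transform, then drop the w component
>>> v_world = apply_local_to_world(out['v'][0], out['mat'][0])
>>> v_world.shape
(468, 3)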