
Thursday 29 November 2018

Google explains how it achieved Portrait mode on Pixel 3

As it did last year, Google is again describing how it achieved Portrait mode, this time on the Pixel 3 smartphones. Google says that Portrait Mode uses a neural network to determine which pixels correspond to people versus the background, and augments this two-layer person segmentation mask with depth information derived from the PDAF (phase-detect autofocus) pixels. This is what enables a depth-dependent blur.

PDAF pixels capture two slightly different views of a scene, and the parallax between those views can be used to estimate depth. Because the baseline between the two views is tiny, however, the resulting depth estimates are prone to errors. With Portrait Mode on the Pixel 3, Google says that it is fixing these errors by exploiting the fact that the parallax used by depth-from-stereo algorithms is only one of many depth cues present in images.

To capture ground-truth data for this approach, Google says that it built its own custom “Frankenphone” rig containing five Pixel 3 phones, along with a Wi-Fi-based solution that allowed it to capture pictures from all of the phones simultaneously. With this rig, Google computed high-quality depth for the photos using structure from motion and multi-view stereo. However, even though the data captured from this rig is ideal, it is still extremely challenging to predict the absolute depth of objects in a scene, because a given PDAF pair can correspond to a range of different depth maps. To account for this, Google instead predicts the ...
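
To make the idea of a depth-dependent blur concrete, here is a minimal Python sketch, assuming an RGB image, a person segmentation mask, and a per-pixel depth map are already available. The function name `apply_portrait_blur`, the discrete blur stack, and all parameters are illustrative assumptions, not Google's actual rendering pipeline.

```python
# Minimal sketch of a depth-dependent blur driven by a segmentation mask
# and a depth map. Illustrative approximation only, not Google's renderer;
# all names and parameters here are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_portrait_blur(image, person_mask, depth, focal_depth, max_sigma=8.0):
    """image: HxWx3 float array; person_mask, depth: HxW arrays;
    focal_depth: depth value that should remain in focus."""
    # Blur strength grows with distance from the focal plane.
    defocus = np.abs(depth - focal_depth)
    defocus = defocus / (defocus.max() + 1e-6)  # normalize to [0, 1]

    # Approximate a spatially varying blur by blending a small stack of
    # uniformly blurred images, indexed per pixel by the defocus amount.
    sigmas = np.linspace(0.0, max_sigma, num=5)
    idx = np.clip(np.round(defocus * (len(sigmas) - 1)).astype(int),
                  0, len(sigmas) - 1)
    blurred = np.zeros_like(image)
    for i, s in enumerate(sigmas):
        layer = gaussian_filter(image, sigma=(s, s, 0))  # blur spatial axes only
        blurred = np.where((idx == i)[..., None], layer, blurred)

    # The two-layer segmentation mask keeps person pixels sharp
    # regardless of their estimated depth.
    mask = person_mask[..., None]
    return mask * image + (1.0 - mask) * blurred
```

A production renderer would likely shape the blur like real lens defocus rather than a Gaussian, but the structure is the same: the mask decides who stays sharp, and the depth map decides how strongly everything else is blurred.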
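
And to illustrate what parallax between the two PDAF views means in practice, here is a toy block-matching sketch, assuming the two half-images are available as grayscale arrays. The search range, patch size, and function name are made up for the example; real PDAF disparities are sub-pixel and far noisier, which is exactly why parallax alone is such a weak depth signal.

```python
# Toy 1-D block matching between the two PDAF views. Illustrative only:
# the real baseline is tiny (the two views come from the two halves of a
# single lens), so true disparities are sub-pixel and very noisy.
import numpy as np

def pdaf_disparity(left, right, max_disp=4, patch=7):
    """left, right: HxW grayscale PDAF half-images; returns HxW disparity."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # PDAF parallax lies (approximately) along a single axis,
            # so the search is one-dimensional.
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.sum((ref - cand) ** 2)  # sum of squared differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Under the usual stereo model, depth is inversely proportional to
# disparity: depth ≈ baseline * focal_length / disparity.
```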