Barbershop: GAN-based Image Compositing using Segmentation Masks
Event Type: Technical Papers
Registration Categories: Hybrid Formats
Time: Tuesday, December 14, 1pm - 1:11pm JST
Location: Hall B5 (1) (5F, B Block) & Virtual Platform
Description: Seamlessly blending features from multiple images is extremely challenging because of complex relationships in lighting, geometry, and partial occlusion, which cause coupling between different parts of the image. Even though recent work on GANs enables the synthesis of realistic hair or faces, it remains difficult to combine them into a single, coherent, and plausible image rather than a disjointed set of image patches. We present a novel solution to image blending, particularly for the problem of hairstyle transfer, based on GAN inversion. We propose a novel latent space for image blending and an extension to existing GAN-inversion algorithms that aligns reference images to a single composite image. Our representation enables the transfer of visual properties from reference images, including specific details such as moles and wrinkles, and because we perform blending in a latent space, we are able to synthesize coherent images. Our approach avoids the blending artifacts present in other approaches and finds a globally consistent image. In a user study, our results demonstrate a significant improvement over the current state of the art, with users preferring our blending solution over 95 percent of the time.
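The core idea of mask-guided blending in a generator's latent/feature space can be illustrated schematically. The sketch below is not the paper's actual algorithm (Barbershop operates on GAN-inversion codes with alignment optimization); it is a minimal, hypothetical NumPy example showing how segmentation masks can select which reference image supplies each spatial region of a composite feature map, with the names `blend_features`, `feature_maps`, and `masks` being illustrative assumptions.

```python
import numpy as np

def blend_features(feature_maps, masks):
    """Toy illustration of mask-guided feature blending.

    feature_maps: list of (C, H, W) arrays, one per reference image
                  (standing in for generator feature activations).
    masks: list of (H, W) binary arrays, assumed mutually disjoint,
           selecting which reference supplies each spatial region.
    Returns a single (C, H, W) composite feature map.
    """
    composite = np.zeros_like(feature_maps[0])
    for fmap, mask in zip(feature_maps, masks):
        # Broadcast the (H, W) mask over the channel dimension
        # and accumulate the masked contribution of each reference.
        composite += fmap * mask[None, :, :]
    return composite

# Two toy "reference" feature maps and complementary masks.
face_ref = np.full((4, 8, 8), 1.0)   # e.g. the face reference
hair_ref = np.full((4, 8, 8), 2.0)   # e.g. the hair reference
hair_mask = np.zeros((8, 8))
hair_mask[:4] = 1.0                  # top half comes from the hair reference
face_mask = 1.0 - hair_mask          # bottom half comes from the face reference

composite = blend_features([face_ref, hair_ref], [face_mask, hair_mask])
```

In the actual method, the blended representation would then be decoded by the generator, which is what allows the composite to remain globally coherent rather than a patchwork of pixels.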