Binaural Audio Generation via Multi-task Learning
Event Type: Technical Papers
Format: Hybrid
Time: Wednesday, December 15, 1:33pm - 1:44pm JST
Location: Hall B5 (1) (5F, B Block) & Virtual Platform
Description: We present a learning-based approach that generates binaural audio from mono audio using multi-task learning. Our formulation leverages additional information from two related tasks: binaural audio generation and flipped audio classification. Our model extracts spatialization features from the visual and audio input, predicts the left and right audio channels, and judges whether the two channels have been flipped. First, we extract visual features from the video frames using ResNet. Next, we perform binaural audio generation and flipped audio classification with separate subnetworks conditioned on these visual features. Our learning method optimizes the overall loss as a weighted sum of the two task losses. We train and evaluate our model on the FAIR-Play and YouTube-ASMR datasets, and perform quantitative and qualitative evaluations to demonstrate the benefits of our approach over prior techniques.
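The weighted multi-task objective described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the choice of an L2 loss on the predicted left/right channels, a binary cross-entropy loss for the flip classifier, and the weights `w_gen` and `w_flip` are all assumptions for the sake of the example.

```python
import numpy as np

def generation_loss(pred_lr, true_lr):
    # Assumed L2 loss between predicted and ground-truth left/right channels
    # (e.g. spectrograms or waveforms), averaged over all elements.
    return float(np.mean((pred_lr - true_lr) ** 2))

def flip_classification_loss(p_flipped, is_flipped):
    # Binary cross-entropy for the auxiliary task: did we swap L and R?
    eps = 1e-7
    p = float(np.clip(p_flipped, eps, 1 - eps))
    return -(is_flipped * np.log(p) + (1 - is_flipped) * np.log(1 - p))

def multi_task_loss(pred_lr, true_lr, p_flipped, is_flipped,
                    w_gen=1.0, w_flip=0.1):
    # Overall loss = weighted sum of the two task losses.
    # w_gen and w_flip are hypothetical weights, not values from the paper.
    return (w_gen * generation_loss(pred_lr, true_lr)
            + w_flip * flip_classification_loss(p_flipped, is_flipped))
```

In training, the gradient of this combined loss would flow back through both subnetworks into the shared visual/audio feature extractor, which is what lets the flip-classification task regularize the binaural generation task.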