Foley Music: Learning to Generate Music from Videos

Authors

Chuang Gan, Deng Huang, Peihao Chen, Joshua B. Tenenbaum, Antonio Torralba

Published on

08/28/2020

Categories

ECCV

In this paper, we introduce Foley Music, a system that can synthesize plausible music for a silent video clip of people playing musical instruments. We first identify two key intermediate representations for a successful video-to-music generator: body keypoints from videos and MIDI events from audio recordings. We then formulate music generation from videos as a motion-to-MIDI translation problem. We present a Graph-Transformer framework that can accurately predict MIDI event sequences in accordance with the body movements. The MIDI events can then be converted to realistic music using an off-the-shelf music synthesizer tool. We demonstrate the effectiveness of our models on videos containing a variety of music performances. Experimental results show that our model outperforms several existing systems in generating music that is pleasant to listen to. More importantly, the MIDI representations are fully interpretable and transparent, thus enabling us to perform music editing flexibly. We encourage the readers to watch the demo video with audio turned on to experience the results.
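
To make the motion-to-MIDI formulation concrete, the sketch below shows one plausible way to wire a graph encoder over per-frame body keypoints into a Transformer decoder that autoregressively predicts discrete MIDI event tokens. The module names, layer sizes, 388-token event vocabulary, and uniform skeleton adjacency are illustrative assumptions for this example, not the authors' released implementation.

# Illustrative sketch only: a graph encoder over per-frame body keypoints
# feeding a Transformer decoder that predicts discrete MIDI event tokens.
# Layer sizes, the 388-token event vocabulary, and the uniform adjacency
# are assumptions made for this example.
import torch
import torch.nn as nn


class KeypointGraphEncoder(nn.Module):
    """Encodes (batch, time, joints, 2) keypoints into per-frame features."""

    def __init__(self, num_joints: int, hidden: int = 256):
        super().__init__()
        # Placeholder skeleton adjacency: uniform, row-normalized, with self-loops.
        adj = torch.ones(num_joints, num_joints)
        self.register_buffer("adj", adj / adj.sum(dim=-1, keepdim=True))
        self.proj = nn.Linear(2, hidden)
        self.mix = nn.Linear(hidden, hidden)

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.proj(keypoints))       # embed each joint's (x, y)
        h = torch.relu(self.mix(self.adj @ h))     # propagate features along skeleton edges
        return h.mean(dim=2)                       # pool joints -> (batch, time, hidden)


class MotionToMIDI(nn.Module):
    """Autoregressive Transformer decoder from motion features to MIDI event tokens."""

    def __init__(self, num_joints: int = 25, vocab_size: int = 388, hidden: int = 256):
        super().__init__()
        self.encoder = KeypointGraphEncoder(num_joints, hidden)
        self.token_emb = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, keypoints: torch.Tensor, midi_tokens: torch.Tensor) -> torch.Tensor:
        memory = self.encoder(keypoints)                     # video-side context
        tgt = self.token_emb(midi_tokens)                    # teacher-forced MIDI tokens
        t = tgt.size(1)
        causal = torch.triu(
            torch.full((t, t), float("-inf"), device=tgt.device), diagonal=1
        )
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.head(out)                                # logits over the event vocabulary


# Example: an 8-second clip at 30 fps with 25 keypoints per frame.
model = MotionToMIDI()
pose = torch.randn(1, 240, 25, 2)               # (batch, frames, joints, xy)
tokens = torch.randint(0, 388, (1, 512))        # target MIDI event sequence
logits = model(pose, tokens)                    # (1, 512, 388)

At inference time, the event tokens would be generated autoregressively rather than teacher-forced, and the resulting MIDI sequence rendered to audio with an off-the-shelf synthesizer, as described above.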

This paper has been published at ECCV 2020.

Please cite our work using the BibTeX below.

@inproceedings{gan2020foley,
  title={Foley Music: Learning to Generate Music from Videos},
  author={Chuang Gan and Deng Huang and Peihao Chen and Joshua B. Tenenbaum and Antonio Torralba},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020}
}