
DeepMusicGeneration

Exploring endless possibilities to generate music using deep learning!

Sample outputs | Video demo | PDF report


Introduction

Generating long pieces of music using deep learning is a challenging problem, as music contains structure at multiple timescales, from millisecond timings to motifs to phrases to repetition of entire sections. This repository explores various techniques to manipulate and generate music pieces in MIDI as well as raw audio format. It aims to provide code and checkpoints for such models for end use. We also integrated everything into a web application for easier use.

There are currently two branches in this repository:

  1. archive: This branch contains archived notebooks with code for running Transformers on MIDI and WAV files, both single- and multi-instrument; it also includes samples generated by those models.

Run the following command to clone the branch:

git clone -b archive --single-branch https://github.com/AniketRajpoot/DeepMusicGeneration.git
  2. master: This branch contains the work done for the B.Tech project as Colab notebooks (demo and report linked at the top of this README), covering the learning of inter-instrument dependencies, with the goal of building real-time applications to assist musicians. Run the following command to clone the branch:
git clone -b master --single-branch https://github.com/AniketRajpoot/DeepMusicGeneration.git

Acknowledgements

This project builds on the prior work referenced below:

  1. https://github.com/bearpelican/musicautobot/
  2. https://web.mit.edu/music21/
  3. https://streamlit.io/

Methodology

Tasks

We perform the following music-related tasks and provide code for each:

By combining all the models into a single pipeline, their full potential can be unleashed and one can compose a complete song!
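As a hedged illustration (not the repository's actual API), such a pipeline might chain the models so that each stage consumes the previous stage's output MIDI. All three functions below are stubs standing in for the pretrained models; none of these names exist in this repository:

# Minimal sketch of a single pipeline chaining the models. All three
# stage functions are stubs standing in for the pretrained models;
# none of these names exist in this repository.

def generate(seed_midi: bytes, genre: str) -> bytes:
    # Deep Music Generator: continue the seed melody in the given genre.
    return seed_midi  # stub

def harmonize(melody_midi: bytes) -> bytes:
    # Harmonization model: add accompanying instruments to the melody.
    return melody_midi  # stub

def infill(song_midi: bytes) -> bytes:
    # Deep Mask Modelling: rewrite masked bars for coherence.
    return song_midi  # stub

def compose(seed_midi: bytes, genre: str) -> bytes:
    # Each model consumes the previous model's output MIDI.
    return infill(harmonize(generate(seed_midi, genre)))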

Preprocessing
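The preprocessing code lives in the notebooks; as a minimal sketch of the general idea, MIDI files can be parsed with music21 (acknowledged above) and flattened into (pitch, duration) tokens. The input filename and the token scheme here are illustrative assumptions, not the repository's exact encoding:

# Sketch: parse a MIDI file and flatten it into simple (pitch, duration)
# tokens using music21. "input.mid" and the token format are assumptions.
from music21 import converter

score = converter.parse("input.mid")

tokens = []
for el in score.flatten().notes:  # notes and chords, in onset order
    if el.isNote:
        tokens.append((el.pitch.midi, float(el.duration.quarterLength)))
    elif el.isChord:
        tokens.append((tuple(p.midi for p in el.pitches),
                       float(el.duration.quarterLength)))

print(tokens[:10])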

Models

We provide pretrained checkpoints for the following models, used to perform the tasks listed in the tasks section:

Deep Music Generator

The model is trained on a subset of the LakhMIDI dataset with genre conditioning.
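As an aside on how genre conditioning is commonly implemented (not necessarily this model's exact scheme), a genre control token can be prepended to the note-token sequence so the model learns to associate it with genre-specific style. The vocabulary and offset below are hypothetical:

# Hypothetical genre-conditioning sketch: prepend a genre control token
# to the note-token sequence. The vocabulary and offset are assumptions.
GENRE_TOKENS = {"jazz": 0, "rock": 1, "classical": 2}
NOTE_TOKEN_OFFSET = len(GENRE_TOKENS)  # shift note tokens past genre ids

def add_genre_conditioning(note_tokens, genre):
    return [GENRE_TOKENS[genre]] + [t + NOTE_TOKEN_OFFSET for t in note_tokens]

print(add_genre_conditioning([60, 62, 64], "jazz"))  # [0, 63, 65, 67]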

To download the pretrained checkpoint, run the following command:

gdown --id 1LJKXFEap9YrQ7Md4S38CD5ergr1jRVML

Alternatively, download it directly from the link below:

https://drive.google.com/file/d/1LJKXFEap9YrQ7Md4S38CD5ergr1jRVML/view?usp=sharing
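If you prefer to fetch the checkpoint from Python (for example, inside a Colab notebook), gdown's Python API does the same job; the output filename here is an assumption, and the same pattern works for the Deep Mask Modelling checkpoint below:

# Download the Deep Music Generator checkpoint via gdown's Python API.
# The output filename "deep_music_generator.pth" is an assumption.
import gdown

url = "https://drive.google.com/uc?id=1LJKXFEap9YrQ7Md4S38CD5ergr1jRVML"
gdown.download(url, "deep_music_generator.pth", quiet=False)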

Deep Mask Modelling

To download the pretrained checkpoint, run the following command:

gdown --id 1lWR0VDT8jz_CbkCI8xBrlXyk8dAidH7t

Alternatively, download it directly from the link below:

https://drive.google.com/file/d/1lWR0VDT8jz_CbkCI8xBrlXyk8dAidH7t/view?usp=sharing

Dataset

All three models are pretrained on the LakhMIDI dataset. Due to limited resources, we were only able to train small models for music generation and music harmonization; MusicBERT, however, is a large model pretrained on the whole dataset. More about this here.

Training

Evaluation

Running Streamlit app
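The app lives in the master branch; assuming the entry-point script is named app.py (an assumption; check the branch for the actual filename), it can be launched with:

pip install streamlit
streamlit run app.py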
