Hey, FYI: you can apply the learned IR just like any other IR response, so you would just need to save the learned variable of the reverb module as a WAV file and load it as an IR in your favorite convolution reverb. There's no code to do that out of the box, but it should only take a line or two.
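For example, something along these lines should do it. This is a rough sketch, not code from the repo: the `save_learned_ir` helper, the variable lookup by name, and the 16 kHz default sample rate are all assumptions you'd adjust for your own checkpoint.

```python
import numpy as np
from scipy.io import wavfile

def save_learned_ir(model, path='learned_ir.wav', sample_rate=16000):
    """Write the trainable reverb IR of a trained ddsp model to a mono WAV."""
    # Look for the reverb's IR among the model's trainable variables.
    # The name match is an assumption -- print the variable names if it fails.
    ir_vars = [v for v in model.trainable_variables if 'reverb' in v.name.lower()]
    assert ir_vars, 'No reverb variable found; inspect model.trainable_variables.'
    ir = ir_vars[0].numpy().flatten()

    # Peak-normalize so the IR won't clip when loaded into a plugin.
    ir = ir / (np.max(np.abs(ir)) + 1e-9)
    wavfile.write(path, sample_rate, ir.astype(np.float32))
```

Once you have `learned_ir.wav`, you can load it into any convolution reverb (e.g. the one in your DAW) and apply it to arbitrary audio, not just DDSP output.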
-
As someone who is closely involved in the electronic music production community, I'm fascinated by the potential of DDSP-based models for new kinds of audio processing.
The example of reverb transfer in the original paper is already something that would be incredibly impactful among producers if it can be done reliably. As I understand it, the way it works is that you train your model with the Reverb effect at the end, and then you can take that trained reverb and apply it to other signals. Does that training produce an IR that could be loaded into reverb modules in other programs? Or can it only be applied to other signals generated through DDSP?
Also, is this concept of "train the model on one source and apply it to another" possible with other effects? For example, a lot of sound design in electronic music is achieved through heavy modulation of the cutoff frequency of various filters. Could a DDSP model be trained to capture that filter movement and transfer it to another signal? Or perhaps "de-filter" the signal to show what it would have sounded like without that filter movement?
My machine learning and DSP knowledge is a bit more intuitive than academic, so I'd love a more concrete understanding of what's possible with DDSP, and of what I should be learning in order to implement things like this.