Cannot export PyTorch model (ReDimNet) to ONNX #13
Comments
Hello! Thank you for submitting the issue. I see the problem is in the features, specifically in the spectrogram computation.
We are going to release more accurate models soon, pretrained on voxblink2 + cnceleb + vox2, and we'll finetune them with different features based on conv1d operations, which should be convertible to ONNX.
Thank you for your reply. Regarding the features "that are based on conv1d operations and that should be convertible to ONNX": will they be similar to https://github.com/adobe-research/convmelspec, "Convmelspec: Convertible Melspectrograms via 1D Convolutions"?
Yes, it will be similar in that both solutions convolve the signal with discrete Fourier transform kernels, but the implementations will differ.
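To make the idea concrete, here is a rough, hypothetical sketch of that approach (not the authors' or convmelspec's actual implementation): the windowed DFT basis is precomputed and stored as conv1d weights, so the spectrogram becomes an ordinary convolution that ONNX can represent. Parameter values below are placeholders.

```python
# Hypothetical sketch: magnitude spectrogram via conv1d with DFT kernels.
# Not the ReDimNet or convmelspec code; parameter values are placeholders.
import math
import torch
import torch.nn as nn


class Conv1dSpectrogram(nn.Module):
    def __init__(self, n_fft=512, hop_length=160):
        super().__init__()
        self.hop_length = hop_length
        window = torch.hann_window(n_fft)
        n = torch.arange(n_fft, dtype=torch.float32)
        k = torch.arange(n_fft // 2 + 1, dtype=torch.float32).unsqueeze(1)
        # Windowed real/imaginary DFT bases, shaped (freq_bins, 1, n_fft) for conv1d.
        cos_kernel = (torch.cos(2 * math.pi * k * n / n_fft) * window).unsqueeze(1)
        sin_kernel = (-torch.sin(2 * math.pi * k * n / n_fft) * window).unsqueeze(1)
        self.register_buffer("cos_kernel", cos_kernel)
        self.register_buffer("sin_kernel", sin_kernel)

    def forward(self, wav):                      # wav: (batch, num_samples)
        x = wav.unsqueeze(1)                     # (batch, 1, num_samples)
        real = nn.functional.conv1d(x, self.cos_kernel, stride=self.hop_length)
        imag = nn.functional.conv1d(x, self.sin_kernel, stride=self.hop_length)
        return torch.sqrt(real ** 2 + imag ** 2 + 1e-9)   # (batch, freq_bins, frames)
```

A front-end like this lowers to standard Conv/Mul/Sqrt nodes, which is what makes the resulting graph ONNX-friendly.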
Hi, I tried to find a solution for this while you are developing the custom spectrogram implementation. I followed a GitHub issue and was able to export the ReDimNet model to ONNX successfully, but I am not sure how it affects the model performance.
Code used for exporting:
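A minimal sketch of such an export, where the torch.hub loading call, the input shape, and the tensor names are assumptions rather than the exact script from this comment:

```python
# Hypothetical export sketch -- not the commenter's actual script.
import torch

# Assumption: ReDimNet loaded via torch.hub as in the IDRnD/ReDimNet README;
# the exact arguments may differ.
model = torch.hub.load("IDRnD/ReDimNet", "ReDimNet",
                       model_name="b0", train_type="ptn", dataset="vox2")
model.eval()

dummy_wav = torch.randn(1, 32000)   # assumed: 2 s of 16 kHz mono audio

torch.onnx.export(
    model,
    (dummy_wav,),
    "redimnet.onnx",
    input_names=["waveform"],
    output_names=["embedding"],
    opset_version=17,   # opset 17 defines an STFT op; exporter support varies
    do_constant_folding=True,
)
```

A quick sanity check is to run the same waveform through both the PyTorch model and onnxruntime and compare the embeddings (cosine similarity should be close to 1).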
Is this good enough or will this affect the model performance?
@pongthang Thanks for looking into it. I'm sorry, but we currently don't have the resources and time to evaluate the method you proposed. It should be pretty simple to check model performance after conversion.
Happy to share good news: we have released the first models pretrained on voxblink2 + cnceleb + vox2.
@pongthang Hi, it's great to hear that the model can be exported to ONNX. How are the performance and speed of the ONNX model?
Hi, performance is good and the same as PyTorch. Speed is also similar.
Thank you for your response! Based on your previous description, I successfully exported the ONNX model. However, I encountered an issue where the input length cannot be dynamic. I noticed that you mentioned a similar problem in a PyTorch issue. Have you found a solution for this? If so, could you kindly share your experience or workaround? Thank you very much!
I have found a solution; the code is as follows:
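A minimal sketch of a dynamic-length export along these lines, using `dynamic_axes` and loosely following the wespeaker script referenced below (the loading call, tensor names, and shapes are assumptions rather than the exact code from this comment):

```python
# Hypothetical sketch of a dynamic-length ONNX export -- names and shapes are assumptions.
import torch

# Assumption: ReDimNet loaded via torch.hub as in the IDRnD/ReDimNet README;
# the exact arguments may differ.
model = torch.hub.load("IDRnD/ReDimNet", "ReDimNet",
                       model_name="b0", train_type="ptn", dataset="vox2")
model.eval()

dummy_wav = torch.randn(1, 32000)   # the concrete length no longer matters once the axis is dynamic

torch.onnx.export(
    model,
    (dummy_wav,),
    "redimnet_dynamic.onnx",
    input_names=["waveform"],
    output_names=["embedding"],
    dynamic_axes={
        "waveform": {0: "batch", 1: "num_samples"},   # variable batch size and clip length
        "embedding": {0: "batch"},
    },
    opset_version=17,
)
```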
I referenced wespeaker's https://github.com/wenet-e2e/wespeaker/blob/master/wespeaker/bin/export_onnx.py. However, it cannot be used on GPU. I am still investigating the cause of the problem and looking for solutions.
I am trying to export the ReDimNet model from PyTorch to ONNX. Please help me out. The code I use is:

The error is: