Incompatible with latest faster-whisper #403
Comments
It seems a lot more broke from those changes, with the sudden switch from NumPy to PyTorch.
Yes, very weird they would do that.
I tried running stable-ts[fw] in a Jupyter notebook; it crashed at model.transcribe. It works from the command prompt though; not sure what causes it.
Thanks for this! Solved the crashing by downgrading faster-whisper to 1.0.0, changing my CUDA to 12.1, and switching PyTorch to the cu121 build.
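The workaround above can be sketched as shell commands; the exact versions and the cu121 wheel index are taken from that comment and may need adjusting for other setups:

```shell
# Pin faster-whisper to the last release reported to work with stable-ts
pip install "faster-whisper==1.0.0"

# Reinstall PyTorch built against CUDA 12.1 using the cu121 wheel index
pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121
```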
- updated `align()` and `transcribe_stable()` to be compatible with models on the latest faster-whisper commit (#403)
- added `pipeline_kwargs` to `load_hf_whisper()` for passing specific arguments to `transformers.AutoModelForSpeechSeq2Seq.pipeline()`
- added `"large-v3-turbo"` and `"turbo"` to `HF_MODELS` for loading `"openai/whisper-large-v3-turbo"` on Hugging Face
Looks like recent changes to faster-whisper broke compatibility with stable-ts, causing errors at transcription time.
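Since the breakage tracks the faster-whisper release line, a small pre-flight check can warn before `model.transcribe` crashes. This is a hedged sketch, not part of stable-ts: the `(1, 0, 0)` cutoff is an assumption taken from the downgrade workaround in this thread, and the function names are made up for illustration.

```python
# Sketch of a pre-flight compatibility check for the issue described above.
# Assumption: faster-whisper releases after 1.0.0 are the incompatible ones,
# per the downgrade fix reported in this thread.
from importlib.metadata import PackageNotFoundError, version


def parse_version(text: str) -> tuple:
    """Parse a dotted version string into a tuple of ints, ignoring any
    non-numeric suffix (e.g. "1.0.0rc1" -> (1, 0, 0))."""
    parts = []
    for piece in text.split("."):
        digits = ""
        for ch in piece:
            if not ch.isdigit():
                break
            digits += ch
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)


def faster_whisper_is_compatible(cutoff: tuple = (1, 0, 0)) -> bool:
    """Return True only when an installed faster-whisper is at or below
    the last release assumed to work with stable-ts."""
    try:
        installed = version("faster-whisper")
    except PackageNotFoundError:
        return False  # not installed at all
    return parse_version(installed) <= cutoff
```

Calling `faster_whisper_is_compatible()` before loading a model turns the hard crash into an early, explicit warning in user code.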