Issues: aws/sagemaker-huggingface-inference-toolkit
Issues list
#128 Can't create an inference service for models that depend on packages, e.g. espeak (opened Jun 23, 2024 by ghubnerr)
#126 Custom inference code: model_fn() takes more positional arguments than expected (opened Jun 17, 2024 by dil-bhantos)
#111 Endpoint creation completes before custom model_fn finishes loading resources (opened Feb 12, 2024 by Tripping-Hazard)
#108 SageMaker endpoint inference error when loading an HF model from an S3 bucket after the transformers update (opened Nov 13, 2023 by miteshkotak)
#105 get_pipeline function passes a Path object rather than a PretrainedTokenizer (opened Oct 1, 2023 by jpang32)
#98 How to enable batch inference on an AWS-deployed serverless model from the Hub? (opened Sep 6, 2023 by jmparejaz)
#85 Support passing model_kwargs to pipeline [enhancement: new feature or request] (opened May 10, 2023 by lukealexmiller)