Optimize Error Handling and Regex Caching in Tensor Loading #221
base: main
Conversation
Hey! I'm happy to see a limit set for the memory usage in the LRU cache. Would it be possible for you to add a test for this change? I think a pytest test would suffice.
Perhaps
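A minimal sketch of such a pytest test, assuming the PR exposes a memory-bounded cache class (the `LRUCache` class below, its `max_memory_bytes` parameter, and the `sys.getsizeof`-based accounting are all stand-ins for whatever the PR actually implements):

```python
import sys
from collections import OrderedDict


class LRUCache:
    """Hypothetical memory-bounded LRU cache; the real class name and
    constructor signature in the PR may differ."""

    def __init__(self, max_memory_bytes):
        self.max_memory_bytes = max_memory_bytes
        self._data = OrderedDict()
        self._used = 0

    def put(self, key, value):
        if key in self._data:
            self._used -= sys.getsizeof(self._data.pop(key))
        self._data[key] = value
        self._used += sys.getsizeof(value)
        # Evict least-recently-used entries until back under the limit.
        while self._used > self.max_memory_bytes and self._data:
            _, evicted = self._data.popitem(last=False)
            self._used -= sys.getsizeof(evicted)

    def get(self, key):
        value = self._data[key]
        self._data.move_to_end(key)  # mark as recently used
        return value

    def __contains__(self, key):
        return key in self._data


def test_lru_cache_respects_memory_limit():
    cache = LRUCache(max_memory_bytes=1024)
    for i in range(100):
        cache.put(f"key{i}", "x" * 64)
    # Stored values stay under the configured limit.
    assert cache._used <= 1024
    # Oldest entries were evicted; the newest survive.
    assert "key0" not in cache
    assert "key99" in cache
```

Adjusting the assertions to the cache's real API should be straightforward once the test scaffolding is in place.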
…egex Operations with Caching
""" | ||
For path_tuple_to_string(), | ||
introducing a simple caching mechanism to avoid recomputing regex matches for paths that have already been processed. | ||
""" |
Is this docstring supposed to be here, or inside the path_tuple_to_string method?
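For reference, the caching mechanism the docstring describes can be as simple as wrapping the conversion in `functools.lru_cache`. This is only a sketch, not the PR's actual implementation; the regex pattern and the path format here are made-up examples:

```python
import functools
import re

# Hypothetical pattern; the real path_tuple_to_string() uses the
# project's own regexes over pytree path keys.
_LAYER_RE = re.compile(r"layers_(\d+)")


@functools.lru_cache(maxsize=None)
def path_tuple_to_string(path: tuple) -> str:
    """Convert a path tuple to a slash-joined string, caching results
    so repeated paths skip the regex work. Note lru_cache requires the
    argument to be hashable, which a tuple of strings is."""
    parts = []
    for key in path:
        m = _LAYER_RE.match(str(key))
        parts.append(f"layer_{m.group(1)}" if m else str(key))
    return "/".join(parts)
```

With this shape, the second call with the same tuple is a dict lookup, and `path_tuple_to_string.cache_info()` shows the hit rate.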
After delving a bit, I now think 32 MB for the LRU cache may be overkill. I don't think more than 250 KB at most should be necessary.
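A back-of-envelope estimate supports that order of magnitude. Assuming roughly 1,000 distinct tensor paths of about 40-60 characters each (both numbers are guesses, not taken from the PR):

```python
import sys

# Rough sizing for the cache: assume ~1,000 distinct tensor paths,
# each a short string. Both figures are illustrative assumptions.
n_paths = 1000
sample = "model/layers_15/attention/query/kernel"  # made-up example path
per_entry = sys.getsizeof(sample)  # string object overhead + characters
total_kb = n_paths * per_entry / 1024
print(f"~{total_kb:.0f} KB for {n_paths} cached paths")
```

Even with dict overhead on top, the cached strings themselves land well under 250 KB for typical model sizes under these assumptions.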
"delving" hahahaha (for reference: https://x.com/JeremyNguyenPhD/status/1774021645709295840) |
Hadn't seen this, and the stat doesn't really correspond to anything related to ChatGPT. I most certainly didn't need ChatGPT to come to that conclusion. I recommend trying both ways and running benchmarks to see which provides the best performance improvement.
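One way to run such a benchmark with the standard library's `timeit`, comparing an uncached conversion against an `lru_cache`-wrapped one (the `convert` function and its regex are stand-ins, and absolute timings will vary by machine):

```python
import functools
import re
import timeit

_PATTERN = re.compile(r"layers_(\d+)")  # example pattern, not the project's


def convert(path):
    # Uncached: the regex runs on every call.
    return "/".join(
        f"layer_{m.group(1)}" if (m := _PATTERN.match(p)) else p for p in path
    )


cached_convert = functools.lru_cache(maxsize=None)(convert)

# A workload with many repeated paths, where caching should pay off.
paths = [("layers_%d" % (i % 8), "mlp", "kernel") for i in range(1000)]


def run(fn):
    for p in paths:
        fn(p)


uncached = timeit.timeit(lambda: run(convert), number=20)
cached = timeit.timeit(lambda: run(cached_convert), number=20)
print(f"uncached: {uncached:.4f}s  cached: {cached:.4f}s")
```

The interesting comparison is how the gap changes as the ratio of repeated to unique paths shifts; with mostly unique paths the cache buys little and only adds memory.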
This PR introduces two key enhancements to the tensor loading process (fixes #220): improved error handling during tensor loading, and caching of regex matches in path_tuple_to_string(). These changes aim to improve the robustness and performance of tensor loading, particularly in distributed computing environments.