Resources #21
This issue is for tracking resources related to sparse tensors and algorithms.
Hello, the first paper is by the TACO team at MIT/Stanford. They are working on quite a lot of follow-up work to improve the performance of sparse tensor algebra, as well as to support algorithms that aren't possible with the techniques in the first paper:
A lot of these benchmarks make some kind of apples-to-oranges comparison between TACO and other frameworks. If you take the core idea of TACO (the generalization of iteration to sparse tensors), you can see its value: by applying e.g. GPU scheduling and SIMD on top of it, you can get performance equivalent to other (hand-tweaked) algorithms. The value of TACO is the flexibility it offers in generalizing to more algorithm types, not its performance versus hand-optimized algorithms. A sketch of the iteration idea follows below.
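To make the "generalization of iteration to sparse tensors" concrete, here is a minimal Python sketch of the kind of loop nest TACO generates for the index expression `y(i) = A(i,j) * x(j)` with `A` stored in CSR. The `pos`/`crd` names follow TACO's level-format terminology, but this is an illustration written for this thread, not TACO's actual emitted code:

```python
def spmv_csr(n_rows, A_pos, A_crd, A_vals, x):
    """Sparse matrix-vector product y(i) = A(i,j) * x(j), with A in CSR.

    TACO derives this loop nest from the storage format: the dense outer
    loop comes from the dense row level, and the inner loop visits only
    the stored nonzeros via the pos/crd arrays of the compressed level.
    """
    y = [0.0] * n_rows
    for i in range(n_rows):                      # dense level: all rows
        for p in range(A_pos[i], A_pos[i + 1]):  # compressed level: nonzeros of row i
            j = A_crd[p]                         # column coordinate of this nonzero
            y[i] += A_vals[p] * x[j]
    return y

# Example: a 3x3 matrix with 4 nonzeros
# [[1, 0, 2],
#  [0, 0, 0],
#  [0, 3, 4]]
A_pos  = [0, 2, 2, 4]
A_crd  = [0, 2, 1, 2]
A_vals = [1.0, 2.0, 3.0, 4.0]
print(spmv_csr(3, A_pos, A_crd, A_vals, [1.0, 1.0, 1.0]))  # -> [3.0, 0.0, 7.0]
```

The point is that the same compilation strategy extends to formats and expressions where no hand-written kernel exists, which is where the flexibility argument above comes from.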
Thanks @hameerabbasi!
This isn't as directly related, but I'm sharing it as I have found it really interesting: Thallo is a DSL for least-squares optimisation. It JIT-compiles a high-level description of the parameters and cost function for a particular problem, and outputs a custom kernel that runs the optimisation for just that exact problem, in a matrix-free manner. I believe it's similar to the sort of approach that TACO will be taking, so I thought to share it in case it's of any inspiration for anyone.
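For anyone unfamiliar with the "matrix-free" part: in a Gauss-Newton step for least squares, a solver can run conjugate gradients on the normal equations using only Jacobian-vector products, without ever materialising the Jacobian `J` or `J^T J`. Below is a rough Python sketch of that idea using finite-difference products; the function name and structure are hypothetical and not Thallo's API (Thallo generates a specialised kernel from the cost-function description instead):

```python
import numpy as np

def gauss_newton_step_matrix_free(residual, x, cg_iters=50, eps=1e-6):
    """One matrix-free Gauss-Newton step for min ||residual(x)||^2.

    Solves (J^T J) d = -J^T r with conjugate gradients, where J is the
    Jacobian of `residual` at x. J is never materialised: J @ v is taken
    by forward finite differences, so only residual evaluations are
    needed. (Illustration only, not Thallo's actual API.)
    """
    r = residual(x)

    def Jv(v):
        # Directional derivative: J @ v ~= (r(x + eps*v) - r(x)) / eps
        return (residual(x + eps * v) - r) / eps

    def JTv(w):
        # J^T @ w, one entry per parameter; Jv(e) is the e-th column of J.
        # A real system would use reverse-mode AD or analytic expressions.
        return np.array([Jv(e) @ w for e in np.eye(len(x))])

    # Conjugate gradients on (J^T J) d = -J^T r, using only matvec closures.
    b = -JTv(r)
    d = np.zeros_like(x)
    res = b.copy()
    p = res.copy()
    rs_old = res @ res
    for _ in range(cg_iters):
        Ap = JTv(Jv(p))          # (J^T J) @ p without forming J^T J
        alpha = rs_old / (p @ Ap)
        d += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) < 1e-10:
            break
        p = res + (rs_new / rs_old) * p
        rs_old = rs_new
    return x + d

# Example: fit y = a * exp(b * t) (parameters a, b) to exact data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
res_fn = lambda p: p[0] * np.exp(p[1] * t) - y
p = np.array([1.0, 0.0])
for _ in range(10):
    p = gauss_newton_step_matrix_free(res_fn, p)
print(p)  # converges near [2.0, 0.5]
```

The appeal of the JIT-compiled approach is that the inner products can be specialised to the exact cost function, instead of going through generic residual callbacks as in this sketch.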