Reduce memory usage in the sieve #45
Comments
Would you be willing to submit a PR for this? cc @pabloferz, who I think is quite familiar with the sieve code.
I could definitely look into it, but I'll have to experiment with the (very cool) wheel idea first. One issue: I think I should also show some numbers, otherwise it's just speculation :p
I wanted to implement a segmented sieve once. The only problem is, AFAIK, that there is no native way in Julia to find the L1 cache size for any system. Of course there are a couple of packages that we could use, but I don't fancy the idea of depending on them (I'd like to keep the package dependency-free). An option is to choose a fixed size that certainly fits in L1 cache for the architectures that Julia supports, but that would miss some performance on systems with a bigger L1 cache. And yeah, in this case, if you want to submit a PR, I can certainly have a look.
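One way to get the best of both worlds is a fixed conservative default plus best-effort detection. A sketch of that idea (illustrative Python rather than Julia; the sysfs path is Linux-specific, and the 32 KiB fallback is my assumption, not a measured value):

```python
FALLBACK_L1_BYTES = 32 * 1024  # assumption: conservative lower bound for supported CPUs

def l1_cache_bytes():
    """Best-effort L1 data cache size: read Linux sysfs, else fall back."""
    path = "/sys/devices/system/cpu/cpu0/cache/index0/size"
    try:
        with open(path) as f:
            text = f.read().strip()  # typically something like "32K"
        if text.endswith("K"):
            return int(text[:-1]) * 1024
        return int(text)
    except (OSError, ValueError):
        # Non-Linux systems (or unexpected formats) get the safe constant.
        return FALLBACK_L1_BYTES
```

On platforms without that sysfs file this simply returns the constant, which is exactly the "fixed size that certainly fits" option with detection as a bonus where it is cheap.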
An optimistic (over)estimation of the cache size would still reduce cache misses with respect to the current implementation, but indeed it is messy to have a constant around. It would be interesting to see a cache-oblivious algorithm, but I doubt one exists. One idea is to let the user decide the segment size via the signature and provide a reasonable default.
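A hypothetical shape for such a signature (the function name and the 32 KiB default are my assumptions, not the thread's actual proposal, and the naive trial-division body is only a stand-in so the interface can be exercised):

```python
def primes_in(lo, hi, *, segment_size=32 * 1024):
    """Primes in [lo, hi]; callers may tune segment_size to their L1 cache.

    Stand-in body using trial division; a real implementation would sieve
    segment_size flags at a time and ignore nothing but the keyword default.
    """
    return [n for n in range(max(lo, 2), hi + 1)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))]
```

The point is only the keyword with a sensible default: a user on a CPU with a larger L1 cache could pass, say, `segment_size=64 * 1024` without the package needing any cache-detection dependency.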
Is there a way to see if a package is installed, and use it if it exists?
@oscardssmith just for reference, that kind of question is better suited for a site like https://discourse.julialang.org/
Since this is the first issue I have ever opened related to Julia, I feel like I should close it at some point as well. At the moment I have implemented an absolute monstrosity that unrolls the inner loop of the {2,3,5}-prime wheel sieve. The next step is to reduce the memory from O(n) to O(√n / log n) by saving just an O(1) chunk of memory that fits in L1 cache, and keeping an O(√n / log n) list of sieving primes up to √n. I'm hoping that would also be 5x to 10x faster for large n. Lastly, with Julia 1.3 it should not be hard to make it multithreaded, right?
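For readers unfamiliar with the wheel idea: a {2,3,5} wheel skips all multiples of 2, 3 and 5 up front, so only 8 of every 30 numbers remain as candidates. A small illustrative sketch (Python here, rather than the macro-generated Julia described above):

```python
from math import gcd

# Residues mod 30 coprime to 2*3*5; only numbers in these residue classes
# can be prime (beyond 2, 3, 5 themselves).
WHEEL = [r for r in range(1, 30) if gcd(r, 30) == 1]
# WHEEL == [1, 7, 11, 13, 17, 19, 23, 29]: 8 candidates per block of 30,
# which is why a wheel sieve can pack one block into a single byte.

def wheel_candidates(hi):
    """Yield 2, 3, 5 and every n <= hi coprime to 30, except 1."""
    yield from (p for p in (2, 3, 5) if p <= hi)
    for base in range(0, hi + 1, 30):
        for r in WHEEL:
            n = base + r
            if 1 < n <= hi:
                yield n
```

The candidates are a superset of the primes (for example 49 = 7 * 7 survives the wheel); crossing off those remaining composites is the job of the sieving loop that the wheel unrolling accelerates.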
I've now implemented a cache-friendly version here: https://github.com/haampie/FastPrimeSieve.jl. It generates the unrolled sieving loop with macros, which is kinda neat. I'm getting a 12x speedup 🚀 for the prime-counting function. Last things are to see how to contribute it back to this package and to consider multithreading.
Currently, memory usage of the prime sieve increases linearly with the size of the interval `[lo, hi]`, while the number of primes only grows asymptotically as `hi / log(hi)`. Furthermore, L1 memory caching is not exploited.

By implementing a segmented sieve both problems can be addressed at once: lowering memory consumption and increasing speed (in C++ I noticed up to ~~10x~~ 5x better performance).

The current algorithm is basically:

1. Find all primes `p` less than `√hi`.
2. Cross off the multiples of each `p`, starting at `p * p`.
3. Collect the remaining unmarked numbers in `[√hi, hi]`.
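This current approach is the classic Sieve of Eratosthenes over the whole range at once; a minimal sketch (illustrative Python, since the package itself is Julia):

```python
import math

def simple_sieve(hi):
    """Sieve of Eratosthenes with O(hi) memory: cross off multiples of each
    prime p <= sqrt(hi), starting at p * p, then collect what is unmarked."""
    is_prime = [False, False] + [True] * (hi - 1)  # flags for 0..hi
    for p in range(2, math.isqrt(hi) + 1):
        if is_prime[p]:
            for m in range(p * p, hi + 1, p):
                is_prime[m] = False
    return [n for n in range(2, hi + 1) if is_prime[n]]
```

The `is_prime` array is what grows linearly with `hi`, and for large `hi` every pass over it thrashes the cache.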
The suggested procedure is:

1. Find all primes `p` less than `√hi`.
2. Split `[√hi, hi]` into fixed-size segments / smaller intervals (fitting L1 cache).
3. Sieve each segment in turn with the stored primes.

You only need to store one segment (constant memory allocation) in steps 2/3, and if the segment fits in L1 cache it is extremely fast, since you're iterating over it multiple times when sieving.
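The suggested procedure can be sketched as follows (illustrative Python, not the package's implementation; the default segment length is an assumption and would in practice be tuned to the L1 cache):

```python
import math

def segmented_sieve(hi, segment_size=32 * 1024):
    """Primes up to hi using one reusable segment of `segment_size` flags."""
    limit = math.isqrt(hi)
    # Step 1: find all primes p <= sqrt(hi) with a small plain sieve.
    flags = [False, False] + [True] * max(limit - 1, 0)
    small = []
    for p in range(2, limit + 1):
        if flags[p]:
            small.append(p)
            for m in range(p * p, limit + 1, p):
                flags[m] = False
    primes = list(small)
    # Steps 2-3: walk [sqrt(hi)+1, hi] one fixed-size segment at a time,
    # crossing off multiples of the stored primes inside the current segment.
    for lo in range(limit + 1, hi + 1, segment_size):
        up = min(lo + segment_size - 1, hi)
        seg = [True] * (up - lo + 1)
        for p in small:
            # First multiple of p in [lo, up], but never below p * p.
            first = max(p * p, ((lo + p - 1) // p) * p)
            for m in range(first, up + 1, p):
                seg[m - lo] = False
        primes.extend(n for n in range(lo, up + 1) if seg[n - lo])
    return primes
```

Only `flags` (of size `√hi`) and one `seg` buffer are ever live, so memory stays essentially constant in the interval length while each segment is sieved entirely in cache.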
For reference see http://primesieve.org/