A small issue regarding noiseXarray #22
Those functions expect numpy arrays as inputs, not ints like you tried to use, but I think I forgot to document that. I'll try to take a closer look at this later this evening! |
Oh! My bad, I thought that it would for some reason return an array from the given values, rather than having to pass it an array. I assume that it also returns an array of values then? Also, I wanted to know whether it uses Numba by default upon detecting it, making it run on CUDA cores, or whether I have to force CUDA acceleration myself. |
Yep, it will return an array. As an example you can take a look at the test file in the `tests/` dir.
I'll make sure to update the docs!
And yep, the lib will try to use numba automatically. I don't have much experience with it though, so I'd be really grateful if it got some more usage and testing.
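For instance, the array-in/array-out usage might look like the sketch below (a hedged example assuming opensimplex v0.4's module-level `seed`/`noise2array` helpers; it degrades gracefully if the package isn't installed):

```python
import numpy as np

try:
    import opensimplex  # assumes v0.4+ module-level API

    opensimplex.seed(1234)
    # Inputs must be float arrays, not ints:
    ix = np.arange(8, dtype=np.float64)
    iy = np.arange(8, dtype=np.float64)
    # Returns an array of noise values over the (ix, iy) grid:
    grid = opensimplex.noise2array(ix, iy)
    print(grid.shape)
except ImportError:
    grid = None  # opensimplex not installed; nothing to demonstrate
```

The key point from the comment above: you hand the function whole arrays and get an array back, rather than calling it once per point.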
|
I'll go ahead and test out a function with @vectorize and one without and compare the time it takes for each. |
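A minimal timing harness for that comparison could be sketched as follows (hypothetical: `plain_noise` is a stand-in for the real per-array noise call, since Numba may not be available; a `@vectorize`-decorated variant would be passed to `bench` in its place):

```python
import time
import numpy as np

def plain_noise(x, y):
    # Stand-in for the real noise computation; swap in the
    # @vectorize'd function here when comparing against it.
    return np.sin(x) * np.cos(y)

def bench(fn, x, y, repeats=5):
    # Return the best wall-clock time over `repeats` runs of fn(x, y).
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(x, y)
        best = min(best, time.perf_counter() - t0)
    return best

x = np.linspace(0.0, 10.0, 100_000)
y = np.linspace(0.0, 10.0, 100_000)
print(f"plain: {bench(plain_noise, x, y):.6f} s")
```

Taking the best of several runs avoids counting one-off costs such as JIT compilation on the first call, which would otherwise dominate a Numba comparison.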
Sounds interesting, much appreciated!
|
I've been banging my head all day trying to get this thing to work. It seems that forcing vectorization is either unnecessary, or it needs to be done in the module itself; I'm not sure. Removing the @vectorize decorator makes the issue go away, since it's then no longer using CUDA or multiple cores. Your module already does caching by default, which does speed up the process, but the issue seems to lie elsewhere.
|
I'm having exams in a week, so I really can't dig deeper into bigger issues like this for the next week or so :( |
Unfortunately, I have exams starting next week as well. I think I tried with the old method (by directly calling OpenSimplex.Function) which gave me the same result, but I'll give it another shot.
On Jan 7, 2022, Alex wrote:
> > Unknown attribute 'noise2' of type Module
>
> That seems odd? Are you using opensimplex v0.4? And instead of the module helpers you could try importing the OpenSimplex class and calling that instead (the old method).
|
I tried your idea, and the issue still occurs. I don't know if it's got to do with my own computer or what, but it is a bit annoying. I won't paste the whole Numba error block, so here's the important part:

```
numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Untyped global name 'os': Cannot determine Numba type of <class 'opensimplex.opensimplex.OpenSimplex'>

File "main.py", line 15:
def cuda_os(a,b):
    return os.noise2(a,b)
    ^
```

This comes from the following source code:

```python
from numba.cuda import target
from numba.np.ufunc.decorators import vectorize
from opensimplex import OpenSimplex

os = OpenSimplex()

@vectorize(['float32(float32, float32)', 'float64(float64, float64)'], target='cuda')
def cuda_os(a, b):
    return os.noise2(a, b)
```

And again with:

```python
def cuda_os(a, b):
    os = OpenSimplex()
    os.__init__(1234)
    return os.noise2(a, b)
```

where it returns:

```
numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Untyped global name 'OpenSimplex': Cannot determine Numba type of <class 'type'>
```

It's weird that this only happens with the @vectorize decorator, or anything else that uses CUDA. Everything else gives me some deprecation warnings but does its thing anyway.

I came back to writing this after looking at what CUDA supports: http://numba.pydata.org/numba-doc/latest/cuda/cudapysupported.html. It most likely has to do with the _init function, which returns two lists, and returning those is unsupported by CUDA, as per the last line in the link:

```python
perm = np.zeros(256, dtype=np.int64)
perm_grad_index3 = np.zeros(256, dtype=np.int64)
```

Looks like this is one thing preventing this module from becoming fully CUDA-parallelizable, which I would really love to see happen. I may have missed a detail in a different function, but from what I read they only do the calculations that produce the noise.

Have you considered using PyTorch in place of Numba? PyTorch lets you set the device for all necessary calculations, and from the looks of it, it uses many of the same functions NumPy does and can convert NumPy arrays to PyTorch tensors. This would eliminate the need to force CUDA acceleration, since it can be done by simply passing an argument for which device to use. |
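The device-selection pattern described above can be sketched roughly as follows (a hedged illustration, not code from either library: it assumes PyTorch may or may not be installed, and falls back to plain NumPy on CPU when it isn't):

```python
import numpy as np

try:
    import torch  # optional dependency in this sketch

    # PyTorch's device argument is what makes CPU/GPU selection trivial:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.from_numpy(np.linspace(0.0, 1.0, 8)).to(device)
    # Compute on the chosen device, then bring the result back as NumPy:
    result = (x * 2.0).cpu().numpy()
except ImportError:
    # Fallback: the equivalent CPU-only computation in plain NumPy.
    result = np.linspace(0.0, 1.0, 8) * 2.0
```

The same user code runs unchanged on CPU or GPU; only the `device` string differs, which is the appeal of the PyTorch approach mentioned above.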
Well, to finish off this interesting part of the project, I've gone ahead and made some changes to make it use PyTorch for easier parallelization, rather than having Numba be a pain in the neck. Here's the fork: https://github.com/zodiuxus/opensimplex

```
(-0.7320515695722086, 'cuda', 1.1209726333618164)
(-0.7320515695722086, 'cpu', 0.008008718490600586)
```
|
I've also found another issue with the noise generators that don't take arrays as input: for some reason, numbers randomly come out with a massive negative exponent regardless of whether I make them follow the rule I've set for them. I do notice a pattern here, with this happening at (0,0), (3,3), (4,2), and (2,4):

```python
>>> xy = [[(0 if os.noise3(i, j, 0.0)<-1 else os.noise3(i, j, 0.0),(i,j)) for i in range(5)] for j in range(5)]
>>> print(xy)
[[(5.2240987598031274e-67, (0, 0)), (-0.23584641815494026, (1, 0)), (0.04730512605377769, (2, 0)), (-0.03559870550161819, (3, 0)), (0.4973430820248515, (4, 0))],
 [(-0.02225418514523135, (0, 1)), (0.09588876902792749, (1, 1)), (-0.2394822006472489, (2, 1)), (-0.4947860481841064, (3, 1)), (-0.38147748611610544, (4, 1))],
 [(0.2314115625873986, (0, 2)), (0.16181229773462766, (1, 2)), (0.0754324983019698, (2, 2)), (0.022254185145231333, (3, 2)), (2.1910484798468403e-65, (4, 2))],
 [(-0.12944983818770212, (0, 3)), (-0.23908266410963255, (1, 3)), (0.2899836190019574, (2, 3)), (4.4558489421850186e-66, (3, 3)), (0.031883015701786116, (4, 3))],
 [(-0.07543249830196942, (0, 4)), (-0.2455551560190179, (1, 4)), (-1.103206738099601e-65, (2, 4)), (-0.15817651524231863, (3, 4)), (-0.06776139677973557, (4, 4))]]
```

Could it be an overflow? |
Nah I had the same struggles and errors when trying to add the
Yeah working with Numba has been a struggle. But there was a demand for it (see #4) and I was happy to be able to add it as an optional dependency. |
Ah, fair enough. It's possible to make both a Numba and a PyTorch version. I can probably look further into optimizing and cutting down most of the code for PyTorch, but I'd have to do this in about 3 weeks due to finals. |
Ugh, I just now noticed the negative exponents. Those values are so small it's barely worth noting; you might as well treat them as zero. |
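To put those exponents in perspective: values around 1e-67 are dozens of orders of magnitude below double-precision epsilon relative to the noise range of roughly [-1, 1], so they vanish in any arithmetic at that scale:

```python
# The suspicious values from the grid posted above:
vals = [5.2240987598031274e-67, 4.4558489421850186e-66,
        -1.103206738099601e-65, 2.1910484798468403e-65]

# Double-precision epsilon is about 2.2e-16, so relative to values of
# order 1 these are indistinguishable from zero:
assert all(1.0 + v == 1.0 for v in vals)
assert all(abs(v) < 1e-15 for v in vals)
print("all effectively zero")
```

In other words this is not an overflow: the generator is simply returning a value that is exactly or almost exactly zero at those lattice points, printed in scientific notation.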
Hmm, I think I'll close this since the docs have been updated and the thread has gone off topic. I haven't had any other demand for PyTorch, so I won't be doing any work on that for now, but feel free to open a new issue for supporting PyTorch, or reopen this one if you have further comments about the original issue. |
Perhaps a different attribute needs to be used?