
viewing a tensor as a new dtype with a different number of bytes per element is not supported. #107

Open
Levishery opened this issue Oct 29, 2024 · 2 comments

Comments

@Levishery

My PyTorch version is 1.10.0.
On line 143 of hilbert.py, the code
locs_uint8 = locs.long().view(torch.uint8).reshape((-1, num_dims, 8)).flip(-1)
raises the error:
viewing a tensor as a new dtype with a different number of bytes per element is not supported.
I understand that upgrading to PyTorch 1.12.0 would solve this problem, but my CUDA version doesn't support that. Is there an alternative for this line of code?
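For reference, one workaround (not from this thread; a sketch assuming `locs` has shape `(N, num_dims)` as in hilbert.py) is to extract the eight bytes of each int64 with shifts and masks instead of `Tensor.view(torch.uint8)`. This avoids the dtype-reinterpreting view that PyTorch only supports from 1.12 onward, and it stays on the GPU:

```python
import torch

def long_to_uint8_bytes(locs, num_dims):
    # Hypothetical replacement for:
    #   locs.long().view(torch.uint8).reshape((-1, num_dims, 8)).flip(-1)
    # The original reinterprets each int64 as 8 little-endian bytes and
    # then flips, yielding most-significant byte first. Shifting by
    # 56, 48, ..., 0 bits and masking the low byte produces the same
    # big-endian byte order directly, with no dtype view and no CPU copy.
    locs = locs.long().reshape(-1, num_dims, 1)
    shifts = torch.arange(56, -1, -8, device=locs.device)  # MSB first
    return ((locs >> shifts) & 0xFF).to(torch.uint8)
```

This uses only integer shift and mask ops, which are available for CUDA tensors in PyTorch 1.10, so no transfer to a NumPy array is needed.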

@Levishery
Author

Does torch.Tensor.view(torch.uint8) do the same thing as numpy.ndarray.view(np.uint8)?
Is changing the code to

    locs = locs.long()
    locs_uint8 = torch.from_numpy(locs.cpu().numpy().view(np.uint8))
    locs_uint8 = locs_uint8.reshape((-1, num_dims, 8)).flip(-1)

OK? (np.asarray cannot be called on a CUDA tensor directly, so the tensor has to be moved to the CPU first.)

@Gofinge
Member

Gofinge commented Oct 30, 2024

I think it would hurt performance to move a tensor from CUDA to an array on the CPU. How about asking ChatGPT for a solution?
