[BUG] Unable to access cuDF due to RuntimeError: cuDF failure : Unsupported type_id conversion to cudf #1803
Comments

It looks like you have an older version of …

@karlhigley May I use … in the Dockerfile so that I can always install the latest one?

@karlhigley, I got a build error: …

Ah sorry, I meant the latest version of …

@karlhigley, I am using this for the container image: … I got: … When I run … I got: …

You can build an image that way, but we don't generally guarantee the stability of the nightly images. Are you seeing the same issue building …?

@karlhigley, yes, I got the same error for …

@jperez999 Are there known version incompatibility issues between Pandas and cuDF that might explain this?
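For anyone hitting the same error, a quick way to narrow down whether the failure comes from cuDF itself or from the Merlin compatibility shim is to import the libraries one at a time and compare versions. This is a minimal diagnostic sketch, not something posted in the thread; it assumes the container's Python environment is otherwise usable.

```python
# Diagnostic sketch (not from the original thread): import the relevant
# libraries one by one to see where the RuntimeError is raised, and print
# versions so they can be checked against the Pandas/cuDF compatibility
# constraints for this RAPIDS release.
import pandas as pd
print("pandas   :", pd.__version__)

import cudf  # does a bare cuDF import already fail?
print("cudf     :", cudf.__version__)

import dask_cudf
print("dask_cudf:", dask_cudf.__version__)

# The import that fails in this report:
from merlin.core.compat import cudf as merlin_cudf
print("merlin.core.compat resolved cudf:", merlin_cudf is not None)
```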
Describe the bug
I am trying to run the example code at https://nvidia-merlin.github.io/NVTabular/main/api/ops/categorify.html and also the test at https://github.com/NVIDIA-Merlin/NVTabular/blob/main/tests/unit/examples/test_02-Advanced-NVTabular-workflow.py. The error (the RuntimeError in the title) is raised on this import:

from merlin.core.compat import cudf
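For reference, the linked Categorify example boils down to roughly the following. This is a minimal sketch; the column name and data are illustrative placeholders, not values from this report, and the failure occurs already at the import line.

```python
# Minimal sketch adapted from the Categorify documentation linked above.
# The column name and values are illustrative placeholders.
import nvtabular as nvt
from merlin.core.compat import cudf  # this import raises the RuntimeError reported here

# Build a tiny GPU DataFrame and categorify a single column.
df = cudf.DataFrame({"author": ["User_A", "User_B", "User_C", "User_C"]})
dataset = nvt.Dataset(df)

workflow = nvt.Workflow(["author"] >> nvt.ops.Categorify())
print(workflow.fit_transform(dataset).to_ddf().compute())
```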
Expected behavior
The import should succeed and the Categorify example should run without errors.
Environment details:
Platform: Debian 4.19.269-1
Python version: 3.8.10
PyTorch version (GPU?): 2.0.0 (with GPU support)
Environment location: Cloud (GCP), Docker
Method of NVTabular install: Docker
Docker image: nvcr.io/nvidia/merlin/merlin-pytorch:23.02; all cuDF libraries were installed by default by GCP.
Additional context

System:
GPU: Tesla T4
NVIDIA-SMI 510.47.03, Driver Version: 510.47.03
CUDA Version: 11.8 (Cuda compilation tools, release 11.8, V11.8.89; Build cuda_11.8.r11.8/compiler.31833905_0)
OS (container): Ubuntu 20.04.5 LTS

Package versions:
cudf: 22.8.0a0+304.g6ca81bbc78.dirty
dask-cudf: 22.8.0a0+304.g6ca81bbc78.dirty
rmm: 22.8.0a0+62.gf6bf047.dirty
merlin: 1.9.1
merlin-core: 0.5.0
merlin-dataloader: 0.0.3
merlin-models: 23.2.0
merlin-systems: 23.2.0
nvtabular: 23.2.0
torch: 2.0.0
triton: 2.0.0
tritonclient: 2.32.0
nvidia-cublas-cu11: 11.10.3.66
nvidia-cuda-cupti-cu11: 11.7.101
nvidia-cuda-nvrtc-cu11: 11.7.99
nvidia-cuda-runtime-cu11: 11.7.99
nvidia-cudnn-cu11: 8.5.0.96
nvidia-cufft-cu11: 10.9.0.58
nvidia-curand-cu11: 10.2.10.91
nvidia-cusolver-cu11: 11.4.0.1
nvidia-cusparse-cu11: 11.7.4.91
nvidia-nccl-cu11: 2.14.3
nvidia-nvtx-cu11: 11.7.91
nvidia-pyindex: 1.0.9