load pyramid crash #5
Comments
I seem to have trouble reproducing your error. Could you post your h5py, hdf5-tools and numpy versions? Does it manage to load data at all (as evidenced by 'Description: loading sparse data into hdf5')? Also, we are aware that GRAAL may be hard to deploy due to its specific requirements; if you could please describe the issues you encountered when trying to run the GUI, we would be more than happy to help facilitate and streamline the process.
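(For reference, a quick way to report the Python-side versions is shown below; hdf5-tools is a system package, so check its version through your package manager.)

```python
# Minimal sketch: print the Python-side library versions relevant here.
import h5py
import numpy

print("h5py :", h5py.version.version)       # h5py wrapper version
print("HDF5 :", h5py.version.hdf5_version)  # underlying HDF5 library version
print("numpy:", numpy.__version__)
```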
Thanks for the quick reply; here are the versions running on our computer: hdf5-tools 1.8.13. It seemed to like the input files and prints out 'pyramid built', but then it crashes. Here is the full output:
filtering already done...
Traceback (most recent call last):
...and here comes the traceback I pasted before.
I still cannot reproduce your error, even using the same exact versions of h5py and hdf5-tools as you are. Which specific sample files are you using as input (e.g. Trichoderma, Cerevisiae or your own dataset)?
We tried Cerevisiae. Then we tried Trichoderma, and by building the pyramid again (removing the pyramid folder before hitting 'build pyramid') it kind of worked, until PyCUDA complained afterwards with the following:
Exception in thread Thread-2:
PyCUDA ERROR: The context stack was not empty upon module cleanup.
A context was still active when the context stack was being cleaned up.
Aborted

Cerevisiae still does not work, neither with the pre-built pyramid present in the zip file nor when trying to build it again. I think it's because the abs_fragments_contacts_weighted.txt file in Trichoderma has three columns of integers, while the Cerevisiae one has four columns instead of three, and the last two columns are not integers but decimals. If you try to build the pyramid again, pyramid_sparse.py complains about not being able to convert the third column to integers. I was wondering what kind of file the HiCBox script outputs, then, because that is the one we will be using with our own data.

Thank you very much, and sorry for the inconvenience, but we really want to make this work. We are gathering the fixes we made to get the GUI running on a Debian 8.0 system in case they are helpful, mainly concerning the installation of wxPython 2.8, which is no longer in the Debian repositories.
The data used as input is indeed in 3 columns (in the form fragA, fragB, nb_contacts), but GRAAL is supposed to automatically convert the 4-column files (the ones with biases) when encountered. I suspect it didn't happen because it first checked whether a pyramid was present, found one, bypassed the column check and directly moved on to load data into HDF5, which led to your issue. Try deleting the pyramid_4_thresh_auto folder (located within cerevisiae_malaisyan_strain/analysis/pyramids) and any other folder you might find, and try again. The pyramid data was rebuilt correctly with no issue on my end, which may explain why I couldn't reproduce it at first.

As for the pycuda error, when does it occur? Do you get to see the GRAAL menu with all the available parameters (like 'Explode Genome', 'Allow repeated fragments', etc.)? Do you get to see a window called 'Structure visualization'? Does the error occur repeatedly? Also, just in case, please post your pycuda and CUDA versions.
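(A minimal sketch of the folder clean-up suggested above, assuming the sample layout mentioned; adjust the path and pattern to your own dataset.)

```python
# Minimal sketch: remove previously generated pyramid folders so that GRAAL
# rebuilds the pyramid and re-runs the 3/4-column check on the input file.
# The path below follows the sample layout mentioned above; adjust as needed.
import glob
import shutil

pyramid_dir = "cerevisiae_malaisyan_strain/analysis/pyramids"
for folder in glob.glob(pyramid_dir + "/pyramid_*"):
    shutil.rmtree(folder)
    print("removed", folder)
```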
pycuda is the latest: 2015.1.3. Yes, I see the GRAAL menu with the parameters you mention after loading the pyramid; the error comes after hitting 'start simulation'. It seems to start well, but it always crashes soon afterwards with the error posted above.
What about your pyopengl version? Notably, is it below or above 3.1.0? |
It's exactly 3.1.0, is that ok? |
There seems to have been a change of behavior between pyopengl 3.0.2 and 3.1.0. We're going to look into this, but in the meantime could you please downgrade to 3.0.2 and see if the issue still persists? |
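(A quick way to confirm which PyOpenGL version is active before and after the downgrade: PyOpenGL exposes it as `OpenGL.__version__`.)

```python
# Quick check of the installed PyOpenGL version (e.g. before/after downgrading).
import OpenGL

print(OpenGL.__version__)  # should read '3.0.2' after the downgrade
```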
Yes, the issue persists after downgrading.
After huge efforts to get the GUI working, we tried to run the program with the sample files, but it crashes when loading the pyramid after hitting the 'build pyramid' button. It reports a rather cryptic error related to the h5py module. Here is the traceback:
<HDF5 file "pyramid.hdf5" (mode r+)>
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in bootstrap_inner
self.run()
File "main_window.py", line 90, in run
lev = pyr.level(pyramid, 2)
File "/home/jtena/Desktop/graal/pyramid_sparse.py", line 1195, in __init
self.load_data(pyramid)
File "/home/jtena/Desktop/graal/pyramid_sparse.py", line 1209, in load_data
self.n_frags = np.copy(pyramid.data[str(self.level)]['nfrags'][0])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-build-uVX5Nb/h5py/h5py/_objects.c:2579)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-build-uVX5Nb/h5py/h5py/_objects.c:2538)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/dataset.py", line 384, in getitem
new_dtype = readtime_dtype(self.id.dtype, names)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/dataset.py", line 370, in readtime_dtype
raise ValueError("Field names only allowed for compound types")
ValueError: Field names only allowed for compound types
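(For context, this ValueError is what h5py raises when a dataset with a plain, non-compound dtype is indexed with a field name. A minimal sketch reproducing the same class of error is shown below; the file and field names are illustrative, not GRAAL's actual pyramid layout.)

```python
# Minimal sketch: field-name indexing works only on compound dtypes.
# File and dataset names here are illustrative.
import h5py
import numpy as np

with h5py.File("demo.hdf5", "w") as f:
    # Compound dtype: field-name indexing is allowed.
    f.create_dataset("table", data=np.zeros(4, dtype=[("n_frags", "i8")]))
    # Plain integer dtype: field-name indexing is not allowed.
    f.create_dataset("plain", data=np.arange(4))

with h5py.File("demo.hdf5", "r") as f:
    print(f["table"]["n_frags"][0])  # works on the compound dataset
    f["plain"]["n_frags"]            # ValueError: Field names only allowed for compound types
```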