
Memory Limit #23

Open · ge00rgem opened this issue Mar 27, 2023 · 11 comments

@ge00rgem

Hello,

We are trying to analyze a large number of images and were wondering if it is at all possible to increase the program's memory limit. It currently seems to only be able to handle ~15 MB worth of images. After that, all images are imported entirely black with predictions at 0% for all pathologies.

Thank you!

@ieee8023
Member

There is no set memory limit, so there is likely some error causing the processing to fail. Can you include the output of the browser debug console?

An alternative is to use the TorchXRayVision library to process the images in Python. The model named "all" is the same as what is used in Chester (just a slight difference in calibration): https://github.com/mlmed/torchxrayvision

@ge00rgem
Author

ge00rgem commented Mar 27, 2023

I am using the Windows application. If I add studies one by one, once I hit ~15 MB worth of images, I get the following error:

index.htm?local=true:306 Status: Error! Failed to compile fragment shader.
system.js?v=1.198:298 Error: Failed to compile fragment shader.
at OM (tf-2.0.1.min.js:17)
at t.e.createProgram (tf-2.0.1.min.js:17)
at tf-2.0.1.min.js:17
at tf-2.0.1.min.js:17
at e.n.getAndSaveBinary (tf-2.0.1.min.js:17)
at e.n.runWebGLProgram (tf-2.0.1.min.js:17)
at tf-2.0.1.min.js:17
at Object.kernelFunc (tf-2.0.1.min.js:17)
at h (tf-2.0.1.min.js:17)
at tf-2.0.1.min.js:17
index.htm?local=true:1 WebGL: CONTEXT_LOST_WEBGL: loseContext: context lost

Before reaching ~15MB, the console shows this warning:

High memory usage in GPU: 1307.86 MB, most likely due to a memory leak
n.acquireTexture @ tf-2.0.1.min.js:17
n.uploadToGPU @ tf-2.0.1.min.js:17
n.runWebGLProgram @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
kernelFunc @ tf-2.0.1.min.js:17
h @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
e.scopedRun @ tf-2.0.1.min.js:17
e.runKernelFunc @ tf-2.0.1.min.js:17
div_ @ tf-2.0.1.min.js:17
div @ tf-2.0.1.min.js:17
pg.div @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
e.scopedRun @ tf-2.0.1.min.js:17
e.tidy @ tf-2.0.1.min.js:17
h @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
e.scopedRun @ tf-2.0.1.min.js:17
e.runKernelFunc @ tf-2.0.1.min.js:17
(anonymous) @ tf-2.0.1.min.js:17
mean_ @ tf-2.0.1.min.js:17
mean @ tf-2.0.1.min.js:17
e.mean @ tf-2.0.1.min.js:17
prepare_image_resize_crop @ system.js?v=1.198:328
prepare_image @ system.js?v=1.198:337
predict_real @ system.js?v=1.198:396
predict @ system.js?v=1.198:289
img.onload @ system.js?v=1.198:144
load (async)
reader.onload @ system.js?v=1.198:142
load (async)
(anonymous) @ system.js?v=1.198:138

@ieee8023
Member

Interesting. Does it work if you run it in the web browser? Possibly the browser packaged with the offline version is too out of date to support the GPU you have.

@ge00rgem
Author

I want to avoid the browser version if possible because the file names contain sensitive information, and coming up with a coding scheme and changing all the names could be a bit of a hassle.

I downloaded the TorchXRayVision library, but I'm a little lost as to which of the Python files I should use.

@ieee8023
Member

The web version does not send any data. It is all processed locally.

You can use this script: https://github.com/mlmed/torchxrayvision/blob/master/scripts/process_image.py

It should work like this:

$ python3 process_image.py ../tests/00000001_000.png
{'preds': {'Atelectasis': 0.50500506,
           'Cardiomegaly': 0.6600903,
           'Consolidation': 0.30575264,
           'Edema': 0.274184,
           'Effusion': 0.4026162,
           'Emphysema': 0.5036339,
           'Enlarged Cardiomediastinum': 0.40989172,
           'Fibrosis': 0.53293407,
           'Fracture': 0.32376793,
           'Hernia': 0.011924741,
           'Infiltration': 0.5154413,
           'Lung Lesion': 0.22231922,
           'Lung Opacity': 0.2772148,
           'Mass': 0.32237658,
           'Nodule': 0.5091847,
           'Pleural_Thickening': 0.5102617,
           'Pneumonia': 0.30947986,
           'Pneumothorax': 0.24847917}}

I can write a version of that utility that will process an entire folder and output a CSV file, if that would work better for you?
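Such a folder-to-CSV utility could be sketched as below. The `predict` function here is a hypothetical stand-in for the per-image model call (the logic in process_image.py); everything else is standard-library code.

```python
import csv
from pathlib import Path

def predict(image_path):
    # Hypothetical stand-in for the per-image TorchXRayVision model call;
    # the real version would return a full {pathology: probability} dict.
    return {"Atelectasis": 0.5, "Cardiomegaly": 0.66}

def process_folder(folder, out_csv):
    """Run predict() on every image in `folder` and write one CSV row per image."""
    paths = sorted(p for p in Path(folder).iterdir()
                   if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".dcm"})
    if not paths:
        return 0
    rows = [{"filename": p.name, **predict(p)} for p in paths]
    fieldnames = ["filename"] + sorted(rows[0].keys() - {"filename"})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

Because the file names only appear in the local CSV, this keeps sensitive names on the user's machine.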

@ge00rgem
Author

Yes, that would certainly be helpful! Thank you!

@ieee8023
Member

@ge00rgem
Author

I got the following error when I ran that script:

[screenshot of the error]

@ieee8023
Member

Strange. Try updating torch with pip install torch --upgrade

Maybe that issue has to do with using the CPU; you can try running the script with -cuda to use the GPU, if that machine has one.

@ge00rgem
Author

I got it to work! It didn't like that I essentially had some packages installed twice (once before importing this project and then again as part of the project/package). I got impatient, and instead of figuring out which ones it did not like, I just uninstalled all the packages I had and started fresh, haha.

You mentioned that there were some calibration differences between this code and the algorithm behind Chester. Could you please outline the key differences so that we are aware?

Thanks so much for all your help!

@ieee8023
Member

There is some scaling done on the predictions >0.5 to make the interface more usable: https://github.com/mlmed/chester-xray/blob/master/res/js/system.js#L639 with the scaling factor specified here: https://github.com/mlmed/chester-xray/blob/master/models/xrv-all-45rot15trans15scale/config.json#L25

So the output from the model on the command line does not include this scaling, in case you see different numbers. The performance is not impacted because the AUC scores remain the same.
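For illustration only (the real formula and factor live in the linked config.json): any monotone rescaling of the predictions above the operating point preserves their ranking, which is why the AUC is unchanged. A toy sketch, with a made-up factor:

```python
def rescale(p, op_point=0.5, factor=2.0):
    """Toy monotone rescaling of predictions above an operating point.

    The factor of 2.0 is made up for illustration; Chester's actual
    value and formula are in the linked config.json. Clamping at 1.0
    aside, a monotone map like this leaves the prediction ranking,
    and hence the AUC, unchanged.
    """
    if p <= op_point:
        return p
    return min(1.0, op_point + (p - op_point) * factor)

preds = [0.31, 0.50, 0.66, 0.72]
scaled = [rescale(p) for p in preds]
```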
