How to plot uncertainties on maps #29
It is also possible to add hatching to maps.
I am not sure what you mean by uncertainty, particularly in the context of mapping the error metrics. We can plot the confidence of predictions (e.g., the highest_density/credibility/confidence interval of the prediction samples at any given observation).
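For what plotting the confidence of predictions could look like in practice, here is a minimal Python sketch that turns prediction samples into a per-observation interval width; the array shape, the helper name `interval_width`, and the 0.9 level are assumptions for illustration, not anything fixed in the project.

```python
import numpy as np

def interval_width(samples, level=0.9):
    """Width of the central credibility interval for each observation.

    samples: array of shape (n_samples, n_observations) holding the
    prediction samples; this shape convention is an assumption of the sketch.
    """
    tail = (1.0 - level) / 2.0
    lo, hi = np.quantile(samples, [tail, 1.0 - tail], axis=0)
    return hi - lo

# Made-up samples: 1000 draws for 50 map cells with varying spread.
rng = np.random.default_rng(0)
spread = rng.uniform(0.5, 2.0, size=50)
samples = rng.normal(loc=0.0, scale=spread, size=(1000, 50))
width = interval_width(samples, level=0.9)
```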
Yes, I was thinking of the confidence of predictions, not the error metrics here.
Ok. One approach could then be to plot the credibility interval of the predictions directly. If you would rather have the best estimate plotted, but with hatching or something else whenever the confidence spans a wide range, we would need to define what kind of range would constitute an "uncertain" range. My initial thought is that this range should be defined globally, so that it is comparable across models. I think some testing would be needed to find a suitable range. Ideas and comments are welcome.
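To make the hatching idea concrete, below is a minimal matplotlib sketch (not the project's actual plotting code) that shows the best estimate and overlays hatching wherever the interval width exceeds a globally defined threshold; the grid, the threshold value, and all variable names are made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fake gridded data standing in for a map: best estimate and credibility
# interval width on a 20 x 30 grid (shapes and values are made up).
rng = np.random.default_rng(1)
best_estimate = rng.normal(size=(20, 30))
width = rng.uniform(0.0, 3.0, size=(20, 30))

# A globally defined threshold, so "uncertain" means the same across models.
UNCERTAIN_WIDTH = 2.0  # assumed value for illustration only

fig, ax = plt.subplots()
im = ax.imshow(best_estimate, origin="lower", cmap="viridis")
fig.colorbar(im, ax=ax, label="best estimate")

# Overlay hatching where the interval width exceeds the threshold.
uncertain = (width > UNCERTAIN_WIDTH).astype(float)
ax.contourf(uncertain, levels=[0.5, 1.5], colors="none", hatches=["//"])
plt.show()
```

The invisible `contourf` layer with a `hatches` argument is a common way to stipple or hatch a subset of cells on top of an existing map without hiding the underlying colours.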
I think the first approach you suggest works well, perhaps with 0.95 or 0.99 too (in some cases, that is where the interesting trends show up, I think). Sara can coordinate with Xiaolong then; he had already started exploring options on this last week.
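For the 0.95 and 0.99 levels suggested here, only the quantiles change; a small self-contained comparison with made-up samples:

```python
import numpy as np

# Compare central interval widths at a few levels,
# using made-up prediction samples (1000 draws for 50 cells).
rng = np.random.default_rng(2)
samples = rng.normal(scale=rng.uniform(0.5, 2.0, size=50), size=(1000, 50))
for level in (0.9, 0.95, 0.99):
    tail = (1.0 - level) / 2.0
    lo, hi = np.quantile(samples, [tail, 1.0 - tail], axis=0)
    print(f"level {level:.2f}: mean interval width {(hi - lo).mean():.2f}")
```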
Originally posted by @paolavesco in #22 (comment)