How to plot uncertainties on maps #29

Open · kvelleby opened this issue Oct 13, 2023 · 5 comments

@kvelleby (Contributor)

One option is to use alpha levels to plot uncertainty on a map, as done here: https://nhess.copernicus.org/articles/22/1499/2022/#&gid=1&pid=1, especially Figure 3. I am exploring this with Xiaolong.

Originally posted by @paolavesco in #22 (comment)
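
A rough sketch of the alpha-level idea, for illustration only (the GeoDataFrame, its column names, and the normalization are placeholders, not an agreed format):

```python
import matplotlib as mpl
import matplotlib.pyplot as plt
import geopandas as gpd

# Hypothetical GeoDataFrame with a point estimate and an uncertainty measure per unit.
gdf = gpd.read_file("units.geojson")  # assumed columns: 'estimate', 'uncertainty'

# Map the estimates to colors, then fade each unit by its normalized uncertainty.
cmap = mpl.colormaps["viridis"]
norm = mpl.colors.Normalize(gdf["estimate"].min(), gdf["estimate"].max())
rgba = cmap(norm(gdf["estimate"].to_numpy()))

# Higher uncertainty -> lower alpha (more transparent).
unc = gdf["uncertainty"].to_numpy()
rgba[:, 3] = 1.0 - (unc - unc.min()) / (unc.max() - unc.min() + 1e-12)

fig, ax = plt.subplots(figsize=(8, 6))
gdf.plot(color=[tuple(c) for c in rgba], ax=ax)
ax.set_title("Point estimate, faded by uncertainty")
plt.show()
```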

@kvelleby (Contributor, Author)

It is also possible to add hatching to maps.

@kvelleby (Contributor, Author)

I am not sure what you mean by uncertainty, particularly in the context of mapping the error metrics.

We can plot the confidence of predictions (e.g., the highest_density/credibility/confidence interval of the prediction samples at any given observation).
We can explore the sensitivity of error metrics (but this is currently not done). See #19 for a discussion.
We can plot the mean ignorance scores converted to a probability (np.exp(-IGN)). This would tell us, on average, how high a probability the model assigned to the actual outcome.
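
For the last point, the conversion itself is trivial (assuming the ignorance scores are in natural-log units, as np.exp implies; if they are base-2, use 2 ** -ign instead):

```python
import numpy as np

# Hypothetical per-observation mean ignorance scores.
ign = np.array([0.2, 0.7, 1.5, 3.0])

# Average probability the model assigned to the observed outcome.
assigned_prob = np.exp(-ign)
print(assigned_prob)  # [0.819 0.497 0.223 0.050] (rounded)
```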

@paolavesco

Yes, I was thinking of the confidence of predictions here, not the error metrics.

@kvelleby (Contributor, Author)

Ok. One approach could then be to use plotting.get_quantiles(low=0.25, high=0.75) and plot df.high - df.low using plotting.choropleth(). (The low and high percentiles could of course be other numbers.) This could be shown as a separate plot. Sara could work on this, I think.
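
A minimal sketch of the idea (the long-format samples and column names below are placeholders; the actual plotting.get_quantiles() and plotting.choropleth() helpers would replace the pandas steps here):

```python
import numpy as np
import pandas as pd

# Illustrative prediction samples: one row per spatial unit and posterior draw.
samples = pd.DataFrame({
    "unit_id": np.repeat([1, 2, 3], 1000),
    "prediction": np.random.default_rng(0).gamma(2.0, 1.0, 3000),
})

# Per-unit 25th and 75th percentiles, analogous to get_quantiles(low=0.25, high=0.75).
quantiles = samples.groupby("unit_id")["prediction"].quantile([0.25, 0.75]).unstack()
quantiles.columns = ["low", "high"]

# The interquartile spread to be mapped, e.g. with a choropleth.
quantiles["spread"] = quantiles["high"] - quantiles["low"]
print(quantiles)
```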

If you would rather plot the best estimate, with hatching or something else added wherever the confidence interval spans a wide range, we would need to define what kind of range constitutes an "uncertain" range. My initial thought is that this range should be defined globally, so that it is comparable across models. I think some testing would be needed to find a suitable range. Ideas and comments are welcome.
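
Something like this rough sketch could work for the hatching variant (the GeoDataFrame columns and the threshold value are placeholders that would need to be agreed on):

```python
import matplotlib.pyplot as plt
import geopandas as gpd

# Hypothetical GeoDataFrame with a best estimate and a credible-interval width per unit.
gdf = gpd.read_file("units.geojson")  # assumed columns: 'best_estimate', 'ci_width'

# A single, globally defined threshold so hatching is comparable across models.
UNCERTAIN_WIDTH = 5.0  # placeholder value; would need tuning

fig, ax = plt.subplots(figsize=(8, 6))
gdf.plot(column="best_estimate", cmap="viridis", legend=True, ax=ax)

# Overlay hatching on units whose interval width exceeds the global threshold.
uncertain = gdf[gdf["ci_width"] > UNCERTAIN_WIDTH]
uncertain.plot(ax=ax, facecolor="none", edgecolor="grey", hatch="///")

plt.show()
```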

@paolavesco

I think the first approach you suggest works well, perhaps also with 0.95 or 0.99 (in some cases, the interesting trends show up there, I think). Sara can coordinate with Xiaolong then; he had already started exploring options on this last week.
