hcubature hangs on some integrand #4
HCubature hangs on the following integrand: x -> 1.0 + (x[1]*x[3]*sin(x[2]))^2, integrated over [0, 0, -0.2] to [0.2, 2π, 0.2] (the hcubature calls in the comments below). See also issue JuliaMath/Cubature.jl#25.
Hmm, weird, if I evaluate this integral with an increasing number of function evaluations, it seems like the estimated error is decreasing extremely slowly:

```julia
julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^5)
(0.5026855749435449, 2.9897655378679496e-5)

julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^6)
(0.5026855741377734, 2.9896795019367782e-5)

julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^7)
(0.5026855740388327, 2.9896689374725856e-5)

julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^8)
(0.5026855740264947, 2.9896676198455505e-5)
```

The correct integral (according to …)
Maybe it would be worth trying the alternative error estimate from Berntsen and Espelid (1990), as discussed with @pabloferz in pabloferz/NIntegration.jl#8, but my understanding is that this was to correct underestimates of the error, whereas here our error estimate is pretty good. (And even if we were underestimating the error, I don't understand why it would fail to terminate.)
The integral value comes almost exclusively from the integration of 1.0 on the domain, and is essentially just the volume of the box. Integrating either term on its own poses no problem; it is only the sum of the two that fails.
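A minimal way to check the two terms separately (my own sketch, not from the thread; the tolerance and maxevals values are just examples):

```julia
using HCubature

a, b = [0.0, 0.0, -0.2], [0.2, 2π, 0.2]

# Each term of the integrand on its own; a maxevals cap keeps the call from
# running forever if a term does turn out to misbehave.
hcubature(x -> 1.0, a, b, atol = 1e-6, maxevals = 10^6)
hcubature(x -> (x[1] * x[3] * sin(x[2]))^2, a, b, atol = 1e-6, maxevals = 10^6)
```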
I believe the reason for the more complex error estimation scheme was not only about underestimations. We could try changing the error estimation procedure, but I believe the original Genz-Malik rule would have to be replaced by the Genz-Malik rule from "An Imbedded Family of Fully Symmetric Numerical Integration Rules" (which is the one recommended by Berntsen, Espelid and Genz in their later paper "An Adaptive Algorithm for the Approximate Calculation of Multiple Integrals"). The exact rule recommended is not reported explicitly anywhere (except, I believe, in the codes out there that implement it), but we could ask professor Genz about it. In the worst case the rule is "almost" all there in the first paper I mentioned, so it shouldn't be too hard to get a good complete rule from the data in it.
I forgot to mention: the rule I was referring to above is a 7th-degree rule that requires 39 function evaluations per region. Over at https://github.com/pabloferz/NIntegration.jl I have an 11th-degree rule (for 3D only) that we could also add here as an option for the user. EDIT: The problem with the 11th-degree rule is that it is not available in a way that could easily be adapted to handle arbitrary floating-point types.
@stevengj wrote: …

Sorry for butting in just to nitpick, but this is not the correct value of this integral. If I'm not misreading the integrand, it can be solved analytically. The "1" part yields just the integration volume (0.16π = 0.5026548245743669), while the rest yields 2π/9 * 0.2^6 = 4.468042885105485e-5; summed up, that is 0.502699505003218. Could you double-check whether Cubature.pcubature really gives the result you quoted?
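For reference, this analytic value is easy to reproduce numerically (my own check, not part of the thread; the variable names are just for illustration):

```julia
# The integrand splits into a constant part and a separable product, each of
# which can be integrated in closed form.
vol  = 0.2 * 2π * 0.4                      # ∫ 1 dV = volume of the box = 0.16π
rest = (0.2^3 / 3) * π * (2 * 0.2^3 / 3)   # ∫ x1^2 dx1 * ∫ sin(x2)^2 dx2 * ∫ x3^2 dx3 = (2π/9)*0.2^6
vol + rest                                 # ≈ 0.502699505003218, i.e. (22502//140625)*π
```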
@traktofon You are correct: the integration of …

The result @stevengj quotes from …
For the record, with Cuba.jl:

```julia
julia> using Cuba

julia> g(x, y, z) = 1.0+(x * z * sin(y))^2
g (generic function with 1 method)

julia> cuhre((x, f) -> f[1] = g(0.2 * x[1], 2 * pi * x[2], 0.4 * x[3] - 0.2) * 0.2 * 2 * pi * 0.4, 3, abstol = 2e-7, reltol = 1e-12, maxevals = 1e7)
Component:
1: 0.5026994828177098 ± 1.99996602008537e-7 (prob.: 0.0)
Integrand evaluations: 4833747
Fail: 0
Number of subregions: 19031

julia> ans[1][1] - (22502//140625) * pi # true error
-2.218550809729436e-8
```

Much better result with the Divonne algorithm:

```julia
julia> divonne((x, f) -> f[1] = g(0.2 * x[1], 2 * pi * x[2], 0.4 * x[3] - 0.2) * 0.2 * 2 * pi * 0.4, 3, abstol = 1e-10, reltol = 1e-10, maxevals = 1e7)
Component:
1: 0.5026995050031596 ± 9.946332467145315e-11 (prob.: 0.0)
Integrand evaluations: 4944204
Fail: 0
Number of subregions: 986

julia> ans[1][1] - (22502//140625) * pi
-5.828670879282072e-14
```

As mentioned above, also NIntegration.jl gives a good result:

```julia
julia> using NIntegration

julia> nintegrate(g, (0.0,0.0,-0.2), (0.2,2pi,0.2), abstol = 1e-10, reltol = 1e-10)
(0.5026995050032179, 6.36325584885474e-11, 1905, 8 subregions)

julia> ans[1] - (22502//140625) * pi
0.0
```
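For context on the calls above: Cuba.jl integrates over the unit cube, so the integrand is composed with an affine map to the original box and multiplied by the constant Jacobian 0.2 · 2π · 0.4. A generic wrapper along these lines is sketched below (my own sketch, not part of the thread; cuba_integrand and g3 are hypothetical names, and the keyword arguments follow the Cuba.jl version used above):

```julia
using Cuba

# Hypothetical helper: wrap an integrand f defined on the box [a, b] so it can
# be handed to Cuba.jl, which always integrates over the unit cube [0, 1]^n.
# The Jacobian of the affine map is the constant prod(b .- a).
function cuba_integrand(f, a, b)
    jac = prod(b .- a)
    return (x, out) -> (out[1] = f(a .+ (b .- a) .* x) * jac)
end

g3(x) = 1.0 + (x[1] * x[3] * sin(x[2]))^2

# Equivalent (up to rounding) to the explicit rescaling in the cuhre call above.
cuhre(cuba_integrand(g3, [0.0, 0.0, -0.2], [0.2, 2π, 0.2]), 3,
      abstol = 2e-7, reltol = 1e-12, maxevals = 1e7)
```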
For all routines (…
I looked again at the Berntsen/Espelid/Genz paper, more carefully, and it is maddeningly vague on one point regarding error estimation: …

All of this information is presumably in the DCUHRE code in some form, but for copyright reasons I'd rather not look at it. A "clean room" approach, where someone else looks at the code and posts the relevant mathematical details (but not the code!), should be fine, though.
I believe the heuristic constants appear in a table further down in the Berntsen/Espelid/Genz paper.
I've been there. That is why I only implemented the 13th rule over …
I pushed a PR that should fix this issue; it doesn't touch the rule used or the error estimation procedure. Anyway, I think there are some integrands that could benefit from changing those, but that could be experimented with later on.