hcubature hangs on some integrand #4

Closed

mzaffalon opened this issue Jul 24, 2017 · 12 comments
@mzaffalon

HCubature hangs on the following integrand:

hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6)

See also issue JuliaMath/Cubature.jl/issues/25

@stevengj
Member

Hmm, weird, if I evaluate this integral with an increasing number of function evaluations, it seems like the estimated error is decreasing extremely slowly:

julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^5)
(0.5026855749435449, 2.9897655378679496e-5)

julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^6)
(0.5026855741377734, 2.9896795019367782e-5)

julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^7)
(0.5026855740388327, 2.9896689374725856e-5)

julia> hcubature(x -> 1.0+(x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6, maxevals=10^8)
(0.5026855740264947, 2.9896676198455505e-5)

The correct integral (according to Cubature.pcubature, which converges to machine precision with no problem) is 0.5026548245743667, so the error estimate is actually pretty accurate (the true error is 3e-5 after maxevals=10^8).
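
(A quick arithmetic check of that claim, not in the original comment, using the numbers above:)

0.5026855740264947 - 0.5026548245743667   # ≈ 3.07e-5: the maxevals=10^8 estimate minus the pcubature value, consistent with the ~3e-5 error estimate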

@stevengj
Member

Maybe it would be worth trying the alternative error estimate from Berntsen and Espelid (1990) as discussed with @pabloferz in pabloferz/NIntegration.jl#8, but my understanding is that this was to correct underestimates of the error, whereas here our error estimate is pretty good. (And even if we were underestimating the error, I don't understand why it would fail to terminate.)

@mzaffalon
Author

The integral value comes almost exclusively from the integration of 1.0 over the domain and is 2π*0.2*0.4 = 0.5026548245743669, as you say. The remaining part also seems to be computed correctly by hcubature:

julia> hcubature(x -> (x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6)
(4.468042885105483e-5, 2.795208725939191e-20)

It is only the sum of the two that fails.
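
(As a sanity check of my own, not part of the original report: the constant piece on its own is also unproblematic, since the degree-7 cubature rule integrates a constant exactly, so only the combined integrand stalls.)

hcubature(x -> 1.0, [0,0,-0.2], [0.2,2π,0.2], atol=1e-6)   # expected: the exact volume 2π*0.2*0.4 with a (near-)zero error estimate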

mzaffalon reopened this Jul 24, 2017
@pabloferz
Member

I believe the reason for the more complex error-estimation scheme was not only about underestimations. We could try changing the error-estimation procedure, but I believe the original Genz-Malik rule would have to be replaced by the Genz-Malik rule from "An Imbedded Family of Fully Symmetric Numerical Integration Rules" (which is the one recommended by Berntsen, Espelid and Genz in their later paper "An Adaptive Algorithm for the Approximate Calculation of Multiple Integrals").

The exact rule recommended is not reported explicitly anywhere (except, I believe, in the codes out there that implement it), but we could ask Professor Genz about it. In the worst case the rule is "almost" all there in the first paper I mentioned, so it shouldn't be too hard to get a good complete rule from the data in it.

@pabloferz
Member

pabloferz commented Jul 25, 2017

I forgot to mention, the rule I was referring to above is a 7th degree rule that requires 39 function evaluations per region. In https://github.com/pabloferz/NIntegration.jl I have an 11th degree rule (for 3D only) that we could also add here and offer as an option to the user (as Cuba does, for example). At least the scheme in NIntegration.jl does not suffer from this problem for this integrand.

EDIT: The problem with the 11th degree rule is that it is not available in a way that can be easily adapted to handle arbitrary floating-point types.

@traktofon

@stevengj wrote:

The correct integral (according to Cubature.pcubature, which converges to machine precision with no problem) is 0.5026548245743667

Sorry for butting in just to nitpick, but this is not the correct value of this integral. If I'm not misreading the integrand, it can be solved analytically. The "1" part yields just the integration volume (0.16π = 0.5026548245743669), while the rest yields 2π/9 * 0.2^6 = 4.468042885105485e-5; summing these gives 0.502699505003218. Could you double-check whether Cubature.pcubature really gives the result you quoted?
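
(A quick sketch of that analytic value, added for completeness; the separation into three 1-D integrals follows from the product form of the integrand:)

# non-constant part: ∫_0^0.2 x^2 dx * ∫_0^{2π} sin(θ)^2 dθ * ∫_{-0.2}^{0.2} z^2 dz
#                  = (0.2^3/3) * π * (2*0.2^3/3) = 2π/9 * 0.2^6
nonconstant = 2π/9 * 0.2^6        # ≈ 4.468042885105485e-5
volume      = 0.2 * 2π * 0.4      # ≈ 0.5026548245743669
volume + nonconstant              # ≈ 0.502699505003218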

@mzaffalon
Author

mzaffalon commented Jul 25, 2017

@traktofon You are correct: the integration of sin(x[2])^2 fails using pcubature, and I think it is still related to JuliaMath/Cubature.jl/issues/24: replacing sin by cos should not change the value of the integral, because the integrand is a periodic function integrated over its period, but it does change it, as you can see in the following example:

julia> pcubature(x -> (x[1]*x[3]*cos(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], abstol=1e-8)  
(8.93608577021097e-5, 1.3552527156068805e-20)                                          
                                                                                       
julia> pcubature(x -> (x[1]*x[3]*sin(x[2]))^2, [0,0,-0.2], [0.2,2π,0.2], abstol=1e-8)  
(1.78693094034391e-36, 1.78693094034391e-36)                                           

The result @stevengj quotes from pcubature effectively drops the sin^2 term.

@giordano
Member

giordano commented Jul 25, 2017

For the record, Cuba.jl computes the correct result (within the requested accuracy, though it converges quite slowly):

julia> using Cuba

julia> g(x, y, z) = 1.0+(x * z * sin(y))^2
g (generic function with 1 method)

julia> cuhre((x, f) -> f[1] = g(0.2 * x[1], 2 * pi * x[2], 0.4 * x[3] - 0.2) * 0.2 * 2 * pi * 0.4, 3, abstol = 2e-7, reltol = 1e-12, maxevals = 1e7)
Component:
 1: 0.5026994828177098 ± 1.99996602008537e-7 (prob.: 0.0)
Integrand evaluations: 4833747
Fail:                  0
Number of subregions:  19031

julia> ans[1][1] - (22502//140625) * pi # true error
-2.218550809729436e-8

A much better result is obtained with the Divonne algorithm:

julia> divonne((x, f) -> f[1] = g(0.2 * x[1], 2 * pi * x[2], 0.4 * x[3] - 0.2) * 0.2 * 2 * pi * 0.4, 3, abstol = 1e-10, reltol = 1e-10, maxevals = 1e7)
Component:
 1: 0.5026995050031596 ± 9.946332467145315e-11 (prob.: 0.0)
Integrand evaluations: 4944204
Fail:                  0
Number of subregions:  986

julia> ans[1][1] - (22502//140625) * pi
-5.828670879282072e-14

As mentioned above, NIntegration can also compute this integral (here actually to machine precision):

julia> using NIntegration

julia> nintegrate(g, (0.0,0.0,-0.2), (0.2,2pi,0.2), abstol = 1e-10, reltol = 1e-10)
(0.5026995050032179, 6.36325584885474e-11, 1905, 8 subregions)

julia> ans[1] - (22502//140625) * pi
0.0
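
(A side note of mine, not from the original comment, on the reference value used above: 22502//140625 is just the exact rational form of the analytic result quoted earlier, since 0.16 = 22500/140625 and 2*0.2^6/9 = 2/140625:)

(22500//140625 + 2//140625) * π   # = (22502//140625) * π ≈ 0.502699505003218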

@mzaffalon
Author

For all routines (Cubature, HCubature, NIntegration, Cuba), the number of function evaluations seems very wasteful compared to the minimum of 60 required to achieve machine precision using a tensor product of Gaussian points and weights:

using QuadGK, LinearAlgebra   # QuadGK.gauss for nodes/weights, LinearAlgebra for dot

Nρ, Nθ, Nz = 2, 15, 2
ρ, w_ρ = QuadGK.gauss(Nρ)   # Gauss–Legendre nodes and weights on [-1, 1]
θ, w_θ = QuadGK.gauss(Nθ)
z, w_z = QuadGK.gauss(Nz)

# tensor-product grid of nodes and the corresponding product weights
ρθ = hcat(repeat(ρ, inner=[Nθ,1]), repeat(θ, outer=[Nρ,1]))
ρθz = hcat(repeat(ρθ, inner=[Nz,1]), repeat(z, outer=[Nρ*Nθ,1]))
w_ρθ = repeat(w_ρ, inner=[Nθ,1]) .* repeat(w_θ, outer=[Nρ,1])
w_ρθz = repeat(w_ρθ, inner=[Nz,1]) .* repeat(w_z, outer=[Nρ*Nθ,1])

# integrand on [-1,1]^3 after the affine change of variables (constant factors dropped)
f(ρ, θ, z) = (ρ+1)^2 * sin(π*(θ+1))^2 * z^2
dot(w_ρθz, f.(ρθz[:,1], ρθz[:,2], ρθz[:,3])) / (2/3 * 8/3) # normalized to 1
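
(As a follow-up of my own, not part of the original comment: mapping the tensor-product result back onto the original integral and comparing with the exact value; the change-of-variables bookkeeping below is mine.)

full = 0.2*2π*0.4 +                        # constant part: the integration volume
       (0.1*π*0.2) * 0.01 * 0.04 *         # Jacobian and amplitude factors from x₁ = 0.1(ρ+1), x₂ = π(θ+1), x₃ = 0.2z
       dot(w_ρθz, f.(ρθz[:,1], ρθz[:,2], ρθz[:,3]))
full - (22502//140625) * π                 # expected to be ≈ 0 (machine precision)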

@stevengj
Member

I looked again at the Berntsen/Espelid/Genz paper, more carefully, and it is maddeningly vague on one point regarding error estimation:
[screenshot of the relevant error-estimation passage from the paper]
That is, they use a combination of multiple null rules, but don't say which ones! Ref. 14 discusses these rules in more detail, but again is vague about which specific ones should be used. Nor do they give the values of their "heuristic constants" c. (This is all in addition to the two-level error estimation described in eq. 12.)

All of this information is presumably in the DCUHRE code in some form, but for copyright reasons I'd rather not look at it. A "clean room" approach, where someone else looks at the code and posts the relevant mathematical details (but not the code!), should be fine, though.

@pabloferz
Member

I believe the heuristic constants appear in a table further down in the Berntsen/Espelid/Genz paper.

That is, they use a combination of multiple null rules, but don't say which ones!

I've been there. That's why I only implemented the 13th rule in NIntegration (the rule and null rules are explicitly reported). I've emailed Professor Genz asking him about the null rules, but meanwhile I'm playing with the data in Genz and Malik's "An Imbedded Family of Fully Symmetric Numerical Integration Rules" paper to see if I can get anywhere; if so, I'll submit a PR.

@pabloferz
Member

I pushed a PR that should fix this issue; it doesn't touch the rule used or the error-estimation procedure. That said, I think some integrands could benefit from changing those, but that can be experimented with later on.

pabloferz added commits to pabloferz/HCubature.jl referencing this issue (Aug 1–4, 2017)