Implementation of the log of the lower incomplete gamma function #1349
base: develop
Conversation
Codecov Report

```
@@           Coverage Diff            @@
##           develop    #1349   +/-   ##
========================================
  Coverage    95.29%   95.29%
========================================
  Files          814      814
  Lines        67422    67469    +47
========================================
+ Hits         64249    64295    +46
- Misses        3173     3174     +1
```

Continue to review the full report in Codecov by Sentry.
I think your added series is the same as the existing one.
You're absolutely right. Based on the name of the function, I assumed it was a continued fraction implementation. This makes things a lot easier!
I also changed the criteria for when the asymptotic series is used, based on the code paths in previous functions that used it. With my limited spot testing, this gives accurate results and is ready for testing.
@jzmaddock I think this PR for the lower incomplete gamma function is nearly complete. I'm having some trouble testing long double types on ARM64 architectures, where most of the tests are failing. The difference between my computed value and the expected value makes me think the expected values are not quite right. I haven't been able to get Mathematica to output arbitrarily large precision, so I've been using the mpmath library for Python. Do you happen to have the Mathematica code you used to calculate values for the upper incomplete gamma function?
I'll bet the platform you are describing uses 10-byte floating-point representations for `long double`.
I don't think that's the issue.
Never mind, I got Mathematica to output higher precision. It matches the values that I've entered, so the implementation only has precision up to 10^-18. Here's the Mathematica code if anyone is interested:

```
N[Log[GammaRegularized[100, 0, 1/10]], 64]
```
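For anyone without Mathematica, the same reference value can be reproduced with the mpmath library mentioned earlier in the thread. This is a minimal sketch, assuming mpmath is installed; `gammainc(a, 0, x, regularized=True)` is mpmath's regularized lower incomplete gamma P(a, x), the counterpart of `GammaRegularized[a, 0, x]`:

```python
from mpmath import mp, gammainc, log

mp.dps = 64  # work with 64 decimal digits of precision

a = 100
x = mp.mpf(1) / 10  # exact rational 1/10 at working precision

# P(a, x): regularized lower incomplete gamma
p = gammainc(a, 0, x, regularized=True)
print(log(p))  # ≈ -594.096894275115796588214333980...
```

This matches the expected value used in the test below to well past `long double` precision.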
Can you please add that as a comment in the test file for posterity?
I've added this right before the test! |
I'm not sure what's going wrong with the `cpp_bin_float_quad` tests.
Not sure it's the whole issue, but it is an issue.
I think that might actually be the issue! I just tried this code on my computer and got a difference of order 1e-32.

```cpp
typedef boost::multiprecision::cpp_bin_float_quad T;
T x = static_cast<T>(static_cast<T>(1) / 10);
T a = static_cast<T>(100);
T trueVal = static_cast<T>("-594.0968942751157965882143339808498054972378571169718495813176479");
std::cout << std::setprecision(64) << boost::math::lgamma_p(a, x) - trueVal << std::endl;
```

This is strange though, because previous tests included

```cpp
BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(20), static_cast<T>(0.25)), static_cast<T>(-2.946458104491857816330873290969917497748067639461638294404e-31L), tolerance);
```

and this passed for long doubles.
Is there an easy way to convert the float test inputs to type `T` without losing precision?
Yes, because all the input values have exact binary representations, so you could use 0.25f as the input argument and it would still calculate to whatever precision the result_type is.
No, because 0.6 has no exact binary representation: it's an infinitely recurring number when expressed in base 2 (at least I think it is; 0.1 is the classic example). So you would need a way to store an infinite number of binary digits.

And that's not all: even if you use an exact fraction, say 6/10 calculated at the precision of T, then the input values for float precision are notably different (truncated) compared to long double, or indeed the arbitrary precision of Mathematica. So if the function is ill-conditioned, the 0.5ulp error in the float input values can be enough to make a significant difference to the result.

For the sake of easiness, I always choose exact binary values, or where we use randomly generated test data, the inputs are truncated to fit inside a float without precision loss first, and then the expected result is calculated. This can sometimes miss obscure issues where the last few bits of the input are somehow getting lost in the calculation, but for the sake of everyone's sanity, it sure does save a lot of grief :)
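The distinction above can be seen directly with Python's standard library, which can report the exact rational value a float actually stores. This is an illustrative sketch, not part of the PR:

```python
from fractions import Fraction

# 0.25 = 2^-2 is a fractional power of two, so the double stores it exactly.
print(Fraction(0.25))  # 1/4

# 0.6 recurs infinitely in base 2, so the double only stores the nearest
# representable value, which is not 6/10.
print(Fraction(0.6))   # 5404319552844595/9007199254740992
```

Inputs like 0.25 therefore survive conversion between float, long double, and arbitrary precision unchanged, while 0.6 is a slightly different number at each precision.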
I hadn't even realized some decimals have exact binary representations and others don't. That's really good to know in the future. Thanks for all your help again with this PR (and the previous)!
For real, @JacobHass8. In the C/C++ world, all compilers take fractional powers of 2 (and linear combinations thereof), as well as reasonably small integral values, to be exact. Cc: @jzmaddock and @mborland
Everything looks like it is passing now except g++-14 C++23 autodiff. The error output is below and looks unrelated to the changes that I've made:

```
testing.capture-output ../../../bin.v2/libs/math/test/test_autodiff_8.test/gcc-14/release/x86_64/link-static/threading-multi/visibility-hidden/test_autodiff_8.run
====== BEGIN OUTPUT ======
Running 53 test cases...
unknown location(0): fatal error: in "test_autodiff_8/heuman_lambda_hpp<_Float32>": Throw location unknown (consider using BOOST_THROW_EXCEPTION)
Dynamic exception type: boost::wrapexcept<std::domain_error>
std::exception::what: Error in function boost::math::heuman_lambda<N5boost4math15differentiation11autodiff_v16detail4fvarIDF32_Lm5EEE>(N5boost4math15differentiation11autodiff_v16detail4fvarIDF32_Lm5EEE, N5boost4math15differentiation11autodiff_v16detail4fvarIDF32_Lm5EEE): When 1-k^2 == 1 then phi must be < Pi/2, but got phi = depth(1)(4.20396852,1,0,0,0,0)
test_autodiff_8.cpp(38): last checkpoint
*** 1 failure is detected in the test module "test_autodiff"
```
I should still update the docs and add CUDA support, which I'm just going to copy from #1346.
See #1346 and #1173.