
Conversation

@JacobHass8
Contributor

See #1346 and #1173.

@codecov

codecov bot commented Jan 13, 2026

Codecov Report

❌ Patch coverage is 97.87234% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 95.29%. Comparing base (1d670c8) to head (27c1f98).

Files with missing lines | Patch % | Lines
include/boost/math/special_functions/gamma.hpp | 95.65% | 1 Missing ⚠️
Additional details and impacted files

Impacted file tree graph

@@           Coverage Diff            @@
##           develop    #1349   +/-   ##
========================================
  Coverage    95.29%   95.29%           
========================================
  Files          814      814           
  Lines        67422    67469   +47     
========================================
+ Hits         64249    64295   +46     
- Misses        3173     3174    +1     
Files with missing lines | Coverage Δ
test/compile_test/instantiate.hpp | 100.00% <100.00%> (ø)
test/test_igamma.hpp | 100.00% <100.00%> (ø)
include/boost/math/special_functions/gamma.hpp | 99.72% <95.65%> (-0.14%) ⬇️

Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 1d670c8...27c1f98.


@jzmaddock
Collaborator

I think your added series is the same as the existing:

BOOST_MATH_GPU_ENABLED inline T lower_gamma_series(T a, T z, const Policy& pol, T init_value = 0)

@JacobHass8
Contributor Author

I think your added series is the same as the existing:

BOOST_MATH_GPU_ENABLED inline T lower_gamma_series(T a, T z, const Policy& pol, T init_value = 0)

You're absolutely right. Based on the name of the function, I assumed it was a continued fraction implementation. This makes things a lot easier!

@JacobHass8
Contributor Author

I also changed the criterion for when the asymptotic series is used, based on the code paths in previous functions that use the asymptotic series. With my limited spot testing this gives accurate results, and it is ready for testing.

@JacobHass8
Contributor Author

@jzmaddock I think this PR for the lower incomplete gamma function is nearly complete. I'm having some trouble testing long double types on ARM64 architectures, where most of the tests are failing. The difference between my computed value and the expected value is of order $10^{-18}$, whereas the tolerance is $10^{-30}$. The gap is too large to justify simply loosening the tolerance.

I'm now thinking the expected values are not quite right. I haven't been able to get Mathematica to output arbitrarily large precision so I've been using the mpmath library for Python. Do you happen to have the Mathematica code you used to calculate values for the upper incomplete gamma function?

@ckormanyos
Member

I'll bet the platform you are describing uses a 10-byte floating-point representation for long double. That is about the precision where 80-bit long double tops out, but your control values expect the full 128 bits. You can express your tolerances as multiples of std::numeric_limits<T>::epsilon() and they will scale with T, the floating-point type the tests are instantiated with.

@JacobHass8
Contributor Author

JacobHass8 commented Jan 19, 2026

I'll bet the platform you are describing uses a 10-byte floating-point representation for long double. That is about the precision where 80-bit long double tops out, but your control values expect the full 128 bits. You can express your tolerances as multiples of std::numeric_limits<T>::epsilon() and they will scale with T, the floating-point type the tests are instantiated with.

I don't think that's the issue. In test_igamma.hpp, we set T tolerance = boost::math::tools::epsilon<T>() * 1000;. I think the tolerance is right, but my expected values aren't.

@JacobHass8
Contributor Author

Never mind, I got Mathematica to output higher precision. It matches the values that I've entered. So the implementation only has precision up to 10^{-18}. Here's the Mathematica code if anyone is interested.

Details

N[Log[GammaRegularized[100,0,1/10]],64]

@mborland
Member

Never mind, I got Mathematica to output higher precision. It matches the values that I've entered. So the implementation only has precision up to 10^{-18}. Here's the Mathematica code if anyone is interested.

Details

N[Log[GammaRegularized[100,0,1/10]],64]

Can you please add that as a comment in the test file for posterity?

@JacobHass8
Contributor Author

Can you please add that as a comment in the test file for posterity?

I've added this right before the test!

@JacobHass8
Contributor Author

I'm not sure what's going wrong with the long doubles. The approximation returns the same result as log(gamma_p) and is still off from Mathematica's result by 1e-18. Maybe at some point the long doubles are converted to doubles?

@jzmaddock
Collaborator

Not sure it's the whole issue, but it is an issue: this code, static_cast<T>(0.6), takes a double approximation to an inexact floating-point value and then converts it to a long double, which is clearly wrong. static_cast<T>(0.6L) is better, but will still fail for multiprecision code (if we test it) where T is wider than a long double. Using an exact binary value for the inputs tends to be my choice; otherwise you need something horrible like static_cast<T>(static_cast<T>(6) / 10). You should also use 6/10 as the input to Mathematica in this case, otherwise 0.6 is interpreted as something like a double-precision value.

@JacobHass8
Contributor Author

Not sure it's the whole issue, but it is an issue: this code, static_cast<T>(0.6), takes a double approximation to an inexact floating-point value and then converts it to a long double, which is clearly wrong. static_cast<T>(0.6L) is better, but will still fail for multiprecision code (if we test it) where T is wider than a long double. Using an exact binary value for the inputs tends to be my choice; otherwise you need something horrible like static_cast<T>(static_cast<T>(6) / 10). You should also use 6/10 as the input to Mathematica in this case, otherwise 0.6 is interpreted as something like a double-precision value.

I think that might actually be the issue! I just tried this code on my computer and got a difference of order 1e-32.

#include <iomanip>
#include <iostream>
#include <boost/math/special_functions/gamma.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>

typedef boost::multiprecision::cpp_bin_float_quad T;

T x = static_cast<T>(static_cast<T>(1) / 10);
T a = static_cast<T>(100);
T trueVal = static_cast<T>("-594.0968942751157965882143339808498054972378571169718495813176479");
std::cout << std::setprecision(64) << boost::math::lgamma_p(a, x) - trueVal << std::endl;

This is strange though because in previous tests for lgamma_q, we used

   BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(20), static_cast<T>(0.25)), static_cast<T>(-2.946458104491857816330873290969917497748067639461638294404e-31L), tolerance);

and this passed for long doubles.

Using an exact binary value for inputs tends to be my choice, otherwise you need something horrible like static_cast<T>(static_cast<T>(6) / 10)

Is there an easy way to convert 0.6 to a binary literal? I've seen calculators online, but haven't been able to correctly convert them to a long double. I might just go with the horrible static_cast for now to see if it works.

@jzmaddock
Collaborator

This is strange though because in previous tests for lgamma_q, we used

BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(20), static_cast<T>(0.25)), static_cast<T>(-2.946458104491857816330873290969917497748067639461638294404e-31L), tolerance);

and this passed for long doubles.

Yes, because all the input values have exact binary representations, so you could use 0.25f as the input argument and it would still calculate to whatever precision the result_type is.

Is there an easy way to convert 0.6 to a binary literal? I've seen calculators online, but haven't been able to correctly convert them to a long double. I might just go with the horrible static_cast for now to see if it works.

No, because 0.6 has no exact binary representation: it's an infinitely recurring number when expressed in base 2 (at least I think it is; 0.1 is the classic example). So you would need a way to store an infinite number of binary digits.

And that's not all... even if you use an exact fraction, say 6/10 calculated at the precision of T, then the input values for float precision are notably different (truncated) compared to long double, or indeed the arbitrary precision of mathematica. So if the function is ill-conditioned, the 0.5ulp error in the float input values can be enough to make a significant difference to the result.

So for the sake of simplicity, I always choose exact binary values, or, where we use randomly generated test data, the inputs are truncated to fit inside a float without precision loss first, and then the expected result is calculated. This can sometimes miss obscure issues where the last few bits of the input somehow get lost in the calculation, but for the sake of everyone's sanity it sure does save a lot of grief :)

@JacobHass8
Contributor Author

So for the sake of simplicity, I always choose exact binary values, or, where we use randomly generated test data, the inputs are truncated to fit inside a float without precision loss first, and then the expected result is calculated. This can sometimes miss obscure issues where the last few bits of the input somehow get lost in the calculation, but for the sake of everyone's sanity it sure does save a lot of grief :)

I hadn't even realized some decimals had exact binary values and others didn't. That's really good to know in the future. Thanks for all your help again with this PR (and the previous)!

@ckormanyos
Member

ckormanyos commented Jan 21, 2026

I hadn't even realized some decimals had exact binary values and others didn't.

For real @JacobHass8.
That is a profound point. When using even computer algebra systems, I still write stuff like

N[Whatever[1/4], 50]

to get 50 decimal digits of Whatever.

In the C/C++ world, all compilers will treat fractional powers of 2 and linear combinations thereof (and also reasonably small integral values) as exact. Stuff like 1/3 in C/C++ is inexact. Stuff like 0.625 in C/C++ will be exact. It is trippy.

Cc: @jzmaddock and @mborland

@JacobHass8
Contributor Author

Everything looks like it is passing now except g++-14 C++23 autodiff. The error code is below and looks unrelated to the changes that I've made.

Details

testing.capture-output ../../../bin.v2/libs/math/test/test_autodiff_8.test/gcc-14/release/x86_64/link-static/threading-multi/visibility-hidden/test_autodiff_8.run
====== BEGIN OUTPUT ======
Running 53 test cases...
unknown location(0): fatal error: in "test_autodiff_8/heuman_lambda_hpp<_Float32>": Throw location unknown (consider using BOOST_THROW_EXCEPTION)
Dynamic exception type: boost::wrapexcept<std::domain_error>
std::exception::what: Error in function boost::math::heuman_lambda<N5boost4math15differentiation11autodiff_v16detail4fvarIDF32_Lm5EEE>(N5boost4math15differentiation11autodiff_v16detail4fvarIDF32_Lm5EEE, N5boost4math15differentiation11autodiff_v16detail4fvarIDF32_Lm5EEE): When 1-k^2 == 1 then phi must be < Pi/2, but got phi = depth(1)(4.20396852,1,0,0,0,0)

test_autodiff_8.cpp(38): last checkpoint

*** 1 failure is detected in the test module "test_autodiff"

@JacobHass8
Contributor Author

I should still update the docs and add Cuda support which I'm just going to copy from #1346.
