This is a fixed-precision floating point library; how much precision is a compile-time constant. Its purpose is to be more accurate, and less of an appeal to authority, than doubles and math.h. It does not load any values (that would be cheating, not to mention a problem when the precision changes), but it does precompute and cache a number of values, such as e and pi.
My motive for implementing big number arithmetic was to get the correct constants for symmetric multistep methods for n-body gravitational simulation. Doubles only took me so far. It turns out that implementing a big number library is a waste of time, because gcc comes with a floating point big number library, mpfr, which does all this stuff well already. But on the other hand, that requires setting up libraries, and it isn't clear how to do that. g++ installed them on my computer, but i686-pc-cygwin-g++ doesn't compile even hello-world programs, and an hour of googling and experimenting did not bring joy. If I supply plain .h and .cpp files that do the job with no external dependencies (no assembly, no math.h, no mpfr.h), that avoids configuration issues. And anyhow, bignum arithmetic isn't all that big. If I've done it myself I'm depending on fewer black boxes. And maybe I'm learning something useful in the process.
How to use this library: download base.h, bigfloat.h, and bigfloat.cpp into your source tree somewhere. As an example of using them, multistep.cpp #includes "bigfloat.h" and is compiled by "g++ -O3 bigfloat.cpp multistep.cpp -o multistep". You should probably replace base.h with your existing primitives and modify bigfloat.h and bigfloat.cpp to use those primitives instead. base.h, bigfloat.h, and bigfloat.cpp (and the example multistep.cpp) are public domain, no strings attached, no warranty implied. You can test the library with "g++ -g -DBIGFLOAT_TEST bigfloat.cpp -o bigfloat".
Timings: about 9x slower than mpfr at 1000-digit precision (104 digits of base 2^32), and around 1.5x slower than mpfr at 100-digit precision (11 digits of base 2^32), going from this page and accounting for my 3.6GHz CPU. Accuracy: pi computed to 8192 hexadecimal digits is accurate for the first 8191 digits.
The datatype is a fixed-size array of 32-bit digits, a 64-bit exponent, a 1-byte length, and a boolean sign bit. It's not a very compact representation for simple numbers like 1. The exponent is base 2^32, that is, one unit of exponent per digit. The digits are not initialized beyond the length, which makes certain operations faster (for example 1*1 requires 1 multiplication instead of n*n). Some operations use 16-bit half-digits internally to do calculations, but they all pack the answer back into 32-bit digits when they are done. The top digit is always nonzero, except for 0, which has no digits at all, is positive, and has a smaller exponent than anything else.
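Roughly, the layout looks like the sketch below. The field names and exact declarations are my guesses, not the ones in bigfloat.h; only the meanings come from the description above.

    #include <cstdint>

    static const int c_digits = 8;      // precision is a compile-time constant

    struct BigFloat {
        uint32_t digit[c_digits];       // base-2^32 digits; only the first 'length' are initialized
        int64_t  exponent;              // base-2^32 exponent, one unit per digit
        uint8_t  length;                // digits in use; 0 means the value is zero
        bool     isNegative;            // sign; zero is stored as positive
    };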
It would be better if the exponent were base 2 (like in mpfr) rather than base 2^32. It would be better if the numbers near 0 followed the IEEE standard, with the top digits allowed to be 0, so the smallest representable numbers don't have a big gap right around 0. I reserve the right to change my code incompatibly in the future. Make a copy of it if you want something that doesn't change.
There are tons of special cases. The way to catch all those special cases is to exhaustively test all inputs. The way to exhaustively test all inputs to a bignum package is to shrink the data type: when testing, use just 4 2-bit digits with a range of exponents just a little more than double the number of digits, rather than n 32-bit digits with all 64-bit signed values as exponents. This preserves all the edge cases, including having some middle values, yet makes it possible to enumerate all possible values. If you have lots of time to burn you can test all possible values with 3 4-bit digits too (I assumed the digits are a power-of-two number of bits wide).
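A minimal sketch of what the exhaustive enumeration buys you, assuming 4 digits of 2 bits and a small exponent range; the real test would build a scaled-down value from each field combination and compare every operation against a slow reference, which is omitted here:

    #include <cstdio>

    int main() {
        long count = 0;
        for (int sign = 0; sign < 2; ++sign)
            for (int exponent = -9; exponent <= 9; ++exponent)     // a bit more than double the digit count
                for (int length = 0; length <= 4; ++length)
                    for (int digits = 0; digits < (1 << (2 * length)); ++digits) {
                        // the format keeps the top digit nonzero, so skip those combinations
                        if (length > 0 && (digits >> (2 * (length - 1))) == 0) continue;
                        ++count;        // here: build the value and exercise each operation on it
                    }
        printf("%ld field combinations to test\n", count);
        return 0;
    }

The whole space is a few thousand values, so testing every pair of operands for every operation is cheap.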
Addition and subtraction of floating point numbers is one of the hardest little problems in computer science. It has unbelievably many conditions and special cases. The examples below assume base-10 digits.
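To make the shape of the problem concrete, here is a toy base-10, same-sign addition of my own devising (not the library's code): align the exponents, add digit by digit, propagate carries, renormalize. Everything that makes the real thing hard, such as mixed signs, cancellation, rounding, and exponent overflow, is left out.

    #include <cstdio>
    #include <vector>

    // A number is a vector of decimal digits, most significant first, plus the
    // power of ten of its leading digit.
    struct Dec { std::vector<int> digit; int exponent; };

    Dec add(const Dec &a, const Dec &b) {
        int hi = (a.exponent > b.exponent ? a.exponent : b.exponent) + 1;   // room for a carry out
        int loA = a.exponent - (int)a.digit.size() + 1;
        int loB = b.exponent - (int)b.digit.size() + 1;
        int lo = loA < loB ? loA : loB;
        std::vector<int> sum(hi - lo + 1, 0);                               // one slot per power of ten
        for (size_t i = 0; i < a.digit.size(); ++i) sum[hi - (a.exponent - (int)i)] += a.digit[i];
        for (size_t i = 0; i < b.digit.size(); ++i) sum[hi - (b.exponent - (int)i)] += b.digit[i];
        for (int i = (int)sum.size() - 1; i > 0; --i) {                     // propagate carries
            sum[i - 1] += sum[i] / 10;
            sum[i] %= 10;
        }
        size_t lead = 0;
        while (lead + 1 < sum.size() && sum[lead] == 0) ++lead;             // strip leading zeros
        Dec r;
        r.exponent = hi - (int)lead;
        r.digit.assign(sum.begin() + lead, sum.end());
        return r;
    }

    int main() {
        Dec a = {{9, 9, 9}, 2};          // 999
        Dec b = {{5, 5}, 0};             // 5.5
        Dec c = add(a, b);               // 1004.5
        printf("exponent %d digits", c.exponent);
        for (size_t i = 0; i < c.digit.size(); ++i) printf(" %d", c.digit[i]);
        printf("\n");
        return 0;
    }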
I still used 32-bit digits for multiplication, but I produced a 64-bit result for each digit product and added the bottom 32 bits to one digit of the result and the top 32 bits to the next digit. I only calculate the first c_digits+3 digits, then round. I think c_digits+2 is sufficient at full scale, but the scaled-down version used for testing needed c_digits+3.
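A sketch of that inner loop, under my own simplifying assumptions (digit arrays least-significant-first, full-length result, carries propagated once at the end rather than keeping only the top c_digits+3 digits and rounding):

    #include <cstdint>

    // Schoolbook multiply with 32-bit digits and 64-bit products.  Each product's
    // halves are added into 64-bit accumulator slots, which postpones carry
    // propagation.  Assumes na + nb <= 64 and out[] has room for na + nb digits.
    void multiply(const uint32_t a[], int na, const uint32_t b[], int nb, uint32_t out[]) {
        uint64_t accum[64] = {0};
        for (int i = 0; i < na; ++i) {
            for (int j = 0; j < nb; ++j) {
                uint64_t product = (uint64_t)a[i] * b[j];
                accum[i + j]     += product & 0xffffffffu;   // bottom 32 bits
                accum[i + j + 1] += product >> 32;           // top 32 bits
            }
        }
        uint64_t carry = 0;
        for (int k = 0; k < na + nb; ++k) {                  // propagate carries once at the end
            uint64_t v = accum[k] + carry;
            out[k] = (uint32_t)v;
            carry = v >> 32;
        }
    }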
Eventually I'll implement Karatsuba multiplication. That is O(n^(ln(3)/ln(2))) (about O(n^1.585)) instead of O(n^(ln(4)/ln(2))) (that is, O(n^2)), because a normal n*n long-multiply requires four long-multiplies of half the number of digits, while Karatsuba requires three. Karatsuba multiplication is what finally convinced me to have variable length floating point rather than fixed length floating point, because that algorithm is only fast if the lengths are variable. Karatsuba works like this: you know (10a + b) * (10c + d) = 100ac + 10(ad + bc) + bd. Well, (a + b) * (c + d) = (ac + ad + bc + bd). You have to calculate ac and bd for the full result anyhow. So, you could do 3 multiplications instead of 4 by saying the result is 100ac + 10((a+b)(c+d) - ac - bd) + bd. Notice that ad+bc might overflow a 64-bit number. I'll cross that bridge when I come to it.
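For instance, here is one level of the trick on the 16-bit halves of 32-bit operands, which is small enough that nothing overflows (a sketch; the real version would recurse on arrays of digits):

    #include <cstdint>
    #include <cstdio>

    // One level of Karatsuba: three multiplies of the 16-bit halves instead of four.
    uint64_t karatsuba32(uint32_t x, uint32_t y) {
        uint32_t a = x >> 16, b = x & 0xffff;    // x = a*2^16 + b
        uint32_t c = y >> 16, d = y & 0xffff;    // y = c*2^16 + d
        uint64_t ac = (uint64_t)a * c;
        uint64_t bd = (uint64_t)b * d;
        // (a+b) and (c+d) each fit in 17 bits, so their product fits in 34 bits
        uint64_t mid = (uint64_t)(a + b) * (c + d) - ac - bd;    // = ad + bc
        return (ac << 32) + (mid << 16) + bd;
    }

    int main() {
        uint32_t x = 0x89abcdefu, y = 0x12345678u;
        printf("%llx %llx\n", (unsigned long long)karatsuba32(x, y),
                              (unsigned long long)((uint64_t)x * y));
        return 0;
    }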
Division: I split each 32-bit digit into two 16-bit digits. If I used 32-bit digits, then to estimate the next digit I would need to divide a numerator with at least 2 digits of precision by a denominator with at least 1 digit of precision. That means at least a 64-bit numerator. That's challenging: if the denominator's top digit is 1, that isn't 32 bits of precision, that's 1 bit of precision. Shifting in 31 extra bits of denominator means shifting in 31 extra bits of numerator too, which goes over 64 bits and doesn't fit. Also I allow signed differences, so I really only have 63-bit integers to play with, not 64. Splitting each 32-bit digit into two 16-bit digits gets around this: the denominator can have at least 1 digit of precision (1 to 2 digits), and the numerator can have at least 2 digits of precision (2 to 3 digits if the digits were all in range, maybe up to 4 if there were some unhandled carries). 2*16=32 bits, 3*16=48 bits; these fit comfortably in a 63-bit integer.
The 64-bit limit on estimating the next digit could be worked around by using doubles instead, with the numerator 2^32 times bigger than the denominator; then the estimate only needs 32 bits of precision. But multiplying the estimated digit by the digits of the remainder will overflow 63 bits again, especially if there are some unhandled carries in the remainder. Maybe 32-bit digits could be made to work, but it'd be a tricky dance, so I kept to 16-bit digits.
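The digit-estimation step then looks something like this sketch (the array layout and names are my assumptions, not the library's): three half-digits of remainder over two half-digits of divisor, in signed 64-bit arithmetic.

    #include <cstdint>

    // Estimate one 16-bit quotient digit.  rem holds the top three half-digits of the
    // current remainder, div the top two half-digits of the divisor (most significant
    // first, top half-digit nonzero).  Signed 64-bit math leaves room for corrections.
    int64_t estimate_digit(const uint16_t rem[3], const uint16_t div[2]) {
        int64_t numerator   = ((int64_t)rem[0] << 32) | ((int64_t)rem[1] << 16) | rem[2];  // up to 48 bits
        int64_t denominator = ((int64_t)div[0] << 16) | div[1];                            // 17 to 32 bits
        return numerator / denominator;   // close to the right digit; may need a small correction
    }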
Division can be carried on for infinitely many digits. For the scaled-down datatype and all possible numerators and divisors, an extra 2 half-digits was not enough to guarantee that (numerator - divisor*quotient)/numerator has an exponent of at most 1-maxPrecision. I got that by trying all possible inputs for a variety of scaled-down datatypes, not by a proof.
You are given x^2, and need to find x. Again, split each 32-bit digit into two 16-bit digits. Collect the first 4 half-digits in an integer and use Newton's method to figure out the first 2 estimated digits of the square root.
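A sketch of that first step in plain integers (my own code, assuming the leading 64 bits of x^2 arrive as one integer):

    #include <cstdint>

    // Newton's method in integers on the leading four half-digits of x^2 (64 bits),
    // giving roughly the first 32 bits of the root.  The iteration starts above the
    // true root and decreases monotonically until it stops changing.
    uint64_t initial_root(uint64_t leading_bits_of_square) {
        uint64_t x = leading_bits_of_square;
        if (x < 2) return x;
        uint64_t guess = x / 2;                  // any starting point >= sqrt(x) works
        uint64_t next  = (guess + x / guess) / 2;
        while (next < guess) {
            guess = next;
            next = (guess + x / guess) / 2;
        }
        return guess;                            // about floor(sqrt(x))
    }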
After that, you want to reduce the error of your guess. You have the square x^2 and a guess at the answer, (x+d). Now, x^2 - (x+d)^2 = -2xd - d^2. So (remainder)/(-2*old_guess) = (-2xd - d^2)/(-2*(x+d)) = d - d^2/(2(x+d)) is an estimate of d that will bring us closer to the right answer, provided |d^2/(2(x+d))| is smaller than |d|. For example, if your current estimate of sqrt(2) is 1, then sqrt(2)^2 - (sqrt(2)-.4142...)^2 = 2 - 1 = 1, and 1/(-2*1) = -.5 is an estimate of d. It's a reasonable estimate, since d^2/(2(x+d)) = (-.5)^2/(2*1) = .125, and |.125| < |-.5|.
Suppose you estimate your error as d but your guess is really x+d+e. Hopefully with |e| much smaller than |d|. Then (x+d+e)^2 = x^2 + d^2 + e^2 + 2xd + 2xe + 2de = (x+e)^2 + 2xd + 2de + d^2 = (x+e)^2 + 2d(x+d+e) - d^2. So, to replace x^2 - (x+d+e)^2 with x^2 - (x+e)^2, you need to add 2d(x+d+e) and subtract d^2. When your old estimate is low, d will be negative, so 2d(old_estimate) and -d^2 have the same sign.
For example, if I estimate sqrt(2) = 1, and the next guessed delta is -.5, then the old remainder is x^2 - (x+d+e)^2 = 2-1 = 1 = sqrt(2)^2 - (sqrt(2) + -.5 + .08578...)^2. To get to a new remainder of -.25 = 2-2.25 = sqrt(2)^2 - (1.5)^2, starting from an old remainder of 1 = 2 - 1^2, I need to add 2*(-.5)*(1) and subtract (-.5)^2 from the old remainder, that is, the new remainder is 1 - 1 - .25 = -.25.
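The same bookkeeping in doubles, just as a numeric check of the identity (this is not the bignum code):

    #include <cstdio>

    int main() {
        double square = 2.0;                            // x^2, with x = sqrt(2)
        double estimate = 1.0;                          // old estimate of x, i.e. x+d+e
        double remainder = square - estimate*estimate;  // 2 - 1 = 1
        double d = remainder / (-2.0 * estimate);       // -.5, the next guessed delta
        remainder += 2.0*d*estimate - d*d;              // add 2d(x+d+e), subtract d^2: now -.25
        estimate -= d;                                  // fold the correction in: 1.5
        printf("%g %g %g\n", remainder, estimate, square - estimate*estimate);  // -0.25 1.5 -0.25
        return 0;
    }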
We could apply this with bignum Newton's method again, doubling the number of correct digits each round. Or we could increase the estimate just one digit at a time, using 8-byte integer arithmetic, which is what bignum multiplication and division do internally. Which is faster? Doing it one digit at a time with integer arithmetic: it's cheaper than a single bignum invert, so it's clearly cheaper than log(n) bignum inverts.
I'm pretty sure fast multiplication can speed this up the same as for division: recursively use fast multiplication for most of the work, except for O(n) work along the diagonal calculating the next digit of the square root.
There's a Brent-Salamin method for π that doubles the number of correct digits with each iteration. It requires a square root function; that was what clinched that I had to implement square root. Several other functions (e, exp, sin, cos, ln(1+x), atan) have reasonable Taylor series near zero. Others (tan, sec, arccos, power) are simple combinations of other functions.
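For reference, here is the Brent-Salamin (Gauss-Legendre) iteration in doubles; the bignum version would run the same loop in BigFloat arithmetic with the bignum square root.

    #include <cstdio>
    #include <cmath>

    // Each pass roughly doubles the number of correct digits of pi.
    int main() {
        double a = 1.0, b = 1.0 / sqrt(2.0), t = 0.25, p = 1.0;
        for (int i = 0; i < 4; ++i) {
            double anext = (a + b) / 2.0;
            b = sqrt(a * b);
            t -= p * (a - anext) * (a - anext);
            p *= 2.0;
            a = anext;
            printf("%.17g\n", (a + b) * (a + b) / (4.0 * t));   // current estimate of pi
        }
        return 0;
    }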
A vital trick is to only use the Taylor series very close to zero, because it converges quickly then, and to splice together those results in various ways to form the full functions. Beware that making the segments smaller causes slight discontinuities at the additional splice points.
Another useful trick is to cache constants. For example, 1/n! is the same value regardless of x, so you could make a static array of the n values 1/0! .. 1/(n-1)!, where n is how many terms were needed to compute e. A common Init() method can precalculate all the constants needed for all of these routines.
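A sketch of that caching in doubles (the real table would hold BigFloats, and Init() would fill several such tables at once):

    #include <vector>

    static std::vector<double> g_recipFactorial;   // 1/0!, 1/1!, 1/2!, ...

    static void Init(int terms) {                  // call once; reuse the table for every x
        g_recipFactorial.resize(terms);
        double value = 1.0;                        // 1/0!
        for (int n = 0; n < terms; ++n) {
            g_recipFactorial[n] = value;
            value /= (n + 1);                      // 1/n! -> 1/(n+1)!
        }
    }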
Another useful trick is to use integer arithmetic instead of bignum arithmetic whenever possible. For example, if you have to increase n! to (n+2)!, don't multiply a BigFloat by n+1 and then multiply it again by n+2; instead calculate the integer (n+1)*(n+2) and multiply by that. Another example: if you want x^n/n! - x^(n+2)/(n+2)!, compute x^n * ((n+1)*(n+2) - x^2) / (n+2)!. Converting small integers to BigFloat is faster than looking up cached versions of the same thing.
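Here's the combined-term trick applied to the cosine series, in doubles so it stays self-contained (a sketch of the idea, not the library's routine):

    #include <cstdio>
    #include <cmath>

    // cos(x) = sum of (-1)^k x^(2k)/(2k)!, taken two terms at a time so the inner loop
    // works with the small integer (n+1)*(n+2) instead of two separate multiplies.
    double cos_taylor(double x) {
        double sum = 0.0;
        double term = 1.0;            // x^n/n!, starting at n = 0
        double x2 = x * x;
        for (int n = 0; n < 20; n += 4) {
            // x^n/n! - x^(n+2)/(n+2)!  ==  (x^n/n!) * ((n+1)*(n+2) - x^2) / ((n+1)*(n+2))
            double pair = (n + 1) * (n + 2);
            sum += term * (pair - x2) / pair;
            term *= x2 * x2 / (pair * (n + 3) * (n + 4));   // advance x^n/n! by four
        }
        return sum;
    }

    int main() {
        printf("%.17g %.17g\n", cos_taylor(0.5), cos(0.5));
        return 0;
    }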
For cos(-π/4..π/4) and sin(-π/4..π/4), |x| stays less than .8, and the coefficients get small quickly, so the Taylor series converges reasonably fast. cos(x) for x=π/4..π/2 is just sin(π/2-x) and vice versa, and all other ranges are also copies of those curves; you just need an accurate modulo π/4. There are some identities that seem like they could get the same results but with x closer to zero: cos(4x) = 1 - 8sin^2(x) + 8sin^4(x), and sin(3x) = 3sin(x) - 4sin^3(x). I don't know if it's better to make use of those identities or compute values directly.
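One way to do that folding, sketched in doubles with the standard library's sin and cos standing in for the Taylor-series versions:

    #include <cstdio>
    #include <cmath>

    // Reduce x to r in [-pi/4, pi/4] plus a quadrant count, then pick the right
    // curve and sign.  The bignum version needs an accurately cached pi for this.
    double cos_reduced(double x) {
        const double half_pi = 1.5707963267948966;
        double quadrant = floor(x / half_pi + 0.5);       // nearest multiple of pi/2
        double r = x - quadrant * half_pi;                // now |r| <= pi/4
        int q = (int)fmod(fmod(quadrant, 4.0) + 4.0, 4.0);
        switch (q) {
            case 0: return  cos(r);
            case 1: return -sin(r);
            case 2: return -cos(r);
            default: return sin(r);
        }
    }

    int main() {
        printf("%.17g %.17g\n", cos_reduced(10.0), cos(10.0));
        return 0;
    }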