The table below lists explicit methods for calculating the next position of a body in the n-body problem given previous positions and accelerations. In the table below, p_{i} is the position and a_{i} the acceleration at time t + i*dt; the Source column gives where each method came from (QT is Quinlan and Tremaine, referenced below).
Order | Method | | (positions) | | (accelerations) | Escape | Accuracy | Source
---|---|---|---|---|---|---|---|---
3rd | p_{1} | = | +2p_{0} -p_{-1} | + | dt*dt*(a_{0}) | 6 | 1800 | C |
3rd | p_{1.5} | = | +p_{0.5} +p_{-0.5} -p_{-1.5} | + | dt*dt*( + a_{0.5} + a_{-0.5} ) | 6 | 1000 | J |
5th | p_{2} | = | +p_{1} +p_{-1} -p_{-2} | + | dt*dt*( +5(a_{1}+a_{-1}) +2a_{0} )/4 | 8 | 100 | J |
5th | p_{2.5} | = | +p_{1.5} +p_{-1.5} -p_{-2.5} | + | dt*dt*( +7(a_{1.5}+a_{-1.5}) +5(a_{0.5}+a_{-0.5}) )/6 | 9 | 90 | J |
7th | p_{3} | = | +p_{2} +p_{-2} -p_{-3} | + | dt*dt*( +67(a_{2}+a_{-2}) -8(a_{1}+a_{-1}) +122a_{0} )/48 | 12 | 38 | J |
7th | p_{3.5} | = | +p_{2.5} +p_{-2.5} -p_{-3.5} | + | dt*dt*( +317(a_{2.5}+a_{-2.5}) +69(a_{1.5}+a_{-1.5}) +334(a_{0.5}+a_{-0.5}) )/240 | 14 | 37 | J |
9th | p_{4} | = | +p_{3} +p_{-3} -p_{-4} | + | dt*dt*( +13207(a_{3}+a_{-3}) -8934(a_{2}+a_{-2}) +42873(a_{1}+a_{-1}) -33812a_{0} )/8640 | 20 | 20 | J |
9th | p_{4.5} | = | +p_{3.5} +p_{-3.5} -p_{-4.5} | + | dt*dt*( + 22081(a_{3.5}+a_{-3.5}) - 7337(a_{2.5}+a_{-2.5}) + 45765(a_{1.5}+a_{-1.5}) - 29(a_{0.5}+a_{-0.5}) )/15120 | 19 | 19 | J |
9th | p_{4} | = | -1(p_{-4}) +2(p_{3}+p_{-3}) -2(p_{2}+p_{-2}) +1(p_{1}+p_{-1}) | + | dt*dt*( + 17671(a_{3}+a_{-3}) - 23622(a_{2}+a_{-2}) + 61449(a_{1}+a_{-1}) - 50516a_{0} )/15120 | 13 | 23 | QT |
11th | p_{5} | = | +p_{4} +p_{-4} -p_{-5} | + | dt*dt*( +666151(a_{4}+a_{-4}) -841748(a_{3}+a_{-3}) +3606748(a_{2}+a_{-2}) -5111276(a_{1}+a_{-1}) +6989050a_{0} )/403200 | 33 | 33 | J |
11th | p_{5.5} | = | +p_{4.5} +p_{-4.5} -p_{-5.5} | + | dt*dt*( +1153247(a_{4.5}+a_{-4.5}) -1055189(a_{3.5}+a_{-3.5}) +4412680(a_{2.5}+a_{-2.5}) -3621776(a_{1.5}+a_{-1.5}) +2739838(a_{0.5}+a_{-0.5}) )/725760 | 29 | 29 | J |
11th | p_{5} | = | - 1(p_{-5}) + 1(p_{4}+p_{-4}) - 1(p_{3}+p_{-3}) + 1(p_{2}+p_{-2}) - 1(p_{1}+p_{-1}) + 2p_{0} | + | dt*dt*( +399187(a_{4}+a_{-4}) -485156(a_{3}+a_{-3}) +2391436(a_{2}+a_{-2}) -2816732(a_{1}+a_{-1}) +4651330a_{0} )/241920 | 50 | | QT
13th | p_{6} | = | +p_{5} +p_{-5} -p_{-6} | + | dt*dt*( +25671198(a_{5}+a_{-5}) -48082866(a_{4}+a_{-4}) +214734403(a_{3}+a_{-3}) -426775928(a_{2}+a_{-2}) +713681566(a_{1}+a_{-1}) -798789548a_{0} )/14515200 | 60 | 60 | J |
13th | p_{6.5} | = | +p_{5.5} +p_{-5.5} -p_{-6.5} | + | dt*dt*( + 136462207(a_{5.5}+a_{-5.5}) - 207556851(a_{4.5}+a_{-4.5}) + 867125681(a_{3.5}+a_{-3.5}) - 1296919125(a_{2.5}+a_{-2.5}) + 1550731494(a_{1.5}+a_{-1.5}) - 570841806(a_{0.5}+a_{-0.5}) )/79833600 | 47 | 47 | J |
13th | p_{7} | = | +p_{7} +p_{-7} -p_{-8} | + | dt*dt*( + 1.591688000812(a_{7}+a_{-7}) - 1.427664903774(a_{6}+a_{-6}) + 5.796478222905(a_{5}+a_{-5}) - 4.129792946373(a_{4}+a_{-4}) + 3.028157911887(a_{3}+a_{-3}) + 2.141133714589(a_{2}+a_{-2}) + a_{0} ) (I don't know how I got this; this is not a 13th degree polynomial.) | 32 | 33 | J
13th | p_{6} | = | - p_{-6} + 2(p_{5}+p_{-5}) - 2(p_{4}+p_{-4}) + 1(p_{3}+p_{-3}) | + | dt*dt*( + 90987349(a_{5}+a_{-5}) - 229596838(a_{4}+a_{-4}) + 812627169(a_{3}+a_{-3}) - 1628539944(a_{2}+a_{-2}) + 2714971338(a_{1}+a_{-1}) - 3041896548a_{0} )/53222400 | 36 | 36 | QT
15th | p_{7} | = | +p_{6} +p_{-6} -p_{-7} | + | dt*dt*( + 378058032343(a_{6}+a_{-6}) - 945040569456(a_{5}+a_{-5}) + 4583977840758(a_{4}+a_{-4}) - 11577417859120(a_{3}+a_{-3}) + 23470490529945(a_{2}+a_{-2}) - 34487534887776(a_{1}+a_{-1}) + 39770282562612a_{0} )/201180672000 | 115 | 115 | J |
15th | p_{7.5} | = | +p_{6.5} +p_{-6.5} -p_{-7.5} | + | dt*dt*( + 681136420843(a_{6.5}+a_{-6.5}) - 1460925809093(a_{5.5}+a_{-5.5}) + 6596939334222(a_{4.5}+a_{-4.5}) - 13816376923762(a_{3.5}+a_{-3.5}) + 22389594250325(a_{2.5}+a_{-2.5}) - 21489156635931(a_{1.5}+a_{-1.5}) + 9714138099396(a_{0.5}+a_{-0.5}) )/373621248000 | 82 | 82 | J |
15th | p_{7} | = | - p_{-7} + 2(p_{6}+p_{-6}) - 2(p_{5}+p_{-5}) + 1(p_{4}+p_{-4}) | + | dt*dt*( + 433489274083(a_{6}+a_{-6}) - 1364031998256(a_{5}+a_{-5}) + 5583113380398(a_{4}+a_{-4}) - 14154444148720(a_{3}+a_{-3}) + 28630585332045(a_{2}+a_{-2}) - 42056933842656(a_{1}+a_{-1}) + 48471792742212a_{0} )/237758976000 | 70 | 70 | QT |
I recently derived these methods and an explanation for them. I wasn't the first to find them, though. A previous paper, with a much better explanation of the methods, is
Gerald D. Quinlan and Scott Tremaine, "Symmetric Multistep Methods for the Numerical Integration of Planetary Orbits", The Astronomical Journal, Volume 100, Number 5, November 1990, pp. 1694-1700. It turns out they, too, derived these methods and an explanation, then found that someone else had previously discovered them. I haven't looked up that earlier paper yet, but its reference is
Lambert, J.D., and Watson, I.A. (1976), J. Inst. Maths Applics 18, 189. A more recent paper analyzing these methods is "Backward Error Analysis for Multistep Methods" by Ernst Hairer. At any rate, my explanation is below. I'll enhance it using material from those references as I have time.
An nth degree extrapolation is exact if the positions are an nth degree polynomial and the accelerations are the second derivative of that polynomial. I calculated the coefficients above by setting up those equations with p_{i} = i^{n-1} and a_{i} = (n-1) * (n-2) * i^{n-3}, then solving for the coefficients with Gaussian elimination using high precision floating point numbers. Code here: (base.h, bigfloat.h, bigfloat.cpp, multistep.cpp). Compile it like "g++ -O2 bigfloat.cpp multistep.cpp -o multistep", and run it like "multistep 6" to get the coefficients (represented as a hexadecimal fraction) for the 2*6+1=13th degree polynomial.
The n-body problem refers to determining the orbits of stars and planets interacting via gravity, or atoms and molecules interacting via electric potential. Typically you have a starting state, then you determine the next state (after some small time interval), then a next after that, and so on, and you watch how the system progresses over time.
Single-step methods, such as Runge Kutta and Runge Kutta Fehlberg, determine the next step given only the current state. Multistep methods such as Adams-Moulton, Adams-Bashforth, Nystrom, and Stormer methods use several previous states to determine the next state. Multistep methods can be more efficient, but they require several previous states to be known. Single-step methods offer the flexibility of changing the step size every step in response to the current state.
Time reversible methods have the property that if states 0..n predict state n+1, then states n+1..1 predict state 0. They cannot be biased towards gaining or losing energy, because for every state that produces a gain, there is a matching state that produces a loss. Unfortunately, all time reversible systems are subject to entropy. For example, if states 0..n have obvious errors, like oscillating back and forth, those errors cannot be dampened. The only hope is not to get errors in the first place. Irreversible systems, on the other hand, can dampen such errors. An example of an irreversible method is to choose state n+1 as the average of a reversible method on 0..n and another on 1..n. Irreversible systems do not conserve energy as well as reversible systems.
All the methods considered here base their predictions on polynomials determined by a few measurements. N+1 data values can give you an nth degree polynomial; derivatives beyond the nth are ignored. If you look at 6 points per cycle with a timestep of 1, then the position is equal in magnitude to the velocity, the velocity is equal in magnitude to the acceleration, and so forth for all derivatives on to infinity. (At 6 points per cycle the angular frequency is 2*pi/6, about 1 per step, and each derivative multiplies the magnitude by that frequency.) Ignoring all but the first few derivatives is never good enough there. Therefore, no polynomial method is going to allow you to reach 6 points per cycle. At fewer than 6 points per cycle, each derivative is actually larger than the one before it, so things get really hopeless.
Below is a simulation running a 9th-order position-acceleration method, complete with source code.
"p_{2} = p_{1} + v_{1}*dt + 0.5*a_{1}*dt*dt" and "v_{2} = v_{1} + a_{1}*dt", where p_{1} is the position, v_{1} is the velocity, and a_{1} is the acceleration at time t_{1}, and p_{2} and v_{2} are the position and velocity at time t_{1}+dt. The original formula is the first three terms of the Taylor series, and can be found in all calculus and physics handbooks. This formula produces quite a bit of error when dt is not infinitesimal. Simulated orbits spiral outwards.
The accelerations are determined from the masses and distances. It is possible to scale the timestep (dt) to 1 by scaling the masses and velocities. This removes all the multiplications by dt, and is a good thing. I'll assume from here on out that that is always done. The original formula is then "p_{2} = p_{1} + v_{1} + 0.5*a_{1}" and "v_{2} = v_{1} + a_{1}".
"v_{2} = v_{1} + a_{1}" and "p_{2} = p_{1} + v_{2}". This is called the leapfrog method, Stormer-Verlet, Adams-Verlet, and so forth. It does an excellent job of conserving energy and works far better than the original formula. It is the most common formula in simulations on the web. Elliptical orbits tend to precess, and orbits are slightly longer than they ought to be. The leapfrog method's approximation is exact for any 3rd degree polynomial.
The leapfrog formula can be derived from a 3rd-order characteristic:
a_{1} = (p_{2} - p_{1}) - (p_{1} - p_{0})
a_{1} = p_{2} - p_{1} - (p_{1} - p_{0})
p_{1} + (p_{1} - p_{0}) + a_{1} = p_{2}
p_{1} + v_{1} + a_{1} = p_{2}
The leapfrog method can also be derived from the original formula:
p_{2} = p_{1} + v_{1} + 0.5*a_{1}
p_{0} = p_{1} - v_{1} + 0.5*a_{1}
p_{2} + p_{0} = 2*p_{1} + a_{1}
p_{2} = 2*p_{1} - p_{0} + a_{1}
p_{2} = p_{1} + (p_{1}-p_{0}) + a_{1}
p_{2} = p_{1} + v_{1} + a_{1}
As you can see, the velocity in the leapfrog method isn't really the velocity. It is the difference between the current and previous position. Given the current position, velocity, and acceleration, the previous position can be estimated as "p_{0} = p_{1} - v_{1} + 0.5*a_{1}", or "(p_{1}-p_{0}) = v_{1} - 0.5*a_{1}". (When given the starting position and velocity, and using the leapfrog method, it is helpful to adjust the velocity as shown before starting the simulation.)
The leapfrog method, in light of the above, can be used as a single-step method. I use it to determine the first n states for the multistep methods. The leapfrog method is much less accurate, though, so I start it with a small stepsize. Given n states, I can use a multistep method to generate n more, then take every other state, then treat that as the starting n states but with double the stepsize. I start the leapfrog method with a stepsize 1/512 of what I intend to finally run with, then I double the stepsize 9 times.
A multistep method (also known as an Adams method) uses several already-computed positions to determine the next position. The Stormer methods are a type of Adams method that use the last n accelerations to determine the next position. There are explicit methods (where the formula tells you the next position), and implicit methods (where the formula includes the next position, and you have to guess that position and iterate until the formula converges).
The leapfrog method is the 3rd order Stormer method. Higher-order explicit Stormer methods are all unstable. They are of the form "p_{2} = p_{1} + (p_{1}-p_{0}) + c_{1}a_{1} + c_{2}a_{0} + c_{3}a_{-1} + c_{4}a_{-2} + c_{5}a_{-3} ...".
A stable step function can be iterated for any number of steps and still produce reasonable looking results. Slightly unstable step functions will track the world fine for a few steps, then the world will develop increasingly large oscillations, until it is crossing the universe every step. Very unstable step functions skip straight to crossing the universe every step. The original method, although very inaccurate, is stable. The 7th order Stormer method is slightly unstable, and the 14th order Stormer method is very unstable.
An nth order characteristic is true when applied to any nth degree polynomial. Here are (new?) characteristics involving the zeroth and second derivative (that is, position p and acceleration a).
The methods above are linear combinations of certain characteristics of nth-degree polynomials. These characteristics, up to order 15, are listed below. (There are only odd-degree characteristics for positions and accelerations.) The 9th-order method, for example, is a linear combination of the 9th, 11th, 13th, and 15th order characteristics. The exact linear combination that produces the methods can be found by Gaussian elimination, solving for a pattern of coefficients for the positions that looks like this: (-1, 1, 0, 0, ..., 0, 0, 1, -1).
Order | (position) | | (acceleration)
---|---|---|---
3rd | + 1(p_{1} + p_{-1}) - 2p_{0} | = | dt*dt*( + a_{0}) |
5th | + 12(p_{1} + p_{-1}) - 24p_{0} | = | dt*dt*( + 1(a_{1} + a_{-1}) + 10a_{0}) |
7th | + 3(p_{2} + p_{-2}) + 48(p_{1} + p_{-1}) - 102p_{0} | = | dt*dt*( + 8(a_{1} + a_{-1}) + 44a_{0}) |
9th | + 465(p_{2} + p_{-2}) + 1920(p_{1} + p_{-1}) - 4770p_{0} | = | dt*dt*( + 23(a_{2} + a_{-2}) + 688(a_{1} + a_{-1}) + 2358a_{0}) |
11th | + 79(p_{3} + p_{-3}) + 4671(p_{2} + p_{-2}) + 9585(p_{1} + p_{-1}) - 28670p_{0} | = | dt*dt*( + 387(a_{2} + a_{-2}) + 6012(a_{1} + a_{-1}) + 16182a_{0}) |
13th | + 49483(p_{3} + p_{-3}) + 785862(p_{2} + p_{-2}) + 790965(p_{1} + p_{-1}) - 3252620p_{0} | = | dt*dt*( + 1857(a_{3} + a_{-3}) + 110322(a_{2} + a_{-2}) + 989739(a_{1} + a_{-1}) + 2175924a_{0}) |
15th | + 344007(p_{4} + p_{-4}) + 44853824(p_{3} + p_{-3}) + 401644656(p_{2} + p_{-2}) + 214861248(p_{1} + p_{-1}) - 1323407470p_{0} | = | dt*dt*( + 2630976(a_{3} + a_{-3}) + 78996384(a_{2} + a_{-2}) + 523869120(a_{1} + a_{-1}) + 1019635440a_{0}) |
It is unusual for differential equations to ignore the first derivative (velocity). The techniques used above work equally well on position and velocity rather than position and acceleration. The table below gives 2nd through 20th order characteristics for position and velocity. (Half-positions were used to emphasize symmetry, and to match how linear combinations were done for the methods above.)
Order | (position) | | (velocity)
---|---|---|---
2nd | + 2(p_{0.5} - p_{-0.5}) | = | dt*( + 1(v_{0.5} + v_{-0.5}))
4th | + 1(p_{1.5} - p_{-1.5}) + 9(p_{0.5} - p_{-0.5}) | = | dt*( + 6(v_{0.5} + v_{-0.5})) |
6th | + 11(p_{1.5} - p_{-1.5}) + 27(p_{0.5} - p_{-0.5}) | = | dt*( + 3(v_{1.5} + v_{-1.5}) + 27(v_{0.5} + v_{-0.5}))
8th | + 3(p_{2.5} - p_{-2.5}) + 175(p_{1.5} - p_{-1.5}) + 300(p_{0.5} - p_{-0.5}) | = | dt*( + 60(v_{1.5} + v_{-1.5}) + 360(v_{0.5} + v_{-0.5})) |
10th | + 137(p_{2.5} - p_{-2.5}) + 1625(p_{1.5} - p_{-1.5}) + 2000(p_{0.5} - p_{-0.5}) | = | dt*( + 30(v_{2.5} + v_{-2.5}) + 750(v_{1.5} + v_{-1.5}) + 3000(v_{0.5} + v_{-0.5})) |
12th | + 5(p_{3.5} - p_{-3.5}) + 784(p_{2.5} - p_{-2.5}) + 5880(p_{1.5} - p_{-1.5}) + 6125(p_{0.5} - p_{-0.5}) | = | dt*( + 210(v_{2.5} + v_{-2.5}) + 3150(v_{1.5} + v_{-1.5}) + 10500(v_{0.5} + v_{-0.5})) |
14th | + 363(p_{3.5} - p_{-3.5}) + 9947(p_{2.5} - p_{-2.5}) + 48363(p_{1.5} - p_{-1.5}) + 42875(p_{0.5} - p_{-0.5}) | = | dt*( + 70(v_{3.5} + v_{-3.5}) + 3430(v_{2.5} + v_{-2.5}) + 30870(v_{1.5} + v_{-1.5}) + 85750(v_{0.5} + v_{-0.5}))
16th | + 35(p_{4.5} - p_{-4.5}) + 10863(p_{3.5} - p_{-3.5}) + 179424(p_{2.5} - p_{-2.5}) + 691488(p_{1.5} - p_{-1.5}) + 555660(p_{0.5} - p_{-0.5}) | = | dt*( + 2520(v_{3.5} + v_{-3.5}) + 70560(v_{2.5} + v_{-2.5}) + 493920(v_{1.5} + v_{-1.5}) + 1234800(v_{0.5} + v_{-0.5})) |
18th | + 7129(p_{4.5} - p_{-4.5}) + 350649(p_{3.5} - p_{-3.5}) + 3569184(p_{2.5} - p_{-2.5}) + 10965024(p_{1.5} - p_{-1.5}) + 8001504(p_{0.5} - p_{-0.5}) | = | dt*( + 1260(v_{4.5} + v_{-4.5}) + 102060(v_{3.5} + v_{-3.5}) + 1632960(v_{2.5} + v_{-2.5}) + 8890560(v_{1.5} + v_{-1.5}) + 20003760(v_{0.5} + v_{-0.5})) |
20th | + 126(p_{5.5} - p_{-5.5}) + 65945(p_{4.5} - p_{-4.5}) + 1900305(p_{3.5} - p_{-3.5}) + 14799510(p_{2.5} - p_{-2.5}) + 39334680(p_{1.5} - p_{-1.5}) + 26893944(p_{0.5} - p_{-0.5}) | = | dt*( + 13860(v_{4.5} + v_{-4.5}) + 623700(v_{3.5} + v_{-3.5}) + 7484400(v_{2.5} + v_{-2.5}) + 34927200(v_{1.5} + v_{-1.5}) + 73347120(v_{0.5} + v_{-0.5}))
All of these can be used as implicit or explicit Adams methods by solving for one of the terms. The 3rd order characteristic is the leapfrog method, in this sense. The 7th, 11th, and 15th order characteristics form explicit Adams methods, but they are unusably unstable. The 5th, 9th, and 13th order characteristics form implicit Adams methods. I haven't checked them, but I imagine they are unusably unstable too.
The n+4th characteristic is always some linear combination of the nth, n+2th, and the nth shifted -1 and +1. (For example, 3rd shifted -1 and +1 is "p_{-2} - 2p_{-1} + p_{0} + p_{0} - 2p_{1} + p_{2} = a_{-1} + a_{1}".)
I found these using a C program doing long-double Gaussian elimination on i^{n}, for a sequence of points all 1 apart and centered at 1/4. I took advantage of the fact that the constants are symmetric to cut the matrix size in half. For the 5th, 9th, and 13th order characteristics, I first found the characteristics with the middle acceleration zero, then subtracted that from the 7th, 11th, 15th order characteristics to get the form shown here. I used continued fractions to determine the denominators of the resulting constants.
Update on that: if I know the position coefficients, I can use Gaussian elimination to find the other coefficients. That lets me find methods directly without searching for linear combinations of characteristics, even without knowing the characteristics. The position coefficients sum to 1 for all methods. The last coefficient is always 1 for position-velocity and -1 for position-acceleration.
(All the methods I gave in the tables at the top are time-reversible. This means they will probably do a good job of conserving energy. It also means they cannot dampen any oscillations due to errors.)
Whether a method is stable can be tested by considering an object at rest, for the problem x' = x'' = 0, with an error of +1 recorded in the position at time zero. Track how the error gets propagated. The accelerations (velocities) are always zero, so their coefficients don't matter. The position coefficients (1, 0, 0, ... 0, 0, 1, -1) propagate errors like so: 0 0 0 0 .. 0 0 0 0 1 1 1 1 .. 1 1 1 1 2 2 2 2 .. 2 2 2 2 3 3 3 3 .. 3 3 3 3 4 4 4 4 ... As you can see, this is equivalent to a change in velocity plus an oscillation every n points, where the oscillation stays a fixed size. The position coefficients (0, 0, ..., 0, 0, 1) propagate errors like so: 0 0 .. 0 0 1 0 0 .. 0 0 1 0 0 .. 0 0 1, which is just a repeating oscillation. Position-velocity methods can use (0 0 .. 0 1). Position-acceleration methods need (1 0 0 .. 0 0 1 -1).
Once you have position coefficients you are happy with, let n be the order of the method, and use Gaussian elimination on the sequence i^{n} to determine the velocity or acceleration coefficients.
For most problems (for example, x'=x(y-2), y'=y(1-x)), the acceleration (velocity) does matter; in fact, no symmetric multistep method seems to be stable on them. But for some problems (for example, x'=y, y'=-x, or the n-body problem), the feedback seems to always cancel itself out.
I don't know yet what makes a problem good or bad.
For position-acceleration methods, with coefficients (-1, 1, 0 .. 0, 1, -1), which propagate an error as (0 0 .. 0 0 1 1 .. 1 1 2 2 .. 2 2 3 3 ..) when the acceleration is zero, I can say a little more about stability. An impulse of 1 translates into a change in velocity between every n-1th and nth step. n consecutive impulses of 1 translate into a change in velocity of 1 between all points, that is, a smooth change in velocity of 1. All the accelerations times their coefficients can be treated as impulses. The velocity at a point after a long period of time is the sum of the impulses from every nth point.
There is a continuum of stable position coefficients. For example, ((1/2), 1, (1/2), -1) is stable. The oscillation doesn't repeat, it looks kinda random. But it never grows. (The acceleration coefficients for this one are 31/24, 22/24, 31/24, if you feel like testing it.)
An old textbook of mine, "Introductory Combinatorics" by Brualdi, 1977 Elsevier Science Publishing, pp 131-139, shows how to find closed-form expressions for "linear recurrence relations", where a_{i+1} = c_{0}a_{i} + c_{1}a_{i-1} + c_{2}a_{i-2} + c_{3}a_{i-3}. You have to factor the characteristic polynomial (x^{4} - c_{0}x^{3} - c_{1}x^{2} - c_{2}x - c_{3}) into (x-A)(x-B)(x-C)(x-D). When the values of c are (2+2Q, -2-4Q, 2+2Q, -1), as is required for the position coefficients of four-term position-acceleration methods, that factors into (x-1) (x-1) (x-(Q+sqrt(QQ-1))) (x-(Q-sqrt(QQ-1))). The reference above shows how to translate that into the closed form a_{n} = (W*1^{n} + X*(n-1)*1^{n} + Y*cos(n*arccos(Q)) + Z*sin(n*arccos(Q))) when (QQ-1) is negative. (All those terms are powers of the roots of the original polynomial. When QQ-1 is negative, ||Q + sqrt(QQ-1)|| = sqrt(||QQ|| + ||QQ-1||) = sqrt(QQ - QQ + 1) = sqrt(1) = 1. Exponentiating complex numbers of magnitude 1 produces sine waves.) (I ought to define W, X, Y, Z in terms of Q, but I haven't been able to work it through without making a mistake somewhere along the way yet. I'm awful at algebra.) This predicts that position coefficients strictly between (-1, 0, 2, 0, -1) and (-1, 4, -6, 4, -1) should be stable, and that outside of that range methods should diverge, and this is in fact the case. The same techniques can be applied to higher order methods.
For example, for (-1, 1/2, 1, 1/2, -1), Q=-3/4, arccos(Q) is about 138.59 degrees, W=0, X=2/7, Y=2/7, and Z=6/(7*sqrt(7)). That correctly predicts that the next term after 0, 0, 0, 1, is 1/2.
If you eliminate time-reversibility (that is, the requirement that the coefficients are symmetric), you can dampen oscillations. That would allow the output of one method to be used as the input to another method without errors growing exponentially. A first step in this direction, given a bunch of symmetric methods, is to average together a couple methods. For example, you can make the position-velocity methods above useful by averaging together the two methods of the same order.
The same techniques can be used to derive methods for position and velocity, rather than position and acceleration. Any error oscillates without getting bigger, and errors do not change velocity. However, if you use the output of one method as the input to another (for example using acceleration to find velocity then velocity to find position), errors increase exponentially. I tried the n-body simulations that way with lots of combinations of these, and they all explode after an orbit or two. I also tried a single-level problem where the orbit depended strongly on the position, and that always exploded too.
These methods are just good for nothing on their own. Tests were run for the n-body problem, where two consecutive methods were averaged together. For methods on half-points, measurements were run on (x+2y)/3, where x is the method and y is the next consecutive (whole-point) method. For methods on whole points, measurements were run on (3x+y)/4, where x is the current method and y is the next consecutive method (a half-point method). Even with the methods averaged together, these results look pretty bad to me. At this point I'd recommend using something (anything) else.
Order | Method | | (positions) | | (velocities) | Escape | Accuracy
---|---|---|---|---|---|---|---
2nd | p_{1} | = | + p_{-1} | + | dt*( + 2v_{0} ) | 15 | >1000
2nd | p_{1.5} | = | + p_{-1.5} | + | dt*( + 3v_{0.5} + 3v_{-0.5} )/2 | ? | ?
4th | p_{2} | = | + p_{-2} | + | dt*( + 8v_{1} - 4v_{0} + 8v_{-1} )/3 | 35 | 200 |
4th | p_{2.5} | = | + p_{-2.5} | + | dt*( + 55v_{1.5} + 5v_{0.5} + 5v_{-0.5} + 55v_{-1.5} )/24 | ? | ?
6th | p_{3} | = | + p_{-3} | + | dt*( + 33v_{2} - 42v_{1} + 78v_{0} - 42v_{-1} + 33v_{-2} )/10 | 120 | 120 |
6th | p_{3.5} | = | + p_{-3.5} | + | dt*( + 4277v_{2.5} - 3171v_{1.5} + 3934v_{0.5} + 3934v_{-0.5} - 3171v_{-1.5} + 4277v_{-2.5} )/1440 | ? | ? |
8th | p_{4} | = | + p_{-4} | + | dt*( + 3680v_{3} - 7632v_{2} + 17568v_{1} - 19672v_{0} + 17568v_{-1} - 7632v_{-2} + 3680v_{-3} )/945 | 400 | 600 |
8th | p_{4.5} | = | + p_{-4.5} | + | dt*( + 16083v_{3.5} - 25227v_{2.5} + 44703v_{1.5} - 15399v_{0.5} - 15399v_{-0.5} + 44703v_{-1.5} - 25227v_{-2.5} + 16083v_{-3.5} )/4480 | ? | ? |
10th | p_{5} | = | + p_{-5} | + | dt*( + 20225v_{4} - 58450v_{3} + 166700v_{2} - 275350v_{1} + 339110v_{0} - 275350v_{-1} + 166700v_{-2} - 58450v_{-3} + 20225v_{-4} )/4536 | 3000 | >10000 |
10th | p_{5.5} | = | + p_{-5.5} | + | dt*( + 30277247v_{4.5} - 72635189v_{3.5} + 172412680v_{2.5} - 187941776v_{1.5} + 97803838v_{0.5} + 97803838v_{-0.5} - 187941776v_{-1.5} + 172412680v_{-2.5} - 72635189v_{-3.5} + 30277247v_{-4.5} )/7257600 | ? | ? |
12th | p_{6} | = | + p_{-6} | + | dt*( + 9626v_{5} -35771v_{4} +123058v_{3} -266298v_{2} +427956v_{1} -494042v_{0} +427956v_{-1} -266298v_{-2} +123058v_{-3} -35771v_{-4} + 9626v_{-5} )/1925 | ? | ? |
12th | p_{6.5} | = | + p_{-6.5} | + | dt*( + 4.7262539404881v_{5.5} -15.285227125547v_{4.5} +45.75275765364v_{3.5} -77.59663041319v_{2.5} +85.1901407992v_{1.5} +36.2872948546v_{0.5} +36.2872948546v_{-0.5} +85.1901407992v_{-1.5} -77.59663041319v_{-2.5} +45.75275765364v_{-3.5} -15.285227125547v_{-4.5} + 4.7262539404881v_{-5.5} ) | ? | ? |
14th | p_{7} | = | + p_{-7} | + | dt*( + 5.52398548399v_{6} - 25.1322711876v_{5} + 101.7001500619v_{4} - 271.271046097v_{3} + 540.80724780v_{2} - 805.2153417v_{1} + 921.1745513v_{0} - 805.2153417v_{-1} + 540.80724780v_{-2} - 271.271046097v_{-3} + 101.7001500619v_{-4} - 25.1322711876v_{-5} + 5.52398548399v_{-6} ) | ? | ? |
14th | p_{7.5} | = | + p_{-7.5} | + | dt*( + 5.259634135629v_{6.5} - 21.426674812136 v_{5.5} + 77.57813318807v_{4.5} - 174.6135460410 v_{3.5} + 274.436211315v_{2.5} - 271.085884246v_{1.5} + 117.352126463v_{0.5} + 117.352126463v_{-0.5} - 271.085884246v_{-1.5} + 274.436211315v_{-2.5} - 174.6135460410 v_{-3.5} + 77.57813318807v_{-4.5} - 21.426674812136 v_{-5.5} + 5.259634135629v_{-6.5} ) | ? | ? |