Monday, 25 April 2011

ca.analysis and odes - Closed forms for Monotonic polynomial recurrences?

A lot depends on what you mean by "fair accuracy" and on what exactly you are going to do with your formula. If a 30% upside error in each $d_n$ is tolerable, you can do the following.



We look at the recursion $d_{n+1}=q\,d_n(1-d_n)$ with $0<q=2p<1$, starting with some $d_1\in[0,1]$. It'll be convenient to do the first step separately, so we have $d_2=q\,d_1(1-d_1)$. The reason is that $q^{-1}d_2\in[0,1/4]$. Now, denote $b_n=q^{-(n-1)}d_n$ and rewrite the recurrence as $b_{n+1}^{-1}=b_n^{-1}+q^{n-1}\frac{b_n}{b_{n+1}}$. Note now that the ratio $\frac{b_n}{b_{n+1}}=\frac{1}{1-d_n}$ of each term to the next is between $1$ and $4/3$ (because $d_n\le 1/4$ for $n\ge 2$) and tends to $1$, so, replacing it by $1$, we get $b_n^{-1}\approx b_2^{-1}+\frac{q-q^{n-1}}{1-q}$ with 30% accuracy, and we can be sure that it is an underestimate (so the resulting formula for $d_n$ errs on the upside). Putting all this stuff together, we get
$$d_n\approx q^{n-1}\left[\frac{1}{d_1(1-d_1)}+\frac{q-q^{n-1}}{1-q}\right]^{-1}$$


for $n\ge 2$.
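As a quick sanity check, here is a minimal Python sketch comparing this approximation against the exact recursion (the values $q=0.9$ and $d_1=0.3$ are arbitrary illustrative choices, not from the original question):

```python
def exact(q, d1, n):
    """n-th term of the recursion d_{k+1} = q*d_k*(1 - d_k), with d_1 = d1."""
    d = d1
    for _ in range(n - 1):
        d = q * d * (1 - d)
    return d

def approx1(q, d1, n):
    """First closed-form approximation, valid for n >= 2."""
    B = 1.0 / (d1 * (1 - d1)) + (q - q ** (n - 1)) / (1 - q)
    return q ** (n - 1) / B

# The approximation is exact at n = 2 and overestimates d_n afterwards,
# staying well within the claimed 30% upside error.
q, d1 = 0.9, 0.3
for n in (2, 5, 10, 50):
    e, a = exact(q, d1, n), approx1(q, d1, n)
    print(n, e, a, (a - e) / e)
```

For these parameters the relative error at $n=5$ comes out near 6%, consistent with the accuracy observed in practice below.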



Note also that 30% is a very rough estimate for the accuracy. In practice, I've never seen it being worse than 6% (though I haven't done too many simulations).



P.S. The estimate
$$d_n\approx q^{n-1}\left[\frac{1}{d_1(1-d_1)}+\frac{q-q^{n-1}}{1-q}+\log\left(1+d_1(1-d_1)\,\frac{q^2-q^{2(n-1)}}{1-q^2}\right)\right]^{-1}$$


is even better (3% accuracy for small n and about 0.5% accuracy for large n) but noticeably uglier. As several people have already mentioned, there is no exact formula, so the higher precision you want, the longer and uglier the approximation gets.
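For comparison, a sketch of the refined formula in the same setting (again with the arbitrary illustrative values $q=0.9$, $d_1=0.3$):

```python
import math

def exact(q, d1, n):
    """n-th term of the recursion d_{k+1} = q*d_k*(1 - d_k), with d_1 = d1."""
    d = d1
    for _ in range(n - 1):
        d = q * d * (1 - d)
    return d

def approx2(q, d1, n):
    """Refined approximation with the logarithmic correction term (n >= 2)."""
    c = d1 * (1 - d1)
    B = (1.0 / c
         + (q - q ** (n - 1)) / (1 - q)
         + math.log(1 + c * (q ** 2 - q ** (2 * (n - 1))) / (1 - q ** 2)))
    return q ** (n - 1) / B

# The log term vanishes at n = 2, so the refined formula is still exact there;
# for larger n the relative error stays around a percent or below here.
q, d1 = 0.9, 0.3
for n in (2, 5, 10, 50):
    e, a = exact(q, d1, n), approx2(q, d1, n)
    print(n, e, a, (a - e) / e)
```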



P.P.S. The main idea behind the second approximation is that if $B_n=b_n^{-1}$, then during each step of the derivation that led us to the first approximation, we make an error $q^{n-1}\frac{B_{n+1}-B_n}{B_n}\approx\frac{q^{2(n-1)}}{B_n}$. To compensate, we would like to add the partial sums of that series. We also know that $B_n\approx\frac{1}{d_1(1-d_1)}+q+q^2+\dots+q^{n-2}$. Unfortunately, we cannot sum the resulting series nicely. But if we double all the exponents in the approximate formula for $B_n$ (which leads only to a small error when $q\approx 1$), then we get a Riemann-sum-type expression, which we can replace by the corresponding integral. Clearly, it may overshoot in the long run, but to my own surprise, it not only corrects the first few terms nicely, but also corrects the asymptotics, giving an error below 0.5% for large $n$ in the entire range of parameters (well, at least that is so for all values I looked at, and I tried quite a few). Why it works this way remains a mystery, but it does. It can be shown rigorously that the total relative overshoot coming from all steps after $n$ is at most $n^{-1}$, and I've never seen overshoots of more than 0.1% up to $n=300$.
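The compensation idea can also be checked numerically: the partial sums of the per-step errors $\frac{q^{2(k-1)}}{B_k}$, with $B_k$ taken from the first approximation, should be tracked reasonably well by the logarithmic term. A Python sketch, again with arbitrary illustrative parameter values:

```python
import math

# q and d1 are arbitrary illustrative choices; c = d1*(1 - d1) = b_2.
q, d1 = 0.9, 0.3
c = d1 * (1 - d1)

def error_partial_sum(n):
    """Accumulated per-step errors sum_{k=2}^{n-1} q^(2(k-1)) / B_k,
    with B_k from the first approximation."""
    return sum(q ** (2 * (k - 1)) / (1.0 / c + (q - q ** (k - 1)) / (1 - q))
               for k in range(2, n))

def log_correction(n):
    """Closed-form integral (Riemann-sum) approximation of the same sum,
    i.e. the log term in the refined formula."""
    return math.log(1 + c * (q ** 2 - q ** (2 * (n - 1))) / (1 - q ** 2))

for n in (5, 20, 100):
    print(n, error_partial_sum(n), log_correction(n))
```

For these parameters the two quantities agree to within a few percent, in line with the overshoot estimates above.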
