Most people define a function f(n) on N recursively. I think I can calculate f(n) without dealing with f(n-r) for any 0 < r < n. How do I know that my method isn't still going through the same calculations needed to find f(n-1) (or whatever previous terms are required to find f(n) recursively)?
If my method takes many fewer calculations than the recursive way of calculating it, does that show that I am not relying on f(n-r) for any 0 < r < n? What would "many fewer" have to mean for this to be significant?
The number of calculations my method takes still depends on n, just like the recursive way of calculating f(n). Does that alone mean that the methods are pretty much the same?
If my method takes the same number of calculations as (or more than) the recursive way of calculating f(n), is there any other way of telling whether my method is, in some way, duplicating the recursive way of calculating f(n)?
Examples:
f(n) is recursively defined to be f(n) = f(n-1) + 1 and f(1) = 1
Then f(n) = n. Clearly, f(n) = n is a much faster way to find f(2876) than counting up from 1.
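To make "number of calculations" concrete, one thing I can do is instrument both methods and count calls. A minimal sketch, assuming a function call is the unit of work (the decorator and function names here are just illustrative):

```python
import sys
from functools import wraps

sys.setrecursionlimit(5000)  # counting up to 2876 exceeds Python's default limit

def count_calls(func):
    """Wrap func so that func.calls records how many times it was invoked."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@count_calls
def f_recursive(n):
    # straight from the definition: f(n) = f(n-1) + 1, f(1) = 1
    return 1 if n == 1 else f_recursive(n - 1) + 1

@count_calls
def f_closed(n):
    # closed form: f(n) = n
    return n

f_recursive(2876)
f_closed(2876)
print(f_recursive.calls)  # 2876 -- the call count grows linearly with n
print(f_closed.calls)     # 1    -- constant, no matter how big n is
```

Here "many fewer" shows up as a difference in growth rate: the recursive method's count grows linearly with n, while the closed form's count doesn't depend on n at all. That gap in growth rate, more than any fixed ratio, seems like the thing that would make the difference significant.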
f(n) is recursively defined to be f(n) = f(n-1) + f(n-2) with f(1) = f(2) = 1 (the Fibonacci numbers). This is a linear recurrence and has a closed-form solution (Binet's formula): $F(n) = \frac{\varphi^n-(1-\varphi)^n}{\sqrt 5}=\frac{\varphi^n-(-1/\varphi)^{n}}{\sqrt 5}$, where $\varphi=\frac{1+\sqrt 5}{2}$
(from Wikipedia)
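A sketch of using Binet's formula directly, with the caveat that floating-point arithmetic makes the rounded result exact only for moderate n (roughly n ≤ 70 in double precision); the function names are my own:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib_closed(n):
    # Binet's formula: no f(n-1) or f(n-2) values are ever computed
    return round((PHI**n - (1 - PHI)**n) / math.sqrt(5))

def fib_recursive(n):
    # straight from the definition, with f(1) = f(2) = 1
    return 1 if n <= 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

print(fib_closed(25), fib_recursive(25))  # both print 75025
```

The naive recursion makes a number of calls that grows exponentially in n, while the closed form does a fixed amount of arithmetic. Whether the exponentiation itself secretly "contains" the earlier Fibonacci numbers is exactly the kind of question I'm asking above.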
$S(n,k)=kS(n-1,k)+S(n-1,k-1)$ with $S(n,n)=S(n,1)=1$ (Stirling numbers of the second kind). This almost seems like a linear recurrence... but we need to know about k-1 as well. These numbers are defined as "the number of ways to partition a set of n objects into k nonempty groups," so if I have written a few programs to find S(n,k) from that definition, I want to know if I "must" find the values in the linear recurrence along the way...
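For what it's worth, there is an explicit formula that avoids the recurrence entirely: $S(n,k)=\frac{1}{k!}\sum_{j=0}^{k}(-1)^j\binom{k}{j}(k-j)^n$. A minimal sketch comparing it to the recurrence (the function names are mine):

```python
from math import comb, factorial

def stirling2_closed(n, k):
    # explicit (inclusion-exclusion) formula: no S(n-1, ...) values needed
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1)) // factorial(k)

def stirling2_recursive(n, k):
    # straight from the recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if k == n or k == 1:
        return 1
    return k * stirling2_recursive(n - 1, k) + stirling2_recursive(n - 1, k - 1)

print(stirling2_closed(10, 4), stirling2_recursive(10, 4))  # both print 34105
```

So at least here the answer seems to be no: the explicit formula builds S(n,k) out of powers and binomial coefficients rather than out of S(n-1, ·) values.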
But I was trying to keep the question more general to make it interesting.
One more example:
$C(n,k)=C(n-1,k)+C(n-1,k-1)$ with $C(n,n)=C(n,0)=1$, but also $C(n,k) = \frac{n!}{k!(n-k)!}$; most people like the second one better if you want to look at large values of n and k > 1...
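The same comparison in code, as a sketch (names are mine; Python's built-in math.comb does the same job as the factorial formula):

```python
from math import factorial

def binom_closed(n, k):
    # C(n,k) = n! / (k! (n-k)!): no Pascal's-rule values needed
    return factorial(n) // (factorial(k) * factorial(n - k))

def binom_recursive(n, k):
    # Pascal's rule C(n,k) = C(n-1,k) + C(n-1,k-1), with C(n,0) = C(n,n) = 1
    if k == 0 or k == n:
        return 1
    return binom_recursive(n - 1, k) + binom_recursive(n - 1, k - 1)

print(binom_closed(20, 8), binom_recursive(20, 8))  # both print 125970
```

The recursive version makes on the order of C(n,k) calls before it bottoms out at the base cases, which is exactly why the factorial formula wins once n gets large.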