Wednesday, 19 May 2010

divergent series - Do Abel summation and zeta summation always coincide?

I think the answer is 'yes.' I don't have a suitably general reason why this is the case, although surely one exists and is in the literature somewhere.



At any rate, for the problem at hand, we have for $s > 0$



$$\sum \frac{a_n}{n^s} = \frac{1}{\Gamma(s)}\int_0^\infty \sum a_n e^{-nt} t^{s-1} dt.$$
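
The termwise identity behind this formula is the standard Gamma-integral representation of $n^{-s}$, obtained by substituting $u = nt$:

$$\frac{1}{\Gamma(s)}\int_0^\infty e^{-nt} t^{s-1} dt = \frac{1}{\Gamma(s)}\int_0^\infty e^{-u}\left(\frac{u}{n}\right)^{s-1}\frac{du}{n} = \frac{n^{-s}}{\Gamma(s)}\int_0^\infty e^{-u} u^{s-1} du = \frac{1}{n^s}.$$

Summing over $n$ and interchanging sum and integral gives the stated formula; justifying that interchange is the subject of the edits below.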



Edit: the interchange of limit and sum used here requires justification, and this is done below. Supposing that $\sum a_n x^n \rightarrow \sigma$ as $x \rightarrow 1^-$, we may write $\sum a_n e^{-nt} = (\sigma + \epsilon(t))\cdot e^{-t}$, where $\epsilon(t) \rightarrow 0$ as $t \rightarrow 0$ and $\epsilon(t)$ is bounded for all $t$. In this case



$$\sum \frac{a_n}{n^s} = \sigma + O\left(s\int_0^\infty \epsilon(t) e^{-t} t^{s-1} dt\right)$$
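
The factor of $s$ in the error term reflects the behaviour of $1/\Gamma(s)$ near $s = 0$: from $\Gamma(s+1) = s\,\Gamma(s)$ and $\Gamma(s+1) \rightarrow 1$,

$$\frac{1}{\Gamma(s)} = \frac{s}{\Gamma(s+1)} = s\,\big(1 + O(s)\big) \qquad (s \rightarrow 0^+),$$

so the prefactor multiplying the integral is $O(s)$ as $s \rightarrow 0^+$.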



Showing that the error term tends to $0$ is just a matter of epsilontics; for any $\epsilon > 0$, there is $\Delta > 0$ so that $|\epsilon(t)| < \epsilon$ for $t < \Delta$. Hence



$$\left|s\int_0^\infty \epsilon(t) e^{-t} t^{s-1} dt \right| < s\epsilon \int_0^\Delta t^{s-1} dt + s\int_\Delta^\infty e^{-t}t^{s-1}dt \leq \epsilon \Delta^s + s\,\Delta^{s-1} e^{-\Delta},$$



since for $0 < s < 1$ the function $t^{s-1}$ is decreasing, so $t^{s-1} \leq \Delta^{s-1}$ on $[\Delta, \infty)$. Letting $s \rightarrow 0$ with $\Delta$ fixed, the first term tends to $\epsilon$ and the second to $0$, so the error term is bounded by $\epsilon$ in the limit; but $\epsilon$ of course is arbitrary.



Edit: Justifying the interchange of limit and sum above is surprisingly difficult. We will require



Lemma: If, for fixed $\epsilon > 0$, the partial sums $D_{\epsilon}(N) = \sum_{n=1}^N a_n/n^\epsilon$ satisfy $D_\epsilon(N) = O(1)$, then



(a) $A(N) = \sum_{n \leq N} a_n = O(N^\epsilon)$, and



(b) $\sum_{n \leq N} a_n e^{-nt} = O(t^{-\epsilon})$, uniformly in $N$,



where the $O$-constants depend on $\epsilon$.



This, together with the hypothesis that $\sum a_n/n^s$ converges for all $s > 0$ (so that the partial sums $D_\epsilon(N)$ are bounded for each fixed $\epsilon > 0$), implies conclusions (a) and (b) for all positive $\epsilon$.



To prove part (a), note that



$$\sum_{n \leq N} a_n = \sum_{n \leq N} a_n n^{-\epsilon}n^\epsilon = \sum_{n \leq N-1} D_{\epsilon}(n) (n^\epsilon - (n+1)^\epsilon) + D_\epsilon(N)N^\epsilon,$$



which is seen to be $O(N^\epsilon)$ upon taking absolute values inside the sum.
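
In more detail, writing $M$ for a bound on $|D_\epsilon(n)|$ (finite by hypothesis), the differences telescope:

$$\Big|\sum_{n \leq N} a_n\Big| \leq M \sum_{n \leq N-1} \big((n+1)^\epsilon - n^\epsilon\big) + M N^\epsilon = M\big(N^\epsilon - 1\big) + M N^\epsilon < 2M N^\epsilon.$$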



To prove part (b), note that



$$t^\epsilon \sum_{n \leq N} a_n e^{-nt} = t^\epsilon \sum_{n=1}^{N-1} A(n)(e^{-nt} - e^{-(n+1)t}) + t^\epsilon A(N) e^{-Nt} = O\left( \sum_{n\leq N} (tn)^\epsilon e^{-nt}(1-e^{-t}) + (tN)^\epsilon e^{-Nt}\right).$$



Now, $(tN)^\epsilon e^{-Nt} = O(1)$, and



$$\sum_{n\leq N} (tn)^\epsilon e^{-nt}(1-e^{-t}) = 2^\epsilon(1-e^{-t}) \sum_{n\leq N} (tn/2)^\epsilon e^{-nt/2} e^{-nt/2} = O\left(\frac{1-e^{-t}}{1-e^{-t/2}}\right) = O\left(1+e^{-t/2}\right) = O(1),$$



and this proves (b).
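
Both $O(1)$ bounds used in the proof of (b) come from maximizing $x^\epsilon e^{-x}$ over $x \geq 0$; the maximum occurs at $x = \epsilon$, so

$$x^\epsilon e^{-x} \leq \left(\frac{\epsilon}{e}\right)^\epsilon \quad (x \geq 0), \qquad \text{hence} \quad (tN)^\epsilon e^{-Nt} \leq \left(\frac{\epsilon}{e}\right)^\epsilon \quad \text{and} \quad (tn/2)^\epsilon e^{-nt/2} \leq \left(\frac{\epsilon}{e}\right)^\epsilon.$$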



We use this to justify interchanging sum and integral as follows: note that



$$\sum_{n=1}^N \frac{a_n}{n^s} = \frac{1}{\Gamma(s)}\int_0^\infty \sum_{n=1}^N a_n e^{-nt} t^{s-1} dt,$$



and therefore



$$\frac{1}{\Gamma(s)}\int_0^\infty \lim_{N\rightarrow\infty}\sum_{n=1}^N a_n e^{-nt} t^{s-1} dt = \frac{1}{\Gamma(s)}\int_0^1 \lim_{N\rightarrow\infty}\sum_{n=1}^N a_n e^{-nt} t^{s-1} dt + \frac{1}{\Gamma(s)}\int_1^\infty \lim_{N\rightarrow\infty}\sum_{n=1}^N a_n e^{-nt} t^{s-1} dt.$$



In the first integral, note that for $\epsilon < s$, $\sum_{n \leq N} a_n e^{-nt} t^{s-1} = O(t^{s-\epsilon-1})$ for all $N$, and $t^{s-\epsilon-1}$ is integrable on $(0,1)$. So by dominated convergence in the first integral, and uniform convergence of $e^t \sum_{n=1}^N a_n e^{-nt}$ for $t \geq 1$ in the second, this is



$$\lim_{N\rightarrow\infty}\frac{1}{\Gamma(s)}\int_0^1 \sum_{n=1}^N a_n e^{-nt} t^{s-1} dt + \lim_{N\rightarrow\infty}\frac{1}{\Gamma(s)}\int_1^\infty \sum_{n=1}^N a_n e^{-nt} t^{s-1} dt = \lim_{N\rightarrow\infty} \sum_{n=1}^N a_n \frac{1}{\Gamma(s)}\int_0^\infty e^{-nt}t^{s-1} dt.$$



This is just $\sum_{n=1}^\infty \frac{a_n}{n^s}$.



Note then that we do not need to assume from the start that the infinite Dirichlet sum tends to anything as $s \rightarrow 0$; once it converges for each fixed $s$, that is implied by the behavior of the power series.
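
For a concrete example, take $a_n = (-1)^{n+1}$, for which both regularizations can be computed in closed form and agree:

$$\lim_{x \rightarrow 1^-} \sum_{n=1}^\infty (-1)^{n+1} x^n = \lim_{x \rightarrow 1^-} \frac{x}{1+x} = \frac{1}{2}, \qquad \lim_{s \rightarrow 0^+} \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n^s} = \lim_{s \rightarrow 0^+} \big(1 - 2^{1-s}\big)\zeta(s) = \frac{1}{2}.$$

Here $\sum_{n=1}^\infty (-1)^{n+1} n^{-s} = (1-2^{1-s})\zeta(s)$ for $s > 0$, and $\zeta(0) = -\tfrac{1}{2}$.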
