Wiener proved the following fantastic results about approximation by families of translates:
Given a function $h: \mathbb{R} \to \mathbb{R}$, the set $\{\sum a_i h(\cdot - x_i) : a_i, x_i \in \mathbb{R}\}$ is
i) dense in $L^1(\mathbb{R})$ if and only if the Fourier transform of $h$ has no zeros.
ii) dense in $L^2(\mathbb{R})$ if and only if the zero set of the Fourier transform of $h$ has Lebesgue measure zero.
After this, a natural further step concerns the speed of convergence, i.e., how fast the error vanishes with respect to the number of translates. Now let us focus on the $L^1$ case and take $h = \varphi$ to be the standard normal density, whose Fourier transform does not vanish on the real line. Given a function $f \in L^1(\mathbb{R})$, the error of the optimal $m$-term approximation is
$\inf\limits_{a_i, x_i \in \mathbb{R}} \left\|f - \sum_{i=1}^m a_i h(\cdot - x_i)\right\|_1$.
My question is whether there is any way to lower bound this quantity. Of course, there won't be any meaningful conclusion without assumptions on $f$ (e.g., if $f$ is a finite mixture of translates of $\varphi$, the infimum is zero and the question is trivial). So let us consider $f = g * \varphi$ for some smooth $g$ (e.g., $g = \varphi$), where $*$ denotes convolution; that is, $f$ is an ``infinite'' mixture of translates of $\varphi$. Any idea would be greatly appreciated. If things are easier in $L^2$, $L^{\infty}$, or some other distance, that would also be helpful.
For the upper bound, there has been much work: the speed of convergence can be $O(m^{-2})$ or even exponential in $m$. For the lower bound, most works consider a minimax setup: for $f$ belonging to a given class of functions, the worst-case convergence rate can never be faster than $O(m^{-2})$. But for a fixed $f$, there seems to be no known result.
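To get a concrete feel for the quantity in question, here is a minimal numerical sketch (my own illustration, not from any reference): it fits $m$ Gaussian translates to $f = \varphi * \varphi$, i.e. the $N(0,2)$ density, by direct optimization over the weights $a_i$ and shifts $x_i$, and reports the resulting $L^1$ error on a grid. The grid, initialization, and choice of optimizer are all ad hoc, and the optimizer only gives an upper estimate of the infimum, so this probes the upper-bound side rather than the lower bound being asked about.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Target: f = g * phi with g = phi, i.e. the N(0, 2) density.
grid = np.linspace(-10.0, 10.0, 4001)
dx = grid[1] - grid[0]
f = norm.pdf(grid, scale=np.sqrt(2.0))

def l1_error(params, m):
    """Riemann-sum L^1 distance between f and an m-term Gaussian mixture."""
    a, x = params[:m], params[m:]
    mix = np.zeros_like(grid)
    for i in range(m):
        mix += a[i] * norm.pdf(grid - x[i])  # a_i * phi(t - x_i)
    return np.sum(np.abs(f - mix)) * dx

errors = {}
for m in (1, 2, 4):
    # Crude initialization: equal weights, shifts spread near the origin.
    p0 = np.concatenate([np.full(m, 1.0 / m), np.linspace(-1.0, 1.0, m)])
    res = minimize(l1_error, p0, args=(m,), method="Nelder-Mead",
                   options={"maxiter": 20000, "maxfev": 20000})
    errors[m] = res.fun
    print(m, errors[m])
```

Running this, the error decreases quickly in $m$, consistent with the fast upper-bound rates mentioned above; of course it says nothing about how slowly the error must decay, which is the open part of the question.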