Wiener proved the following remarkable results about approximation by families of translates:
Given a function $h:\mathbb{R}\to\mathbb{R}$, the set $\left\{\sum_i a_i h(\cdot - x_i) : a_i, x_i \in \mathbb{R}\right\}$ of finite linear combinations of translates is

i) dense in $L^1(\mathbb{R})$ if and only if the Fourier transform of $h$ has no zeros;

ii) dense in $L^2(\mathbb{R})$ if and only if the zero set of the Fourier transform of $h$ has Lebesgue measure zero.
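As a quick sanity check for case (i) with a Gaussian kernel (my own illustration, not part of Wiener's argument, and assuming the convention in which the transform of the standard normal density is $e^{-\xi^2/2}$):

```python
import numpy as np

# Sketch: under the convention that the Fourier transform of the standard
# normal density phi is exp(-xi^2 / 2), the transform is strictly positive,
# so Wiener's L^1 condition for density of translates of phi holds.
xi = np.linspace(-10, 10, 10001)
phi_hat = np.exp(-xi**2 / 2)   # closed-form transform of phi
print(phi_hat.min() > 0)       # prints True: no zeros on this window
```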
After this, a natural further step concerns the speed of convergence, i.e., how fast the error vanishes as the number of translates grows. Let us focus on the $L^1$ case and take $h=\varphi$ to be the standard normal density, whose Fourier transform does not vanish on the real line. Given a function $f\in L^1(\mathbb{R})$, the error of the optimal $m$-term approximation is
$$\inf_{a_i,\,x_i\in\mathbb{R}}\left\|f-\sum_{i=1}^{m} a_i\,h(\cdot-x_i)\right\|_1.$$
My question is whether there is any way to lower bound this quantity. Of course, no meaningful conclusion is possible without assumptions on $f$ (e.g., if $f$ is a finite mixture of translates of $\varphi$, the problem is trivial). So let us consider $f=g*\varphi$ for some smooth $g$ (e.g., $g=\varphi$), where $*$ denotes convolution; that is, $f$ is an ``infinite'' mixture of translates of $\varphi$. Any idea would be greatly appreciated. If things are easier in $L^2$, $L^\infty$, or another distance, that would also be helpful.
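To make the quantity concrete, here is a small numerical sketch (my own illustration, with hypothetical choices of grid, window, and optimizer): for $f=\varphi*\varphi$, i.e., the $N(0,2)$ density, discretize the $L^1$ norm on a finite window and minimize over $m$ weighted translates of $\varphi$ with a derivative-free optimizer. This only yields a numerical upper estimate of the infimum, not the lower bound asked for.

```python
import numpy as np
from scipy.optimize import minimize

def phi(x):
    """Standard normal density."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def f(x):
    """f = phi * phi, i.e. the N(0, 2) density."""
    return np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)

# Discretize the L^1 norm on a finite window (assumption: tails beyond
# |x| = 10 contribute negligibly for these Gaussians).
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def l1_error(params, m):
    a, t = params[:m], params[m:]
    approx = phi(x[:, None] - t).dot(a)  # sum_i a_i phi(x - t_i)
    return np.sum(np.abs(f(x) - approx)) * dx

m = 3
rng = np.random.default_rng(0)
# A few random restarts of Nelder-Mead; keep the best local minimum found.
best = min(
    (minimize(l1_error,
              np.concatenate([np.full(m, 1 / m), rng.normal(size=m)]),
              args=(m,), method="Nelder-Mead",
              options={"maxiter": 4000, "fatol": 1e-10})
     for _ in range(5)),
    key=lambda r: r.fun,
)
print(best.fun)  # numerical estimate of the optimal 3-term L^1 error
```

Plotting the resulting estimate against $m$ gives an empirical sense of the decay rate, but of course says nothing rigorous about a lower bound for a fixed $f$.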
For the upper bound there has been much work: the speed of convergence can be $O(m^{-2})$ or even exponential in $m$. For the lower bound, most works consider a minimax setup: for $f$ ranging over a given class of functions, the worst-case convergence rate can never be faster than $O(m^{-2})$. But for a fixed, given $f$, there seems to be no known result.