Talk from the Archives

Local Asymptotic Minimax property of the Least Squares Estimator for monotone functions

26.03.2012

In this talk, we consider the Least Squares Estimator (LSE) for monotone regression functions under the white noise model. For estimating the monotone function $f$ at a fixed point $x$, we prove that the LSE has a Local Asymptotic Minimax (LAM) property with respect to the rate of estimation: no estimator can achieve a significantly better rate of convergence than the LSE uniformly over shrinking $L^2$-balls of monotone functions around $f$, while the LSE does attain its rate uniformly over these same balls. As a consequence, the LSE adapts its rate of convergence to the (unknown) local behavior of $f$ near $x$. This adaptation is to the specific underlying function, not merely to a smoothness class of functions. Classically, the LAM property was used for estimating smooth finite-dimensional parameters and for obtaining optimal constants. We use it instead to compare rates of convergence, a purpose for which we believe the LAM property is particularly relevant, especially in shape-constrained models.
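
To make the adaptation statement concrete, here is a standard example of the phenomenon (an illustration added for this archive page; the abstract itself does not state this case). Suppose $f$ behaves like a power of order $\alpha \ge 1$ near $x$,

$$ f(x+t) - f(x) \sim c\,|t|^{\alpha}\,\mathrm{sign}(t), \qquad t \to 0, $$

then the LSE at $x$ converges at rate $n^{-\alpha/(2\alpha+1)}$: this gives the familiar $n^{-1/3}$ when $f'(x) > 0$ ($\alpha = 1$) and approaches the parametric rate $n^{-1/2}$ as $f$ becomes flatter near $x$ ($\alpha \to \infty$), all without the estimator knowing $\alpha$.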

The results of this talk extend to more general regression models with monotone functions and to monotone density estimation.
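
Although the talk is set in the white noise model, the estimator is easiest to illustrate in its finite-sample regression analogue $y_i = f(x_i) + \varepsilon_i$ with $f$ non-decreasing, where the monotone LSE is computed by the pool adjacent violators algorithm (PAVA). The following is a minimal Python sketch, added here for illustration and not taken from the talk; the function `pava` and the toy signal $f(t) = t^3$ are hypothetical choices:

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators Algorithm: least squares fit of a
    non-decreasing sequence to y (the monotone LSE at the design points)."""
    sums, counts = [], []          # each block: (sum of y-values, block size)
    for v in np.asarray(y, dtype=float):
        sums.append(v)
        counts.append(1)
        # Merge adjacent blocks while their means violate monotonicity.
        while len(sums) > 1 and sums[-2] * counts[-1] > sums[-1] * counts[-2]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c
    # Every observation in a block is fitted by that block's mean.
    return np.concatenate([np.full(c, s / c) for s, c in zip(sums, counts)])

# Toy example: f(t) = t^3 is monotone, nearly flat near 0 and steep near 1,
# so the local difficulty of estimating f(x0) varies with x0.
rng = np.random.default_rng(0)
n = 1000
x = np.sort(rng.uniform(size=n))
y = x**3 + 0.1 * rng.normal(size=n)
fhat = pava(y)                                   # monotone LSE
x0 = 0.5
print(fhat[min(np.searchsorted(x, x0), n - 1)])  # LSE evaluated near x0
```

In this toy setup the same estimator, with no tuning parameter, fluctuates on the $n^{-1/3}$ scale near $x_0 = 0.5$ (where $f'(x_0) > 0$) and is markedly more stable near $0$ (where $f$ is nearly flat), matching the local adaptation described above.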