Consider the problem of jointly estimating the means of a large number of distributions in R^d using separate, independent data sets from each of them, sometimes called the "multi-task averaging" problem.
We propose an estimator that improves on the naive empirical means of the individual data sets by exploiting possible similarities between the means, without any such information being known in advance. First, for each data set, similar or neighboring means are identified from the data by multiple testing. Then each naive estimator is shrunk towards the local average of its neighbors (a sketch of this two-step procedure is given below). We prove that this approach yields an improvement in mean squared error that can be significant when the (effective) dimensionality of the data is large and when the unknown means exhibit structure, such as clustering or concentration on a low-dimensional set, even though that structure is entirely unknown in advance. The improvement is directly linked to the fact that, in high dimension, the separation distance required for testing is smaller than the estimation error, and it generalizes the well-known James-Stein phenomenon. One application of this approach is the estimation of multiple kernel mean embeddings, which play an important role in many modern applications.
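To make the dimension argument concrete, consider the idealized case of isotropic standard Gaussian tasks with n observations each (an assumption made here purely for illustration; the talk's results are stated in terms of an effective dimensionality). The naive mean then has squared estimation error d/n, while a two-sample test can already detect a difference between two means at the smaller separation of order sqrt(d)/n:

\[
\mathbb{E}\,\lVert \bar{X}_b - \mu_b \rVert^2 = \frac{d}{n},
\qquad\text{whereas}\qquad
\lVert \mu_b - \mu_c \rVert^2 \gtrsim \frac{\sqrt{d}}{n}
\;\text{ is reliably detectable.}
\]

For large d, neighbors can thus be identified at a resolution much finer than the estimation error, which is what makes shrinking towards them profitable.

The following is a minimal sketch of the two-step procedure in Python/NumPy under the same illustrative assumptions; the constants `tau` (test threshold) and `gamma` (shrinkage weight) are hypothetical placeholders, not the calibrated choices analyzed in the talk.

```python
import numpy as np

def multitask_means(samples, tau=2.0, gamma=0.5):
    """Improve naive per-task means by shrinking each one towards the
    average of its test-selected neighbors.

    samples : list of arrays of shape (n_b, d), one data set per task.
    tau     : illustrative threshold for the neighbor test (hypothetical).
    gamma   : illustrative shrinkage weight in [0, 1] (hypothetical).
    """
    naive = [x.mean(axis=0) for x in samples]
    # Estimated squared estimation error of each naive mean: tr(Sigma_b)/n_b.
    err = [x.var(axis=0, ddof=1).sum() / x.shape[0] for x in samples]

    improved = []
    for b, mu_b in enumerate(naive):
        # Testing step: declare task c a neighbor of task b if the squared
        # distance between the naive means is not significantly larger
        # than the combined estimation errors.
        neighbors = [c for c, mu_c in enumerate(naive)
                     if c != b
                     and np.sum((mu_b - mu_c) ** 2) <= tau * (err[b] + err[c])]
        if neighbors:
            # Shrinkage step: move towards the local average of the neighbors.
            local_avg = np.mean([naive[c] for c in neighbors], axis=0)
            improved.append((1 - gamma) * mu_b + gamma * local_avg)
        else:
            improved.append(mu_b)
    return improved

# Example: 50 tasks of 20 points each in R^100, true means in 5 clusters.
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 100))
tasks = [centers[b % 5] + rng.normal(size=(20, 100)) for b in range(50)]
estimates = multitask_means(tasks)
```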
This is based on joint work with Hannah Marienwald and Jean-Baptiste Fermanian.
The talk can also be joined online via our Zoom meeting:
Meeting room opens: May 8, 2023, 4:30 pm (Vienna time)
Meeting ID: 684 6183 7546
Password: 881731