Learning Robustly from Multiple Sources

07.12.2020 16:45 - 17:45

MOVED TO SUMMER SEMESTER 2021 DUE TO COVID-19 RESTRICTIONS @UNIVIE


We study the problem of learning from multiple untrusted data sources, a scenario of increasing practical relevance given the recent emergence of crowdsourcing and collaborative learning paradigms. Specifically, we analyze the situation in which a learning system obtains datasets from multiple sources, some of which might be biased or even adversarially perturbed. It is known that in the single-source case, an adversary with the power to corrupt a fixed fraction of the training data can prevent "learnability": even in the limit of infinite training data, no learning system can approach the optimal test error. I present recent work with Nikola Konstantinov in which we show that, surprisingly, the same is not true in the multi-source setting, where the adversary can arbitrarily corrupt a fixed fraction of the data sources.
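The talk does not spell out an algorithm in this abstract, but the intuition behind why source boundaries help can be illustrated with a toy sketch (not the authors' actual method; all numbers, the mean-estimation task, and the median aggregator are illustrative assumptions): pooling all data and averaging is biased by any corrupted fraction, whereas estimating per source and aggregating robustly across sources tolerates a minority of arbitrarily corrupted sources.

```python
import random
import statistics

random.seed(0)

def source_data(n, corrupted):
    """One source's dataset: clean sources draw from mean 0.0;
    a (hypothetical) adversary shifts corrupted sources to mean 5.0."""
    shift = 5.0 if corrupted else 0.0
    return [shift + random.gauss(0.0, 1.0) for _ in range(n)]

n_sources, n_per_source = 11, 1000
n_bad = 3  # fewer than half of the sources are corrupted

datasets = [source_data(n_per_source, i < n_bad) for i in range(n_sources)]

# Single-source view: pool everything and average.
# The estimate is pulled toward the adversary's shift.
pooled_estimate = statistics.mean(x for d in datasets for x in d)

# Multi-source view: estimate per source, then take the median
# across sources, which ignores a minority of outlier sources.
per_source_means = [statistics.mean(d) for d in datasets]
robust_estimate = statistics.median(per_source_means)

print(f"pooled average: {pooled_estimate:.3f}")   # biased away from 0
print(f"median of sources: {robust_estimate:.3f}")  # close to the true mean 0
```

The pooled average lands near (3/11) * 5.0 ≈ 1.36 regardless of sample size, while the cross-source median stays near the true mean, mirroring the abstract's contrast between corrupting a fraction of the data and corrupting a fraction of the sources.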


Personal website of Christoph Lampert

Location:
HS 7 OMP 1