Information-theoretic Online Memory Selection for Continual Learning
Abstract

A challenging problem in task-free continual learning from data streams is the online selection of a representative replay memory. In this work we investigate the online memory selection problem from an information-theoretic perspective. To gather the most information, we propose the \textit{surprise} and the \textit{learnability} criteria to pick informative points and to avoid outliers. We present a Bayesian model that computes the criteria efficiently by exploiting rank-one matrix structures, and we employ these criteria in a greedy algorithm for online memory selection. Furthermore, to address the difficulty a deterministic greedy procedure has in deciding \textit{when to update the memory}, we introduce a stochastic information-theoretic reservoir sampler (InfoRS), which samples only among points with high information content. Compared to reservoir sampling, InfoRS demonstrates improved robustness against data imbalance. Finally, empirical results on continual learning benchmarks demonstrate both the efficiency and the efficacy of the proposed approaches.
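As a rough illustration of the InfoRS idea, the sketch below gates a standard reservoir sampler with a scalar information score, so that only sufficiently informative points compete for memory slots. The `info_score` function and `threshold` parameter are hypothetical stand-ins for the paper's surprise/learnability criteria; this is a minimal sketch of the sampling scheme, not the paper's actual implementation.

```python
import random

def info_reservoir_sampling(stream, capacity, info_score, threshold):
    """Information-gated reservoir sampling (InfoRS-style sketch).

    `info_score(x)` is a placeholder for an information criterion
    (e.g., the paper's surprise/learnability scores); only points
    whose score exceeds `threshold` are considered for the memory.
    """
    memory = []
    n_informative = 0  # number of points that passed the information gate
    for x in stream:
        if info_score(x) < threshold:
            continue  # skip uninformative points and likely outliers
        n_informative += 1
        if len(memory) < capacity:
            memory.append(x)
        else:
            # Standard reservoir rule among gated points: keep x
            # with probability capacity / n_informative.
            j = random.randrange(n_informative)
            if j < capacity:
                memory[j] = x
    return memory
```

Because the reservoir rule is applied only to points that clear the information gate, the memory remains a uniform sample of the informative subset of the stream, which is the source of the robustness to data imbalance claimed above.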
