Reconstructing Training Data with Informed Adversaries

Given access to a machine learning model, can an adversary reconstruct the model's training data? This work proposes a formal threat model to study this question, shows that reconstruction attacks are feasible in theory and in practice, and presents preliminary results assessing how different factors of standard ML pipelines affect the success of reconstruction attacks. Finally, we empirically evaluate what level of differential privacy suffices to prevent reconstruction attacks.
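To give a flavour of the threat model, here is a deliberately simplified toy sketch (not the paper's attack): the "released model" is just the exact mean of the training set, and an informed adversary who knows all training points except one can recover the missing point exactly from the released value. All names and data below are illustrative assumptions.

```python
# Toy illustration of an "informed adversary" reconstruction attack.
# The released "model" is the exact mean of the training data; an adversary
# who knows n-1 of the n training points can solve for the remaining one.
# This is a stand-in for intuition only, NOT the attack studied in the paper.

def release_model(data):
    """'Training': release the exact mean of the dataset."""
    return sum(data) / len(data)

def reconstruct_missing(released_mean, known_points, n):
    """Informed adversary: knows n-1 of n points plus the released mean."""
    return n * released_mean - sum(known_points)

data = [1.0, 4.0, 2.5, 7.0]      # hypothetical training set
model = release_model(data)
known = data[:-1]                 # adversary knows all but the last point
target = reconstruct_missing(model, known, len(data))
print(target)                     # exactly recovers data[-1]
```

Note how the exactness of the released statistic is what makes reconstruction trivial here; a differentially private release would add calibrated noise to the mean, turning this exact recovery into a noisy estimate, which is the intuition behind the paper's final question about what level of differential privacy suffices.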

Authors' notes