Intriguing Properties of Generative Classifiers

Abstract

What is the best paradigm to recognize objects: discriminative inference (fast but potentially prone to shortcut learning) or using a generative model (slow but potentially more robust)? We build on recent advances in generative modeling that turn text-to-image models into classifiers. This allows us to study their behavior and to compare them against discriminative models and human psychophysical data. We report four intriguing emergent properties of diffusion-based generative classifiers: they show a record-breaking human-like shape bias (99% for Imagen), near human-level out-of-distribution accuracy, state-of-the-art alignment with human classification errors, and they understand certain perceptual illusions. Our results indicate that while the current dominant paradigm for modeling human object recognition is discriminative inference, zero-shot generative models approximate human object recognition data surprisingly well.
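The classifier construction the abstract builds on works roughly as follows: each candidate class is turned into a text prompt, the text-to-image diffusion model is asked to denoise a noised copy of the input image under that prompt, and the class whose prompt yields the lowest denoising error is chosen. The sketch below is a minimal illustration of that idea, not the paper's exact procedure: `predict_noise` is a hypothetical stand-in for a real text-conditioned denoiser (such as Imagen), and the prompt template, noise schedule, and sample count are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a text-conditioned diffusion denoiser.
# A real system would predict the noise added to `noisy_image` given the
# text `prompt`; here it returns zeros so the sketch runs end to end.
def predict_noise(noisy_image: np.ndarray, prompt: str, t: float) -> np.ndarray:
    return np.zeros_like(noisy_image)

def generative_classify(image: np.ndarray, class_names: list[str],
                        num_samples: int = 8, seed: int = 0) -> str:
    """Score each class by how well the class-conditioned denoiser predicts
    the noise added to the image; the lowest-error class wins."""
    rng = np.random.default_rng(seed)
    errors = {name: 0.0 for name in class_names}
    for _ in range(num_samples):
        t = rng.uniform(0.1, 0.9)                 # illustrative diffusion time
        noise = rng.standard_normal(image.shape)  # noise to be predicted
        noisy = np.sqrt(1 - t) * image + np.sqrt(t) * noise
        for name in class_names:
            pred = predict_noise(noisy, f"a photo of a {name}", t)
            errors[name] += float(np.mean((pred - noise) ** 2))
    return min(errors, key=errors.get)

if __name__ == "__main__":
    dummy_image = np.zeros((64, 64, 3))
    print(generative_classify(dummy_image, ["cat", "dog", "airplane"]))
```

Because classification requires one denoising pass per candidate class (and per noise sample), inference is far slower than a discriminative forward pass, which is the speed/robustness trade-off the abstract refers to.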

Authors

Priyank Jaini, Kevin Clark, Robert Geirhos

Venue

ICLR 2024