From Data to Functa: Your data point is a function and you should treat it like one

It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is to represent these measurements using an \textit{implicit neural representation}, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level: what would it take to perform deep learning directly on these functions, treating them as data? In this context, we refer to the data as \textit{functa}, and propose a framework for deep learning directly on functa. This view presents a number of challenges: efficiently converting data to functa, representing functa compactly, and effectively solving downstream tasks on functa. We outline a recipe to overcome these challenges, which we apply to a wide range of data modalities, including images, 3D shapes, Neural Radiance Field (NeRF) scenes and data on manifolds. We demonstrate that this approach has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification.
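To make the data-to-functa conversion concrete, the following is a minimal toy sketch of fitting an implicit neural representation: a small coordinate network trained so that it maps a spatial location to the measured signal value at that location. Everything here (a 1D signal, a two-layer sine network, NumPy with manual backprop, the training settings) is an illustrative assumption, not the paper's actual architecture or pipeline.

```python
import numpy as np

# Toy "data -> functa" conversion: fit a coordinate-based network f_theta(x)
# (an implicit neural representation) to discrete measurements of a
# continuous signal. All choices below are illustrative assumptions.

rng = np.random.default_rng(0)

# "Measurements on a discrete grid": 64 samples of a continuous 1D signal.
xs = np.linspace(0.0, 1.0, 64)[:, None]   # (64, 1) grid coordinates
ys = np.sin(2 * np.pi * xs)               # (64, 1) measured values

# Two-layer network with a sine nonlinearity, trained by full-batch
# gradient descent on mean squared error, with manual backprop.
H = 64
W1 = rng.normal(0.0, 4.0, (1, H))         # wide range of frequencies
b1 = rng.uniform(-np.pi, np.pi, H)
W2 = rng.normal(0.0, 1.0, (H, 1)) / np.sqrt(H)
b2 = np.zeros(1)

lr = 1e-2
for _ in range(5000):
    z1 = xs @ W1 + b1                     # (64, H)
    h1 = np.sin(z1)
    pred = h1 @ W2 + b2                   # (64, 1)
    g_pred = 2.0 * (pred - ys) / len(xs)  # d(MSE)/d(pred)
    # Backprop through the two layers.
    gW2, gb2 = h1.T @ g_pred, g_pred.sum(0)
    g_z1 = (g_pred @ W2.T) * np.cos(z1)
    gW1, gb1 = xs.T @ g_z1, g_z1.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.sin(xs @ W1 + b1) @ W2 + b2 - ys) ** 2))
print(f"train MSE: {mse:.4f}")

# The fitted network is a continuous representation of the signal: it can be
# queried at any coordinate, not just at the original grid points.
x_off_grid = np.array([[0.1234]])
y_off_grid = (np.sin(x_off_grid @ W1 + b1) @ W2 + b2).item()
print(f"f(0.1234) = {y_off_grid:.3f}")
```

After fitting, the parameters of this network, rather than the 64 grid samples, become the representation of the data point; the framework described in the abstract operates on such parameterized functions.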

Authors' notes