Learning Causal Overhypotheses through Exploration in Children and Computational Models
Abstract

Despite recent progress in reinforcement learning (RL), algorithms for gathering causal information about an environment remain an active area of research. Existing algorithms for exploration often focus on state-based metrics and do not consider causal structure, and while recent research has begun to explore environments for causal learning, these environments primarily leverage causal information through causal inference or induction rather than exploration. While agents may not leverage causal information for exploration, human children, some of the most proficient explorers, have been shown to use such information to great benefit. In this work, we introduce a novel environment designed with a controllable causal structure, which allows us to test both agents and children in a unified setting. In addition, through experimentation on both agents and children, we demonstrate that there are significant differences between information-gain-optimal RL exploration in causal environments and the exploration of children in the same environments. We leverage this new insight to lay the groundwork for future research into efficient exploration and disambiguation of causal structures for RL algorithms.
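
To make the idea of information-gain-optimal exploration over causal hypotheses concrete, the following is a minimal Python sketch (ours, not the paper's code). It assumes a blicket-detector-style toy setting in which each causal overhypothesis is a rule mapping a set of placed objects to a detector outcome, and a greedy explorer chooses the placement whose expected outcome most reduces uncertainty over the hypothesis set. All names, the disjunctive/conjunctive hypothesis space, and the deterministic detector are illustrative assumptions.

```python
import itertools
import math

OBJECTS = ("A", "B", "C")

def make_hypotheses():
    # Candidate causal overhypotheses (illustrative): a disjunctive form
    # ("any blicket activates the detector") and a conjunctive form
    # ("all blickets must be present"), crossed with which objects are blickets.
    hypotheses = []
    for r in range(1, len(OBJECTS) + 1):
        for blickets in itertools.combinations(OBJECTS, r):
            blickets = frozenset(blickets)
            hypotheses.append(("disjunctive", blickets))
            hypotheses.append(("conjunctive", blickets))
    return hypotheses

def predicts_activation(hypothesis, placed):
    # Deterministic detector: does this hypothesis predict activation
    # when the set `placed` is on the detector?
    form, blickets = hypothesis
    if form == "disjunctive":
        return bool(blickets & placed)
    return blickets <= placed  # conjunctive: every blicket must be placed

def entropy(posterior):
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

def expected_information_gain(posterior, placed):
    # Expected reduction in entropy over hypotheses from trying `placed`.
    prior_entropy = entropy(posterior)
    eig = 0.0
    for outcome in (True, False):
        # Probability of this outcome under the current posterior.
        p_outcome = sum(p for h, p in posterior.items()
                        if predicts_activation(h, placed) == outcome)
        if p_outcome == 0:
            continue
        # Bayesian update given the outcome (likelihoods are 0/1 here).
        updated = {h: p / p_outcome for h, p in posterior.items()
                   if predicts_activation(h, placed) == outcome}
        eig += p_outcome * (prior_entropy - entropy(updated))
    return eig

hypotheses = make_hypotheses()
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}

# Greedy information-gain exploration: score every possible placement.
actions = [frozenset(c) for r in range(len(OBJECTS) + 1)
           for c in itertools.combinations(OBJECTS, r)]
best = max(actions, key=lambda a: expected_information_gain(posterior, a))
print("most informative placement:", set(best) or "{}")
```

In this sketch, each observation rules out every hypothesis that mispredicts it, so the expected information gain directly measures how sharply an action splits the remaining hypothesis space; the paper's finding is that children's exploratory choices diverge from this kind of information-gain-optimal policy.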

Authors' notes