Starting this weekend, the thirty-ninth International Conference on Machine Learning (ICML 2022) meets from 17-23 July 2022 at the Baltimore Convention Center in Maryland, USA, running as a hybrid event.
Researchers working across artificial intelligence, data science, machine vision, computational biology, speech recognition, and more are presenting and publishing their cutting-edge work in machine learning.
In addition to sponsoring the conference and supporting workshops and socials run by our long-term partners LatinX, Black in AI, Queer in AI, and Women in Machine Learning, our research teams are presenting 30 papers, including 17 external collaborations. Here’s a brief introduction to our upcoming oral and spotlight presentations:
Making reinforcement learning (RL) algorithms more effective is key to building generalised AI systems. This includes improving the accuracy and speed of an agent's performance, strengthening transfer and zero-shot learning, and reducing computational costs.
In one of our selected oral presentations, we show a new way to apply generalised policy improvement (GPI) over compositions of policies that makes it even more effective in boosting an agent’s performance. Another oral presentation proposes a new grounded and scalable way to explore efficiently without the need for bonuses. In parallel, we propose a method for augmenting an RL agent with a memory-based retrieval process, reducing the agent’s dependence on its model capacity and enabling fast and flexible use of past experiences.
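For readers unfamiliar with GPI, the standard rule (not the paper's new composition method) is to act greedily with respect to the pointwise maximum of the Q-value estimates of a set of existing policies. A minimal illustrative sketch, with toy Q-values invented for this example:

```python
import numpy as np

def gpi_action(q_values_per_policy, state):
    """Standard GPI: pick the action maximising the max over policies' Q-values.

    q_values_per_policy: list of arrays of shape [num_states, num_actions],
    one per existing policy. These are illustrative estimates, not taken
    from the paper.
    """
    stacked = np.stack([q[state] for q in q_values_per_policy])  # [n_policies, n_actions]
    return int(np.argmax(stacked.max(axis=0)))

# Two toy policies' Q-values over one state and three actions.
q1 = np.array([[1.0, 0.5, 0.2]])
q2 = np.array([[0.3, 0.9, 2.0]])
print(gpi_action([q1, q2], state=0))  # action 2: best under either policy
```

The guarantee behind this rule is that the resulting policy performs at least as well as every policy in the set; the oral presentation extends this idea to compositions of policies.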
Language is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, create memories, and build mutual understanding. Studying aspects of language is key to understanding how intelligence works, both in AI systems and in humans.
Our oral presentation about unified scaling laws and our paper on retrieval both explore how we might build larger language models more efficiently. Looking at ways of building more effective language models, we introduce StreamingQA, a new dataset and benchmark that evaluates how models adapt to and forget new knowledge over time, while our paper on narrative generation shows how current pretrained language models still struggle with creating longer texts because of short-term memory limitations.
Neural algorithmic reasoning is the art of building neural networks that can perform algorithmic computations. This growing area of research holds great potential for helping adapt known algorithms to real-world problems.
We introduce the CLRS benchmark for algorithmic reasoning, which evaluates neural networks on performing a diverse set of thirty classical algorithms from the Introduction to Algorithms textbook. We also propose a general incremental learning algorithm that adapts hindsight experience replay to automated theorem proving, an important tool for helping mathematicians prove complex theorems. In addition, we present a framework for constraint-based learned simulation, showing how traditional simulation and numerical methods can be used in machine learning simulators – a significant new direction for solving complex simulation problems in science and engineering.
See the full range of our work at ICML 2022 here.