François Chollet discusses the limitations of Large Language Models (LLMs) and proposes a new approach to advancing artificial intelligence. He argues that current AI systems excel at pattern recognition but struggle with logical reasoning and true generalization.
This was Chollet's keynote talk at AGI-24, filmed in high quality. We will be releasing a full interview with him shortly; a teaser clip from that interview plays in the intro!
Chollet introduces the Abstraction and Reasoning Corpus (ARC) as a benchmark for measuring AI progress towards human-like intelligence. He explains the concept of abstraction in AI systems and proposes combining deep learning with program synthesis to overcome current limitations. Chollet suggests that breakthroughs in AI might come from outside major tech labs and encourages researchers to explore new ideas in the pursuit of artificial general intelligence.
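To make that proposal concrete, here is a minimal Python sketch (ours, not Chollet's code) of the hybrid loop he advocates: a learned model proposes candidate programs, and exact verification against a task's demonstration pairs keeps only the consistent ones. The propose_programs parameter is a hypothetical stand-in for any generative model, e.g. an LLM sampling Python functions.

from typing import Callable, Iterable, Optional

Grid = list[list[int]]            # ARC grids are 2-D arrays of color ids
Program = Callable[[Grid], Grid]  # a candidate solution maps grid -> grid

def solve_arc_task(
    train_pairs: list[tuple[Grid, Grid]],
    test_input: Grid,
    propose_programs: Callable[[list[tuple[Grid, Grid]]], Iterable[Program]],
) -> Optional[Grid]:
    # Deep learning supplies intuition (the proposals); program synthesis
    # supplies rigor (exact checking against every demonstration pair).
    for program in propose_programs(train_pairs):
        try:
            if all(program(inp) == out for inp, out in train_pairs):
                return program(test_input)  # first verified hypothesis wins
        except Exception:
            continue  # discard ill-formed candidates
    return None  # no program fit all demonstrations

Because verification is exact, performance scales with how many candidates you can afford to propose and check, which is the test-time-compute angle behind approaches like the ones discussed at the end of the talk.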
MLST is sponsored by Tufa Labs:
Are you interested in working on ARC and cutting-edge AI research with the MindsAI team (current ARC winners)?
Focus: ARC, LLMs, test-time compute, active inference, System 2 reasoning, and more.
Future plans: Expanding to complex environments like Warcraft 2 and Starcraft 2.
Interested? Apply for an ML research position: benjamin@tufa.ai
TOC
1. LLM Limitations and Intelligence Concepts
[00:00:00] 1.1 LLM Limitations and Compositionality
[00:12:05] 1.2 Intelligence as Process vs. Skill
[00:17:15] 1.3 Generalization as Key to AI Progress
2. ARC-AGI Benchmark and LLM Performance
[00:19:59] 2.1 Introduction to ARC-AGI and the ARC Prize
[00:23:35] 2.2 Performance of LLMs and Humans on ARC-AGI
3. Abstraction in AI Systems
[00:26:10] 3.1 The Kaleidoscope Hypothesis and Abstraction Spectrum
[00:30:05] 3.2 LLM Capabilities and Limitations in Abstraction
[00:32:10] 3.3 Value-Centric vs Program-Centric Abstraction
[00:33:25] 3.4 Types of Abstraction in AI Systems
4. Advancing AI: Combining Deep Learning and Program Synthesis
[00:34:05] 4.1 Limitations of Transformers and Need for Program Synthesis
[00:36:45] 4.2 Combining Deep Learning and Program Synthesis
[00:39:59] 4.3 Applying Combined Approaches to ARC Tasks
[00:44:20] 4.4 State-of-the-Art Solutions for ARC
Shownotes (new!): [ Link ]
[0:01:15] Abstraction and Reasoning Corpus (ARC): AI benchmark (François Chollet)
[ Link ]
[0:05:30] Monty Hall problem: Probability puzzle (Steve Selvin)
[ Link ]
[0:06:20] LLM training dynamics analysis (Tirumala et al.)
[ Link ]
[0:10:20] Transformer limitations on compositionality (Dziri et al.)
[ Link ]
[0:10:25] Reversal Curse in LLMs (Berglund et al.)
[ Link ]
[0:19:25] Measure of intelligence using algorithmic information theory (François Chollet)
[ Link ]
[0:20:10] ARC-AGI: GitHub repository (François Chollet)
[ Link ]
[0:22:15] ARC Prize: $1,000,000+ competition (François Chollet)
[ Link ]
[0:33:30] System 1 and System 2 thinking (Daniel Kahneman)
[ Link ]
[0:34:00] Core knowledge in infants (Elizabeth Spelke)
[ Link ]
[0:34:30] Embedding interpretive spaces in ML (Tennenholtz et al.)
[ Link ]
[0:44:20] Hypothesis Search with LLMs for ARC (Wang et al.)
[ Link ]
[0:44:50] Ryan Greenblatt's high score on the ARC public leaderboard
[ Link ]