Surely, it is worth taking a few moments to reflect on the claim from OpenAI that we’ve reached ‘human-level reasoning’ with their o1 series of AI models? I scrutinize Altman’s dev day comments (picking out 4 highlights) and cover the most recent papers and analysis of o1’s capabilities. Then, what the colossal new valuation means for OpenAI, and the context you might not have realized. We’ll look further down the ‘Levels of AGI’ chart, cover a NotebookLM update, and end with a powerful question over whether we should, ultimately, be aiming to automate OpenAI.
Assembly AI Speech to Text: [ Link ] [ Link ]
AI Insiders: [ Link ]
Chapters:
00:00 – Introduction
00:52 – Human-level Problem Solvers?
03:22 – Very Steep Progress + Huge Gap Coming
04:23 – Scientists React
05:44 – SciCode
06:55 – Benchmarks Harder to Make + Mensa
07:30 – Agents
08:36 – For-profit and Funding Blocker
09:45 – AGI Clause + Microsoft Definition
11:23 – Gates Shift
12:43 – NotebookLM Update + Assembly
14:11 – Automating OpenAI
Reuters Funding-block Exclusive: [ Link ]
OpenAI Scaling AGI: [ Link ]
NYT Revenue Story: [ Link ]
For-profit Move: [ Link ]
Bloomberg Levels Chart: [ Link ]
Scientists React, in Nature: [ Link ]
Math prof: [ Link ]
AGI Clause: [ Link ]
Microsoft Sci-fi: [ Link ]
Mensa Tweet: [ Link ]
SciCode: [ Link ]
[ Link ]
FT Agentic Systems 2025: [ Link ]
[ Link ]
Bill Gates Turnaround: [ Link ]
OpenAI Preparedness Framework: [ Link ]
[ Link ]
NotebookLM: [ Link ]
[ Link ]
My Coursera Course - The 8 Most Controversial Terms in AI: [ Link ]
Non-hype Newsletter: [ Link ]
I use Descript to edit my videos (no pauses or filler words!): [ Link ]
Many people expense AI Insiders for work. Feel free to use the Template in the 'About Section' of my Patreon:
[ Link ]