In this video, Joar Skalse, a researcher at Oxford University, proposes a framework that anticipates different stages in the development of AI systems. The framework introduces world models, safety specifications, and verification methods for testing the safety of AI systems.
It incorporates contributions from prominent AI researchers and scientists, including Yoshua Bengio and Max Tegmark.
The liveliest debate during the @BuzzRobot talk centered on the notion of a 'world model' and whether it is even possible to 'model' the world.
Timestamps:
0:00 Introduction to AI safety issues
8:26 Designing "world models" for testing AI systems
15:28 Elaborating on safety specifications of AI systems
20:37 Verification methods for AI systems' safety
23:01 Q&A
Social Links:
Newsletter: [ Link ]
X: [ Link ]
Slack: [ Link ]