Ian Bremmer's analysis: Can states learn to govern artificial intelligence before it’s too late?
Sign up for GZERO Daily (free newsletter on global politics): [ Link ]
Subscribe to GZERO on YouTube: [ Link ]
Ian Bremmer's Quick Take:
Hi everybody, Ian Bremmer here. I have a piece to share with you that I've just completed with my good friend Mustafa Suleyman, co-founder of DeepMind and now of Inflection AI, in Foreign Affairs.
The piece is called "The AI Power Paradox: Can States Learn to Govern Artificial Intelligence Before It's Too Late?" It's about the biggest change in power and governance that I've experienced in such a short period of time in my lifetime.
That change is how to deal with artificial intelligence. Just a year ago, there wasn't a head of state I met with who was asking anything about AI, and now it comes up in every single meeting. In part, that's because of how explosive this technology has suddenly become, in terms of both its staggering upside and its disruptive potential. When anyone with a smartphone has access to global levels of intelligence in education, in health care, and in management, that means not just access to information and communication, but access to intelligence and the ability to take action with it. That is a game changer for globalization and for productivity, of a sort we've never seen before in such a short period of time, all across the world. And yet those same technologies can be used in very disruptive ways by bad actors all over the world, and not just governments, but organizations and individual people. That's a big piece of why these leaders are concerned.
They're concerned about whether we can still run elections that are free and fair and that people will believe in. Will we be able to limit the ability of bad actors to develop and distribute malware or bioweapons? Will white-collar workers still have jobs, still have productive things to do? And beyond that, the top issues policymakers already care about are also affected very dramatically by AI, whether it's the war with Russia and Russia's ability to be a disruptive actor, or US-China relations and whether that continues to be a reasonably stable and constructive interdependent relationship.
And can the United States and other advanced industrial democracies persist as functional democracies given the proliferation of AI? So everyone's worried about it. Everyone feels urgency. Very few people know what to do. Here are a few big takeaways from us in this piece.
The big concept that we think should infuse AI governance is techno-prudentialism. That's a big long word, but it's aligned with macroprudentialism and with the way global finance has been governed: the idea that you need to identify and limit risks to global stability in AI without choking off innovation and the opportunities that come from it. That's the way the Financial Stability Board, the Bank for International Settlements, and the IMF work. Despite all of the conflict between the United States, China, and the Europeans, they all work together in those institutions. They do it because global finance is too important to allow it to break. It fundamentally needs to be, and is, global. It has the potential to cause systemic contagion and collapse, and everyone wants to work against and mitigate that.
Read the full transcript: [ Link ]
GZERO Media is a multimedia publisher providing news, insights, and commentary on the events shaping our world. Our properties include GZERO World with Ian Bremmer, our newsletter GZERO Daily, Puppet Regime, the GZERO World Podcast, In 60 Seconds, and GZEROMedia.com.
#QuickTake #AI #AIRegulation