Artificial intelligence (AI) technologies are not new, but they have made rapid advances in recent years and attracted the attention of policymakers and observers from all points on the political spectrum. These advances have intensified concerns about AI’s potential to discriminate against select groups of Americans or to import human bias against particular ideologies.
AI programs like ChatGPT learn from internet content and are liable to present opinions – specifically dominant cultural opinions – as facts. Is it inevitable that these programs will acquire and reproduce the discriminatory ideas or biases we see in humans? Because AI learns by detecting patterns in real-world data, are disparate impacts unavoidable in AI systems used for hiring, lending decisions, or bail determinations? If so, how does this compare to the bias of human decision-making unaided by AI?
Increasingly, laws and regulations are being proposed to address these bias concerns. But do we need new laws or are the anti-discrimination laws that already govern human decision-makers sufficient? An expert panel joined us to discuss these questions and more.
Featuring:
Curt Levey, President, Committee for Justice
Keith Sonderling, Commissioner, Equal Employment Opportunity Commission
Dr. Gary Marcus, Professor Emeritus, New York University & Founder, Geometric Intelligence
[Moderator] Ken Marcus, Founder and Chairman, Louis D. Brandeis Center for Human Rights Under Law
* * * * *
As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.