Ethics in AI Seminar - presented by the Institute for Ethics in AI
Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University
● What role should the technical AI community play in questions of AI ethics and those concerning the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work?
● What does it mean to conduct and publish AI research responsibly?
● What challenges does the AI community face in reaching consensus about responsibilities, and adopting appropriate norms and governance mechanisms?
● How can we maximise the benefits while minimising the risks of increasingly advanced AI research?
AI and related technologies are having an increasing impact on the lives of individuals, as well as on society as a whole. Alongside many current and potential future benefits, there is an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. In addition, a growing number of research publications have provoked outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct and beneficial outcomes from deployed systems. But how should individual researchers, and the research community more broadly, respond to the existing and potential impacts of AI research and AI technology? Where should we draw the line between academic freedom and centering societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, ‘dual-use’ fields?

In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb will discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research and innovation in practice.
For more information on future TORCH events, you can subscribe to our newsletter at [ Link ].
To keep up to date with our YouTube events, please subscribe to our channel.