BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Telerik Inc.//Sitefinity CMS 15.1//EN
BEGIN:VTIMEZONE
TZID:Eastern Standard Time
BEGIN:STANDARD
DTSTART:20251102T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYHOUR=2;BYMINUTE=0;BYMONTH=11
TZNAME:Eastern Standard Time
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20250301T020000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYHOUR=2;BYMINUTE=0;BYMONTH=3
TZNAME:Eastern Daylight Time
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DESCRIPTION:As Artificial Intelligence (AI) becomes increasingly impor
 tant in our everyday lives\, governance mechanisms for AI are receivi
 ng increasing attention. At the same time\, AI presents significant c
 hallenges for our governance efforts\, because of both its (sometime)
  unpredictability\, and our relative lack of understanding about how
  the systems are successful. AI governance is difficult in large par
 t because we often do not know what an AI will do or why it will beh
 ave in a certain way. In this talk\, David Danks will present two di
 fferent efforts to develop appropriate governance mechanisms for AI
  systems. First\, Professor Danks will discuss efforts to develop a
  novel framework for dynamic governance\, including formal tools to
  translate legal\, societal\, ethical\, and psychological constraint
 s into constraints on AI performance. Second\, he will examine gover
 nance mechanisms for situations that have significant information as
 ymmetries between users/consumers and AI developers/deployers. The g
 oal of these (and other) efforts is to enable better governance of A
 I systems despite persistent limits on our knowledge. David Danks is
  professor of data science\, philosophy and policy at the University
  of California\, San Diego. He serves on the National AI Advisory Co
 mmittee and the Computer Science and Telecommunications Board of the
  National Academies. His research lies at the intersection of philos
 ophy\, cognitive science\, and machine learning\, focusing on the et
 hical and societal impacts of AI and developing novel causal discove
 ry algorithms. Professor Danks is the recipient of a James S. McDonn
 ell Foundation Scholar Award (2008) and an Andrew Carnegie Fellowshi
 p (2017). This lecture is one of a two-part series. Danks will give
  a second talk\, on Friday\, April 25\, at 3 p.m.\, on AI and moral
  responsibility at the University of Rochester. Travel funding avail
 able. Sponsored by the Central New York Humanities Corridor from an
  award by the Mellon Foundation.
DTEND:20250424T200000Z
DTSTAMP:20260513T115444Z
DTSTART:20250424T183000Z
LOCATION:
SEQUENCE:0
SUMMARY:Governing the Unforeseeable
UID:RFCALITEM639142556846180842
X-ALT-DESC;FMTTYPE=text/html:<p>As Artificial Intelligence (AI) becomes inc
 reasingly important in our everyday lives\, governance mechanisms for AI a
 re receiving increasing attention. At the same time\, AI presents signific
 ant challenges for our governance efforts\, because of both its (sometime)
  unpredictability\, and our relative lack of understanding about how the s
 ystems are successful. AI governance is difficult in large part because we
  often do not know what an AI will do or why it will behave in a certain w
 ay.</p><p>In this talk\, David Danks will present two different efforts to
  develop appropriate governance mechanisms for AI systems. First\, Profess
 or Danks will discuss efforts to develop a novel&nbsp\;framework for dynam
 ic governance\, including formal tools to translate legal\, societal\, eth
 ical\, and psychological constraints into constraints on AI performance. <
 /p><p>Second\, he will examine governance mechanisms for situations that h
 ave significant information asymmetries between users/consumers and AI dev
 elopers/deployers. The goal of these (and other) efforts is to enable bett
 er governance of AI systems despite persistent limits on our knowledge.</p
 ><p><strong>David Danks </strong>is professor of data science\, philosophy
  and policy at the University of California\, San Diego. He serves on the 
 National AI Advisory Committee and the Computer Science and Telecommunicat
 ions Board of the National Academies.</p><p>His research lies at the inter
 section of philosophy\, cognitive science\, and machine learning\, focusin
 g on the ethical and&nbsp\;societal impacts of AI and developing novel cau
 sal discovery algorithms. Professor Danks is the recipient of a James S. M
 cDonnell Foundation Scholar Award (2008) and an Andrew Carnegie Fellowship
 (2017).&nbsp\;</p><p>This lecture is one of a two-part series.&nbsp\;D
 anks will give a&nbsp\;second talk\, on Friday\, April 25\, at 3 p.m.\,
  on <a hre
 f="https://events.rochester.edu/event/ai-and-the-challenge-of-foreseeabili
 ty" target="_blank">AI and moral responsibility</a>&nbsp\;at the Universit
 y of Rochester.&nbsp\;<a href="https://www.cnycorridor.net/resources/intra
 -corridor-travel-supplement/" target="_blank">Travel funding available</a>
 .</p><p>Sponsored by the Central New York Humanities Corridor from an awar
 d by the Mellon Foundation.</p>
END:VEVENT
END:VCALENDAR
