BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
PRODID:-//Telerik Inc.//Sitefinity CMS 15.1//EN
BEGIN:VTIMEZONE
TZID:Eastern Standard Time
BEGIN:STANDARD
DTSTART:20251102T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYHOUR=2;BYMINUTE=0;BYMONTH=11
TZNAME:Eastern Standard Time
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20250301T020000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYHOUR=2;BYMINUTE=0;BYMONTH=3
TZNAME:Eastern Daylight Time
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DESCRIPTION:Autonomous Systems Policy Institute (ASPI) speaker series pr
 esents Ferdinando Fioretto.\n\nMany agencies release statistics about g
 roups of individuals that are then used as input to critical decision p
 rocesses. For example\, census data is used to allocate funds and distr
 ibute resources to states and jurisdictions. Similarly\, corporations a
 re increasingly adopting machine learning systems to derive socio-techn
 ical decisions\, including criminal assessment\, lending\, and hiring.
  The resulting decisions can have significant societal and economic imp
 acts for participating individuals.\n\nIn many cases\, the released dat
 a contain sensitive information whose privacy is strictly regulated\, a
 nd Differential Privacy has become the paradigm of choice for protectin
 g data privacy. However\, while differential privacy provides strong pr
 ivacy guarantees\, it has recently become apparent that it may induce b
 iases and fairness issues in downstream decision processes\, including
  the allotment of federal funds\, the apportionment of congressional se
 ats\, and biased classification results in lending and hiring. These is
 sues may adversely affect the health\, well-being\, and sense of belong
 ing of many individuals\, and are currently poorly understood.\n\nThis
  talk will describe our efforts in understanding and addressing these i
 ssues at the intersection of privacy\, fairness\, and decision processe
 s. I will first review the notion of Differential Privacy and discuss i
 ts applications in data release and learning tasks. I will then examine
  the societal impacts of privacy under a fairness lens and shed light o
 n which aspects of the private algorithms and the data may be responsib
 le for exacerbating unfairness. Finally\, I will propose a path to part
 ially address these fairness issues. The talk will conclude with an ope
 n discussion on the need for tools that may be used by policymakers to
  test and address unfairness in privacy-preserving decision processes.
 \n\nJoin Zoom Meeting\n\nhttps://syracuseuniversity.zoom.us/j/941816289
 76?pwd=ZUZPZHBRajAwMWJrbFRHb3g3Z25Vdz09\n\nAny questions\, please conta
 ct Lynnell Cabezas at lncabeza@syr.edu
DTEND:20210929T200000Z
DTSTAMP:20260317T125110Z
DTSTART:20210929T190000Z
LOCATION:
SEQUENCE:0
SUMMARY:Privacy-Preserving Machine Learning: Uses and Unintended Disparate 
 Effects
UID:RFCALITEM639093342702484087
X-ALT-DESC;FMTTYPE=text/html:<p>Autonomous Systems Policy Institute (AS
 PI) speaker series presents Ferdinando Fioretto.</p><p>Many agencies r
 elease statistics about groups of individuals that are then used as in
 put to critical decision processes. For example\, census data is used
  to allocate funds and distribute resources to states and jurisdiction
 s. Similarly\, corporations are increasingly adopting machine learning
  systems to derive socio-technical decisions\, including criminal asse
 ssment\, lending\, and hiring. The resulting decisions can have signif
 icant societal and economic impacts for participating individuals.<br>
 In many cases\, the released data contain sensitive information whose
  privacy is strictly regulated\, and <i>Differential Privacy</i> has b
 ecome the paradigm of choice for protecting data privacy. However\, wh
 ile differential privacy provides strong privacy guarantees\, it has r
 ecently become apparent that it may induce biases and fairness issues
  in downstream decision processes\, including the allotment of federal
  funds\, the apportionment of congressional seats\, and biased classif
 ication results in lending and hiring. These issues may adversely affe
 ct the health\, well-being\, and sense of belonging of many individual
 s\, and are currently poorly understood.</p><p>This talk will describe
  our efforts in understanding and addressing these issues at the inter
 section of privacy\, fairness\, and decision processes. I will first r
 eview the notion of Differential Privacy and discuss its applications
  in data release and learning tasks. I will then examine the societal
  impacts of privacy under a fairness lens and shed light on which aspe
 cts of the private algorithms and the data may be responsible for exac
 erbating unfairness. Finally\, I will propose a path to partially addr
 ess these fairness issues. The talk will conclude with an open discuss
 ion on the need for tools that may be used by policymakers to test and
  address unfairness in privacy-preserving decision processes.</p><p>Jo
 in Zoom Meeting</p><p><a href="https://syracuseuniversity.zoom.us/j/94
 181628976?pwd=ZUZPZHBRajAwMWJrbFRHb3g3Z25Vdz09">https://syracuseuniver
 sity.zoom.us/j/94181628976?pwd=ZUZPZHBRajAwMWJrbFRHb3g3Z25Vdz09</a></p
 ><p>Any questions\, please contact Lynnell Cabezas at <a href="mailto:
 lncabeza@syr.edu">lncabeza@syr.edu</a></p>
END:VEVENT
END:VCALENDAR
