BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//C2SMART Home - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://c2smart.engineering.nyu.edu
X-WR-CALDESC:Events for C2SMART Home
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250416T130000
DTEND;TZID=America/New_York:20250416T140000
DTSTAMP:20260430T114146Z
CREATED:20250402T154334Z
LAST-MODIFIED:20250402T154334Z
UID:88242-1744808400-1744812000@c2smart.engineering.nyu.edu
SUMMARY:Seminar: Investigating Vulnerabilities in Autonomous Vehicle Perception Algorithms
DESCRIPTION:Autonomous vehicles (AVs) rely on deep neural networks (DNNs) for critical tasks such as environment perception—identifying traffic signs\, pedestrians\, and lane markings—and executing control decisions like braking\, acceleration\, and lane changing. However\, DNNs are vulnerable to adversarial attacks\, including structured perturbations to inputs and misleading training samples that can degrade performance. This presentation begins with an overview of adversarial training\, emphasizing the impact of input sizes on DNNs’ vulnerability to cyberattacks. Subsequently\, I will share our recent findings that explore the hypothesis that DNNs learn piecewise linear relationships between inputs and outputs. This conjecture is crucial for developing both adversarial attacks and defense strategies in machine learning security. The last part of the presentation will focus on recent work using error-correcting codes to safeguard DNN-based classifiers.\nDr. Saif Jabari is an Associate Professor of Civil and Urban Engineering at New York University Abu Dhabi (NYUAD) and a Global Network Associate Professor at the Tandon School of Engineering at NYU in Brooklyn\, NY. At NYUAD\, he is co-PI of the Center for Integrated Urban Networks (CITIES) and the Center for Stability\, Instability\, and Turbulence (SITE). He is an Associate Editor for Transportation Science and Area Editor with the new Elsevier journal Artificial Intelligence for Transportation. His research focuses on developing advanced computational methods and theoretical guarantees of performance for urban traffic management problems. The techniques integrate traffic data\, typically in high resolution\, with principles of traffic physics to address the rapidly evolving needs of the field. His current research focuses on understanding and addressing vulnerabilities in deep neural networks\, specifically as they relate to environment perception in autonomous vehicles.
URL:https://c2smart.engineering.nyu.edu/event/seminar-investigating-vulnerabilities-in-autonomous-vehicle-perception-algorithms/
LOCATION:C2SMART Center Viz Lab\, 6 Metrotech Center\, Room 460\, Brooklyn\, 11201
CATEGORIES:Seminars
END:VEVENT
END:VCALENDAR