AI-based Navigation for Accessible Cities: Challenges and Opportunities
Visual place recognition (VPR) is an AI-based technology that is critical not only to localization and mapping for autonomous vehicles, but also to assistive navigation for people with visual impairments, both of which are important topics for urban transportation systems. In this talk, after a quick overview of various VPR algorithms, Professors Rizzo and Feng discuss several challenges that must be addressed to enable a long-term, large-scale VPR system. First, different applications may require different image view directions, such as front views for self-driving cars or lateral views for persons with visual impairments. Second, VPR in metropolitan scenes often exacerbates privacy concerns because the images capture pedestrian and vehicle identity information, calling for data anonymization before VPR queries and database construction. Adjustments for both factors may lead to VPR performance variations that are not well understood. To study their influence, the researchers present the NYU-VPR dataset, which contains more than 200,000 images covering a 2 km × 2 km area near the New York University campus, acquired during the 2016 calendar year. They present benchmark results, along with explanations and in-depth analysis, for several popular VPR algorithms, showing that lateral views are significantly more challenging for current VPR methods, while the influence of data anonymization is negligible.
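At its core, VPR is an image-retrieval problem: each database image is summarized by a global descriptor, and a query is localized by finding the database descriptor most similar to its own. The sketch below illustrates this with cosine-similarity nearest-neighbor search; the function name, the toy place labels, and the 4-D descriptors are all hypothetical (real systems such as those benchmarked in the talk use learned descriptors with hundreds or thousands of dimensions).

```python
import numpy as np

def vpr_query(db_descriptors, db_places, query_descriptor):
    """Return the place label of the database image whose global
    descriptor is most similar (by cosine similarity) to the query."""
    db = np.asarray(db_descriptors, dtype=float)
    q = np.asarray(query_descriptor, dtype=float)
    # Normalize rows so dot products equal cosine similarities.
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    q = q / np.linalg.norm(q)
    sims = db @ q                      # similarity to every database image
    return db_places[int(np.argmax(sims))]

# Toy example: three "places" with made-up 4-D descriptors.
places = ["corner_A", "corner_B", "corner_C"]
descs = [[1.0, 0.0, 0.0, 0.1],
         [0.0, 1.0, 0.2, 0.0],
         [0.1, 0.0, 1.0, 0.0]]
query = [0.9, 0.1, 0.0, 0.1]           # descriptor closest to corner_A
print(vpr_query(descs, places, query))  # → corner_A
```

The view-direction and anonymization questions studied in the talk enter through the descriptors themselves: if lateral-view or face-blurred images yield descriptors that drift from their database counterparts, the nearest-neighbor match degrades even though the retrieval machinery is unchanged.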
Dr. Chen Feng is an assistant professor of civil and mechanical engineering at the New York University Tandon School of Engineering. His lab, AI4CE, aims to advance robot vision and machine learning through multidisciplinary, use-inspired research originating from the civil and mechanical engineering domains. Before joining NYU in 2018, Chen was a research scientist in the computer vision group at Mitsubishi Electric Research Labs (MERL) in Cambridge, MA, focusing on visual SLAM and deep learning for self-driving cars and robotics. Chen holds a master's degree in electrical engineering and a Ph.D. in civil engineering, both from the University of Michigan at Ann Arbor. His research has been published in top AI/robotics conferences and civil engineering journals, such as CVPR, ECCV, ICRA, IROS, and AutCon. Chen also holds multiple U.S. patents. More information can be found at https://scholar.google.com/
Dr. John-Ross (JR) Rizzo, M.D., M.S.C.I., is an American physician-scientist at NYU Langone Medical Center. He serves as the Director of Innovation and Technology for the Department of Physical Medicine and Rehabilitation at the Rusk Institute of Rehabilitation Medicine, with cross-appointments in the Department of Neurology and in the Departments of Biomedical Engineering and Mechanical and Aerospace Engineering at the New York University Tandon School of Engineering. He is also the Associate Director of Healthcare for the NYU Wireless Laboratory in the Department of Electrical and Computer Engineering at NYU Tandon. He leads the Visuomotor Integration Laboratory (VMIL), where his team focuses on eye-hand coordination as it relates to acquired brain injury, and the REACTIV Laboratory (Rehabilitation Engineering Alliance and Center Transforming Low Vision), where his team focuses on advanced wearables for the sensory deprived, work that benefits from his own personal experience with vision loss.