PhD Proposal: Ubiquitous Accessibility Digital-Maps for Smart Cities: Principles and Realization
To support the active participation of individuals with disabilities in society, the Americans with Disabilities Act (ADA) requires installing various accessibility measures in roads, buildings, transportation systems, etc. Such measures include curb ramps on sidewalks and braille signs and audio cues in elevators, among others. Note that individuals can experience different types of impairment, each requiring a different type of support. However, even with ADA-compliant facilities, accessing them can still be challenging for a disabled individual. For example, the ADA considers a single accessible route sufficient for a place, but there are usually no directions on how to reach that route. With the ubiquitous presence of smartphones, we believe the best way to make these measures accessible is through smartphones. In this proposal, we present AccessMap (Accessibility Digital Maps), a system for building ubiquitous accessibility digital maps in which indoor/outdoor spaces are automatically annotated with the various accessibility features, each assessed for its accessibility level with respect to the different disability types (e.g., vision impairment, wheelchair use, etc.). To build the maps automatically, we propose a passive crowdsourcing approach in which users' sensor-rich mobile devices (e.g., smartphones and smartwatches) share their spatiotemporal multimodal sensor data (e.g., barometer, accelerometer, etc.) to detect and map the accessibility features.
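As a concrete illustration of how commodity motion sensors can reveal an accessibility feature, a sustained tilt in the accelerometer's pitch while a user walks is consistent with traversing a ramp. The sketch below is only an illustration under simplifying assumptions (gravity-only samples, no noise filtering); the function names and the 4-degree threshold are ours, not part of the proposed detector. The threshold is motivated by the ADA's maximum ramp slope of 1:12 (about 4.76 degrees).

```python
import math

def pitch_deg(ax, ay, az):
    """Device pitch (degrees) from one accelerometer sample, assuming
    the sample measures gravity only (device held steady)."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def looks_like_ramp(samples, min_slope_deg=4.0):
    """Flag a sustained incline consistent with an ADA ramp.

    samples: iterable of (ax, ay, az) accelerometer readings in m/s^2.
    Returns True when the average pitch magnitude exceeds the threshold.
    """
    pitches = [pitch_deg(*s) for s in samples]
    avg = sum(pitches) / len(pitches)
    return abs(avg) >= min_slope_deg

# Illustrative readings: flat ground vs. a ~5-degree incline.
print(looks_like_ramp([(0.0, 0.0, 9.81)] * 5))    # flat -> False
print(looks_like_ramp([(-0.86, 0.0, 9.77)] * 5))  # incline -> True
```

In practice such a rule would be one weak signal among many; the proposal combines machine learning and pattern recognition over the full multimodal stream rather than a single threshold.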
There are three key components in building AccessMap: (1) a localization module that obtains the location stamp for the collected sensor information, both indoors and outdoors; (2) an accessibility-feature detector module that identifies the various accessibility features from the multimodal sensor data; and (3) a crowdsourcing framework that passively collects the sensor data and builds/updates the map.

To identify the user's location, GPS is the de facto standard outdoors, but we still lack a similarly ubiquitous indoor localization system. We propose to combine signal processing, deep learning, and probabilistic models to identify the user's 2.5D location (i.e., the user's floor level and her 2D location within that floor) from widely available off-the-shelf standard WiFi in a calibration-free manner. Additionally, we propose to investigate using crowdsourced data to improve the location estimate. Crowdsourced WiFi scans can be obtained from APs that monitor the network (e.g., Mojo APs) or piggybacked on an app (e.g., the Google Location API's passive crowdsourcing).

Next, to detect the various accessibility features, we propose to combine machine learning, computer vision, and pattern recognition algorithms to identify the different features from the multimodal sensor data. For crowdsourcing and building the map, we propose to pursue a decision-theoretic approach with probabilistic models that account for the accessibility features' states and the uncertainty in the data.

Our preliminary results show that, in high-rise buildings (up to 9 floors), we achieve significant improvements over state-of-the-art research and commercial indoor localization systems. Additionally, we could identify different map semantics to assess the accessibility of roads/buildings in multiple countries.
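One ingredient of the 2.5D localization above, the floor-level estimate, can be illustrated with barometer data alone. The following minimal sketch (our illustration, not the proposed system) converts pressure to altitude via the standard barometric formula and maps the altitude change to a floor delta; the 3 m floor height and the sample readings are illustrative assumptions.

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Convert barometric pressure (hPa) to altitude (m) using the
    standard barometric formula, with sea-level pressure p0."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_delta(pressures_hpa, floor_height_m=3.0):
    """Estimate how many floors the user moved between the first and
    last barometer samples (positive = up)."""
    start = pressure_to_altitude(pressures_hpa[0])
    end = pressure_to_altitude(pressures_hpa[-1])
    return round((end - start) / floor_height_m)

# A user riding an elevator up: pressure falls as altitude rises.
readings = [1013.25, 1012.9, 1012.5, 1012.2]  # hPa, illustrative
print(floor_delta(readings))  # -> 3
```

A real system must also handle weather-induced pressure drift and sensor offsets, which is one motivation for fusing the barometer with WiFi and probabilistic models rather than relying on pressure alone.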
Chair: Dr. Ashok Agrawala
Dept. rep: Dr. Atif Memon
Members: Dr. Moustafa Youssef