Using Ambient Signal Modalities for Ubiquitous Sensing
[Thesis]
Hampapur Venkatnarayan, Raghav
Guvenc, Ismail
North Carolina State University
2019
164 pages
Ph.D.
Fast and accurate perception of human presence and interaction is a challenging problem in realizing smart environments, especially at the network edge. Current approaches largely rely on cameras and depth sensors to sense human movements, but these cannot be deployed in all environments: limited field of view, access restrictions, lighting requirements, and privacy concerns all limit their ubiquity. In contrast, WiFi access points and lighting fixtures are already present throughout today's buildings, and the signals they emit are also perturbed by human movements. These perturbations can be exploited to achieve seamless, low-cost human sensing at the network edge. In this work, we therefore study the use of ambient signals for realizing ubiquitous sensing applications with commodity devices.

Specifically, we explore four novel and challenging problems of ubiquitous sensing using only ambient WiFi and light signals. The first three, which remain unresolved in the literature, involve sensing with ambient WiFi signals: (i) accurately measuring the distance moved by humans (or robots) in indoor environments, (ii) accurately tracking the indoor locations of multiple human targets, and (iii) accurately recognizing simultaneous gestures of multiple persons. The fourth problem explores, for the first time, the use of ambient light signals for human gesture recognition.

The first problem, accurately measuring the distance traversed by a subject (i.e., odometry), is of fundamental importance in applications such as position tracking for virtual reality, indoor navigation, and robot route guidance. Although odometry can in principle be performed with accelerometers, distances measured with accelerometers are well known to suffer from large drift errors. To solve this problem, we propose WIO, a WiFi-assisted Inertial Odometry technique that uses ambient WiFi signals as an auxiliary source of information to correct drift errors. The key idea behind WIO is that, because multiple copies of a transmitted WiFi signal arrive at a receiver along different paths, WIO can isolate one reflection path and then measure the change in its path length during subject motion to derive the traversed distance. We implement WIO on commodity devices and evaluate it extensively in a variety of complex indoor scenarios with both human and robotic subjects. Our results demonstrate an average error of just 6.28% in estimating the distances traversed by the subjects.
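To make the WIO idea concrete, the following minimal Python sketch (not the dissertation's implementation) shows how per-packet time-of-flight estimates of one already isolated reflection path could be converted into a cumulative path-length change and blended with a drift-prone accelerometer estimate. The function names, the blending weight alpha, and the simplification that path-length change directly approximates displacement (real geometry would scale this relationship) are all illustrative assumptions.

import numpy as np

C = 3e8  # speed of light in m/s

def wifi_path_length_change(tof_seconds):
    # Cumulative change in the isolated reflection path's length: c * (tau - tau_0).
    tof = np.asarray(tof_seconds, dtype=float)
    return C * (tof - tof[0])

def inertial_distance(accel_mps2, dt):
    # Naive double integration of acceleration along the motion axis; this is
    # the drift-prone estimate that the WiFi measurement is used to correct.
    accel = np.asarray(accel_mps2, dtype=float)
    velocity = np.cumsum(accel) * dt
    return np.cumsum(velocity) * dt

def fused_distance(tof_seconds, accel_mps2, dt, alpha=0.9):
    # Simple weighted blend of the two distance estimates; alpha is an
    # illustrative weight, not a parameter from the dissertation.
    d_wifi = np.abs(wifi_path_length_change(tof_seconds))
    d_imu = np.abs(inertial_distance(accel_mps2, dt))
    n = min(len(d_wifi), len(d_imu))
    return alpha * d_wifi[:n] + (1 - alpha) * d_imu[:n]

In WIO itself, the path isolation and time-of-flight estimation are performed from WiFi channel measurements on commodity hardware; the sketch simply assumes those per-packet estimates are already available.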
The second problem, accurately tracking the locations of multiple human targets, is likewise important for indoor monitoring and virtual reality. Existing approaches that use ambient WiFi signals to track humans are largely limited to a single human target, which restricts their usability. To solve this problem, we propose WiPolar, the first WiFi-based human tracking system that simultaneously tracks multiple human targets by exploiting the polarization diversity of WiFi signals. The key insight behind WiPolar is that the human targets in a tracking environment present different horizontal and vertical radar cross-sections, owing to their physical dimensions and reflection characteristics, which allows their reflections to be resolved more accurately in the polarization domain. WiPolar therefore introduces polarization diversity into the transmitted WiFi signal, jointly measures a polarization parameter of the multiple human reflection paths along with their angle-of-arrival, time-of-flight, and Doppler frequency shift, and then continuously derives the targets' locations. We implement WiPolar using commodity WiFi devices and evaluate it extensively in multiple environments. Our results show that WiPolar achieves a median tracking error of just 56 cm for up to five humans, with over 73% reduction in the median tracking error attributable to the use of polarization diversity.

The third problem, recognizing the gestures of multiple persons simultaneously, has applications in virtual reality, such as group interaction. As with human tracking, existing gesture recognition approaches based on ambient WiFi signals are limited to recognizing the gestures of only a single human target. To address this limitation, we propose WiMU, a WiFi-based Multi-User gesture recognition system. The key idea behind WiMU is to develop theoretical models of multiple-person limb movements, which allow it to identify features in the patterns of multi-user reflections in the received WiFi signal and then recognize the gestures using machine learning techniques. We implement and extensively evaluate WiMU with commodity WiFi devices. Our results show that WiMU recognizes 2, 3, 4, 5, and 6 simultaneously performed gestures with accuracies of 95.0%, 94.6%, 93.6%, 92.6%, and 90.9%, respectively.

Finally, we present LiGest, the first ambient-light-based gesture recognition system, with applications similar to those of WiMU. The key property of LiGest is that it is agnostic to lighting conditions, to the position and orientation of a user, and to which user performs the gestures. The idea behind LiGest is that when a user performs different gestures, the user's shadows move in unique patterns, which can be captured by photodiodes distributed on the floor. LiGest learns these patterns from training samples and recognizes unknown samples by matching them against the learned patterns using machine learning techniques. We develop a prototype of LiGest using commercially available light sensors and evaluate it extensively with the help of 20 volunteers. Our results show that LiGest achieves an average accuracy of 96.36%.
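As a rough illustration of a LiGest-style recognition pipeline, the following Python sketch (hypothetical, not LiGest's actual feature set or classifier) normalizes each photodiode trace to remove the absolute light level, resamples it to a fixed length to reduce sensitivity to gesture speed, and classifies the resulting feature vectors with a nearest-neighbour model from scikit-learn. The names train_traces, train_labels, and test_traces are placeholders for collected gesture samples.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def to_feature_vector(traces, n_points=32):
    # traces: array of shape (n_photodiodes, n_samples) of light intensity.
    # Per-sensor z-normalization removes dependence on absolute brightness;
    # resampling to a fixed length removes dependence on gesture duration.
    feats = []
    for trace in np.asarray(traces, dtype=float):
        trace = (trace - trace.mean()) / (trace.std() + 1e-9)
        idx = np.linspace(0, len(trace) - 1, n_points)
        feats.append(np.interp(idx, np.arange(len(trace)), trace))
    return np.concatenate(feats)

def train_and_predict(train_traces, train_labels, test_traces, k=3):
    # Train a k-nearest-neighbour classifier on labelled gesture samples and
    # predict the gesture class of unseen samples.
    X_train = np.array([to_feature_vector(t) for t in train_traces])
    X_test = np.array([to_feature_vector(t) for t in test_traces])
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, train_labels)
    return clf.predict(X_test)

The choice of a nearest-neighbour classifier here is only for brevity; any standard classifier trained on the normalized shadow patterns would fit the same role in the sketch.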