Accepted Papers

Ubicomp 2014 is co-located with ISWC 2014. All delegates will be able to view paper presentations from either conference.

Links to the papers in the ACM Digital Library are included below for Ubicomp submissions; the ISWC papers can also be found in the Digital Library.

A considerable amount of research has been carried out towards making long-standing smart home visions technically feasible. The technologically augmented homes made possible by this work are starting to become reality, but thus far living in and interacting with such homes has introduced significant complexity while offering limited benefit. As these technologies are increasingly adopted, the knowledge we gain from their use suggests a need to revisit the opportunities and challenges they pose. Synthesizing a broad body of research on smart homes with observations of industry and experiences from our own empirical work, we provide a discussion of ongoing and emerging challenges, namely challenges for meaningful technologies, complex domestic spaces, and human-home collaboration. Within each of these three challenges we discuss our visions for future smart homes and identify promising directions for the field.
Sarah Mennicken, Jo Vermeulen, Elaine M. Huang
Presented on Monday September 15th as part of the In the Home session.
Whilst the ubicomp community has successfully embraced a number of societal challenges for human benefit, including healthcare and sustainability, the well-being of other animals is hitherto underrepresented. We argue that ubicomp technologies, including sensing and monitoring devices as well as tangible and embodied interfaces, could make a valuable contribution to animal welfare. This paper particularly focuses on dogs in kenneled accommodation, as we investigate the opportunities and challenges for a smart kennel aiming to foster canine welfare. We conducted an in-depth ethnographic study of a dog rehoming center over four months; based on our findings, we propose a welfare-centered framework for designing smart environments, integrating monitoring and interaction with information management. We discuss the methodological issues we encountered during the research and propose a smart ethnographic approach for similar projects.
Clara Mancini, Janet van der Linden, Gerd Kortuem, Guy Dewsbury, Daniel Mills, Paula Boyden
Presented on Monday September 15th as part of the In the Home session.
We investigated how household deployment of Internet-connected locks and security cameras could impact teenagers' privacy. In interviews with 13 teenagers and 11 parents, we investigated reactions to audit logs of family members' comings and goings. All parents wanted audit logs with photographs, whereas most teenagers preferred text-only logs or no logs at all. We unpack these attitudes by examining participants' parenting philosophies, concerns, and current monitoring practices. In a follow-up online study, 19 parents configured an Internet-connected lock and camera system they thought might be deployed in their home. All 19 participants chose to monitor their children either through unrestricted access to logs or through real-time notifications of access. We discuss directions for auditing interfaces that could improve home security without impacting privacy.
Blase Ur, Jaeyeon Jung, Stuart Schechter
Presented on Monday September 15th as part of the In the Home session.
Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily life. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Second, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Third, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users’ emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.
Yea Kyung Row, Tek Jin Nam
Presented on Monday September 15th as part of the Ubicomp and Design session.
The rapid growth of the Ubicomp field has recently raised concerns regarding its identity. These concerns are compounded by a lack of empirical evidence on how the field has evolved to date. In this study we applied co-word analysis to examine the status of Ubicomp research. We constructed an intellectual map of the field as reflected by 6858 keywords extracted from 1636 papers published in the HUC, UbiComp and Pervasive conferences during 1999-2013. Based on the results of a correspondence analysis, we identify two major periods in the corpus: 1999-2007 and 2008-2013. We then examine the evolution of the field by applying graph theory and social network analysis methods to each period. We found that Ubicomp is increasingly focusing on mobile devices, and has in fact become more cohesive over the past 15 years. Our findings refute the assertion that Ubicomp research is suffering an identity crisis.
Yong Liu, Jorge Goncalves, Denzil Ferreira, Simo Hosio, Vassilis Kostakos
Presented on Monday September 15th as part of the Ubicomp and Design session.
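For readers unfamiliar with the method, the core of a co-word analysis like the one above can be sketched in a few lines: counting how often keyword pairs appear on the same paper yields the weighted co-occurrence graph on which the network measures are computed. The function name and sample data below are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of co-word analysis (illustrative, not the study's code):
# count how often each keyword pair co-occurs on the same paper. The
# resulting edge weights define the graph used for network analysis.
from collections import Counter
from itertools import combinations

def coword_graph(papers):
    """papers: list of keyword lists, one per paper.
    Returns a Counter mapping frozenset({kw1, kw2}) to the number
    of papers in which both keywords appear together."""
    edges = Counter()
    for kws in papers:
        for a, b in combinations(sorted(set(kws)), 2):
            edges[frozenset((a, b))] += 1
    return edges

papers = [["mobile", "sensing", "privacy"],
          ["mobile", "sensing"],
          ["privacy", "displays"]]
print(coword_graph(papers)[frozenset(("mobile", "sensing"))])  # -> 2
```

Thresholding or normalizing these raw counts (e.g., by keyword frequency) is a common next step before computing centrality or community structure.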
This research builds on the UbiComp vision of systems that do not do things for people, but instead engage people in their computational environment so that people can do things for themselves better. In this investigation, we sought to make good on a proof of concept in which people interact with a social robot that helps them be more humanly creative. Twenty-seven participants interacted with ATR’s humanoid robot Robovie (through a WoZ interface) in a creativity task. Results supported our proof of concept insofar as 100% of the participants generated creative ideas, and 63% incorporated the robot’s ideas into their own ideas for their creative output. Of the participants who had the highest creativity scores, 83% incorporated the robot’s ideas into their own. Discussion focuses on next steps toward building the Natural Language Processing system, and integrating the system into a more extensive networked UbiComp environment.
Peter H Kahn, Jr., Takayuki Kanda, Hiroshi Ishiguro, Solace Shen, Heather E Gary, Jolina H Ruckert
Presented on Monday September 15th as part of the Ubicomp and Design session.
Recent years have seen an increased research interest in multi-device interactions and digital ecosystems. This research addresses new opportunities and challenges when users are not simply interacting with one system or device at a time, but orchestrate ensembles of them as a larger whole. One of these challenges is to understand what principles of interaction work well for what, and to create such knowledge in a form that can inform design. Our contribution to this research is a framework of interaction principles for digital ecosystems, which can be used to analyze and understand existing systems and design new ones. The 4C framework provides new insights over existing frameworks and theory by focusing specifically on explaining the interactions taking place within digital ecosystems. We demonstrate this value through two examples of the framework in use, firstly for understanding an existing digital ecosystem, and secondly for generating ideas and discussion when designing a new one.
Henrik Sørensen, Dimitrios Raptis, Jesper Kjeldskov, Mikael B. Skov
Presented on Monday September 15th as part of the Ubicomp and Design session.
Color blindness is a highly prevalent vision impairment that inhibits people's ability to understand colors. Although classified as a mild disability, color blindness has important effects on the daily activity of people, preventing them from performing their tasks in the most natural and effective ways. In order to address this issue we developed Chroma, a wearable augmented-reality system based on Google Glass that allows users to see a filtered image of the current scene in real-time. Chroma automatically adapts the scene-view based on the type of color blindness, and features dedicated algorithms for color saliency. Based on interviews with 23 people with color blindness we implemented four modes to help colorblind individuals distinguish colors they usually can't see. Although Glass still has important limitations, initial tests of Chroma in the lab show that colorblind individuals using Chroma can improve their color recognition in a variety of real-world activities. The deployment of Chroma on a wearable augmented-reality device makes it an effective digital aid with the potential to augment everyday activities, effectively providing access to different color dimensions for colorblind people.
Enrico Tanuwidjaja, Derek Huynh, Kirsten Koa, Calvin Nguyen, Churen Shao, Patrick Torbett, Colleen Emmenegger, Nadir Weibel
Presented on Wednesday September 17th as part of the Assistive Devices session.
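As a rough illustration of the kind of per-pixel remapping such a system performs, the sketch below shifts red/green contrast (which red-green colorblind viewers tend to miss) into the blue channel, where it remains visible. This is a generic daltonization-style aid and an assumption for illustration only; it is not Chroma's actual algorithm, which adapts the scene view to the user's specific type of color blindness.

```python
# Illustrative sketch (an assumption, not Chroma's algorithm): one very
# simple "highlight" mode for red-green color blindness. The red/green
# difference, which a deuteranope or protanope struggles to perceive,
# is added into the blue channel as a visible cue.
def shift_red_green_to_blue(pixel, strength=0.7):
    """pixel: (r, g, b) with components in 0..255.
    Returns an adjusted (r, g, b) tuple."""
    r, g, b = pixel
    diff = r - g                                   # contrast that gets lost
    b2 = min(255, max(0, int(b + strength * diff)))  # clamp to valid range
    return (r, g, b2)

print(shift_red_green_to_blue((200, 50, 30)))  # -> (200, 50, 135)
```

A real system would apply such a transform per frame across the whole image, which is why doing it on wearable hardware like Glass is the hard part.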
Spatiotemporal gait analysis with body-worn inertial sensors improves diagnosis in clinical practice. Most gait performance measures are affected by walking speed; however, it has not been investigated how much information foot clearance parameters share with the key parameters of the gait performance domains. Using shoe-worn inertial sensors and a previously validated algorithm, we measured spatiotemporal as well as clearance gait parameters in a cohort of able-bodied adults over the age of 65 (N=879). Principal component analysis showed that the variability of foot clearance parameters contributes to the main variability in gait data. Moreover, only weak to moderate correlations of gait speed and stride length with some clearance parameters were observed. We recommend the assessment of clearance parameters during gait analysis in addition to parameters such as gait speed, bearing in mind the importance of foot clearance measures in obstacle negotiation and in slipping- and tripping-related falls.
Kamiar Aminian, Farzin Dadashi, Benoit Mariani, Constanze Hoskovec, Brigitte Santos-Eggimann, Christophe Büla
Presented on Tuesday September 16th as part of the Sensing the Body session.
A number of wearable 'lifelogging' camera devices have been released recently, allowing consumers to capture images and other sensor data continuously from a first-person perspective. Unlike traditional cameras that are used deliberately and sporadically, lifelogging devices are always 'on' and automatically capturing images. Such features may challenge users' (and bystanders') expectations about privacy and control of image gathering and dissemination. While lifelogging cameras are growing in popularity, little is known about privacy perceptions of these devices or what kinds of privacy challenges they are likely to create. To explore how people manage privacy in the context of lifelogging cameras, as well as which kinds of first-person images people consider 'sensitive,' we conducted an in situ user study (N = 36) in which participants wore a lifelogging device for a week, answered questionnaires about the collected images, and participated in an exit interview. Our findings indicate that: 1) some people may prefer to manage privacy through in situ physical control of image collection in order to avoid later burdensome review of all collected images; 2) a combination of factors including time, location, and the objects and people appearing in the photo determines its 'sensitivity;' and 3) people are concerned about the privacy of bystanders, despite reporting almost no opposition or concerns expressed by bystanders over the course of the study.
Roberto Hoyle, Robert Templeman, Steven Armes, Denise Anthony, David Crandall, Apu Kapadia
Presented on Tuesday September 16th as part of the Human Behaviour session.
Projective tests are personality tests that reveal individuals’ emotions (e.g., Rorschach inkblot test). Unlike direct question-based tests, projective tests rely on ambiguous stimuli to evoke responses from individuals. In this paper we develop one such test, designed to be delivered automatically, anonymously and to a large community through public displays. Our work makes a number of contributions. First, we develop and validate in controlled conditions a quantitative projective test that can reveal emotions. Second, we demonstrate that this test can be deployed on a large scale longitudinally: we present a four-week deployment in our university’s public spaces where 1431 tests were completed anonymously by passers-by. Third, our results reveal strong diurnal rhythms of emotion consistent with results we obtained independently using the Day Reconstruction Method (DRM), literature on affect, well-being, and our understanding of our university’s daily routine.
Jorge Goncalves, Pratyush Pandab, Denzil Ferreira, Mohammad Ghahramani, Guoying Zhao, Vassilis Kostakos
Presented on Tuesday September 16th as part of the Public Displays & Interactions session.
Augmented Information Display (AiD) is an LCD-based communicative display device that transmits both visible (RGB) and invisible (infrared) information using temporal and spectral multiplexing. A field-sequential backlight system switches between a standard white or RGB LED backlight and a near-infrared (NIR) LED backlight at 120Hz frequency. The visible and invisible information are transmitted through the same LCD electro-optics elements but during different time intervals that synchronize with the corresponding backlights. We implemented several prototype software systems to demonstrate the potential applications of this novel display platform, such as an augmenting digital signage display, an information beacon for positioning systems, and an accessibility system for people with hearing impairments.
Shuguang Wu, Jun Xiao
Presented on Tuesday September 16th as part of the Public Displays & Interactions session.
PriCal is an ambient calendar display that shows a user's schedule similar to a paper wall calendar. PriCal provides context-adaptive privacy to users by detecting present persons and adapting event visibility according to the user's privacy preferences. We present a detailed privacy impact assessment of our system, which provides insights on how to leverage context to enhance privacy without being intrusive. PriCal is based on a decentralized architecture and supports the detection of registered users as well as unknown persons. In a three-week deployment study with seven displays, ten participants used PriCal in their real work environment with their own digital calendars. Our results provide qualitative insights on the implications, acceptance, and utility of context-adaptive privacy in the context of a calendar display system, indicating that it is a viable approach to mitigate privacy implications in ubicomp applications.
Florian Schaub, Bastian Könings, Peter Lang, Björn Wiedersheim, Christian Winkler, Michael Weber
Presented on Tuesday September 16th as part of the Public Displays & Interactions session.
As the cost of display hardware falls, the number of public display networks being deployed is increasing rapidly. While these networks have traditionally taken the form of digital signage used for advertising and information, there is increasing interest in the vision of 'open display networks'. A key component of any open display network is an effective channel for disseminating applications created by third parties, and recent research has proposed a display-oriented 'application store' as one such channel. In this paper we present a critical analysis of the requirements and design of display application stores, providing insights designed to help the implementers of future application stores.
Sarah Clinch, Mateusz Mikusz, Miriam Greis, Nigel Davies, Adrian Friday
Presented on Tuesday September 16th as part of the Public Displays & Interactions session.
Much of the stress and strain of student life remains hidden. The StudentLife continuous sensing app assesses the day-to-day and week-by-week impact of workload on stress, sleep, activity, mood, sociability, mental well-being and academic performance of a single class of 48 students across a 10-week term at Dartmouth College using Android phones. Results from the StudentLife study show a number of significant correlations between the automatic objective sensor data from smartphones and the mental health and educational outcomes of the student body. We also identify a Dartmouth term lifecycle in the data, which shows that students start the term with high positive affect and conversation levels, low stress, and healthy sleep and daily activity patterns. As the term progresses and the workload increases, stress appreciably rises while positive affect, sleep, conversation and activity drop off. The StudentLife dataset is publicly available on the web.
Rui Wang, Fanglin Chen, Zhenyu Chen, Tianxing Li, Gabriella Harari, Stefanie Tignor, Xia Zhou, Dror Ben-Zeev, Andrew Campbell
Presented on Monday September 15th as part of the Activity and Group Interactions session.
In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phonecalls made/received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data.
Jacopo Staiano, Nuria Oliver, Bruno Lepri, Rodrigo de Oliveira, Michele Caraviello, Nicu Sebe
Presented on Tuesday September 16th as part of the Human Behaviour session.
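The reverse second-price auction used in the study above to elicit honest valuations is a standard truthful mechanism: the lowest-asking seller wins, but is paid the second-lowest ask, so stating one's true valuation is a dominant strategy. A minimal sketch (the function name and sample bids are illustrative, not taken from the paper):

```python
# Illustrative sketch of a reverse second-price auction, the truthful
# mechanism the study uses to elicit honest valuations of personal data.
# Sellers bid the price at which they would sell; the lowest bidder wins
# but is paid the second-lowest bid.

def reverse_second_price(bids):
    """bids: dict mapping seller id -> asking price.
    Returns (winner, payment): the lowest bidder and the
    second-lowest ask they are paid."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]       # lowest ask wins ...
    payment = ranked[1][1]      # ... but is paid the runner-up's ask
    return winner, payment

# Bob asks the least (2.0) and is paid Carol's ask (3.5).
print(reverse_second_price({"alice": 5.0, "bob": 2.0, "carol": 3.5}))
```

Because the payment does not depend on the winner's own bid, underbidding risks selling below one's valuation and overbidding risks losing a profitable sale, which is why such auctions yield honest valuations.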
Nicholas D Lane, Li Pengyu, Lin Zhou, Feng Zhao
Presented on Tuesday September 16th as part of the Human Behaviour session.
This paper presents a new method for estimating which outlet an electrical appliance is plugged into by using the electrical wiring installed in the building. By making use of the voltage drop caused by the wire, we can estimate the distance between the sensor and an electrical appliance plugged into an outlet on an electrical circuit. If we have a floor plan of an environment of interest showing a wiring diagram and where a sensor is attached, we can determine which outlet an electrical appliance is plugged into from the distance between the sensor and the appliance. The estimated outlet position of an appliance is very useful for understanding real-world events and developing real-world applications, e.g., providing user location and appliance location aware services, daily activity recognition, and estimating a user's indoor location through electrical appliance use under specific conditions.
Quan Kong, Takuya Maekawa
Presented on Tuesday September 16th as part of the Sensing in the Home session.
Power remains a challenge in the widespread deployment of long-lived wireless sensing systems, which has led researchers to consider power harvesting as a potential solution. In this paper, we present a thermal power harvester that utilizes naturally changing ambient temperature in the environment as the power source. In contrast to traditional thermoelectric power harvesters, our approach does not require a spatial temperature gradient; instead it relies on temperature fluctuations over time, enabling it to be used freestanding in any environment in which temperature changes throughout the day. By mechanically coupling linear motion harvesters with a temperature-sensitive bellows, we show the capability of harvesting up to 21 mJ of energy per cycle of temperature variation within the range of 5 °C to 25 °C. We also demonstrate the ability to power a sensor node, transmit sensor data wirelessly, and update a bistable E-ink display after as little as a 0.25 °C ambient temperature change.
Chen Zhao, Sam Yisrael, Josh R Smith, Shwetak Patel
Presented on Tuesday September 16th as part of the Sensing in the Home session.
We demonstrate that a cheap (USD 30), small, low-power 8x8 thermal sensor array can by itself provide a broad range of information relevant for human activity monitoring in home and office environments. In particular, the sensor can track people with an accuracy in the range of 1 m (sufficient to recognize activity-relevant regions), detect the operation mode of various appliances such as a toaster, water cooker or egg cooker, and detect actions such as opening a refrigerator or the oven, or taking a shower. While there are sensing modalities for each of the above types of information (e.g., current sensors for appliances), the fact that they can all be detected by such a simple sensor is highly relevant for practical activity recognition systems. Compared to vision (or thermal imaging) systems, the system has the advantage of being less privacy-invasive, allowing it, for example, to monitor bathroom activities (as shown in one of our evaluation scenarios). The paper describes the sensor, the methods used for activity detection, and the evaluation.
Peter Hevesi, Sebastian Wille, Gerald Pirkl, Norbert Wehn, Paul Lukowicz
Presented on Monday September 15th as part of the In the Home session.
In this paper, we present a significant improvement over past work on non-contact, end-user-deployable sensing of real-time whole-home power consumption. The technique allows users to place a single device consisting of magnetic pickups on the outside of a power or breaker panel to infer whole-home power consumption, without the need for professional installation of current transformers (CTs). The new approach does not require precise placement on the breaker panel, a key requirement of previous approaches. This is enabled by a self-calibration technique using a neural network that dynamically learns the transfer function regardless of the placement of the sensor and the construction of the breaker panel itself. We also demonstrate the ability to infer true power using this technique, unlike past solutions that have only been able to capture apparent power. We have evaluated our technique in six homes and one industrial building, including one seven-day deployment. Our results show we can estimate true power consumption with an average accuracy of 95.0% during naturalistic energy use in the home.
Md Tanvir Islam Aumi, Sidhant Gupta, Cameron Pickett, Matt Reynolds, Shwetak Patel
Presented on Tuesday September 16th as part of the Sensing in the Home session.
There is a large class of routine physical exercises that are performed on the ground, often on dedicated 'mats' (e.g. push-ups, crunches, bridge). Such exercises involve coordinated motions of different body parts and are difficult to recognize with a single body-worn motion sensor (like a step counter). Instead, a network of sensors on different body parts would be needed, which is not always practicable. As an alternative, we describe a cheap, simple textile pressure sensor matrix that can be unobtrusively integrated into exercise mats to recognize and count such exercises. We evaluate the system on a set of 10 standard exercises. In an experiment with 7 subjects, each repeating each exercise 20 times, we achieve a user-independent recognition rate of 82.5% and a user-independent counting accuracy of 89.9%. The paper describes the sensor system, the recognition methods and the experimental results.
Mathias Sundholm, Jingyuan Cheng, Bo Zhou, Akash Sethi, Paul Lukowicz
Presented on Tuesday September 16th as part of the Sensing in the Home session.
Yanxia Zhang, Hans Jörg Müller, Ming Ki Chong, Andreas Bulling, Hans Gellersen
Presented on Tuesday September 16th as part of the Input & Interaction session.
Mobile devices have become people's indispensable companions, since they allow each individual to be constantly connected with the outside world. To stay connected, the devices periodically send out data, which reveals some information about the device owner. Data sent by these devices can be captured by any external observer. Since the observer can observe only the wireless data, the actual person using the device is unknown. In this work, we propose IdentityLink, an approach that leverages the captured wireless data and computer vision to infer user-device links, i.e., which device is carried by which user. Knowing the user-device links opens up new opportunities for applications such as identifying unauthorized personnel in enterprises or finding criminals in law enforcement. By conducting experiments in a realistic scenario, we demonstrate how IdentityLink can be effectively applied in practice.
Le T. Nguyen, Yu Seung Kim, Patrick Tague, Joy Zhang
Presented on Tuesday September 16th as part of the Input & Interaction session.
We introduce AirLink, a novel technique for sharing files between multiple devices. By waving a hand from one device towards another, users can directly transfer files between them. The system utilizes the devices’ built-in speakers and microphones to enable easy file sharing between phones, tablets and laptops. We evaluate our system in an 11-participant study, achieving 96.8% accuracy and showing the feasibility of using AirLink in a multi-device environment. We also implemented a real-time system and demonstrate the capability of AirLink in various applications.
Ke-Yu Chen, Daniel Ashbrook, Mayank Goel, Sung-Hyuck Lee, Shwetak Patel
Presented on Tuesday September 16th as part of the Input & Interaction session.
This paper presents a recognition scheme for fine-grain gestures. The scheme leverages a directional antenna and short-range wireless propagation properties to recognize a vocabulary of action-oriented gestures from American Sign Language. Since the scheme relies only on commonly available wireless features such as Received Signal Strength (RSS), signal phase differences, and frequency subband selection, it is readily deployable on commercial off-the-shelf IEEE 802.11 devices. We have implemented the proposed scheme and evaluated it in two potential application scenarios: gesture-based electronic activation from a wheelchair and gesture-based control of a car infotainment system. The results show that the proposed scheme can correctly identify and classify up to 25 fine-grain gestures with an average accuracy of 92% for the first application scenario and 84% for the second.
Pedro Melgarejo, Xinyu Zhang, Parameswaran Ramanathan, David Chu
Presented on Tuesday September 16th as part of the Input & Interaction session.
We present BendID, a bendable input device that recognizes the location, magnitude and direction of its deformation. We use BendID to provide users with a tactile metaphor for pressure-based input. The device is constructed by layering an array of indium tin oxide (ITO)-coated PET film electrodes on a polydimethylsiloxane (PDMS) sheet, which is sandwiched between conductive foams. The pressure values interpreted from the ITO electrodes are classified using a Support Vector Machine (SVM) algorithm via the Weka library to identify the direction and location of bending. A polynomial regression model is also employed to estimate the overall magnitude of the pressure from the device. A model then maps these variables to a GUI to perform tasks. In this preliminary paper, we demonstrate the device by implementing it as an interface for 3D shape bending and as a game controller.
Vinh P Nguyen, Sang Ho Yoon, Ansh Verma, Karthik Ramani
Presented on Tuesday September 16th as part of the Input & Interaction session.
Crossroads are among the most dangerous outdoor environments for visually impaired people. Numerous studies have explored navigation systems for the visually impaired community, providing services ranging from block detection and route planning to real-time localization. However, none of them have addressed the safety issue at crossroads or integrated the three key factors necessary for a practical crossroad navigation system: detecting the crossroad, locating zebra patterns, and guiding the user within the zebra crossing while crossing the road. Our CrossNavi application responds to these needs, providing an integrated crossroad navigation service that incorporates all the essential functionalities mentioned above. The overall service is fulfilled by the collaboration of built-in sensors on commodity phones, and requires minimal human participation. We describe the technical aspects of its design, implementation, interface, and further improvements to make the system practical on a wider basis. Experimental results from three visually impaired volunteers show that the system exhibits promising behavior in both urban and rural areas.
Longfei Shangguan, Zheng Yang, Zimu Zhou, Xiaolong Zheng, Chenshu Wu, Yunhao Liu
Presented on Wednesday September 17th as part of the Assistive Devices session.
Smart objects within instrumented environments offer an always available and intuitive way of interacting with a system. Connecting these objects to other objects in range, or even to smartphones and computers, enables substantially innovative interaction and sensing approaches. In this paper, we investigate the concept of Capacitive Near-Field Communication to enable ubiquitous interaction with everyday objects in a short-range spatial context. Our central contribution is a generic framework describing and evaluating the communication method in Ubiquitous Computing. We prove the relevance of our approach by an open-source implementation of a low-cost object tag and a transceiver offering a high-quality communication link at typical distances up to 15 cm. Moreover, we present three case studies considering tangible interaction for the visually impaired, natural interaction with everyday objects, and sleeping behavior analysis.
Tobias Grosse-Puppendahl, Sebastian Herber, Raphael Wimmer, Frank Englert, Sebastian Beck, Julian von Wilmsdorff, Reiner Wichert, Arjan Kuijper
Presented on Monday September 15th as part of the Sensing and Communication session.
Using magnetic field data as fingerprints for indoor localization has become popular in recent years. Particle filters are often used to improve accuracy; however, most existing particle filter based approaches either are heavily affected by motion estimation errors, which makes the system unreliable, or impose strong restrictions on smartphone use, such as a fixed phone orientation, which is not practical in real life. In this paper, we present an indoor localization system named MaLoc, built on our proposed augmented particle filter. We introduce several innovations in the motion model, the measurement model and the resampling model to enhance the traditional particle filter. To minimize errors in motion estimation and improve the robustness of the particle filter, we augment it with a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. We use a hybrid measurement model that combines a new magnetic fingerprinting model with the existing magnitude fingerprinting model to improve system performance and avoid calibrating different smartphone magnetometers. In addition, we present a novel localization quality estimation method and a localization failure detection method to address the "Kidnapped Robot Problem" and improve overall usability. Our experimental studies show that MaLoc achieves a localization accuracy of 1-2.8 m on average in a large building.
Hongwei Xie, Tao Gu, Xianping Tao, Haibo Ye, Jian Lv
Presented on Monday September 15th as part of the Sensing and Communication session.
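For readers unfamiliar with the technique that MaLoc and several other papers in this session build on, the predict-weight-resample cycle of a basic particle filter can be sketched in a few lines of Python. This is an illustrative sketch only, not MaLoc's implementation: the noise levels, the step-length handling, and the measurement model are all assumptions made for the example.

```python
import math
import random

def particle_filter_step(particles, heading, step_len, measure_fn):
    """One predict-weight-resample cycle of a particle filter for
    pedestrian dead reckoning fused with a fingerprint measurement.

    particles  -- list of (x, y, weight) tuples
    heading    -- estimated walking direction in radians
    step_len   -- estimated step length in metres
    measure_fn -- likelihood of a position given the current fingerprint
                  reading (e.g., a magnetic field measurement)
    """
    # Predict: advance each particle by one noisy step.
    moved = []
    for x, y, w in particles:
        step = step_len + random.gauss(0.0, 0.1)    # step-length noise (assumed)
        theta = heading + random.gauss(0.0, 0.05)   # heading noise (assumed)
        moved.append((x + step * math.cos(theta),
                      y + step * math.sin(theta),
                      w))

    # Weight: score each particle with the measurement model and normalize.
    weighted = [(x, y, w * measure_fn(x, y)) for x, y, w in moved]
    total = sum(w for _, _, w in weighted) or 1e-12
    weighted = [(x, y, w / total) for x, y, w in weighted]

    # Resample: stratified resampling by cumulative weight.
    n = len(weighted)
    cumsum, cum = [], 0.0
    for _, _, w in weighted:
        cum += w
        cumsum.append(cum)
    resampled, j = [], 0
    for i in range(n):
        p = (i + random.random()) / n
        while j < n - 1 and cumsum[j] < p:
            j += 1
        x, y, _ = weighted[j]
        resampled.append((x, y, 1.0 / n))
    return resampled
```

MaLoc's contribution lies precisely in augmenting the three stages above (motion, measurement, resampling) beyond this textbook form.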
Weiwei Jiang, Denzil Ferreira, Jani Ylioja, Jorge Goncalves, Vassilis Kostakos
Presented on Monday September 15th as part of the Sensing and Communication session.
Pedestrians have difficulty noticing hybrid vehicles (HVs) and electric vehicles (EVs) quietly approaching from behind. We propose a vehicle detection scheme using a smartphone carried by a pedestrian. A notification of an approaching vehicle can be delivered to wearable devices such as Google Glass. We exploit the high-frequency switching noise generated by the motor unit in HVs and EVs. Although people are less sensitive to these high-frequency ranges, these sounds are prominent even on a busy street, and it is possible for a smartphone to detect them. The ambient sound captured at 48 kHz is converted to a feature vector in the frequency domain. A J48 classifier implemented on a smartphone can determine whether an EV or HV is approaching. We have collected a large amount of vehicle data at various locations. The false-positive and false-negative rates of our detection scheme are 1.2% and 4.95%, respectively. The first alarm was raised as early as 11.6 s before the vehicle reached the observer. The scheme can also determine the vehicle speed and vehicle type.
Masaru Takagi, Kosuke Fujimoto, Yoshihiro Kawahara, Tohru Asami
Presented on Tuesday September 16th as part of the Mobile Applications session.
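The core sensing idea above — probing ambient audio for energy in a narrow high-frequency band — can be illustrated with the Goertzel algorithm, which evaluates a single frequency bin far more cheaply than a full FFT on a phone. This is a sketch of the general technique, not the paper's feature extraction; the tone frequency and window length below are arbitrary example values.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Signal power at `target_freq`, computed with the Goertzel
    algorithm -- a cheap way to probe a single frequency bin without
    running a full FFT on the device."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# A 10 kHz test tone sampled at 48 kHz: strong response at 10 kHz,
# negligible response at 15 kHz.
rate = 48000
tone = [math.sin(2.0 * math.pi * 10000 * t / rate) for t in range(480)]
p_hit = goertzel_power(tone, rate, 10000)
p_miss = goertzel_power(tone, rate, 15000)
```

In practice, band energies like these would form the frequency-domain feature vector fed to a classifier such as J48.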
The goal of this work is to provide an abstraction of ideal sound environments to a new emerging class of Mobile Multi-speaker Audio (MMA) applications. Typically, it is challenging for MMA applications to implement advanced sound features (e.g., surround sound) accurately in mobile environments, especially due to unknown, irregular loudspeaker configurations. Towards an illusion that MMA applications run over specific loudspeaker configurations (i.e., speaker type and layout), this work proposes AMAC, a new Adaptive Mobile Audio Coordination system that senses the acoustic characteristics of mobile environments and controls individual loudspeakers adaptively and accurately. A prototype of AMAC implemented on commodity smartphones coordinates sound arrival time to within several tens of microseconds and substantially reduces the variance in sound level.
Hyosu Kim, SangJeong Lee, Jung-Woo Choi, Hwidong Bae, Jiyeon Lee, Junehwa Song, Insik Shin
Presented on Tuesday September 16th as part of the Mobile Applications session.
We propose SENSeTREAM, a novel technique that aggregates multiple sensor streams generated by entirely different types of sensors into a visually enhanced video stream. This paper presents the major features of SENSeTREAM and demonstrates how it enhances user experience in an online live music event. Since SENSeTREAM is a video stream with sensor values encoded in a two-dimensional graphical code, it can transmit multiple sensor data streams while maintaining their synchronization. A SENSeTREAM can be transmitted via existing live streaming services and saved to existing video archive services. We implemented a prototype SENSeTREAM generator and deployed it at an online live music event. Through this pilot study, we confirmed that SENSeTREAM works with popular streaming services and provides a new media experience for live performances. We also indicate future directions for visual stream aggregation and its applications.
Takuro Yonezawa, Masaki Ogawa, Yutaro Kyono, Hiroki Nozaki, Jin Nakazawa, Osamu Nakamura, Hideyuki Tokuda
Presented on Tuesday September 16th as part of the Mobile Applications session.
Quality improvement in mobile applications should consider several factors, such as users' diversity in spatio-temporal usage as well as the device's resource usage, including battery life. Although application tuning should take these practical issues into account, it is difficult to do so during the development stage due to the lack of information about application usage. This paper proposes a user interaction-based profiling system to overcome the limitations of development-level application debugging. In our system, the analysis of both device behavior and energy consumption is possible with fine-grained, process-level application monitoring. By providing fine-grained information on user interaction, system behavior, and power consumption, our system enables meaningful analysis for application tuning. The proposed method does not require the application's source code and uses a web-based framework so that users can easily provide their usage data. Our case study with several popular applications demonstrates that the proposed system is practical and useful for application tuning.
Seokjun Lee, Chanmin Yoon, Hojung Cha
Presented on Tuesday September 16th as part of the Mobile Applications session.
We propose a graph-based, low-complexity sensor fusion approach for ubiquitous pedestrian indoor positioning using mobile devices. We employ our fusion technique to combine relative motion information based on step detection with WiFi signal strength measurements. The method is based on the well-known particle filter methodology. In contrast to previous work, we provide a probabilistic model for location estimation that is formulated directly on a fully discretized, graph-based representation of the indoor environment. We generate this graph by adaptive quantization of the indoor space, removing irrelevant degrees of freedom from the estimation problem. We evaluate the proposed method in two realistic indoor environments using real data collected from smartphones. In total, our dataset spans about 20 kilometers in distance walked and includes 13 users and four different mobile device types. Our results demonstrate that the filter requires an order of magnitude fewer particles than state-of-the-art approaches while maintaining an accuracy of a few meters. The proposed low-complexity solution not only enables indoor positioning on less powerful mobile devices, but also saves much-needed resources for location-based end-user applications that run on top of a localization service.
Sebastian Hilsenbeck, Dmytro Bobkov, Georg Schroth, Robert Huitl, Eckehard Steinbach
Presented on Monday September 15th as part of the Indoor Location session.
We present a device-free indoor tracking system that uses received signal strength (RSS) from radio frequency (RF) transceivers to estimate the location of a person. While many RSS-based tracking systems use a body-worn device or tag, this approach requires no such tag. The approach is based on the key principle that RF signals between wall-mounted transceivers reflect and absorb differently depending on a person's movement within their home. A hierarchical neural network hidden Markov model (NN-HMM) classifier estimates both movement patterns and stand vs. walk conditions to accurately perform tracking. The algorithm and features used are specifically robust to RSS mean shifts in the environment over time, allowing greater than 90% region-level classification accuracy over an extended testing period. In addition to tracking, the system also estimates the number of people in different regions. It is currently being developed to support independent living and long-term monitoring of seniors.
Anindya Paul, Eric A Wan, Fatema Adenwala, Erich Schafermeyer, Nicholas Preiser, Jeffrey Kaye, Peter Jacobs
Presented on Monday September 15th as part of the Indoor Location session.
In recent years, there has been an explosion of social and collaborative applications that leverage location to provide users novel and engaging experiences. Current location technologies work well outdoors but fare poorly indoors. In this paper we present LoCo, a new framework that can provide highly accurate room-level location using a supervised classification scheme. We provide experiments that show this technique is orders of magnitude more efficient than current state-of-the-art WiFi localization techniques. Low classification overhead and computational footprint make classification practical and efficient even on mobile devices. Our framework has also been designed to be easily deployed and leveraged by developers to help create a new wave of location driven applications and services.
Jacob T Biehl, Matthew Cooper, Gerry Filby, Sven Kratz
Presented on Monday September 15th as part of the Indoor Location session.
Location prediction enables us to use a person's mobility history to realize various applications such as efficient temperature control, opportunistic meeting support, and automated receptionists. Indoor location prediction is a challenging problem, particularly due to a high density of possible locations and short transition distances between these locations. In this paper we present Indoor-ALPS, an Adaptive Indoor Location Prediction System that uses temporal-spatial features to create individual daily models for the prediction of when users will leave their current location (transition time) and the next location they will transition to. We tested Indoor-ALPS on the Augsburg Indoor Location Tracking Benchmark and compared our approach to the best performing temporal-spatial mobility prediction algorithm, Prediction by Partial Match (PPM). Our results show that Indoor-ALPS improves the temporal-spatial prediction accuracy over PPM by 6.2% for look-aheads of up to 90 minutes, and by 10.7% for look-aheads of up to 30 minutes. These results demonstrate that Indoor-ALPS can be used to support a wide variety of indoor mobility prediction-based applications.
Christian Koehler, Nikola Banovic, Ian Oakley, Jen Mankoff, Anind Dey
Presented on Monday September 15th as part of the Indoor Location session.
Indoor object localization can enable many ubicomp applications, such as asset tracking and object-related activity recognition. Most location and tracking systems rely either on battery-powered devices, which create cost and maintenance issues, or on cameras, which have accuracy and privacy issues. This paper introduces a system that detects the 3D position and motion of a battery-free RFID tag embedded with an ultrasound detector and an accelerometer. Combining the tags' acceleration with location improves the system's power management and supports activity recognition. We characterize the system's localization performance in open space and deploy it in a smart wet lab application. The system tracks the real-time location and motion of tags in the wet lab and recognizes pouring actions performed on the objects to which tags are attached. The median localization accuracy is 7.6 cm -- (3.1, 5, 1.9) cm for the (x, y, z) axes -- with maximum update rates of 15 samples/s using a single RFID reader antenna.
Yi Zhao, Anthony LaMarca, Joshua R Smith
Presented on Monday September 15th as part of the Sensing and Communication session.
Mohit Sethi, Elena Oat, Mario Di Francesco, Tuomas Aura
Presented on Wednesday September 17th as part of the Security session.
Activity-based social networks, where people upload and share information about their location-based activities (e.g., the routes of their activities), are increasingly popular. Such systems, however, raise privacy and security issues: the service providers know the exact locations of their users, and users can report fake location information in order to, for example, unduly brag about their performance. In this paper, we propose a secure privacy-preserving system for reporting location-based activity summaries (e.g., the total distance covered and the elevation gain). Our solution is based on a combination of cryptographic techniques and geometric algorithms, and it relies on existing Wi-Fi access-point networks deployed in urban areas. We evaluate our solution using real data sets from the FON community networks and from the Garmin Connect activity-based social network, and we show that it can achieve tight, verifiable lower bounds on the distance covered and the elevation gain (with a median accuracy of up to 76%), while protecting the location privacy of the users with respect to both the social network operator and the access-point network operator(s).
Anh Pham, Kévin Huguenin, Igor Bilogrevic, Jean-Pierre Hubaux
Presented on Wednesday September 17th as part of the Security session.
This paper presents Zero-Effort Payments (ZEP), a seamless mobile computing system designed to accept payments with no effort on the customer's part beyond a one-time opt-in. With ZEP, customers need not present cards nor operate smartphones to convey their identities. ZEP uses three complementary identification technologies: face recognition, proximate device detection, and human assistance. We demonstrate that the combination of these technologies enables ZEP to scale to the level needed by our deployments. We designed and built ZEP and demonstrated its usefulness across two real-world deployments spanning five months of continuous operation and serving 274 customers. The different nature of our deployments stressed different aspects of our system, and these challenges led to several system design changes to improve scalability and fault-tolerance.
Christopher Smowton, Jacob R Lorch, David Molnar, Stefan Saroiu, Alec Wolman
Presented on Wednesday September 17th as part of the Security session.
Touch-enabled user interfaces have become ubiquitous, such as on ATMs or portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: A lab-based security study involving 20 participants in attacking user-defined passwords, using high quality pictures of real smudge traces captured on a mobile phone display; and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. Results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.
Stefan Schneegass, Frank Steimle, Andreas Bulling, Florian Alt, Albrecht Schmidt
Presented on Wednesday September 17th as part of the Security session.
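The defense SmudgeSafe describes rests on a simple geometric idea: if the password image is transformed differently at each login, smudge traces from one session no longer line up with the next session's touch points. A minimal sketch of that idea (illustrative only — the transformation classes, parameter ranges, and pixel units below are assumptions, not SmudgeSafe's design) might look like this:

```python
import math
import random

def random_transform(seed=None):
    """Return a function applying a random rotation, uniform scale, and
    translation to (x, y) points -- in the spirit of transforming the
    password image between logins so smudge traces no longer align."""
    rng = random.Random(seed)
    theta = rng.uniform(-math.pi / 6, math.pi / 6)        # rotation (assumed range)
    scale = rng.uniform(0.8, 1.25)                        # uniform scaling
    tx, ty = rng.uniform(-40, 40), rng.uniform(-40, 40)   # translation in px

    def apply(pt):
        x, y = pt
        xr = scale * (x * math.cos(theta) - y * math.sin(theta)) + tx
        yr = scale * (x * math.sin(theta) + y * math.cos(theta)) + ty
        return (xr, yr)
    return apply

# The same password points land at different screen locations per login,
# so a smudge trace from one session does not reveal the next session's points.
login1 = random_transform(seed=1)
login2 = random_transform(seed=2)
pts = [(100, 200), (150, 220)]
trace1 = [login1(p) for p in pts]
trace2 = [login2(p) for p in pts]
```

The full system also considers shearing and flipping, and its security rests on the user studies reported above rather than on this toy model.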
The smartphone contact list has the potential to be a valuable source of data about personal relationships. To understand how we might mine the information that people store in their contact lists, we collected the contact lists of 54 participants. Initially we found that the majority of contact list features were unused. However, a further examination of the 'name' field revealed a broad variety of contact-naming behaviors. We observed contact 'name' fields that included affiliations, relationship role labels, multiple names, phone types, and references to companies, services, and places. People's appropriation and usage of contact lists have implications for automated attempts to merge or mine contact lists that assume people use the features and structure of the contact list tool as intended. They also offer new opportunities for data mining to better describe relationships between users and their contacts.
Jason Wiese, Jason I. Hong, John Zimmerman
Presented on Tuesday September 16th as part of the Mobile-Social session.
Existing location-based social networks (LBSNs), e.g. Foursquare, depend mainly on GPS or network-based localization to infer users' locations. However, GPS is unavailable indoors and network-based localization provides coarse-grained accuracy. This limits the accuracy of current LBSNs in indoor environments, where people spend 89% of their time. This in turn affects the user experience, in terms of the accuracy of the ranked list of venues, especially for the small screens of mobile devices; misses business opportunities; and leads to reduced venue coverage. In this paper, we present CheckInside: a system that can provide a fine-grained indoor location-based social network. CheckInside leverages the crowd-sensed data collected from users' mobile devices during the check-in operation and knowledge extracted from current LBSNs to associate a place with its name and semantic fingerprint. This semantic fingerprint is used to obtain a more accurate list of nearby places as well as automatically detect new places with similar signatures. We propose a novel algorithm for handling incorrect check-ins and inferring a semantically-enriched floorplan, as well as an algorithm for enhancing the system performance based on implicit user feedback. Evaluation of CheckInside in four malls over the course of six weeks with 20 participants shows that it can provide the actual user location within the top five venues 99% of the time, compared to only 17% for current LBSNs. In addition, it can increase the coverage of current LBSNs by more than 25%.
Moustafa Elhamshary, Moustafa Youssef
Presented on Tuesday September 16th as part of the Mobile-Social session.
People-nearby applications (PNAs) are a form of ubiquitous computing that connects users based on their physical location data. One example is Grindr, a popular PNA that facilitates connections among gay and bisexual men. Adopting a uses and gratifications approach, we conducted two studies. In study one, 63 users reported motivations for Grindr use through open-ended descriptions. In study two, those descriptions were coded into 26 items that were completed by 525 Grindr users. Factor analysis revealed six uses and gratifications: social inclusion, sex, friendship, entertainment, romantic relationships, and location-based search. Two additional analyses examine (1) the effects of geographic location (e.g., urban vs. suburban/rural) on men's use of Grindr and (2) how Grindr use is related to self-disclosure of information. Results highlight how the mixed-mode nature of PNA technology may change the boundaries of online and offline space, and how gay and bisexual men navigate physical environments.
Chad Van De Wiele, Stephanie Tom Tong
Presented on Tuesday September 16th as part of the Mobile-Social session.
Automatic check-in, which is to identify a user's visited points of interest (POIs) from his or her trajectories, is still an open problem because of positioning errors and the high POI density in small areas. In this study, we propose a probabilistic visited-POI identification method. The method uses a new hierarchical Bayesian model for identifying the latent visited-POI label of stay points, which are automatically extracted from trajectories. This model learns from labeled and unlabeled stay point data (i.e., semi-supervised learning) and takes into account personal preferences, stay locations including positioning errors, stay times for each category, and prior knowledge about typical user preferences and stay times. Experimental results with real user trajectories and POIs of Foursquare demonstrated that our method achieved statistically significant improvements in precision at 1 and recall at 3 over the nearest neighbor method and a conventional method that uses a supervised learning-to-rank algorithm.
Kyosuke Nishida, Hiroyuki Toda, Takeshi Kurashima, Yoshihiko Suhara
Presented on Tuesday September 16th as part of the Mobile-Social session.
To facilitate the collection of patient biosignals, designing extensible sensing devices in which sensor management is simplified is essential. This paper presents BioScope, an extensible sensing system that facilitates collecting data used in nursing assessments. We conducted experiments to demonstrate the potential of the system. The results obtained in this study can be applied in improving the design, thus enabling BioScope to facilitate data collection in numerous potential applications.
Cheng-Yuan Li, Chi-Hsien Yen, Kuo-Cheng Wang, Chuang-Wen You, Seng-Yong Lau, Cheryl Chia-Hui Chen, Polly Huang, Hao-hua Chu
Presented on Tuesday September 16th as part of the Sensing the Body session.
Sleep quality plays a significant role in personal health. A great deal of effort has been devoted to designing sleep quality monitoring systems, providing services ranging from bedtime monitoring to sleep activity detection. However, as sleep quality is closely related to the distribution of sleep duration over different sleep stages, neither the bedtime nor the intensity of sleep activities is able to reflect sleep quality precisely. To this end, we present Sleep Hunter, a mobile service that provides fine-grained detection of sleep stage transitions for sleep quality monitoring and intelligent wake-up calls. The rationale is that each sleep stage is accompanied by specific yet distinguishable body movements and acoustic signals. Leveraging the built-in sensors on smartphones, Sleep Hunter integrates these physical activities with the sleep environment, inherent temporal relations, and personal factors in a statistical model for fine-grained sleep stage detection. Based on the duration of each sleep stage, Sleep Hunter further provides a sleep quality report and a smart wake-up call service for users. Experimental results from over 30 sets of nocturnal sleep data show that our system is superior to existing actigraphy-based sleep quality monitoring systems, and achieves satisfactory detection accuracy compared with dedicated polysomnography-based devices.
Weixi Gu, Zheng Yang, Longfei Shangguan, Wei Sun, Kun Jin, Yunhao Liu
Presented on Wednesday September 17th as part of the Body Signals session.
People interact with chairs frequently, making them a potential location to perform implicit health sensing that requires no additional effort by users. We surveyed 550 participants to understand how people sit in chairs and inform the design of a chair that detects heart and respiratory rate from the armrests and backrests of the chair respectively. In a laboratory study with 18 participants, we evaluated a range of common sitting positions to determine when heart rate and respiratory rate detection was possible (32% of the time for heart rate, 52% for respiratory rate) and evaluate the accuracy of the detected rate (83% for heart rate, 73% for respiratory rate). We discuss the challenges of moving this sensing to the wild by evaluating an in-situ study totaling 40 hours with 11 participants. We show that, as an implicit sensor, the chair can collect vital signs data from its occupant through natural interaction with the chair.
Erin Griffiths, T. Scott Saponas, A.J. Bernheim Brush
Presented on Wednesday September 17th as part of the Body Signals session.
We often think of ourselves as individuals with steady capabilities. However, converging strands of research indicate that this is not the case. Our biochemistry varies significantly over the course of a 24 hour period. Consequently our levels of alertness, productivity, physical activity, and even sensitivity to pain fluctuate throughout the day. This offers a considerable opportunity for the UbiComp community to identify novel measurements and interventions that can leverage these daily variations. To illustrate this potential, we present results from an empirical study with 9 participants over 97 days investigating whether such variations manifest in low-level smartphone use, focusing on daily rhythms related to sleep. Our findings demonstrate that phone usage patterns can be used to detect and predict individual daily variations indicative of temporal preference, sleep duration, and deprivation. We also identify opportunities and challenges for measuring and enhancing well-being using these simple and effective markers of circadian rhythms.
Saeed Abdullah, Mark Matthews, Elizabeth L Murnane, Geri Gay, Tanzeem Choudhury
Presented on Wednesday September 17th as part of the Body Signals session.
There is a growing demand for daily heart rate (HR) monitoring in the fields of healthcare, fitness, activity recognition, and entertainment. Although various HR monitoring systems have been proposed, most of these employ a wearable device, which may be a burden and disturb one's daily living. To achieve the goal of pervasive HR monitoring in daily living, we present an HR monitoring method that senses through the surface of a drinkware. The proposed method employs the surface of a drinkware as a broad sensing region, extending the principle of a basic photo-based HR sensor. The sensing surface works even with a curved shape, and it can be applied to various types of drinkware. This approach enables unobtrusive HR monitoring during beverage consumption. As a prototype, we implemented the proposed method on an ordinary transparent tumbler and evaluated its HR monitoring performance.
Hiroshi Chigira, Masayuki Ihara, Minoru Kobayashi, Akimichi Tanaka, Tomohiro Tanaka
Presented on Wednesday September 17th as part of the Body Signals session.
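Photo-based HR sensors like the ones discussed in this session ultimately reduce to peak counting on a periodic intensity signal. As a rough illustration (this is a generic sketch under assumed parameters — the threshold, refractory gap, and synthetic waveform are not from any of the papers above), heart rate can be estimated from such a signal like this:

```python
import math

def estimate_hr(signal, sample_rate, min_gap_s=0.4):
    """Estimate heart rate (bpm) by counting local maxima above the
    signal mean, with a refractory gap so one beat is not counted twice."""
    mean = sum(signal) / len(signal)
    min_gap = int(min_gap_s * sample_rate)   # assumed refractory period
    beats, last = 0, -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > mean
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and i - last >= min_gap):
            beats += 1
            last = i
    return 60.0 * beats * sample_rate / len(signal)

# Synthetic 1.2 Hz (72 bpm) pulse waveform sampled at 30 Hz for 10 s.
rate = 30
sig = [math.sin(2.0 * math.pi * 1.2 * t / rate) for t in range(rate * 10)]
bpm = estimate_hr(sig, rate)
```

Real pulse waveforms are far noisier than this sine wave, which is why the systems above invest in robust sensing surfaces and signal conditioning.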
Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study of the feasibility and properties of a crowdsensing system centered on sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.
Yohan Chon, Suyeon Kim, Seungwoo Lee, Dongwon Kim, Yungeun Kim, Hojung Cha
Presented on Monday September 15th as part of the Cities & Transportation session.
This paper assesses the potential of ride-sharing for reducing traffic in a city, based on mobility data extracted from 3G Call Description Records (CDRs) for the cities of Madrid and Barcelona (BCN), and from online social networks (OSNs), such as Twitter and Foursquare (FSQ), collected for the cities of New York (NY) and Los Angeles (LA). First, we analyze these data sets to understand mobility patterns, home and work locations, and social ties between users. Then, we develop an efficient algorithm for matching users with similar mobility patterns, considering a range of constraints, including social distance. The solution provides an upper bound on the potential decrease in the number of cars in a city that can be achieved by ride-sharing. Our results indicate that this decrease can be as high as 31% when users are willing to ride with friends of friends.
Blerim Cici, Athina Markopoulou, Enrique Frias-Martinez, Nikolaos Laoutaris
Presented on Monday September 15th as part of the Cities & Transportation session.
People flow at a citywide level is a mixed state of several basic patterns (e.g. commuting, working, commercial), and it is therefore difficult to extract useful information from such a mixture of patterns directly. In this paper, we propose a novel tensor factorization approach to modeling city dynamics in a basic life pattern space (CitySpectral Space). To obtain the CitySpectrum, we utilize Non-negative Tensor Factorization (NTF) to decompose a people flow tensor into basic life pattern tensors, described by three bases: the intensity variation among regions, the time of day, and the sample days. We apply our approach to a large mobile phone GPS log dataset (containing 1.6 million users) to model the fluctuation in people flow before and after the Great East Japan Earthquake from a CitySpectral perspective. In addition, our framework is extensible to a variety of auxiliary spatio-temporal data. We parametrize a people flow with a spatial distribution of Points of Interest (POIs) to quantitatively analyze the relationship between human mobility and POI distribution. Based on the parametric people flow, we propose a spectral approach for site-selection recommendation and people flow simulation in similar areas using the POI distribution.
Zipei Fan, Xuan Song, Ryosuke Shibasaki
Presented on Monday September 15th as part of the Cities & Transportation session.
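NTF generalizes non-negative matrix factorization (NMF) from matrices to tensors, and the matrix case conveys the core idea: decompose observed data into additive, non-negative basis patterns. The sketch below implements plain NMF with Lee-Seung multiplicative updates in pure Python; it is a didactic illustration, not the paper's decomposition (which operates on a three-way people-flow tensor).

```python
import random

def nmf(V, rank, iters=200, seed=0):
    """Factor a non-negative matrix V (list of rows) into W (n x rank)
    and H (rank x m) with multiplicative updates minimizing squared
    reconstruction error. NTF extends this idea to tensors."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(rank)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def transpose(A):
        return [list(col) for col in zip(*A)]

    eps = 1e-9  # avoid division by zero; keeps factors non-negative
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        Wt = transpose(W)
        num = matmul(Wt, V)
        den = matmul(matmul(Wt, W), H)
        H = [[H[r][j] * num[r][j] / (den[r][j] + eps) for j in range(m)]
             for r in range(rank)]
        # W <- W * (V H^T) / (W H H^T)
        Ht = transpose(H)
        num = matmul(V, Ht)
        den = matmul(W, matmul(H, Ht))
        W = [[W[i][r] * num[i][r] / (den[i][r] + eps) for r in range(rank)]
             for i in range(n)]
    return W, H
```

In the CitySpectral setting, each recovered basis would correspond to a basic life pattern (e.g., commuting) rather than a matrix column.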
Determining the mode of transport of an individual is an important element of contextual information. In particular, we focus on differentiating between different forms of motorized transport such as car, bus, and subway. Our approach uses location information and features derived from transit route information (schedule information, not real-time) published by transit agencies. This requires no up-front training or learning of routes, and the approach can be deployed instantly in a new place since most transit agencies publish this information. Combined with motion detection using phone accelerometers, we obtain a classification accuracy of around 90% on 50+ hours of car and transit data.
Rahul C Shah, Chieh-yih Wan, Hong Lu, Lama Nachman
Presented on Monday September 15th as part of the Cities & Transportation session.
Longbiao Chen, Daqing Zhang, Gang Pan, Leye Wang, Xiaojuan Ma, Chao Chen, Shijian Li
Presented on Wednesday September 17th as part of the Ubicomp at Work session.
The layouts of the buildings we live in shape our everyday lives. In office environments, building spaces affect employees' communication, which is crucial for productivity and innovation. However, accurately measuring how spatial layouts affect interactions is a major challenge, and traditional techniques may not give an objective view. We measure the impact of building spaces on social interactions using wearable sensing devices. We study a single organization that moved between two different buildings, affording a unique opportunity to examine how space alone can affect interactions. The analysis is based on two large-scale deployments of a wireless sensing technology: short-range, lightweight RFID tags capable of detecting face-to-face interactions. We analyze the traces to study the impact of the building change on social behavior, representing a first example of using ubiquitous sensing technology to study how the physical design of two workplaces combines with organizational structure to shape contact patterns.
Chloe Brown, Christos Efstratiou, Ilias Leontiadis, Daniele Quercia, Cecilia Mascolo, James Scott, Peter Key
Presented on Wednesday September 17th as part of the Ubicomp at Work session.
In this paper, we explore using large digital displays in combination with a personal mobile application to publicly and privately encourage people to make healthy choices. We designed, built, and deployed an experimental system called Lunch Line that promoted healthy eating. Lunch Line includes a public display that enables passersby to view the reported eating behavior of a group of people and take on daily "food challenges," and a mobile web application that allows users to record personal food choices, report challenge achievement, and compare their choices with other users and with USDA recommendations. Results from a 3-week field evaluation at a company cafeteria showed that our integrated system was effective in drawing public attention, delivering challenges, enabling self-tracking and self-reflection, and providing feedback on personal and group choices. We share lessons on how to design future systems that integrate situated public displays and personal mobile devices to encourage healthy choices.
Kerry Shih-Ping Chang, Catalina M Danis, Robert G Farrell
Presented on Wednesday September 17th as part of the Ubicomp at Work session.
We describe a qualitative study of delegate engagement with technology in academic conferences through a large-scale deployment of prototype technologies. These deployments represent current themes in conference technologies, such as providing access to content and opportunities for socialising between delegates. We consider not just the use of individual technologies, but also the overall impact of an assemblage of interfaces, ranging from ambient to interactive and mobile to situated. Based on a two-week deployment followed by interviews and surveys of attendees, we discuss the ways in which delegates engaged with the prototypes and the implications this had for their experience of the conferences. From our findings, we draw three new themes to inform the development of future conference technologies.
Nick Taylor, Tom Bartindale, John Vines, Patrick Olivier
Presented on Wednesday September 17th as part of the Ubicomp at Work session.
Interpersonal touch is our most primitive social language and strongly governs our emotional well-being. Despite the positive implications of touch in many facets of our daily social interactions, widespread caution and taboo limit touch-based interactions in workplace relationships, which constitute a significant part of our daily social life. In this paper, we explore new opportunities for ubicomp technology to promote a new meme of casual and cheerful interpersonal touch, such as high-fives, towards facilitating a vibrant workplace culture. Specifically, we propose High5, a mobile service with a smartwatch-style system to promote high-fives in everyday workplace interactions. We first present initial user motivation drawn from semi-structured interviews regarding the potentially controversial idea of High5. We then present our smartwatch-style prototype, which detects high-fives by sensing electric skin potential levels, and report key technical observations and a performance evaluation.
Yuhwan Kim, Seungchul Lee, Inseok Hwang, Hyunho Ro, Youngki Lee, Miri Moon, Junehwa Song
Presented on Monday September 15th as part of the Activity and Group Interactions session.
Crowdsensing technologies are rapidly evolving and are expected to be used in commercial applications such as location-based services. Crowdsensing collects sensory data from the daily activities of users without burdening them, and the data size is expected to grow to population scale. However, quality of service is difficult to ensure for commercial use. Incentive design in crowdsensing, through monetary rewards or gamification, is therefore attracting attention as a way to motivate participants to collect data and so increase data quantity. In contrast, we propose Steered Crowdsensing, which controls the incentives of users through game elements on location-based services to directly improve the quality of service rather than the data size. As a feasibility study of steered crowdsensing, we deployed a crowdsensing system focused on the process of building wireless indoor localization systems. In the results, steered crowdsensing realized deployments faster than non-steered crowdsensing while requiring only half as much data.
Ryoma Kawajiri, Masamichi Shimosaka, Hisashi Kashima
Presented on Wednesday September 17th as part of the Sensing the Crowd session.
This paper proposes a novel participant selection framework, named CrowdRecruiter, for mobile crowdsensing. CrowdRecruiter operates on top of the energy-efficient Piggyback Crowdsensing (PCS) task model and minimizes incentive payments by selecting a small number of participants while still satisfying a probabilistic coverage constraint. In order to achieve this objective when piggybacking crowdsensing tasks on phone calls, CrowdRecruiter first predicts the call and coverage probability of each mobile user based on historical records. It then efficiently computes the joint coverage probability of multiple users as a combined set and selects a near-minimal set of participants that meets the coverage ratio requirement in each sensing cycle of the PCS task. We evaluated CrowdRecruiter extensively using a large-scale real-world dataset; the results show that the proposed solution significantly outperforms three baseline algorithms by selecting 10.0%-73.5% fewer participants on average under the same probabilistic coverage constraint.
Daqing Zhang, Haoyi Xiong, Leye WANG, Guanling Chen
Presented on Wednesday September 17th as part of the Sensing the Crowd session.
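The coverage-constrained selection idea behind CrowdRecruiter can be illustrated with a toy greedy sketch. This is not the paper's algorithm; the per-user, per-cell coverage probabilities, the greedy criterion, and all names here are illustrative assumptions:

```python
def joint_coverage(selected, probs, cell):
    # P(cell is covered by at least one selected user)
    p_miss = 1.0
    for user in selected:
        p_miss *= 1.0 - probs[user].get(cell, 0.0)
    return 1.0 - p_miss

def greedy_recruit(probs, cells, target=0.9):
    """Greedily add users until every cell's joint coverage
    probability meets the target (or no candidate helps)."""
    selected, remaining = [], sorted(probs)
    while remaining:
        uncovered = [c for c in cells
                     if joint_coverage(selected, probs, c) < target]
        if not uncovered:
            break
        # candidate adding the most coverage over still-uncovered cells
        best = max(remaining, key=lambda u: sum(
            joint_coverage(selected + [u], probs, c) for c in uncovered))
        gain = sum(joint_coverage(selected + [best], probs, c) -
                   joint_coverage(selected, probs, c) for c in uncovered)
        if gain <= 0:
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```

The independence assumption in `joint_coverage` (coverage events multiply) mirrors how a joint coverage probability of a combined user set could be computed, but the actual CrowdRecruiter formulation should be taken from the paper.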
Yu Zheng, Tong Liu, Yilun Wang, Yanmin Zhu, Yanchi Liu, Eric Chang
Presented on Wednesday September 17th as part of the Sensing the Crowd session.
In this paper we argue the need for orchestration support for participatory campaigns to achieve campaign quality, and for automation of that support to achieve scalability, both of which contribute to stakeholder usability. This goes further than providing support for defining campaigns, an issue tackled in prior work. We provide a formal definition of a campaign by extracting commonalities from the state of the art and from our expertise in organising noise mapping campaigns. Next, we formalise how to ensure campaigns end successfully, and translate this formal notion into an operational recipe for dynamic orchestration. We then present a framework for automating campaign definition, monitoring and orchestration which relies on workflow technology. The framework is validated by re-enacting several campaigns previously run through manual orchestration and quantifying the increased efficiency.
Ellie D'Hondt, Jesse Zaman, Eline Philips, Elisa Gonzalez Boix, Wolfgang De Meuter
Presented on Wednesday September 17th as part of the Sensing the Crowd session.
We propose a method to estimate car-level train congestion using Bluetooth RSSI observed by passengers' mobile phones. Our approach employs a two-stage algorithm in which the car-level location of passengers is estimated first and then used to infer car-level train congestion. Through the analysis of over 50,000 real Bluetooth samples, we have learned that Bluetooth signals attenuate due to passengers' bodies, distance, and the doors between cars. Based on this prior knowledge, our algorithm is designed as a Bayesian likelihood estimator and is robust to changes in both passengers and congestion at stations. The car-level positions are useful for passengers' personal navigation inside stations, and car-level train congestion information helps passengers determine better strategies for taking trains. Through a field experiment, we have confirmed the algorithm can estimate the location of 16 passengers with 83% accuracy and estimate train congestion with an average F-measure of 0.82.
Yuki Maekawa, Akira Uchiyama, Hirozumi Yamaguchi, Teruo Higashino
Presented on Wednesday September 17th as part of the Cars & Driving session.
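A minimal Bayesian likelihood estimator over car positions, in the spirit of the approach above: each RSSI reading is scored against an expected value that decays with the number of cars between observer and device. All parameters here (base RSSI, per-car attenuation, noise level) are made-up illustration values, not those learned from the paper's samples:

```python
import math

def car_posterior(rssi_obs, n_cars=10, base_rssi=-60.0,
                  per_car_atten=10.0, sigma=6.0):
    """Toy Bayesian estimate of which car a device is in.

    rssi_obs maps observer_car -> observed RSSI of the device's beacon,
    modeled as base_rssi minus per_car_atten per car of separation,
    with Gaussian noise of std sigma (all assumptions)."""
    log_post = [0.0] * n_cars          # uniform prior over cars
    for obs_car, rssi in rssi_obs.items():
        for car in range(n_cars):
            expected = base_rssi - per_car_atten * abs(car - obs_car)
            log_post[car] += -((rssi - expected) ** 2) / (2 * sigma ** 2)
    # normalize log-posteriors to probabilities
    m = max(log_post)
    weights = [math.exp(lp - m) for lp in log_post]
    z = sum(weights)
    return [w / z for w in weights]
```

Two consistent observations (a strong signal seen from car 0 and a weaker one from car 2) concentrate the posterior on car 0 under this model.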
Road latent cost, which quantifies how desirable each road is for traveling, is important information for many smart-city applications such as route recommendation. Arguably, vehicle trajectories are a good source from which to learn these costs, as drivers intelligently incorporate them into their routing decisions. However, major past approaches misinterpret drivers' behaviors and suffer from the trajectory sparsity problem, mainly because they adopt an edge-centric perspective that fails to exploit the sequential information in entire trajectories. To address these shortcomings, we model the drivers' routing decision process, which targets global path optimality, and present a framework to reliably discover these costs by exploiting entire trajectories while isolating the influence of heterogeneous destinations. Extensions are also made to address several practical issues. Extensive experiments on real-world data show that the road costs learned in this way significantly outperform past approaches in several urban computing tasks and require less data for learning.
Jiangchuan Zheng, Lionel Ni
Presented on Wednesday September 17th as part of the Cars & Driving session.
Searching for parking spots generates frustration and pollution. To address these parking problems, we present PocketParker, a crowdsourcing system using smartphones to predict parking lot availability. PocketParker is an example of a subset of crowdsourcing we call pocketsourcing. Pocketsourcing applications require no explicit user input or additional infrastructure, running effectively without the phone leaving the user's pocket. PocketParker detects arrivals and departures by leveraging existing activity recognition algorithms. Detected events are used to maintain per-lot availability models and respond to queries. By estimating the number of drivers not using PocketParker, a small fraction of drivers can generate accurate predictions. Our evaluation shows that PocketParker quickly and correctly detects parking events and is robust to the presence of hidden drivers. Camera monitoring of several parking lots as 105 PocketParker users generated 10,827 events over 45 days shows that PocketParker was able to correctly predict lot availability 94% of the time.
Anandatirtha Nandugudi, Taeyeon Ki, Carl Nuessle, Geoffrey Challen
Presented on Wednesday September 17th as part of the Cars & Driving session.
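The key scaling idea, that each observed event stands in for the events of drivers not running the app, can be shown with a toy per-lot model. The class, parameters, and update rule are hypothetical illustrations, not PocketParker's actual implementation:

```python
class LotModel:
    """Toy per-lot availability model: scale observed arrivals and
    departures by the estimated fraction of drivers using the app."""

    def __init__(self, capacity, user_fraction):
        self.capacity = capacity            # total spots in the lot
        self.user_fraction = user_fraction  # est. share of drivers with the app
        self.occupied = 0.0

    def arrival(self):
        # one observed arrival implies ~1/user_fraction actual arrivals
        self.occupied = min(self.capacity,
                            self.occupied + 1.0 / self.user_fraction)

    def departure(self):
        self.occupied = max(0.0, self.occupied - 1.0 / self.user_fraction)

    def available(self):
        return self.occupied < self.capacity
```

With `user_fraction=0.2`, each detected parking event moves the occupancy estimate by five spots, which is how a small fraction of participating drivers can still drive useful predictions.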
Today people have the opportunity to opt in to usage-based automotive insurance for reduced premiums by allowing companies to monitor their driving behavior. Several companies claim to measure only speed data to preserve privacy. With our elastic pathing algorithm, we show that drivers can be tracked by merely collecting their speed data and knowing their home location, which insurance companies do, with an accuracy that constitutes privacy intrusion. To demonstrate the algorithm's real-world applicability, we evaluated its performance with datasets from central New Jersey and Seattle, Washington, representing suburban and urban areas. Our algorithm predicted destinations with error within 250 meters for 14% of traces and within 500 meters for 24% of traces in the New Jersey dataset (254 traces). For the Seattle dataset (691 traces), we similarly predicted destinations with error within 250 and 500 meters for 13% and 26% of the traces respectively. Our work shows that these insurance schemes enable a substantial breach of privacy.
Xianyi Gao, Bernhard Firner, Shridatt Sugrim, Victor Kaiser-Pendergrast, Yulong Yang, Janne Lindqvist
Presented on Wednesday September 17th as part of the Cars & Driving session.
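To see why speed data alone leaks location, note that integrating a speed trace yields the distance driven from a known start. The sketch below ranks hypothetical candidate destinations by route length against that distance; elastic pathing itself additionally matches stop and slow-down patterns to intersections and turns, so this is only a greatly simplified illustration:

```python
def traveled_distance(speeds_mps, dt=1.0):
    # integrate a speed trace (m/s, sampled every dt seconds)
    return sum(v * dt for v in speeds_mps)

def rank_destinations(speeds_mps, candidate_routes, dt=1.0):
    """Rank candidate destinations (name -> route length in meters
    from the known home location) by how closely each route length
    matches the distance implied by the speed trace."""
    d = traveled_distance(speeds_mps, dt)
    return sorted(candidate_routes.items(), key=lambda kv: abs(kv[1] - d))
```

Even this crude distance match narrows the destination set considerably; the paper's richer matching against road geometry is what brings the error down to hundreds of meters.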
We contribute evidence on the extent to which sensor and contextual information available on mobile phones can predict whether a user will pick up a call. Using an app publicly available for Android phones, we logged anonymous data from 31,311 calls of 418 different users. The data shows that information easily available on mobile phones, such as the time since the last call, the time since the last ringer mode change, or the device posture, can predict call availability with an accuracy of 83.2% (Kappa = .646). Personalized models can increase the accuracy to 87% on average. Features related to when the user was last active turned out to be strong predictors. This shows that simple contextual cues approximating user activity are worth investigating when designing context-aware ubiquitous communication systems.
Martin Pielot
Presented on Wednesday September 17th as part of the Interruptability & Notifications session.
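The features named in the abstract can be turned into a simple predictor. The feature definitions and weights below are illustrative assumptions (the study's actual model and feature engineering are described in the paper); a logistic score stands in for whatever classifier is trained on labeled call logs:

```python
import math

def call_features(now, last_call_end, last_ringer_change,
                  proximity_covered, screen_on):
    # all timestamps in seconds; feature definitions are assumptions
    return {
        "mins_since_last_call": (now - last_call_end) / 60.0,
        "mins_since_ringer_change": (now - last_ringer_change) / 60.0,
        "in_pocket": 1.0 if proximity_covered else 0.0,
        "screen_on": 1.0 if screen_on else 0.0,
    }

def predict_pickup(features, weights, bias=0.0):
    # logistic score; weights would be learned from labeled call logs
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A recently active device (screen on, out of pocket) would score high for pickup under any weights that reflect the paper's finding that recent-activity cues are strong predictors.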
The mobile phone represents a unique platform for interactive applications that can harness the opportunity of an immediate contact with a user in order to increase the impact of the delivered information. However, this accessibility does not necessarily translate to reachability, as recipients might refuse an initiated contact or disfavor a message that comes in an inappropriate moment. In this paper we seek to answer whether, and how, suitable moments for interruption can be identified and utilized in a mobile system. We gather and analyze a real-world smartphone data trace and show that users' broader context, including their activity, location, time of day, emotions and engagement, determine different aspects of interruptibility. We then design and implement InterruptMe, an interruption management library for Android smartphones. An extensive experiment shows that, compared to a context-unaware approach, interruptions elicited through our library result in increased user satisfaction and shorter response times.
Veljko Pejovic, Mirco Musolesi
Presented on Wednesday September 17th as part of the Interruptability & Notifications session.
Wearable wireless sensors for health monitoring are enabling the design and delivery of just-in-time interventions (JITI). Critical to the success of JITI is to time its delivery so that the user is available to be engaged. We take a first step in modeling users' availability by analyzing 2,064 hours of physiological sensor data and 2,717 self-reports collected from 30 participants in a week-long field study. We use delay in responding to a prompt to objectively measure availability. We compute 99 features and identify 30 as most discriminating to train a machine learning model for predicting availability. We find that location, affect, activity type, stress, time, and day of the week, play significant roles in predicting availability. We find that users are least available at work and during driving, and most available when walking outside. Our model finally achieves an accuracy of 74.7% in 10-fold cross-validation and 77.9% with leave-one-subject-out.
Hillol Sarker, Moushumi Sharmin, Amin A Ali, Md Mahbubur Rahman, Rummana Bari, Syed Monowar Hossain, Santosh Kumar
Presented on Wednesday September 17th as part of the Interruptability & Notifications session.
Recently, Location-based Services (LBS) have become proactive by supporting smart notifications when the user enters or leaves a specific geographical area, a technique well known as Geofencing. However, different geofences cannot be temporally related to each other. We therefore introduce a novel method to formalize sophisticated Geofencing scenarios as state- and transition-based geofence models. Such a model considers temporal relations between geofences as well as duration constraints on the time spent within a geofence or in transition between geofences. These two aspects are highly important for covering sophisticated scenarios in which a notification should be triggered only when the user crosses multiple geofences in a defined temporal order or leaves a geofence after a certain amount of time. As a proof of concept, we introduce a prototype of a suitable user interface for designing complex geofence models in conjunction with the corresponding proactive LBS.
Sandro Rodriguez Garzon, Bersant Deva
Presented on Wednesday September 17th as part of the Interruptability & Notifications session.
Energy Diet is a design concept for a digital bathroom scale that displays personal health information in the form of body weight alongside environmental health information in the form of carbon weight. We intentionally conflate these two types of feedback in an effort to encourage people to regularly monitor their energy use as they weigh themselves and to reflect on the complex relationships between personal health and environmental health. To inform our design we tested paper prototypes and administered two surveys with 500 participants. We then created a working prototype that we deployed in four participants’ homes for one month each. This paper discusses findings and design implications from our surveys and in-home deployment. Overall, seeing carbon weight together with body weight on a scale helped participants to conceptualize energy consumption and to reflect on a range of daily activities and their environmental impacts.
Pei-Yi Kuo, Michael Stephen Horn
Presented on Tuesday September 16th as part of the Energy & Environment session.
We present an ethnographic study of energy advisors working for a charity that provides support, particularly to people in fuel poverty. Our fieldwork comprises detailed observations that reveal the collaborative, interactional work of energy advisors and clients during home visits, supplemented with interviews and a participatory design workshop with advisors. We identify opportunities for Ubicomp technologies that focus on supporting the work of the advisor, including complementing the collaborative advice giving in home visits, providing help remotely, and producing evidence in support of accounts of practices and building conditions useful for interactions with landlords, authorities and other third parties. We highlight six specific design challenges that relate the domestic fuel poverty setting to the wider Ubicomp literature. Our work echoes a shift in attention from energy use and the individual consumer, specifically to matters of advice work practices and the domestic fuel poverty setting, and to the discourse around inclusive Ubicomp technologies.
Joel E Fischer, Enrico Costanza, Sarvapali D Ramchurn, James A Colley, Tom Rodden
Presented on Tuesday September 16th as part of the Energy & Environment session.
Xuxu Chen, Yu Zheng, Yubiao Chen, Qiwei Jin, Weiwei Sun, Eric Chang, Wei-Ying Ma
Presented on Tuesday September 16th as part of the Energy & Environment session.
Domestic microgeneration is the onsite generation of low- and zero-carbon heat and electricity by private households to meet their own needs. In this paper we explore how an everyday household routine, that of doing laundry, can be augmented by digital technologies to help households with photovoltaic solar energy generation make better use of self-generated energy. This paper presents an 8-month in-the-wild study that involved 18 UK households in longitudinal energy data collection, prototype deployment and participatory data analysis. Through a series of technology interventions mixing energy feedback, proactive suggestions and direct control, the study uncovered opportunities, potential rewards and barriers for families to shift energy-consuming household activities, and highlights how digital technology can act as a mediator between household laundry routines and energy demand-shifting behaviors. Finally, the study provides insights into how a 'smart' energy-aware washing machine shapes the organization of domestic life and how people 'communicate' with their washing machine.
Jacky Bourgeois, Janet van der Linden, Gerd Kortuem, Blaine Price, Christopher Rimmer
Presented on Tuesday September 16th as part of the Energy & Environment session.
Smartphones can collect considerable context data about the user, ranging from apps used to places visited. Frequent user patterns discovered from longitudinal, multi-modal context data could help personalize and improve overall user experience. Our long term goal is to develop novel middleware and algorithms to efficiently mine user behavior patterns entirely on the phone by utilizing idle processor cycles. Mining patterns on the mobile device provides better privacy guarantees to users, and reduces dependency on cloud connectivity. As an important step in this direction, we develop a novel general-purpose service called MobileMiner that runs on the phone and discovers frequent co-occurrence patterns indicating which context events frequently occur together. Using longitudinal context data collected from 106 users over 1-3 months, we show that MobileMiner efficiently generates patterns using limited phone resources. Further, we find interesting behavior patterns for individual users and across users, ranging from calling patterns to place visitation patterns. Finally, we show how our co-occurrence patterns can be used by developers to improve the phone UI for launching apps or calling contacts.
Vijay Srinivasan, Saeed Moghaddam, Abhishek Mukherji, Kiran K. Rachuri, Chenren Xu, Emmanuel Munguia Tapia
Presented on Tuesday September 16th as part of the Data Mining session.
The ubiquity of portable location-aware devices and the popularity of online location-based services have recently given rise to the collection of datasets with high spatial and temporal resolution. Analyzing such data has consequently gained popularity due to the numerous opportunities enabled by understanding the mobility patterns of objects (people and animals, among others). In this paper, we propose a hidden semi-Markov-based model to understand the behavior of mobile entities. The hierarchical state structure in our model allows capturing spatio-temporal associations in the locational history, both at stay-points and on the paths connecting them. We compare the accuracy of our model with a number of existing spatio-temporal models using two real datasets. Furthermore, we perform sensitivity analysis on our model to evaluate its robustness in the presence of common issues in mobility datasets such as noise and missing values. The results of our experiments show the superiority of the proposed scheme compared with the other models.
Mitra Baratchi, Nirvana Meratnia, Paul Havinga, Andrew Skidmore, Bert Toxopeus
Presented on Tuesday September 16th as part of the Data Mining session.
Fitting sensors to humans and physical structures is becoming more and more common. These developments provide many opportunities for ubiquitous computing, as well as challenges for analyzing the resulting sensor data. From these challenges, an underappreciated problem arises: modeling multivariate time series with mixed sampling rates. Although mentioned in several application papers using sensor systems, this problem has been left almost unexplored, often hidden in a preprocessing step or solved manually as a one-pass procedure (feature extraction/construction). This leaves an opportunity to formalize and develop methods that address mixed sampling rates in an automatic fashion. We approach the problem of dealing with multiple sampling rates from an aggregation perspective. We propose Accordion, a new embedded method that constructs and selects aggregate features iteratively, in a memory-conscious fashion. Our algorithms work on both classification and regression problems. We describe three experiments on real-world time series datasets, with satisfying results.
Ricardo Cachucho, Marvin Meeng, Ugo Vespier, Siegfried Nijssen, Arno Knobbe
Presented on Tuesday September 16th as part of the Data Mining session.
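The mixed-sampling-rate problem can be made concrete with a minimal aggregation sketch: align a high-rate series to a low-rate target by aggregating each window of fast samples that falls within one slow-rate step. Accordion itself constructs and selects such aggregate features automatically; this function, its signature, and the integer-multiple restriction are illustrative assumptions:

```python
def aggregate_to_rate(fast_series, fast_rate, slow_rate, agg):
    """Aggregate a high-rate series down to a low-rate one.

    fast_rate and slow_rate are in samples per unit time; this sketch
    requires fast_rate to be an integer multiple of slow_rate. agg is
    any window aggregate (mean, max, std, ...)."""
    assert fast_rate % slow_rate == 0
    window = fast_rate // slow_rate
    return [agg(fast_series[i:i + window])
            for i in range(0, len(fast_series) - window + 1, window)]
```

For example, a 6 Hz signal aggregated by the mean to 2 Hz collapses each block of three samples into one value.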
The newly emerging event-based social networks (EBSNs) connect online and offline social interactions, offering a great opportunity to understand behaviors in the cyber-physical space. While existing efforts have mainly focused on investigating user behaviors in traditional social network services (SNS), this paper aims to exploit individual behaviors in EBSNs, which remains an unsolved problem. In particular, our method predicts activity attendance by discovering a set of factors that connect the physical and cyber spaces and influence an individual's attendance of activities in EBSNs. These factors, including content preference, context (spatial and temporal) and social influence, are extracted using different models and techniques. We further propose a novel Singular Value Decomposition with Multi-Factor Neighborhood (SVD-MFN) algorithm to predict activity attendance by integrating the discovered heterogeneous factors into a single framework, in which these factors are fused through a neighborhood set. Experiments based on real-world data from Douban Events demonstrate that the proposed SVD-MFN algorithm outperforms the state-of-the-art prediction methods.
Rong Du, Zhiwen Yu, Tao Mei, Zhitao Wang, Zhu Wang, Bin Guo
Presented on Tuesday September 16th as part of the Data Mining session.
Zhijia Zhao, Mingzhou Zhou, Xipeng Shen
Presented on Monday September 15th as part of the Mobile Performance session.
The battery life of mobile devices is one of their most important resources. Much of the literature focuses on accurately profiling the power consumption of device components or enabling application developers to develop energy-efficient applications through fine-grained power profiling. However, there is a lack of tools to enable users to extend battery life on demand. What can users do if they need their device to last for a specific duration in order to perform a specific task? To this end, we developed BatteryExtender, a user-guided power management tool that enables the reconfiguration of the device's resources based on the workload requirement, similar to the principle of creating virtual machines in the cloud. It predicts the battery life savings of the new configuration, in addition to predicting the impact of running applications on battery life. In our experimental analysis, BatteryExtender decreased energy consumption by between 10.03% and 20.21%, and in rare cases by up to 72.83%. The accuracy rate ranged between 92.37% and 99.72%.
Grace Metri, Weisong Shi, Monica Brockmeyer, Abhishek Agrawal
Presented on Monday September 15th as part of the Mobile Performance session.
Wonwoo Jung, Yohan Chon, Dongwon Kim, Hojung Cha
Presented on Monday September 15th as part of the Mobile Performance session.
It is clear today that mobile video is a major traffic source and that online advertising is a steadily growing business. These trends are leading towards mobile video advertising becoming ubiquitous. We make two contributions towards better understanding mobile video ads and how their impact on mobile device resources can be minimized. We perform the first characterization of a well-defined set of mobile video ads on YouTube, the largest online video service. We then use our findings to design a video ad caching system for smartphones, aiming at minimizing the number of ad downloads to relieve mobile devices from the extra overhead induced by the ever increasing amount of ads. Our trace-driven simulations show that our caching system can save up to 50% data transfer.
Maria Carpen Amarie, Ioannis Pefkianakis, Henrik Lundgren
Presented on Monday September 15th as part of the Mobile Performance session.
Nonverbal children with communication disorders have difficulties communicating through oral language. To facilitate communication, Augmentative and Alternative Communication (AAC) is commonly used in intervention settings. Different forms of AAC have been used; however, one key aspect of AAC is that children have different preferences and needs in the intervention process. One particular AAC method does not necessarily work for all children. Although robots have been used in different applications, this is one of the first times robots have been used to improve communication in nonverbal children. In this work, we explore robot-based AAC through humanoid robots that assist therapists in interventions with nonverbal children. Through playing activities, our study assessed changes in gestures, vocalization, speech, and verbal expression in children. Our initial results show that robot-based AAC intervention has a positive impact on the communication skills of nonverbal children.
Kyunghea Jeon, Seok Jeong Yeon, Young Tae Kim, SeokWoo Song, John Kim
Presented on Wednesday September 17th as part of the Children's Therapy session.
This paper extends previous work on automatically detecting stereotypical motor movements (SMM) in individuals on the autism spectrum. Using three-axis accelerometer data obtained through wearable wireless sensors, we compare recognition results for two different classifiers, Support Vector Machine and Decision Tree, in combination with different feature sets based on time-frequency characteristics of accelerometer data. We use data collected from six individuals on the autism spectrum who participated in two different studies conducted three years apart in classroom settings, and observe an average accuracy across all participants over time ranging from 81.2% (TPR: 0.91; FPR: 0.21) to 99.1% (TPR: 0.99; FPR: 0.01) for all combinations of classifiers and feature sets. We also provide analyses of kinematic parameters associated with observed movements in an attempt to explain classifier-feature-specific performance. Based on our results, we conclude that real-time, person-dependent, adaptive algorithms are needed in order to accurately and consistently measure SMM automatically in individuals on the autism spectrum over time in real-world settings.
Matthew S Goodwin, Marzieh Haghighi, Qu Tang, Murat Akcakaya, Deniz Erdogmus, Stephen S Intille
Presented on Wednesday September 17th as part of the Children's Therapy session.
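Accelerometer-based SMM detection starts from features computed over short windows of the signal. The sketch below shows a few cheap time-domain features as an illustration only; the paper's feature sets are richer time-frequency characteristics, and these particular features and names are assumptions:

```python
import math

def smm_features(window):
    """Illustrative features for a window of accelerometer magnitudes:
    mean, standard deviation, and zero-crossing rate around the mean
    (a cheap proxy for the periodicity of repetitive movements)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    crossings = sum(1 for a, b in zip(window, window[1:])
                    if (a - mean) * (b - mean) < 0)
    return {"mean": mean, "std": math.sqrt(var),
            "zcr": crossings / (n - 1)}
```

Feature vectors like this one would then be fed to a classifier such as the SVM or Decision Tree the paper compares.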
Multimodal and natural user interfaces offer an innovative approach to sensory integration therapies. We designed and developed SensoryPaint, a multimodal system that allows users to paint on a large display using physical objects, body-based interactions, and interactive audio. We evaluated the impact of SensoryPaint through two user studies: a lab-based study of 15 children with neurodevelopmental disorders in which they used the system for up to one hour, and a deployment study with four children with autism, during which the system was integrated into existing daily sensory therapy sessions. Our results demonstrate that a multimodal large display, using whole body interactions combined with tangible interactions and interactive audio feedback, balances children's attention between their own bodies and sensory stimuli, augments existing therapies, and promotes socialization. These results offer implications for the design of other ubicomp systems for children with neurodevelopmental disorders and for their integration into therapeutic interventions.
Kathryn E Ringland, Rodrigo Zalapa, Megan Neal, Lizbeth Escobedo, Monica Tentori, Gillian R Hayes
Presented on Wednesday September 17th as part of the Children's Therapy session.
Situated displays can support behavior management for children with behavioral challenges. However, existing tools are often static, rarely engaging, and tend to focus only on individual behavior. In this work, we designed and deployed a situated display to support teamwork and cooperation in children with behavioral challenges. We evaluated this tool in two classrooms of a public school specializing in behavioral interventions with 28 children over four weeks. The results of this work demonstrate that situated displays focused on collective behavioral performance can support reflection on individual performance, improve behavior for students with behavioral challenges, as well as encourage teamwork and cooperative behavior in classrooms. These results also indicate a variety of issues to be considered when designing situated displays for these environments, including considerations for the representation of ambiguity and failure as well as the relationship between novelty and engagement.
Aleksandar Matic, Gillian R Hayes, Monica Tentori, Maryam Abdullah, Sabrina Schuck
Presented on Wednesday September 17th as part of the Children's Therapy session.
The recent emergence of comfortable wearable sensors has focused almost entirely on monitoring physical activity, ignoring opportunities to monitor more subtle phenomena, such as the quality of social interactions. We argue that it is compelling to address whether physiological sensors can shed light on quality of social interactive behavior. This work leverages the use of a wearable electrodermal activity (EDA) sensor to recognize ease of engagement of children during a social interaction with an adult. In particular, we monitored 51 child-adult dyads in a semi-structured play interaction and used Support Vector Machines to automatically identify children who had been rated by the adult as more or less difficult to engage. We report on the classification value of several features extracted from the child's EDA responses, as well as several other features capturing the physiological synchrony between the child and the adult.
Javier Hernandez, Ivan Riobo, Agata Rozga, Gregory D. Abowd, Rosalind W. Picard
Presented on Tuesday September 16th as part of the Health & Children session.
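The classification setup described above — an SVM trained on EDA-derived features to separate easy- from hard-to-engage dyads — can be sketched roughly as follows. This is a minimal illustration, not the authors' pipeline: the feature names (mean skin conductance level, SCR rate, child-adult synchrony) and all data are invented, and scikit-learn's `SVC` is assumed as the SVM implementation.

```python
# Hypothetical sketch of SVM classification on EDA features.
# All features and data below are fabricated for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 51  # number of child-adult dyads in the study

# Invented per-dyad features: mean skin conductance level,
# skin conductance responses per minute, child-adult synchrony score.
easy = rng.normal([0.4, 3.0, 0.5], 0.15, size=(n // 2, 3))
hard = rng.normal([0.6, 5.0, 0.2], 0.15, size=(n - n // 2, 3))
X = np.vstack([easy, hard])
y = np.array([0] * (n // 2) + [1] * (n - n // 2))  # 0 = easy, 1 = hard

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

On real EDA data the interesting work is in the feature extraction and in handling far noisier class boundaries than this toy setup suggests.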
In this work, we present ChildSafe, a classification system which exploits human skeletal features collected using a 3D depth camera to distinguish the visual characteristics of children from those of adults. ChildSafe analyzes the histograms of training samples and implements a bin boundary-based classifier. We train and evaluate ChildSafe using a large dataset of visual samples collected from 150 elementary school children and 43 adults, ranging in age from 7 to 50. Our results suggest that ChildSafe successfully detects children with a correct classification rate of up to 97%, a false negative rate as low as 1.82%, and a low false positive rate of 1.46%. We envision this work as an effective sub-system for designing various child protection applications.
Can Basaran, Hee Jung Yoon, Ho-Kyeong Ra, Taejoon Park, Sang Hyuk Son, JeongGil Ko
Presented on Tuesday September 16th as part of the Health & Children session.
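The bin boundary-based classifier mentioned above can be illustrated with a simple sketch: histogram the training samples of a skeletal feature for each class, pick the bin edge that best separates them, and threshold new samples against it. The specific feature (torso length) and all data here are invented; the real system works over multiple skeletal features from a depth camera.

```python
# Hypothetical sketch of a histogram bin-boundary classifier,
# in the spirit of ChildSafe. Feature and data are invented.
import numpy as np

rng = np.random.default_rng(1)
child_train = rng.normal(0.45, 0.05, 150)  # invented torso lengths (m), children
adult_train = rng.normal(0.62, 0.05, 43)   # invented torso lengths (m), adults

def best_boundary(child, adult, bins=32):
    """Pick the histogram bin edge minimising training misclassifications."""
    edges = np.histogram_bin_edges(np.concatenate([child, adult]), bins=bins)
    errors = [np.sum(child >= e) + np.sum(adult < e) for e in edges]
    return edges[int(np.argmin(errors))]

threshold = best_boundary(child_train, adult_train)

def is_child(sample):
    # Classify a new measurement by the learned bin boundary.
    return sample < threshold

print(f"threshold: {threshold:.3f}")
```

A per-feature boundary like this is cheap to learn and evaluate, which suits a real-time sub-system; the paper's reported error rates come from combining evidence across many skeletal features, not a single threshold.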
This paper describes the design of a digital fork and a mobile interactive, persuasive game for young children who are picky eaters and/or easily distracted during mealtime. The system employs Ubicomp technology to educate children on the importance of a balanced diet while motivating proper eating behavior. To sense a child's eating behavior, we designed and prototyped a sensor-embedded digital fork, called the Sensing Fork. Furthermore, we developed a story-book and persuasive game, called Hungry Panda, on a smartphone. The game capitalizes on the capabilities of the Sensing Fork to interact with and modify children's eating behavior during mealtime. We report the results of a real-life study that involved mother-child pairs and tested the effectiveness of the Sensing Fork and the Hungry Panda game in addressing children's eating problems. Our findings show positive effects on changing children's eating behavior.
Azusa Kadomura, Cheng-Yuan Li, Koji Tsukada, Hao-Hua Chu, Itiro Siio
Presented on Tuesday September 16th as part of the Health & Children session.
Health sensing through smartphones has received considerable attention in recent years because of the devices’ ubiquity and promise to lower the barrier for tracking medical conditions. In this paper, we focus on using smartphones to monitor newborn jaundice, which manifests as a yellow discoloration of the skin. Although a degree of jaundice is common in healthy newborns, early detection of extreme jaundice is essential to prevent permanent brain damage or death. Current detection techniques, however, require clinical tests with blood samples or other specialized equipment. Consequently, parents and caregivers often depend on visual assessment of the newborn's skin color at home, which is known to be unreliable. To this end, we present BiliCam, a low-cost system that uses smartphone cameras to assess newborn jaundice. We evaluated BiliCam on 100 newborns, yielding a 0.85 rank order correlation with the gold standard blood test. We also discuss usability challenges and design solutions to make the system practical.
Lilian de Greef, Mayank Goel, Min Joon Seo, Eric C. Larson, James W. Stout, James A. Taylor, Shwetak N. Patel
Presented on Tuesday September 16th as part of the Health & Children session.
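The 0.85 figure quoted above is a rank-order (Spearman) correlation between camera-based bilirubin estimates and the blood-test gold standard: it measures agreement in ordering rather than in absolute values. The computation can be sketched as below; the bilirubin readings are fabricated solely to show the mechanics (with no ties, Spearman's rho is the Pearson correlation of the ranks).

```python
# Illustrative sketch: Spearman rank-order correlation between two
# measurement series. All data values are invented.
import numpy as np

def rank(a):
    """Ranks 1..n of the values in a (assumes no ties)."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    return ranks

def spearman(x, y):
    rx, ry = rank(np.asarray(x)), rank(np.asarray(y))
    return np.corrcoef(rx, ry)[0, 1]

blood = [5.1, 7.8, 12.4, 3.2, 9.9, 15.0]     # mg/dL, invented gold standard
camera = [5.6, 10.5, 11.8, 4.0, 7.1, 13.9]   # invented camera estimates

print(f"Spearman rho: {spearman(blood, camera):.2f}")
```

Rank correlation is a natural choice here because a screening tool mainly needs to order newborns correctly (flagging the highest-bilirubin cases), even if its absolute estimates are biased.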