Below is an outline of the program for Ubicomp 2014. This year's conference is multitrack and co-located with ISWC 2014.

Links to the papers in the ACM Digital Library are included below for Ubicomp submissions; the ISWC papers can also be found in the Digital Library. Adjunct proceedings (posters, demos, videos, keynotes) for both Ubicomp and ISWC 2014 can be found here.

Click on any session to view more information about its content. To view the entire program in a printer-friendly page, click here.

  • Ubicomp Conference Session
  • Combined Ubicomp & ISWC Session
  • ISWC Conference Session
  • Best Paper Nominee
  • Best Paper


Seattle 1 & 2
Seattle 3
Emerald 2
08:00
Registration / Help desk opens
09:00-10:30

Making Space Suits

Making Space Suits
Amy Ross

Click here for more about this keynote.

Note that the keynote will take place in Seattle 1, 2 and 3.

10:30-11:00
11:00-12:30 11am
Activity and Group Interactions
Mobile Performance
UbiComp and Design

Activity and Group Interactions

Hide Tokuda
Group Activity Recognition using Belief Propagation for Mobile Devices
Humans are social beings and spend most of their time in groups. Group behavior is emergent, generated by members’ personal characteristics and their interactions. It is therefore difficult to recognize in peer-to-peer (P2P) systems where the emergent behavior itself cannot be directly observed. We introduce 2 novel algorithms for distributed probabilistic inference (DPI) of group activities using loopy belief propagation (LBP). We evaluate their performance using an experiment in which 10 individuals play 6 team sports and show that these activities are emergent in nature through natural processes. Centralized recognition performs very well, upwards of an F-score of 0.95 for large window sizes. The distributed methods iteratively converge to solutions which are comparable to centralized methods. DPI-LBP also reduces energy consumption by a factor of 7 to 40, where a centralized unit or infrastructure is not required.
Dawud Gordon, Markus Scholz, Michael Beigl
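The paper's distributed LBP algorithms are not spelled out in the abstract; as a rough illustration of the underlying idea, the sketch below runs sum-product loopy belief propagation on a small pairwise graph, where each person has a local evidence distribution over activities and neighboring group members are encouraged to share the same activity label. All names and the simple same-state potential are hypothetical, not the authors' formulation.

```python
def belief_propagation(evidence, edges, affinity=0.8, iters=10):
    """Loopy belief propagation over a pairwise graph.

    evidence: {node: [p_state0, p_state1, ...]} local observation likelihoods
    edges: list of (u, v) undirected edges between group members
    affinity: pairwise potential favoring identical states of neighbors
    Returns normalized beliefs per node.
    """
    states = len(next(iter(evidence.values())))
    # pairwise potential: high when neighbors share a state
    psi = [[affinity if i == j else (1 - affinity) / (states - 1)
            for j in range(states)] for i in range(states)]
    # messages msgs[(u, v)][s]: what u tells v about v's state s
    msgs = {}
    for u, v in edges:
        msgs[(u, v)] = [1.0 / states] * states
        msgs[(v, u)] = [1.0 / states] * states
    neighbors = {n: [] for n in evidence}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    for _ in range(iters):
        new = {}
        for (u, v) in msgs:
            out = []
            for sv in range(states):
                total = 0.0
                for su in range(states):
                    prod = evidence[u][su] * psi[su][sv]
                    for w in neighbors[u]:
                        if w != v:
                            prod *= msgs[(w, u)][su]
                    total += prod
                out.append(total)
            z = sum(out)
            new[(u, v)] = [x / z for x in out]
        msgs = new
    beliefs = {}
    for n in evidence:
        b = list(evidence[n])
        for w in neighbors[n]:
            b = [bi * mi for bi, mi in zip(b, msgs[(w, n)])]
        z = sum(b)
        beliefs[n] = [x / z for x in b]
    return beliefs
```

With two confident neighbors and one ambiguous member, the ambiguous member's belief is pulled toward the group's activity, which is the effect a distributed group-activity recognizer exploits.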
Much of the stress and strain of student life remains hidden. The StudentLife continuous sensing app assesses the day-today and week-by-week impact of workload on stress, sleep, activity, mood, sociability, mental well-being and academic performance of a single class of 48 students across a 10 week term at Dartmouth College using Android phones. Results from the StudentLife study show a number of significant correlations between the automatic objective sensor data from smartphones and mental health and educational outcomes of the student body. We also identify a Dartmouth term lifecycle in the data that shows students start the term with high positive affect and conversation levels, low stress, and healthy sleep and daily activity patterns. As the term progresses and the workload increases, stress appreciably rises while positive affect, sleep, conversation and activity drops off. The StudentLife dataset is publicly available on the web.
Rui Wang, Fanglin Chen, Zhenyu Chen, Tianxing Li, Gabriella Harari, Stefanie Tignor, Xia Zhou, Dror Ben-Zeev, Andrew Campbell
Accommodating User Diversity for In-Store Shopping Behavior Recognition
This paper explores the possibility of using mobile sensing data to detect certain in-store shopping intentions or behaviours of shoppers. We propose a person-independent activity recognition technique called CROSDAC, which captures the diversity in the manifestation of such intentions or behaviours in a heterogeneous set of users in a data-driven manner via a 2-stage clustering-cum-classification technique. Using smartphone-based sensor data (accelerometer, compass and Wi-Fi) from a directed, but real-life study involving 86 shopping episodes from 30 users in a mall’s food court, we show that CROSDAC’s mobile sensing-based approach can offer reasonably high accuracy (77.6% for a 2-class identification problem) and outperforms the traditional community-driven approaches that unquestioningly segment users on the basis of underlying demographic or lifestyle attributes.
Sougata Sen, Dipanjan Chakraborty, Vigneshwaran Subbaraju, Dipyaman Banerjee, Archan Misra, Nilanjan Banerjee, Sumit Mittal
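CROSDAC's actual two-stage clustering-cum-classification pipeline is not detailed in the abstract; a minimal stand-in for the general pattern is sketched below, assuming one scalar feature per user: stage 1 clusters users with a tiny 1-D k-means, stage 2 attaches a majority behaviour label to each cluster, and prediction assigns a new user to the nearest centroid's label. All function names and the single-feature simplification are hypothetical.

```python
import random
from collections import Counter

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny 1-D k-means: returns centroids and cluster index per value."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assign

def train_cluster_classifier(features, labels, k=2):
    """Stage 1: cluster users; stage 2: majority label per cluster."""
    centroids, assign = kmeans_1d(features, k)
    majority = {}
    for c in range(k):
        labs = [l for l, a in zip(labels, assign) if a == c]
        majority[c] = Counter(labs).most_common(1)[0][0] if labs else None
    def predict(x):
        c = min(range(k), key=lambda c: abs(x - centroids[c]))
        return majority[c]
    return predict
```

The data-driven grouping is the point: clusters come from observed behaviour rather than from demographic attributes fixed in advance.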
Detecting Smoothness of Pedestrian Flows by Participatory Sensing with Mobile Phones
In this paper, we propose a novel system for estimating crowd density and smoothness of pedestrian flows in public space by participatory sensing with mobile phones. By analyzing walking motion of the pedestrians and ambient sound in the environment that can be monitored by accelerometers and microphones in off-the-shelf smartphones, our system classifies the current situation at each area into four categories that well represent the crowd behavior. Through field experiments using Android smartphones, we show that our system can recognize the current situation with accuracy of 60-78%.
Tomohiro Nishimura, Takamasa Higuchi, Hirozumi Yamaguchi, Teruo Higashino
Interpersonal touch is our most primitive social language, strongly governing our emotional well-being. Despite the positive implications of touch in many facets of our daily social interactions, we find wide-spread caution and taboo limiting touch-based interactions in workplace relationships that constitute a significant part of our daily social life. In this paper, we explore new opportunities for ubicomp technology to promote a new meme of casual and cheerful interpersonal touch such as high-fives towards facilitating vibrant workplace culture. Specifically, we propose High5, a mobile service with a smartwatch-style system to promote high-fives in everyday workplace interactions. We first present initial user motivation from semi-structured interviews regarding the potentially controversial idea of High5. We then present our smartwatch-style prototype to detect high-fives based on sensing electric skin potential levels, and report its key technical observations and a performance evaluation.
Yuhwan Kim, Seungchul Lee, Inseok Hwang, Hyunho Ro, Youngki Lee, Miri Moon, Junehwa Song

Mobile Performance

David Chu
The battery life of mobile devices is one of their most important resources. Much of the literature focuses on accurately profiling the power consumption of device components or enabling application developers to develop energy-efficient applications through fine-grained power profiling. However, there is a lack of tools to enable users to extend battery life on demand. What can users do if they need their device to last for a specific duration in order to perform a specific task? To this end, we developed BatteryExtender, a user-guided power management tool that enables the reconfiguration of the device’s resources based on the workload requirement, similar to the principle of creating virtual machines in the cloud. It predicts the battery life savings based on the new configuration, in addition to predicting the impact of running applications on the battery life. In our experimental analysis, BatteryExtender decreased the energy consumption between 10.03% and 20.21%, and in rare cases by up to 72.83%. The accuracy rate ranged between 92.37% and 99.72%.
Grace Metri, Weisong Shi, Monica Brockmeyer, Abhishek Agrawal
Wonwoo Jung, Yohan Chon, Dongwon Kim, Hojung Cha
It is clear today that mobile video is a major traffic source and that online advertising is a steadily growing business. These trends are leading towards mobile video advertising becoming ubiquitous. We make two contributions towards better understanding mobile video ads and how their impact on mobile device resources can be minimized. We perform the first characterization of a well-defined set of mobile video ads on YouTube, the largest online video service. We then use our findings to design a video ad caching system for smartphones, aiming at minimizing the number of ad downloads to relieve mobile devices from the extra overhead induced by the ever increasing amount of ads. Our trace-driven simulations show that our caching system can save up to 50% data transfer.
Maria Carpen Amarie, Ioannis Pefkianakis, Henrik Lundgren
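The paper's ad caching design is not described here beyond its goal of reducing downloads; as a generic illustration of how such savings are measured in trace-driven simulation, the sketch below replays a trace of ad identifiers through a plain LRU cache and reports the hit rate, i.e., the fraction of ad downloads avoided. The LRU policy and function names are assumptions, not the authors' system.

```python
from collections import OrderedDict

def simulate_cache(trace, capacity):
    """Replay a trace of ad IDs through an LRU cache; return hit rate.

    A hit means the ad need not be downloaded again, so the hit rate
    approximates the fraction of ad data transfer saved.
    """
    cache = OrderedDict()
    hits = 0
    for ad in trace:
        if ad in cache:
            hits += 1
            cache.move_to_end(ad)          # mark as recently used
        else:
            cache[ad] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)
```

On a trace where a few popular ads repeat often, even a small cache yields a substantial hit rate, which is the intuition behind the reported data-transfer savings.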

UbiComp and Design

Gregory Abowd
Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily life. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Secondly, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Thirdly, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users’ emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.
Yea Kyung Row, Tek Jin Nam
The rapid growth of the Ubicomp field has recently raised concerns regarding its identity. These concerns have been compounded by the fact that there exists a lack of empirical evidence on how the field has evolved until today. In this study we applied co-word analysis to examine the status of Ubicomp research. We constructed the intellectual map of the field as reflected by 6858 keywords extracted from 1636 papers published in the HUC, UbiComp and Pervasive conferences during 1999-2013. Based on the results of a correspondence analysis we identify two major periods in the whole corpus: 1999-2007 and 2008-2013. We then examine the evolution of the field by applying graph theory and social network analysis methods to each period. We found that Ubicomp is increasingly focusing on mobile devices, and has in fact become more cohesive in the past 15 years. Our findings refute the assertion that Ubicomp research is now suffering an identity crisis.
Yong Liu, Jorge Goncalves, Denzil Ferreira, Simo Hosio, Vassilis Kostakos
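The core operation of co-word analysis is building a keyword co-occurrence network from the papers' keyword lists; a minimal sketch of that step is below (the paper's full method additionally applies correspondence analysis and social network metrics, which are omitted here). Function and variable names are illustrative.

```python
from collections import Counter
from itertools import combinations

def coword_network(papers):
    """Build a keyword co-occurrence network from per-paper keyword lists.

    Returns (edge_weights, degree): edge_weights counts how many papers
    mention both keywords; degree sums a keyword's link weights, a rough
    measure of how central it is to the field.
    """
    edges = Counter()
    degree = Counter()
    for keywords in papers:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
            degree[a] += 1
            degree[b] += 1
    return edges, degree
```

Run over the 6858 keywords of the corpus, heavily weighted nodes and edges reveal the themes (such as mobile devices) whose growing centrality the authors report.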
Recent years have seen an increased research interest in multi-device interactions and digital ecosystems. This research addresses new opportunities and challenges when users are not simply interacting with one system or device at a time, but orchestrate ensembles of them as a larger whole. One of these challenges is to understand what principles of interaction work well for what, and to create such knowledge in a form that can inform design. Our contribution to this research is a framework of interaction principles for digital ecosystems, which can be used to analyze and understand existing systems and design new ones. The 4C framework provides new insights over existing frameworks and theory by focusing specifically on explaining the interactions taking place within digital ecosystems. We demonstrate this value through two examples of the framework in use, firstly for understanding an existing digital ecosystem, and secondly for generating ideas and discussion when designing a new one.
Henrik Sørensen, Dimitrios Raptis, Jesper Kjeldskov, Mikael B. Skov
This research builds on the UbiComp vision of systems that do not do things for people but engage people in their computational environment so that people can do things for themselves better. In this investigation, we sought to make good on a proof-of-concept where people interact with a social robot whereby the robot helps people to be more humanly creative. Twenty seven participants interacted with ATR’s humanoid robot Robovie (through a WoZ interface) in a creativity task. Results supported our proof of concept insofar as 100% of the participants generated creative ideas, and 63% incorporated the robot’s ideas into their own ideas for their creative output. Of the participants who had the highest creativity scores, 83% incorporated the robot’s ideas into their own. Discussion focuses on next steps toward building the Natural Language Processing system, and integrating the system into a more extensive networked UbiComp environment.
Peter H Kahn, Jr., Takayuki Kanda, Hiroshi Ishiguro, Solace Shen, Heather E Gary, Jolina H Ruckert
12:30-14:00
14:00-15:30 2pm
In the Home
Contextual Awareness on Mobile Devices
Indoor Location

In the Home

Shwetak Patel
A considerable amount of research has been carried out towards making long-standing smart home visions technically feasible. The technologically augmented homes made possible by this work are starting to become reality, but thus far living in and interacting with such homes has introduced significant complexity while offering limited benefit. As these technologies are increasingly adopted, the knowledge we gain from their use suggests a need to revisit the opportunities and challenges they pose. Synthesizing a broad body of research on smart homes with observations of industry and experiences from our own empirical work, we provide a discussion of ongoing and emerging challenges, namely challenges for meaningful technologies, complex domestic spaces, and human-home collaboration. Within each of these three challenges we discuss our visions for future smart homes and identify promising directions for the field.
Sarah Mennicken, Jo Vermeulen, Elaine M. Huang
Whilst the ubicomp community has successfully embraced a number of societal challenges for human benefit, including healthcare and sustainability, the well-being of other animals is hitherto underrepresented. We argue that ubicomp technologies, including sensing and monitoring devices as well as tangible and embodied interfaces, could make a valuable contribution to animal welfare. This paper particularly focuses on dogs in kenneled accommodation, as we investigate the opportunities and challenges for a smart kennel aiming to foster canine welfare. We conducted an in-depth ethnographic study of a dog rehoming center over four months; based on our findings, we propose a welfare-centered framework for designing smart environments, integrating monitoring and interaction with information management. We discuss the methodological issues we encountered during the research and propose a smart ethnographic approach for similar projects.
Clara Mancini, Janet van der Linden, Gerd Kortuem, Guy Dewsbury, Daniel Mills, Paula Boyden
We investigated how household deployment of Internet-connected locks and security cameras could impact teenagers' privacy. In interviews with 13 teenagers and 11 parents, we investigated reactions to audit logs of family members' comings and goings. All parents wanted audit logs with photographs, whereas most teenagers preferred text-only logs or no logs at all. We unpack these attitudes by examining participants' parenting philosophies, concerns, and current monitoring practices. In a follow-up online study, 19 parents configured an Internet-connected lock and camera system they thought might be deployed in their home. All 19 participants chose to monitor their children either through unrestricted access to logs or through real-time notifications of access. We discuss directions for auditing interfaces that could improve home security without impacting privacy.
Blase Ur, Jaeyeon Jung, Stuart Schechter
We demonstrate that a cheap (30 USD), small, low-power 8x8 thermal sensor array can by itself provide a broad range of information relevant for human activity monitoring in home and office environments. In particular, the sensor can track people with an accuracy in the range of 1 m (which is sufficient to recognize activity-relevant regions), detect the operation mode of various appliances such as a toaster, water cooker or egg cooker, and detect actions such as opening a refrigerator or the oven, or taking a shower. While there are sensing modalities for each of the above types of information (e.g. current sensors for appliances), the fact that they can all be detected by such a simple sensor is highly relevant for practical activity recognition systems. Compared to vision (or thermal imaging) systems, the sensor has the advantage of being less privacy-invasive, allowing it, for example, to monitor bathroom activities (as shown in one of our evaluation scenarios). The paper describes the sensor, the methods used for activity detection, and the evaluation.
Peter Hevesi, Sebastian Wille, Gerald Pirkl, Norbert Wehn, Paul Lukowicz
Exploring Interactive Furniture with EmotoCouch
People respond emotionally to other people, animals, or even objects like furniture. While current furniture is static in appearance, embedded electronics can enable furniture to change its appearance. A couch could show excitement during a party or anger when a pet scratches it. But would emotional furniture delight or annoy people? To explore the potential for emotional furniture, we built EmotoCouch. Through colored light, visual patterns, and haptic feedback, EmotoCouch expresses six emotional states: Excited, Happy, Calm, Depressed/Sad, Afraid, and Angry. This video describes the construction of EmotoCouch, presents feedback gathered through surveys and user interviews, and shows example usage situations.
Sarah Mennicken, A.J. Brush, Asta Roseway, James Scott

Contextual Awareness on Mobile Devices

Daniel Ashbrook
Group Affiliation Detection Using Model Divergence for Wearable Devices
Methods for recognizing group affiliations using mobile devices have been proposed using centralized instances to aggregate and evaluate data. However, centralized systems do not scale well and fail when the network is congested. We present a method for distributed, peer-to-peer (P2P) recognition of group affiliations in multi-group environments, using the divergence of mobile phone sensor data distributions as an indicator of similarity. The method assesses pairwise similarity between individuals using model parameters instead of sensor observations, and then interprets that information in a distributed manner. An experiment was conducted with 10 individuals in different group configurations to compare P2P and conventional centralized approaches. Although the output of the proposed method fluctuates, we can still correctly detect 93% of group affiliations by applying a filter. We foresee applications in mobile social networking, life logging, smart environments, crowd situations and possibly crowd emergencies.
Dawud Gordon, Martin Wirz, Daniel Roggen, Gerhard Tröster, Michael Beigl
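The abstract's key idea, comparing model parameters rather than raw sensor streams, can be sketched with a simple instance: fit a univariate Gaussian to each person's sensor feature stream and use the symmetrized KL divergence between the fitted models as a dissimilarity score, with a small divergence suggesting a shared group activity. The Gaussian model choice and function names are assumptions for illustration.

```python
import math

def gaussian_params(samples):
    """Fit mean and variance to a 1-D sensor feature stream."""
    mu = sum(samples) / len(samples)
    var = sum((x - mu) ** 2 for x in samples) / len(samples)
    return mu, max(var, 1e-9)   # floor variance for numerical safety

def kl_gaussian(p, q):
    """KL(p || q) between two univariate Gaussians given as (mu, var)."""
    (mu_p, var_p), (mu_q, var_q) = p, q
    return (math.log(math.sqrt(var_q / var_p))
            + (var_p + (mu_p - mu_q) ** 2) / (2 * var_q) - 0.5)

def model_divergence(samples_a, samples_b):
    """Symmetrized KL divergence between the two fitted models.

    Exchanging only (mu, var) between devices keeps the P2P traffic
    tiny compared to exchanging raw sensor observations.
    """
    a, b = gaussian_params(samples_a), gaussian_params(samples_b)
    return kl_gaussian(a, b) + kl_gaussian(b, a)
```

Two people walking together produce similar feature distributions and hence low divergence; a walker and a person standing still diverge strongly.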
Public Restroom Detection on Mobile Phone via Active Probing
Although there are clear benefits to automatic image capture services by wearable devices, image capture sometimes happens in sensitive spaces where camera use is not appropriate. In this paper, we tackle this problem by focusing on detecting when the user of a wearable device is located in a specific type of private space—the public restroom—so that the image capture can be disabled. We present an infrastructure-independent method that uses just the microphone and the speaker on a commodity mobile phone. Our method actively probes the environment by playing a 0.1-second sine wave sweep sound and then analyzes the impulse response (IR) by extracting MFCC features. These features are then used to train an SVM model. Our evaluation results show that we can train a general restroom model which is able to recognize new restrooms. We demonstrate that this approach works on different phone hardware. Furthermore, the volume levels, occupancy and presence of other sounds do not affect recognition in significant ways. We discuss three types of errors that the prediction model has and evaluate two proposed smoothing algorithms for improving recognition.
Mingming Fan, Alexander Adams, Khai Truong
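The active-probing pipeline can be illustrated in simplified form: emit a short sine sweep, then summarize the recorded response by its energy in a few frequency bands (here via the Goertzel algorithm). The paper itself extracts MFCC features and trains an SVM; the band-energy fingerprint below is a deliberately simpler stand-in, and all names are hypothetical.

```python
import math

def sine_sweep(f0, f1, duration=0.1, rate=44100):
    """Linear sine sweep from f0 to f1 Hz, as a list of samples."""
    n = int(duration * rate)
    out = []
    for i in range(n):
        t = i / rate
        # instantaneous phase of a linear chirp
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * duration))
        out.append(math.sin(phase))
    return out

def goertzel_power(samples, freq, rate=44100):
    """Signal power near `freq` using the Goertzel algorithm."""
    w = 2 * math.pi * freq / rate
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def band_features(samples, freqs, rate=44100):
    """Per-band energies: a crude acoustic fingerprint of a room response."""
    return [goertzel_power(samples, f, rate) for f in freqs]
```

Rooms with hard tiled surfaces, like restrooms, reflect the probe differently across bands than carpeted spaces, which is what the learned model picks up on.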
Exploiting Usage Statistics for Energy-efficient Logical Status Inference on Mobile Phones
Logical statuses of mobile users, such as isBusy and isAlone, are the key enabler for a plethora of context-aware mobile applications. While on-board hardware sensors, such as motion, proximity, and location sensors, have been extensively studied for logical status inference, their continuous usage incurs formidable energy consumption and therefore user experience degradation. In this paper, we argue that smartphone usage statistics can be used for logical status inference with negligible energy cost. To validate this argument, this paper presents a continuous inference engine that (1) intercepts multiple operating system events, in particular foreground app, notifications, screen states, and connected networks; (2) extracts informative features from OS events; and (3) efficiently infers the logical status of mobile users. The proposed inference engine is implemented for unmodified Android phones, and an evaluation over a four-week trial shows that it identifies four logical statuses of mobile users with over 87% accuracy, while the average energy impact on the battery life is less than 0.5%.
Jon C Hammer, Tingxin Yan
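The abstract names the intercepted event types but not the features; a hypothetical sketch of the feature-extraction step is below, counting event kinds and distinct foreground apps within a time window, with a toy rule standing in for the actual classifier. The heuristic and all names are illustrative only.

```python
from collections import Counter

def extract_features(events, window_start, window_end):
    """Summarize OS events (timestamp, kind, value) in a time window.

    Event kinds follow the abstract: foreground app, notification,
    screen state, connected network.
    """
    feats = Counter()
    apps = set()
    for ts, kind, value in events:
        if not (window_start <= ts < window_end):
            continue
        feats[kind + "_count"] += 1
        if kind == "foreground_app":
            apps.add(value)
    feats["distinct_apps"] = len(apps)
    return dict(feats)

def is_busy(feats, switch_threshold=3):
    """Toy rule: frequent app switching suggests idle browsing, while
    few switches suggest focus on a single task (illustrative only)."""
    return feats.get("foreground_app_count", 0) < switch_threshold
```

Because these events are delivered by the OS anyway, computing such counters costs essentially nothing compared to keeping hardware sensors sampling continuously.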
How much light do you get? Estimating daily light exposure using smartphones
We present an approach to estimate a person's light exposure using smartphones. We used web-sourced weather reports combined with smartphone light sensor data, time of day, and indoor/outdoor information, to estimate illuminance around the user throughout a day. Since light dominates every human’s circadian rhythm and influences the sleep-wake cycle, we developed a smartphone-based system that does not require additional sensors for illuminance estimation. To evaluate our approach, we conducted a free-living study with 12 users, each carrying a smartphone, a head-mounted light reference sensor, and a wrist-worn light sensing device for six consecutive days. Estimated light values were compared to the head-mounted reference, the wrist-worn device and a mean value estimate. Our results show that illuminance could be estimated at less than 20% error for all study participants, outperforming the wrist-worn device. In 9 out of 12 participants the estimation deviated less than 10% from the reference measurements.
Florian Wahl, Thomas Kantermann, Oliver Amft

Indoor Location

Jin Nakazawa
We propose a graph-based, low-complexity sensor fusion approach for ubiquitous pedestrian indoor positioning using mobile devices. We employ our fusion technique to combine relative motion information based on step detection with WiFi signal strength measurements. The method is based on the well-known particle filter methodology. In contrast to previous work, we provide a probabilistic model for location estimation that is formulated directly on a fully discretized, graph-based representation of the indoor environment. We generate this graph by adaptive quantization of the indoor space, removing irrelevant degrees of freedom from the estimation problem. We evaluate the proposed method in two realistic indoor environments using real data collected from smartphones. In total, our dataset spans about 20 kilometers in distance walked and includes 13 users and four different mobile device types. Our results demonstrate that the filter requires an order of magnitude fewer particles than state-of-the-art approaches while maintaining an accuracy of a few meters. The proposed low-complexity solution not only enables indoor positioning on less powerful mobile devices, but also saves much-needed resources for location-based end-user applications which run on top of a localization service.
Sebastian Hilsenbeck, Dmytro Bobkov, Georg Schroth, Robert Huitl, Eckehard Steinbach
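A particle filter on a discretized graph can be sketched compactly: particles live on graph nodes, a detected step moves each particle to a random neighbor, a WiFi-based likelihood reweights them, and resampling concentrates particles on well-supported nodes. This is a generic textbook cycle under those assumptions, not the paper's adaptive quantization scheme; all names are illustrative.

```python
import random

def particle_filter_step(particles, graph, wifi_likelihood, rng):
    """One predict-weight-resample cycle on a discretized indoor graph.

    particles: list of node ids (one per particle)
    graph: {node: [neighbor nodes]} adjacency of walkable positions
    wifi_likelihood: {node: p(observation | node)} from signal strengths
    """
    # Predict: a detected step moves each particle to a random neighbor.
    moved = [rng.choice(graph[p]) for p in particles]
    # Weight: how well each node explains the WiFi measurement.
    weights = [wifi_likelihood.get(p, 1e-6) for p in moved]
    total = sum(weights)
    # Resample proportionally to weight (multinomial resampling).
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    resampled = []
    for _ in particles:
        u = rng.random()
        for node, c in zip(moved, cum):
            if u <= c:
                resampled.append(node)
                break
    return resampled
```

Because the state space is a small set of graph nodes rather than continuous coordinates, far fewer particles are needed to cover it, which is the complexity saving the abstract reports.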
We present a device-free indoor tracking system that uses received signal strength (RSS) from radio frequency (RF) transceivers to estimate the location of a person. While many RSS-based tracking systems use a body-worn device or tag, this approach requires no such tag. The approach is based on the key principle that RF signals between wall-mounted transceivers reflect and absorb differently depending on a person’s movement within their home. A hierarchical neural network hidden Markov model (NN-HMM) classifier estimates both movement patterns and stand vs. walk conditions to accurately perform tracking. The algorithm and features used are specifically robust to changes in RSS mean shifts in the environment over time allowing for greater than 90% region level classification accuracy over an extended testing period. In addition to tracking, the system also estimates the number of people in different regions. It is currently being developed to support independent living and long-term monitoring of seniors.
Anindya Paul, Eric A Wan, Fatema Adenwala, Erich Schafermeyer, Nicholas Preiser, Jeffrey Kaye, Peter Jacobs
Location prediction enables us to use a person’s mobility history to realize various applications such as efficient temperature control, opportunistic meeting support, and automated receptionists. Indoor location prediction is a challenging problem, particularly due to a high density of possible locations and short transition distances between these locations. In this paper we present Indoor-ALPS, an Adaptive Indoor Location Prediction System that uses temporal-spatial features to create individual daily models for the prediction of when a user will leave their current location (transition time) and the next location she will transition to. We tested Indoor-ALPS on the Augsburg Indoor Location Tracking Benchmark and compared our approach to the best performing temporal-spatial mobility prediction algorithm, Prediction by Partial Match (PPM). Our results show that Indoor-ALPS improves the temporal-spatial prediction accuracy over PPM for look-aheads up to 90 minutes by 6.2%, and for up to 30 minute look-aheads by 10.7%. These results demonstrate that Indoor-ALPS can be used to support a wide variety of indoor mobility prediction-based applications.
Christian Koehler, Nikola Banovic, Ian Oakley, Jen Mankoff, Anind Dey
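The PPM baseline mentioned above blends Markov predictors of several context lengths; a first-order simplification conveys the core mechanism, counting room-to-room transitions and predicting the most frequent successor of the current room. This sketch is a simplification for illustration, not PPM itself or Indoor-ALPS.

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """First-order mobility predictor over room-to-room transitions.

    Full PPM backs off across multiple context orders; this sketch
    keeps only order 1 for brevity.
    """
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, room_sequence):
        for here, nxt in zip(room_sequence, room_sequence[1:]):
            self.transitions[here][nxt] += 1

    def predict(self, current_room):
        counts = self.transitions.get(current_room)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

Indoor-ALPS improves on this kind of purely sequential model by also using temporal features, e.g. time of day, when predicting the transition time and next location.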
In recent years, there has been an explosion of social and collaborative applications that leverage location to provide users novel and engaging experiences. Current location technologies work well outdoors but fare poorly indoors. In this paper we present LoCo, a new framework that can provide highly accurate room-level location using a supervised classification scheme. We provide experiments that show this technique is orders of magnitude more efficient than current state-of-the-art WiFi localization techniques. Low classification overhead and computational footprint make classification practical and efficient even on mobile devices. Our framework has also been designed to be easily deployed and leveraged by developers to help create a new wave of location driven applications and services.
Jacob T Biehl, Matthew Cooper, Gerry Filby, Sven Kratz
15:30-16:00
16:00-17:30
Cities & Transportation
Sensing and Communication
Gadget Show

Cities & Transportation

Tanzeem Choudhury
Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.
Yohan Chon, Suyeon Kim, Seungwoo Lee, Dongwon Kim, Yungeun Kim, Hojung Cha
This paper assesses the potential of ride-sharing for reducing traffic in a city -- based on mobility data extracted from 3G Call Description Records (CDRs), for the cities of Madrid and Barcelona (BCN), and from OSNs, such as Twitter and Foursquare (FSQ), collected for the cities of New York (NY) and Los Angeles (LA). First, we analyze these data sets to understand mobility patterns, home and work locations, and social ties between users. Then, we develop an efficient algorithm for matching users with similar mobility patterns, considering a range of constraints, including social distance. The solution provides an upper bound to the potential decrease in the number of cars in a city that can be achieved by ridesharing. Our results indicate that this decrease can be as high as 31%, when users are willing to ride with friends of friends.
Blerim Cici, Athina Markopoulou, Enrique Frias-Martinez, Nikolaos Laoutaris
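The matching step described above can be illustrated with a greedy simplification: two commuters can share a ride if both their home and their work endpoints lie within a distance threshold. The paper's algorithm additionally considers constraints such as social distance and bounds the achievable car reduction more carefully; the names and the flat-Earth distance approximation below are assumptions for illustration.

```python
import math

def close(p, q, threshold_km):
    """Straight-line distance test on (lat, lon); small-area approximation."""
    dlat = (p[0] - q[0]) * 111.0            # ~km per degree of latitude
    dlon = (p[1] - q[1]) * 111.0 * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon) <= threshold_km

def greedy_rideshare(users, threshold_km=1.0):
    """Greedily pair users whose home and work endpoints both match.

    users: {name: (home_latlon, work_latlon)}
    Returns a list of pairs; unmatched users keep their own car, so
    each pair formed removes one car from the road.
    """
    names = list(users)
    matched, pairs = set(), []
    for i, a in enumerate(names):
        if a in matched:
            continue
        for b in names[i + 1:]:
            if b in matched:
                continue
            if (close(users[a][0], users[b][0], threshold_km)
                    and close(users[a][1], users[b][1], threshold_km)):
                pairs.append((a, b))
                matched.update((a, b))
                break
    return pairs
```

Counting the pairs formed over a city-scale user set gives an estimate of the fraction of cars removable by ride-sharing, the quantity the paper upper-bounds at 31%.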
People flow at a citywide level is in a mixed state with several basic patterns (e.g. commuting, working, commercial), and it is therefore difficult to extract useful information from such a mixture of patterns directly. In this paper, we propose a novel tensor factorization approach to modeling city dynamics in a basic life pattern space (CitySpectral Space). To obtain the CitySpectrum, we utilized Non-negative Tensor Factorization (NTF) to decompose a people flow tensor into basic life pattern tensors, described by three bases, i.e., the intensity variation among regions, the time of day, and the sample days. We apply our approach to a big mobile phone GPS log dataset (containing 1.6 million users) to model the fluctuation in people flow before and after the Great East Japan Earthquake from a CitySpectral perspective. In addition, our framework is extensible to a variety of auxiliary spatial-temporal data. We parametrize a people flow with a spatial distribution of the Points of Interest (POIs) to quantitatively analyze the relationship between human mobility and POI distribution. Based on the parametric people flow, we propose a spectral approach for site-selection recommendation and people flow simulation in another similar area using POI distribution.
Zipei Fan, Xuan Song, Ryosuke Shibasaki
Determining the mode of transport of an individual is an important element of contextual information. In particular, we focus on differentiating between different forms of motorized transport such as car, bus, subway etc. Our approach uses location information and features derived from transit route information (schedule information, not real-time) published by transit agencies. This requires no up-front training or learning of routes, and can be deployed instantly in a new place since most transit agencies publish this information. Combined with motion detection using phone accelerometers, we obtain a classification accuracy of around 90% on 50+ hours of car and transit data.
Rahul C Shah, Chieh-yih Wan, Hong Lu, Lama Nachman

Sensing and Communication

Yoshihiro Kawahara
Smart objects within instrumented environments offer an always available and intuitive way of interacting with a system. Connecting these objects to other objects in range, or even to smartphones and computers, enables substantially innovative interaction and sensing approaches. In this paper, we investigate the concept of Capacitive Near-Field Communication to enable ubiquitous interaction with everyday objects in a short-range spatial context. Our central contribution is a generic framework describing and evaluating the communication method in Ubiquitous Computing. We demonstrate the relevance of our approach with an open-source implementation of a low-cost object tag and a transceiver offering a high-quality communication link at typical distances of up to 15 cm. Moreover, we present three case studies considering tangible interaction for the visually impaired, natural interaction with everyday objects, and sleeping behavior analysis.
Tobias Grosse-Puppendahl, Sebastian Herber, Raphael Wimmer, Frank Englert, Sebastian Beck, Julian von Wilmsdorff, Reiner Wichert, Arjan Kuijper
Using magnetic field data as fingerprints for indoor localization has become popular in recent years. A particle filter is often used to improve accuracy. However, most existing particle-filter-based approaches either are heavily affected by motion estimation errors, which makes the system unreliable, or impose strong restrictions on the smartphone, such as a fixed phone orientation, which is not practical for real-life use. In this paper, we present an indoor localization system named MaLoc, built on our proposed augmented particle filter. We introduce several innovations in the motion model, the measurement model, and the resampling model to enhance the traditional particle filter. To minimize errors in motion estimation and improve the robustness of the particle filter, we augment it with a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. We use a hybrid measurement model which combines a new magnetic fingerprinting model with the existing magnitude fingerprinting model to improve system performance and avoid calibrating different smartphone magnetometers. In addition, we present a novel localization quality estimation method and a localization failure detection method to address the "Kidnapped Robot Problem" and improve overall usability. Our experimental studies show that MaLoc achieves a localization accuracy of 1-2.8 m on average in a large building.
Hongwei Xie, Tao Gu, Xianping Tao, Haibo Ye, Jian Lv
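The filter MaLoc builds on repeats three steps: predict particle positions from motion, weight each particle by how well the magnetometer reading matches the map at that position, and resample. A minimal 1-D pure-Python sketch follows; the linear field map, noise levels, and step length are illustrative assumptions, not the paper's models.

```python
# Minimal 1-D particle-filter localization against a synthetic magnetic map
# (predict / weight / resample loop).
import math
import random

random.seed(0)
field = lambda x: 40.0 + 2.0 * x   # synthetic magnetic magnitude map (uT)

def step(particles, move, measurement, sigma=1.5):
    # Predict: propagate each particle by the estimated step length plus noise.
    particles = [p + move + random.gauss(0, 0.3) for p in particles]
    # Weight: likelihood of the magnetometer reading at each hypothesis.
    w = [math.exp(-(field(p) - measurement) ** 2 / (2 * sigma ** 2))
         for p in particles]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(particles, weights=w, k=len(particles))

truth = 0.0
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(30):
    truth += 0.6   # pedestrian advances one step
    particles = step(particles, 0.6, field(truth))
estimate = sum(particles) / len(particles)
```

With repeated measurements the particle cloud collapses around the true position; MaLoc's contributions (dynamic step length, heuristic resampling, hybrid fingerprints) harden exactly these three steps.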
Indoor object localization can enable many ubicomp applications, such as asset tracking and object-related activity recognition. Most location and tracking systems rely on either battery-powered devices, which create cost and maintenance issues, or cameras, which have accuracy and privacy issues. This paper introduces a system that is able to detect the 3D position and motion of a battery-free RFID tag embedded with an ultrasound detector and an accelerometer. Combining tags' acceleration with location improves the system's power management and supports activity recognition. We characterize the system's localization performance in open space as well as implement it in a smart wet lab application. The system is used to track the real-time location and motion of the tags in the wet lab, as well as to recognize pouring actions performed on the objects to which the tags are attached. The median localization accuracy is 7.6 cm (3.1, 5, and 1.9 cm along the x, y, and z axes, respectively), with a maximum update rate of 15 samples/s using a single RFID reader antenna.
Yi Zhao, Anthony LaMarca, Joshua R Smith
Weiwei Jiang, Denzil Ferreira, Jani Ylioja, Jorge Goncalves, Vassilis Kostakos

Gadget Show

Tom Martin
17:30-19:30 5:30
Demos & Posters

The combined Demos & Posters session includes 27 demos and over 70 posters.

Click here for a full list of accepted demos and posters.

End of day
20:00 8pm
Registration / Help desk closes


Seattle 1 & 2
Seattle 3
Emerald 2
08:00 8am
Registration / Help desk opens
09:00-10:30 9am
Mobile Applications
Wearable Input/Output
Health & Children

Mobile Applications

Christine Lv 
Pedestrians have difficulty noticing hybrid vehicles (HVs) and electric vehicles (EVs) quietly approaching from behind. We propose a vehicle detection scheme using a smartphone carried by a pedestrian. A notification of an approaching vehicle can be delivered to wearable devices such as Google Glass. We exploit the high-frequency switching noise generated by the motor unit in HVs and EVs. Although people are less sensitive to these high-frequency ranges, these sounds are prominent even on a busy street, and it is possible for a smartphone to detect them. The ambient sound captured at 48 kHz is converted to a feature vector in the frequency domain. A J48 classifier implemented on a smartphone can determine whether an EV or HV is approaching. We have collected a large amount of vehicle data at various locations. The false-positive and false-negative rates of our detection scheme are 1.2% and 4.95%, respectively. The first alarm was raised as early as 11.6 s before the vehicle reached the observer. The scheme can also determine the vehicle speed and vehicle type.
Masaru Takagi, Kosuke Fujimoto, Yoshihiro Kawahara, Tohru Asami
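The sensing idea above can be made concrete: motor switching noise concentrates energy in high-frequency spectral bins, so a band-energy feature separates "EV present" from background. A toy pure-Python sketch follows; the naive DFT feature, the 15 kHz tone, and the 10x threshold are illustrative assumptions, not the paper's 48 kHz pipeline or its J48 classifier.

```python
# Band-energy feature over DFT bins at or above a cutoff frequency.
import math

def band_energy(samples, rate, lo_hz):
    # Sum |X[k]|^2 / n over bins whose frequency is >= lo_hz (naive DFT).
    n = len(samples)
    energy = 0.0
    for k in range(n // 2):
        if k * rate / n < lo_hz:
            continue
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        energy += (re * re + im * im) / n
    return energy

rate, n = 48000, 256
# Quiet street: low-frequency hum only. EV: the same hum plus a 15 kHz tone
# standing in for motor switching noise.
street = [0.01 * math.sin(2 * math.pi * 200 * t / rate) for t in range(n)]
ev = [s + 0.2 * math.sin(2 * math.pi * 15000 * t / rate)
      for t, s in enumerate(street)]
ev_present = band_energy(ev, rate, 10000) > 10 * band_energy(street, rate, 10000)
```

A real detector would feed several such spectral features into a trained classifier rather than a hand-set threshold.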
The goal of this work is to provide an abstraction of ideal sound environments to a newly emerging class of Mobile Multi-speaker Audio (MMA) applications. Typically, it is challenging for MMA applications to implement advanced sound features (e.g., surround sound) accurately in mobile environments, especially due to unknown, irregular loudspeaker configurations. Towards the illusion that MMA applications run over specific loudspeaker configurations (i.e., speaker type and layout), this work proposes AMAC, a new Adaptive Mobile Audio Coordination system that senses the acoustic characteristics of mobile environments and controls individual loudspeakers adaptively and accurately. The prototype of AMAC implemented on commodity smartphones coordinates sound arrival times to within several tens of microseconds and substantially reduces the variance in sound level.
Hyosu Kim, SangJeong Lee, Jung-Woo Choi, Hwidong Bae, Jiyeon Lee, Junehwa Song, Insik Shin
Quality improvement in mobile applications should be based on the consideration of several factors, such as users’ diversity in spatio-temporal usage, as well as the device’s resource usage, including battery life. Although application tuning should consider this practical issue, it is difficult to ensure the success of this process during the development stage due to the lack of information about application usage. This paper proposes a user interaction-based profiling system to overcome the limitations of development-level application debugging. In our system, the analysis of both device behavior and energy consumption is possible with fine-grained process-level application monitoring. By providing fine-grained information, including user interaction, system behavior, and power consumption, our system provides meaningful analysis for application tuning. The proposed method does not require the source code of the application and uses a web-based framework so that users can easily provide their usage data. Our case study with a few popular applications demonstrates that the proposed system is practical and useful for application tuning.
Seokjun Lee, Chanmin Yoon, Hojung Cha
We propose a novel technique that aggregates multiple sensor streams generated by completely different types of sensors into a visually enhanced video stream. This paper presents the major features of SENSeTREAM and demonstrates how it enhances user experience in an online live music event. Since SENSeTREAM is a video stream with sensor values encoded in a two-dimensional graphical code, it can transmit multiple sensor data streams while maintaining their synchronization. A SENSeTREAM can be transmitted via existing live streaming services, and can be saved to existing video archive services. We have implemented a prototype SENSeTREAM generator and deployed it at an online live music event. Through the pilot study, we confirmed that SENSeTREAM works with popular streaming services and provides a new media experience for live performances. We also indicate future directions for visual stream aggregation and its applications.
Takuro Yonezawa, Masaki Ogawa, Yutaro Kyono, Hiroki Nozaki, Jin Nakazawa, Osamu Nakamura, Hideyuki Tokuda

Wearable Input/Output

Kent Lyons
The Tongue and Ear Interface: A Wearable System for Silent Speech Recognition
We address the problem of performing silent speech recognition where vocalized audio is not available (e.g. due to a user's medical condition) or is highly noisy (e.g. during firefighting or combat). We describe our wearable system to capture tongue and jaw movements during silent speech. The system has two components: the Tongue Magnet Interface (TMI), which utilizes the 3-axis magnetometer aboard Google Glass to measure the movement of a small magnet glued to the user's tongue, and the Outer Ear Interface (OEI), which measures the deformation in the ear canal caused by jaw movements using proximity sensors embedded in a set of earmolds. We collected a data set of 1901 utterances of 11 distinct phrases silently mouthed by six able-bodied participants. Recognition relies on using hidden Markov model-based techniques to select one of the 11 phrases. We present encouraging results for user dependent recognition.
Himanshu Sahni, Abdelkareem Bedri, Gabriel Reyes, Pavleen Thukral, Zehua Guo, Thad Starner, Maysam Ghovanloo
Hands-free gesture control with a capacitive textile neckband
We present a novel sensing modality for hands-free gesture-controlled user interfaces, based on active capacitive sensing. Four capacitive electrodes are integrated into a textile neckband, allowing continuous unobtrusive head movement monitoring. We explore the capability of the proposed system for recognising head gestures and postures. A study involving 12 subjects was carried out, recording data from 15 head gestures and 19 different postures. We present a quantitative evaluation based on this dataset, achieving an overall accuracy of 79.1% for head gesture recognition and 40.4% for distinguishing between head postures (69.9% when merging the most adjacent positions). These results indicate that our approach is promising for hands-free control interfaces. An example application scenario of this technology is the control of an electric wheelchair for people with motor impairments, where recognised gestures or postures can be mapped to control commands.
Marco Hirsch, Jingyuan Cheng, Attila Reiss, Mathias Sundholm, Paul Lukowicz, Oliver Amft
FabriTouch: Exploring Flexible Touch Input on Textiles
Touch-sensitive fabrics let users operate wearable devices unobtrusively and with rich input gestures similar to those on modern smartphones and tablets. While hardware prototypes exist in the DIY crafting community, HCI designers and researchers have little data about how well these devices actually work in realistic situations. FabriTouch is the first flexible touch-sensitive fabric that provides such scientifically validated information. We show that placing a FabriTouch pad onto clothing and the body instead of a rigid support surface significantly reduces input speed but still allows for basic gestures. We also show the impact of sitting, standing, and walking on horizontal and vertical swipe gesture performance in a menu navigation task. Finally, we provide the details necessary to replicate our FabriTouch pad, to enable both the DIY crafting community and HCI researchers and designers to build on our work.
Florian Heller, Stefan Ivanov, Chat Wacharamanotham, Jan Borchers
SwitchBack: An On-Body RF-Based Gesture Input Device
We present SwitchBack, a novel e-textile input device that can register multiple forms of input (tapping and bi-directional swiping) with minimal calibration. The technique is based on measuring the input impedance of a 7 cm microstrip short-circuit stub consisting of a strip of conductive fabric separated from a ground plane (also made of conductive fabric) by a layer of denim. The input impedance is calculated by measuring the stub's reflection coefficient using a simple RF reflectometer circuit operating at 900 MHz. The input impedance of the stub is affected by the dielectric properties of the surrounding material, and changes in a predictable manner when touched. We present the theoretical formulation, device and circuit design, and experimental results. Future work is also discussed.
Dana T Hughes, Halley P Profita, Nikolaus J Correll
Wearable Jamming Mitten for Virtual Environment Haptics
This paper presents a new mitten incorporating vacuum layer jamming technology to provide haptic feedback to a user. We demonstrate that layer jamming technology can be successfully applied to a mitten, and discuss advantages layer jamming provides as a wearable technology through its low profile form factor. Jamming differs from traditional wearable haptic systems by restricting a user's movement, rather than applying an actuation force on the user's body. Restricting the user's movement is achieved by varying the stiffness of wearable items, such as gloves. We performed a pilot study where the qualitative results showed users found the haptic sensation of the jamming mitten similar to grasping the physical counterpart.
Timothy M Simon, Ross T Smith, Bruce H Thomas
MagicWatch: Interacting & Segueing
Finding friendlier, more efficient, and more effective ways of human-computer interaction is a perennially hot topic. This video demonstrates MagicWatch, which can sense user gestures, understand user intentions, and carry out the expected tasks using its underlying core techniques and the support of a back-end context-aware smart system on a cloud platform. MagicWatch can act as a pointer, a remote controller, and an information portal. Using only your hand, you can point at a building, a person, or a screen; you can control a device, for instance changing the TV channel, adjusting the temperature, or switching slides; and you can retrieve the information you need from the cloud. The video also highlights MagicWatch's seamless interaction with objects in its surroundings and its easy segues between cyber and physical spaces.
Feng Yang, Shijian Li, Runhe Huang, Shugang Wang, Gang Pan

Health & Children

Inseok Hwang
The recent emergence of comfortable wearable sensors has focused attention almost entirely on monitoring physical activity, ignoring opportunities to monitor more subtle phenomena, such as the quality of social interactions. We argue that it is compelling to address whether physiological sensors can shed light on the quality of social interactive behavior. This work leverages a wearable electrodermal activity (EDA) sensor to recognize the ease of engagement of children during a social interaction with an adult. In particular, we monitored 51 child-adult dyads in a semi-structured play interaction and used Support Vector Machines to automatically identify children who had been rated by the adult as more or less difficult to engage. We report on the classification value of several features extracted from the child's EDA responses, as well as several other features capturing the physiological synchrony between the child and the adult.
Javier Hernandez, Ivan Riobo, Agata Rozga, Gregory D. Abowd, Rosalind W. Picard
This paper describes the design of a digital fork and a mobile interactive and persuasive game for young children who are picky eaters and/or easily distracted during mealtime. The system employs Ubicomp technology to educate children on the importance of a balanced diet while motivating proper eating behavior. To sense a child's eating behavior, we have designed and prototyped a sensor-embedded digital fork, called the Sensing Fork. Furthermore, we have developed a story-book and persuasive game, called Hungry Panda, on a smartphone. This capitalizes on the capabilities of the Sensing Fork to interact with and modify children's eating behavior during mealtime. We report the results of a real-life study that involved mother-child pairs and tested the effectiveness of the Sensing Fork and Hungry Panda game in addressing children's eating problems. Our findings show positive effects on changing children's eating behavior.
Azusa Kadomura, Cheng-Yuan Li, Koji Tsukada, Hao-Hua Chu, Itiro Siio
Health sensing through smartphones has received considerable attention in recent years because of the devices’ ubiquity and promise to lower the barrier for tracking medical conditions. In this paper, we focus on using smartphones to monitor newborn jaundice, which manifests as a yellow discoloration of the skin. Although a degree of jaundice is common in healthy newborns, early detection of extreme jaundice is essential to prevent permanent brain damage or death. Current detection techniques, however, require clinical tests with blood samples or other specialized equipment. Consequently, newborns often depend on visual assessments of their skin color at home, which is known to be unreliable. To this end, we present BiliCam, a low-cost system that uses smartphone cameras to assess newborn jaundice. We evaluated BiliCam on 100 newborns, yielding a 0.85 rank order correlation with the gold standard blood test. We also discuss usability challenges and design solutions to make the system practical.
Lilian de Greef, Mayank Goel, Min Joon Seo, Eric C Larson, James W Stout, James A Taylor, Shwetak N Patel
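The evaluation metric quoted above is a rank order correlation (Spearman's rho). A compact pure-Python version is below, without tie handling; the bilirubin values are synthetic illustrations, not study data.

```python
# Spearman rank correlation: convert both series to ranks, then apply
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the rank difference.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(xs, ys):
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

camera = [5.1, 7.4, 3.2, 9.0, 6.5]   # hypothetical camera-based estimates (mg/dL)
blood = [5.0, 8.1, 3.5, 8.8, 6.0]    # hypothetical gold-standard blood values
rho = spearman(camera, blood)        # the two rankings agree exactly here
```

A rho of 0.85, as reported, means the camera-based estimates order newborns nearly the same way the blood test does, even if the absolute values differ.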
In this work, we present ChildSafe, a classification system which exploits human skeletal features collected using a 3D depth camera to distinguish the visual characteristics of children from those of adults. ChildSafe analyzes the histograms of training samples and implements a bin-boundary-based classifier. We train and evaluate ChildSafe using a large dataset of visual samples collected from 150 elementary school children and 43 adults, ranging in age from 7 to 50. Our results suggest that ChildSafe successfully detects children with a correct classification rate of up to 97%, a false negative rate as low as 1.82%, and a low false positive rate of 1.46%. We envision this work as an effective sub-system for designing various child protection applications.
Can Basaran, Hee Jung Yoon, Ho-Kyeong Ra, Taejoon Park, Sang Hyuk Son, JeongGil Ko
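A bin-boundary classifier in miniature: histogram a skeletal feature for each class during training and place the decision boundary between the classes. The single synthetic "height" feature below is an illustrative assumption, not the paper's full skeletal feature set.

```python
# Toy bin-boundary classifier on one feature: learn a threshold that
# separates the child and adult training histograms, then classify by
# comparing against it.

def learn_boundary(child_vals, adult_vals):
    # Midpoint between the largest child value and the smallest adult value.
    return (max(child_vals) + min(adult_vals)) / 2

def is_child(value, boundary):
    return value < boundary

children = [1.10, 1.25, 1.32, 1.18]   # meters, synthetic training samples
adults = [1.55, 1.72, 1.80, 1.63]
boundary = learn_boundary(children, adults)   # 1.435 m here
```

The real system works over histograms of many skeletal features and combines their per-bin decisions, but each feature's contribution reduces to a boundary test like this one.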
SoberDiary: A Phone-based Support System for Assisting Recovery from Alcohol Dependence
Alcohol dependence is a chronic disorder associated with severe harm in multiple areas, and relapse is common despite treatment. After alcohol-dependent patients complete alcohol withdrawal treatment and return to their regular lives, they face further challenges in maintaining sobriety. This study proposes SoberDiary, a phone-based support system that enables alcohol-dependent patients to self-monitor and self-manage their own alcohol use behavior and remain sober in their daily lives. Results from a 4-week user study involving 11 clinical patients show that, using SoberDiary, patients can self-monitor and self-manage their alcohol use behavior, reducing their total alcohol consumption and the number of drinking or heavy drinking days that occur following intervention.
Kuo-Cheng Wang, Yi-Hsuan Hsieh, Chi-Hsien Yen, Chuang-Wen You, Yen-Chang Chen, Ming-Chyi Huang, Seng-Yong Lau, Hsin-Liu Cindy Kao, Hao-Hua Chu
10:30-11:00 10:30
11:00-12:30 11am
Sensing in the Home
Eyewear Computing
Data Mining

Sensing in the Home

Anind Dey
This paper presents a new method for estimating which outlet an electrical appliance is plugged into, using the electrical wiring installed in the building. By making use of the voltage drop caused by the wire, we can estimate the distance between the sensor and an electrical appliance plugged into an outlet on an electrical circuit. Given a floor plan of the environment of interest showing the wiring diagram and where the sensor is attached, we can determine which outlet an electrical appliance is plugged into from the distance between the sensor and the appliance. The estimated outlet position of an appliance is very useful for understanding real-world events and developing real-world applications, e.g., providing user-location- and appliance-location-aware services, daily activity recognition, and estimating a user's indoor location through electrical appliance use under specific conditions.
Quan Kong, Takuya Maekawa
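The distance cue here is Ohm's law: a conductor's series resistance grows linearly with wire length, so a larger voltage sag under a known load indicates a more distant outlet. A toy calculation follows; copper resistivity is a standard constant, but the wire cross-section and load are illustrative assumptions, not the authors' calibration.

```python
# Infer wire length from the voltage drop an appliance's current causes.

RHO_CU = 1.68e-8   # resistivity of copper, ohm*m
AREA = 2.0e-6      # assumed 2.0 mm^2 conductor cross-section, m^2

def wire_length_from_drop(v_drop, current):
    # v_drop = I * R with R = rho * (2 * L) / A (out-and-back conductor).
    resistance = v_drop / current
    return resistance * AREA / (2 * RHO_CU)

# A 5 A load whose measured voltage sags by 0.84 V sits about 10 m of wire away:
length_m = wire_length_from_drop(0.84, 5.0)
```

Mapping the inferred length onto the wiring diagram in the floor plan then identifies the outlet.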
In this paper, we present a significant improvement over past work on non-contact, end-user-deployable sensing of real-time whole-home power consumption. The technique allows users to place a single device consisting of magnetic pickups on the outside of a power or breaker panel to infer whole-home power consumption, without the need for professional installation of current transformers (CTs). The new approach does not require precise placement on the breaker panel, a key requirement of previous approaches. This is enabled through a self-calibration technique using a neural network that dynamically learns the transfer function despite the placement of the sensor and the construction of the breaker panel itself. We also demonstrate the ability to infer true power using this technique, unlike past solutions that have only been able to capture apparent power. We have evaluated our technique in six homes and one industrial building, including one seven-day deployment. Our results show we can estimate true power consumption with an average accuracy of 95.0% during naturalistic energy use in the home.
Md Tanvir Islam Aumi, Sidhant Gupta, Cameron Pickett, Matt Reynolds, Shwetak Patel
There is a large class of routine physical exercises that are performed on the ground, often on dedicated 'mats' (e.g., push-ups, crunches, bridges). Such exercises involve coordinated motions of different body parts and are difficult to recognize with a single body-worn motion sensor (such as a step counter). Instead, a network of sensors on different body parts would be needed, which is not always practicable. As an alternative, we describe a cheap, simple textile pressure sensor matrix that can be unobtrusively integrated into exercise mats to recognize and count such exercises. We evaluate the system on a set of 10 standard exercises. In an experiment with 7 subjects, each repeating each exercise 20 times, we achieve a user-independent recognition rate of 82.5% and a user-independent counting accuracy of 89.9%. The paper describes the sensor system, the recognition methods, and the experimental results.
Mathias Sundholm, Jingyuan Cheng, Bo Zhou, Akash Sethi, Paul Lukowicz
Power remains a challenge in the widespread deployment of long-lived wireless sensing systems, which has led researchers to consider power harvesting as a potential solution. In this paper, we present a thermal power harvester that utilizes naturally changing ambient temperature in the environment as the power source. In contrast to traditional thermoelectric power harvesters, our approach does not require a spatial temperature gradient; instead it relies on temperature fluctuations over time, enabling it to be used freestanding in any environment in which temperature changes throughout the day. By mechanically coupling linear motion harvesters with a temperature-sensitive bellows, we show the capability of harvesting up to 21 mJ of energy per cycle of temperature variation within the range 5 °C to 25 °C. We also demonstrate the ability to power a sensor node, transmit sensor data wirelessly, and update a bistable E-ink display after as little as a 0.25 °C ambient temperature change.
Chen Zhao, Sam Yisrael, Josh R Smith, Shwetak Patel

Eyewear Computing

Ozan Cakmakci
A Comparison of Order Picking Assisted by Head-Up Display (HUD), Cart-Mounted Display (CMD), Light, and Paper Pick List
Wearable and contextually aware technologies have great applicability in task guidance systems. Order picking is the task of collecting items from inventory in a warehouse and sorting them for distribution; this process accounts for about 60% of the total operational costs of these warehouses. Current practice in industry includes paper pick lists and pick-by-light systems. We evaluated order picking assisted by four approaches: head-up display (HUD); cart-mounted display (CMD); pick-by-light; and paper pick list. We report accuracy, error types, task time, subjective task load and user preferences for all four approaches. The findings suggest that pick-by-HUD and pick-by-CMD are superior on all metrics to the current practices of pick-by-paper and pick-by-light.
Anhong Guo, Shashank Raghu, Xuwen Xie, Saad Ismail, Xiaohui Luo, Joseph Simoneau, Scott Gilliland, Hannes Baumann, Caleb Southern, Thad Starner
The Effects of Visual Displacement on Simulator Sickness in Video See-Through HMDs
We present an experiment exploring the contribution of visual displacement to simulator sickness in a video see-through head-mounted display (HMD). To identify the effect of visual displacement, we examined simulator sickness under visual displacement conditions ranging from 50 to 300 mm and investigated adaptation over three days. The results indicated that the total symptom score of simulator sickness in the 300 mm visual displacement condition was significantly higher than in the other conditions. In addition, the total symptom score became significantly lower over days 1-3 in the 200 mm condition and over days 1-2 in the 300 mm condition, indicating adaptation over three days. However, only partial adaptation was observed in the 300 mm condition, suggesting that its high sensory conflict increases the time needed to adapt. These results indicate that simulator sickness in video see-through HMDs is adaptable over time, which supports previous studies.
Sei-Young Kim, Joong Ho Lee, Ji Hyung Park
Understanding the Wearability of Head-Mounted Devices from a Human-Centered Perspective
Extensive efforts have been dedicated to developing wearables, but existing solutions focus mainly on feasibility and innovation. Thus, although many devices are named ‘wearable’, users face some wearability issues. Previously adopted trial and error approaches have effectively produced wearables, but not focused on human factors. Through an extensive analysis of online comments about head-mounted devices, this paper presents their problem space from a human perspective. The analysis of online comments from existing and potential users enabled us to identify key aspects of the wearability of head-mounted devices, bridging the gap between design decisions and users’ requirements.
Vivian Genaro Motti, Kelly Caine
Looking At or Through? Using Eye Tracking to Infer Attention Location for Wearable Transparent Displays
Wearable near-eye displays pose interesting challenges for interface design. These devices present the user with a duality of visual worlds, with a virtual window of information overlaid onto the physical world. Because of this duality, we suggest that the wearable interface would benefit from understanding where the user's visual attention is directed. We explore the potential of eye tracking to address this problem, and describe four eye tracking techniques designed to provide data about where the user's attention is directed. We also propose some attention-aware user interface techniques demonstrating the potential of the eyes for wearable displays user interface management.
Melodie Vidal, David H Nguyen, Kent Lyons
Walkthrough Research: Methodological Potentials for Head-mounted Cameras as Reflexive Tools in Museum Contexts
This study investigates the potential of head-mounted video cameras as a technique for understanding human experience in museums. The technique is a design-research study at the intersection of visual anthropology, ubiquitous computing, and interaction design, and takes up the reflexive turn of digital humanities and ethnography, in that the goal of the research is not to capture or definitively determine an experience, but instead to provide digital tools for reflection and understanding. The work uses a head-mounted video camera in museum spaces, and a set of simple image processing techniques, to explore potential methods for understanding the relationship between people, objects, and environments in museum space.
Jamie Allen, Chris Whithead, Dionísio Soares Paiva, Catherine Descure, Jakob Bak

Data Mining

Eran Toch
Smartphones can collect considerable context data about the user, ranging from apps used to places visited. Frequent user patterns discovered from longitudinal, multi-modal context data could help personalize and improve overall user experience. Our long term goal is to develop novel middleware and algorithms to efficiently mine user behavior patterns entirely on the phone by utilizing idle processor cycles. Mining patterns on the mobile device provides better privacy guarantees to users, and reduces dependency on cloud connectivity. As an important step in this direction, we develop a novel general-purpose service called MobileMiner that runs on the phone and discovers frequent co-occurrence patterns indicating which context events frequently occur together. Using longitudinal context data collected from 106 users over 1-3 months, we show that MobileMiner efficiently generates patterns using limited phone resources. Further, we find interesting behavior patterns for individual users and across users, ranging from calling patterns to place visitation patterns. Finally, we show how our co-occurrence patterns can be used by developers to improve the phone UI for launching apps or calling contacts.
Vijay Srinivasan, Saeed Moghaddam, Abhishek Mukherji, Kiran K. Rachuri, Chenren Xu, Emmanuel Munguia Tapia
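Frequent co-occurrence mining of the sort MobileMiner performs can be sketched as counting which context events appear together across snapshots and keeping the pairs above a support threshold. The event names and threshold below are illustrative assumptions; MobileMiner does this incrementally on the phone.

```python
# Tiny Apriori-style pass over context snapshots: count event pairs and
# keep those meeting a minimum support.
from collections import Counter
from itertools import combinations

def frequent_pairs(snapshots, min_support):
    counts = Counter()
    for events in snapshots:
        for pair in combinations(sorted(set(events)), 2):
            counts[pair] += 1
    return {pair for pair, c in counts.items() if c >= min_support}

logs = [
    {"place:home", "app:news", "hour:8"},
    {"place:home", "app:news", "hour:8"},
    {"place:work", "app:mail", "hour:9"},
    {"place:home", "app:news", "hour:21"},
]
patterns = frequent_pairs(logs, min_support=3)
# news reading co-occurs with being at home in 3 of the 4 snapshots
```

A pattern like this is exactly what a launcher could use to surface the news app when the user arrives home.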
The ubiquity of portable location-aware devices and the popularity of online location-based services have recently given rise to the collection of datasets with high spatial and temporal resolution. Analyzing such data has consequently gained popularity due to the numerous opportunities enabled by understanding the mobility patterns of objects (people and animals, among others). In this paper, we propose a hidden semi-Markov-based model to understand the behavior of mobile entities. The hierarchical state structure in our model allows capturing spatio-temporal associations in the location history both at stay-points and on the paths connecting them. We compare the accuracy of our model with a number of existing spatio-temporal models using two real datasets. Furthermore, we perform sensitivity analysis on our model to evaluate its robustness in the presence of common issues in mobility datasets, such as noise and missing values. The results of our experiments show the superiority of the proposed scheme over the other models.
Mitra Baratchi, Nirvana Meratnia, Paul Havinga, Andrew Skidmore, Bert Toxopeus
Fitting sensors to humans and physical structures is becoming more and more common. These developments provide many opportunities for ubiquitous computing, as well as challenges for analyzing the resulting sensor data. From these challenges, an underappreciated problem arises: modeling multivariate time series with mixed sampling rates. Although mentioned in several application papers using sensor systems, this problem has been left almost unexplored, often hidden in a preprocessing step or solved manually as a one-pass procedure (feature extraction/construction). This leaves an opportunity to formalize and develop methods that address mixed sampling rates in an automatic fashion. We approach the problem of dealing with multiple sampling rates from an aggregation perspective. We propose Accordion, a new embedded method that constructs and selects aggregate features iteratively, in a memory-conscious fashion. Our algorithms work on both classification and regression problems. We describe three experiments on real-world time series datasets, with satisfying results.
Ricardo Cachucho, Marvin Meeng, Ugo Vespier, Siegfried Nijssen, Arno Knobbe
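The mixed-sampling-rate problem can be made concrete: a fast sensor stream must be aggregated to align with a slower target stream before modeling. A fixed-window sketch computing mean and max per window is below; Accordion constructs and selects such aggregates automatically, so the window size and statistics here are illustrative choices, not the authors' algorithm.

```python
# Align a fast stream to a slower rate by aggregating each window.

def aggregate(fast_stream, factor):
    out = []
    for i in range(0, len(fast_stream) - factor + 1, factor):
        window = fast_stream[i:i + factor]
        out.append((sum(window) / factor, max(window)))   # (mean, max) per window
    return out

# e.g. a 100 Hz signal aligned to 25 Hz labels (downsampling factor 4):
features = aggregate([0, 1, 2, 3, 4, 5, 6, 7], factor=4)
# features == [(1.5, 3), (5.5, 7)]
```

Each aggregated tuple then lines up one-to-one with a sample of the slow stream, so standard classifiers and regressors apply directly.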
The newly emerging event-based social networks (EBSNs) connect online and offline social interactions, offering a great opportunity to understand behaviors in the cyber-physical space. While existing efforts have mainly focused on investigating user behaviors in traditional social network services (SNS), this paper aims to exploit individual behaviors in EBSNs, which remains an unsolved problem. In particular, our method predicts activity attendance by discovering a set of factors that connect the physical and cyber spaces and influence an individual's attendance of activities in EBSNs. These factors, including content preference, context (spatial and temporal), and social influence, are extracted using different models and techniques. We further propose a novel Singular Value Decomposition with Multi-Factor Neighborhood (SVD-MFN) algorithm to predict activity attendance by integrating the discovered heterogeneous factors into a single framework, in which these factors are fused through a neighborhood set. Experiments based on real-world data from Douban Events demonstrate that the proposed SVD-MFN algorithm outperforms state-of-the-art prediction methods.
Rong Du, Zhiwen Yu, Tao Mei, Zhitao Wang, Zhu Wang, Bin Guo
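A toy sketch of the fusion idea above: per-event factor scores (content preference, context, social influence) are combined by a weighted sum, and a candidate event's score is smoothed against a neighborhood of similar events. The weights and the 50/50 blend are invented for illustration; this is not the paper's SVD-MFN algorithm.

```python
# Toy sketch: fuse heterogeneous factor scores into one attendance score
# and smooth against a neighborhood of similar events. Not SVD-MFN.

def attendance_score(factors, weights):
    """Weighted sum of per-factor scores for one event."""
    return sum(weights[name] * value for name, value in factors.items())

def predict(candidate, neighborhood, weights):
    """Blend the candidate's own score with its neighborhood average."""
    base = attendance_score(candidate, weights)
    if not neighborhood:
        return base
    neigh = sum(attendance_score(f, weights) for f in neighborhood) / len(neighborhood)
    return 0.5 * base + 0.5 * neigh

weights = {"content": 0.5, "context": 0.3, "social": 0.2}
event = {"content": 1.0, "context": 0.5, "social": 0.0}
print(predict(event, [], weights))  # 0.65
```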
12:30-14:00 12:30
14:00-15:30 2pm
Energy & Environment
Sensing the body
Public Displays & Interactions

Energy & Environment

Cecilia Mascolo
Energy Diet is a design concept for a digital bathroom scale that displays personal health information in the form of body weight alongside environmental health information in the form of carbon weight. We intentionally conflate these two types of feedback in an effort to encourage people to regularly monitor their energy use as they weigh themselves and to reflect on the complex relationships between personal health and environmental health. To inform our design we tested paper prototypes and administered two surveys with 500 participants. We then created a working prototype that we deployed in four participants’ homes for one month each. This paper discusses findings and design implications from our surveys and in-home deployment. Overall, seeing carbon weight together with body weight on a scale helped participants to conceptualize energy consumption and to reflect on a range of daily activities and their environmental impacts.
Pei-Yi Kuo, Michael Stephen Horn
We present an ethnographic study of energy advisors working for a charity that provides support, particularly to people in fuel poverty. Our fieldwork comprises detailed observations that reveal the collaborative, interactional work of energy advisors and clients during home visits, supplemented with interviews and a participatory design workshop with advisors. We identify opportunities for Ubicomp technologies that focus on supporting the work of the advisor, including complementing the collaborative advice giving in home visits, providing help remotely, and producing evidence in support of accounts of practices and building conditions useful for interactions with landlords, authorities and other third parties. We highlight six specific design challenges that relate the domestic fuel poverty setting to the wider Ubicomp literature. Our work echoes a shift in attention from energy use and the individual consumer, specifically to matters of advice work practices and the domestic fuel poverty setting, and to the discourse around inclusive Ubicomp technologies.
Joel E Fischer, Enrico Costanza, Sarvapali D Ramchurn, James A Colley, Tom Rodden
Domestic microgeneration is the onsite generation of low- and zero-carbon heat and electricity by private households to meet their own needs. In this paper we explore how an everyday household routine, that of doing laundry, can be augmented by digital technologies to help households with photovoltaic solar energy generation make better use of self-generated energy. This paper presents an 8-month in-the-wild study that involved 18 UK households in longitudinal energy data collection, prototype deployment and participatory data analysis. Through a series of technology interventions mixing energy feedback, proactive suggestions and direct control, the study uncovered opportunities, potential rewards and barriers for families to shift energy-consuming household activities, and highlights how digital technology can act as a mediator between household laundry routines and energy demand-shifting behaviors. Finally, the study provides insights into how a 'smart' energy-aware washing machine shapes the organization of domestic life and how people 'communicate' with their washing machine.
Jacky Bourgeois, Janet van der Linden, Gerd Kortuem, Blaine Price, Christopher Rimmer
Xuxu Chen, Yu Zheng, Yubiao Chen, Qiwei Jin, Weiwei Sun, Eric Chang, Wei-Ying Ma

Sensing the body

Paul Lukowicz
Unobtrusive Gait Verification for Mobile Phones
Continuously and unobtrusively identifying the phone's owner using accelerometer sensing and gait analysis has great potential to improve the user experience on the go. However, a number of challenges, including gait modeling and training data acquisition, must be addressed before unobtrusive gait verification is practical. In this paper, we describe a gait verification system for mobile phones without any assumption about body placement or device orientation. Our system uses a combination of supervised and unsupervised learning techniques to verify the user continuously and to automatically learn unseen gait patterns from the user over time. We demonstrate that it is capable of recognizing the user in natural settings. We also investigated an unobtrusive training method that makes it feasible to acquire training data without explicit user annotation.
Hong Lu, Jonathan Huang, Tanwistha Saha, Lama Nachman
Enhancing Action Recognition through Simultaneous Semantic Mapping from Body-Worn Motion Sensors
Locations and actions are interrelated: some activities tend to occur at specific places, for example a person is more likely to twist his wrist when he is close to a door (to turn the knob). We present an unsupervised fusion method that takes advantage of this characteristic to enhance the recognition of location-related actions (e.g., open, close, switch, etc.). The proposed LocAFusion algorithm acts as a post-processing filter: At run-time, it constructs a semantic map of the environment by tagging action recognitions to Cartesian coordinates. It then uses the accumulated information about a location i) to discriminate between identical actions performed at different places and ii) to correct recognitions that are unlikely, given the other observations at the same location. LocAFusion does not require prior statistics about where activities occur, which allows for seamless deployment to new environments. The fusion approach is agnostic to the sensor modalities and methods used for action recognition and localization. For evaluation, we implemented a fully wearable setup that tracks the user with a foot-mounted motion sensor and the ActionSLAM algorithm. Simultaneously, we recognize hand actions through template matching on the data of a wrist-worn inertial measurement unit. In 10 recordings with 554 performed object interactions, LocAFusion consistently outperformed location-independent action recognition (8-31% increase in F1 score), identified 96% of the objects in the semantic map and overall correctly labeled 82% of the actions in problems with up to 23 classes.
Michael Hardegger, Long-Van Nguyen-Dinh, Alberto Calatroni, Daniel Roggen, Gerhard Tröster
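A minimal sketch of the post-processing idea above: tag each action recognition to a quantized map cell, then relabel recognitions that disagree with the majority action accumulated at the same cell. The grid size and the simple majority rule are assumptions for illustration; LocAFusion's actual filter is more sophisticated.

```python
# Sketch of a LocAFusion-style post-filter: accumulate action labels per
# quantized location, then relabel recognitions that disagree with the
# majority action observed at the same spot. Illustrative only.

from collections import Counter, defaultdict

def fuse(recognitions, cell=1.0):
    """recognitions: list of (x, y, action). Returns corrected labels."""
    by_cell = defaultdict(Counter)
    for x, y, action in recognitions:
        by_cell[(round(x / cell), round(y / cell))][action] += 1
    out = []
    for x, y, action in recognitions:
        majority, _ = by_cell[(round(x / cell), round(y / cell))].most_common(1)[0]
        out.append(majority)
    return out

obs = [(0.1, 0.2, "open"), (0.2, 0.1, "open"), (0.15, 0.25, "close")]
print(fuse(obs))  # the lone "close" is relabeled to the local majority
```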
Your activity tracker knows when you quit smoking
This paper discusses outcomes of our exploratory research aiming to discover ways of utilising continuous long-term respiratory rate data collected from actigraphy (wrist-worn accelerometers). We show that by monitoring changes in respiratory rate during sleep, we can detect and visualise various physical conditions that were previously not detectable using such simple wearable sensors, namely: the subjective level of drunkenness, fever, and smoking cessation. This study provides valuable insight into the potential of actigraphy, not simply as a tool for detecting common daily activities, but as a base for building a generic lifelog system that can evaluate the more qualitative aspects of your life.
Ken Kawamoto, Takeshi Tanaka, Hiroyuki Kuriyama
To facilitate the collection of patient biosignals, designing extensible sensing devices in which sensor management is simplified is essential. This paper presents BioScope, an extensible sensing system that facilitates collecting data used in nursing assessments. We conducted experiments to demonstrate the potential of the system. The results obtained in this study can be applied in improving the design, thus enabling BioScope to facilitate data collection in numerous potential applications.
Cheng-Yuan Li, Chi-Hsien Yen, Kuo-Cheng Wang, Chuang-Wen You, Seng-Yong Lau, Cheryl Chia-Hui Chen, Polly Huang, Hao-hua Chu
Spatiotemporal gait analysis with body-worn inertial sensors improves diagnosis in clinical practice. Most gait performance measures are affected by walking speed. However, it has not been investigated how much information foot clearance parameters share with the key parameters of the gait performance domains. Using shoe-worn inertial sensors and a previously validated algorithm, we measured spatiotemporal as well as clearance gait parameters in a cohort of able-bodied adults over the age of 65 (N=879). Principal components analysis showed that the variability of foot clearance parameters contributes to the main variability in gait data. Moreover, only weak to moderate correlation of gait speed and stride length with some clearance parameters was observed. We recommend the assessment of clearance parameters during gait analysis in addition to parameters such as gait speed, bearing in mind the importance of foot clearance measures in obstacle negotiation and in slipping- and tripping-related falls.
Kamiar Aminian, Farzin Dadashi, Benoit Mariani, Constanze Hoskovec, Brigitte Santos-Eggimann, Christophe Büla

Public Displays & Interactions

Judy Kay
Projective tests are personality tests that reveal individuals’ emotions (e.g., Rorschach inkblot test). Unlike direct question-based tests, projective tests rely on ambiguous stimuli to evoke responses from individuals. In this paper we develop one such test, designed to be delivered automatically, anonymously and to a large community through public displays. Our work makes a number of contributions. First, we develop and validate in controlled conditions a quantitative projective test that can reveal emotions. Second, we demonstrate that this test can be deployed on a large scale longitudinally: we present a four-week deployment in our university’s public spaces where 1431 tests were completed anonymously by passers-by. Third, our results reveal strong diurnal rhythms of emotion consistent with results we obtained independently using the Day Reconstruction Method (DRM), literature on affect, well-being, and our understanding of our university’s daily routine.
Jorge Goncalves, Pratyush Pandab, Denzil Ferreira, Mohammad Ghahramani, Guoying Zhao, Vassilis Kostakos
PriCal is an ambient calendar display that shows a user's schedule similar to a paper wall calendar. PriCal provides context-adaptive privacy to users by detecting present persons and adapting event visibility according to the user's privacy preferences. We present a detailed privacy impact assessment of our system, which provides insights on how to leverage context to enhance privacy without being intrusive. PriCal is based on a decentralized architecture and supports the detection of registered users as well as unknown persons. In a three-week deployment study with seven displays, ten participants used PriCal in their real work environment with their own digital calendars. Our results provide qualitative insights on the implications, acceptance, and utility of context-adaptive privacy in the context of a calendar display system, indicating that it is a viable approach to mitigate privacy implications in ubicomp applications.
Florian Schaub, Bastian Könings, Peter Lang, Björn Wiedersheim, Christian Winkler, Michael Weber
As the cost of display hardware falls, the number of public display networks being deployed is increasing rapidly. While these networks have traditionally taken the form of digital signage used for advertising and information, there is increasing interest in the vision of 'open display networks'. A key component of any open display network is an effective channel for disseminating applications created by third parties, and recent research has proposed a display-oriented 'application store' as one such channel. In this paper we present a critical analysis of the requirements and design of display application stores, providing insights designed to help the implementers of future application stores.
Sarah Clinch, Mateusz Mikusz, Miriam Greis, Nigel Davies, Adrian Friday
Augmented Information Display (AiD) is an LCD-based communicative display device that transmits both visible (RGB) and invisible (infrared) information using temporal and spectral multiplexing. A field-sequential backlight system switches between a standard white or RGB LED backlight and a near-infrared (NIR) LED backlight at 120Hz frequency. The visible and invisible information are transmitted through the same LCD electro-optics elements but during different time intervals that synchronize with the corresponding backlights. We implemented several prototype software systems to demonstrate the potential applications of this novel display platform, such as an augmenting digital signage display, an information beacon for positioning systems, and an accessibility system for people with hearing impairments.
Shuguang Wu, Jun Xiao
15:30-16:00 3:30
16:00-17:30 4pm
Input & Interaction
Human Behavior

Input & Interaction

Eric Larson
Mobile devices have become people's indispensable companion, since they allow each individual to be constantly connected with the outside world. In order to keep connected, the devices periodically send out data, which reveal some information about the device owner. Data sent by these devices can be captured by any external observer. Since the observer can observe only the wireless data, the actual person using the device is unknown. In this work, we propose IdentityLink, an approach leveraging the captured wireless data and computer vision to infer the user-device links, i.e., inferring which device is carried by which user. Knowing the user-device links opens up new opportunities for applications such as identifying unauthorized personnel in enterprises or finding criminals by law enforcement. By conducting experiments in a realistic scenario, we demonstrate how IdentityLink can be effectively applied to real practice.
Le T. Nguyen, Yu Seung Kim, Patrick Tague, Joy Zhang
This paper presents a recognition scheme for fine-grain gestures. The scheme leverages directional antennas and short-range wireless propagation properties to recognize a vocabulary of action-oriented gestures from the American Sign Language. Since the scheme only relies on commonly available wireless features such as Received Signal Strength (RSS), signal phase differences, and frequency subband selection, it is readily deployable on commercial-off-the-shelf IEEE 802.11 devices. We have implemented the proposed scheme and evaluated it in two potential application scenarios: gesture-based electronic activation from a wheelchair and gesture-based control of a car infotainment system. The results show that the proposed scheme can correctly identify and classify up to 25 fine-grain gestures with an average accuracy of 92% for the first application scenario and 84% for the second scenario.
Pedro Melgarejo, Xinyu Zhang, Parameswaran Ramanathan, David Chu
We present BendID, a bendable input device that recognizes the location, magnitude and direction of its deformation. We use BendID to provide users with a tactile metaphor for pressure-based input. The device is constructed by layering an array of indium tin oxide (ITO)-coated PET film electrodes on a polydimethylsiloxane (PDMS) sheet, which is sandwiched between conductive foams. The pressure values that are interpreted from the ITO electrodes are classified using a Support Vector Machine (SVM) algorithm via the Weka library to identify the direction and location of bending. A polynomial regression model is also employed to estimate the overall magnitude of the pressure from the device. A model then maps these variables to a GUI to perform tasks. In this preliminary paper, we demonstrate this device by implementing it as an interface for 3D shape bending and a game controller.
Vinh P Nguyen, Sang Ho Yoon, Ansh Verma, Karthik Ramani
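The classify-then-regress pipeline above can be sketched as follows: bend direction is predicted from an electrode pressure vector, and bend magnitude from a polynomial in the total pressure. The nearest-centroid classifier stands in for the paper's SVM, and the centroids and polynomial coefficients are made up for illustration.

```python
# Sketch: classify bend direction by nearest centroid (a stand-in for
# BendID's SVM) and estimate magnitude with a fixed quadratic (a stand-in
# for their fitted polynomial). All numbers here are invented.

def classify(sample, centroids):
    """Return the label whose centroid is closest to the pressure vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

def magnitude(sample, coeffs=(0.0, 1.0, 0.05)):
    """Quadratic in total pressure: a + b*t + c*t^2 (coefficients assumed)."""
    total = sum(sample)
    a, b, c = coeffs
    return a + b * total + c * total ** 2

centroids = {"left": (1.0, 0.0), "right": (0.0, 1.0)}
print(classify((0.9, 0.1), centroids))  # "left"
```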
Yanxia Zhang, Hans Jörg Müller, Ming Ki Chong, Andreas Bulling, Hans Gellersen
We introduce AirLink, a novel technique for sharing files between multiple devices. By waving a hand from one device towards another, users can directly transfer files between them. The system utilizes the devices’ built-in speakers and microphones to enable easy file sharing between phones, tablets and laptops. We evaluate our system in an 11-participant study, achieving 96.8% accuracy and showing the feasibility of using AirLink in a multiple-device environment. We also implemented a real-time system and demonstrate the capability of AirLink in various applications.
Ke-Yu Chen, Daniel Ashbrook, Mayank Goel, Sung-Hyuck Lee, Shwetak Patel
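One plausible acoustic building block for detecting a hand wave with speakers and microphones is the Doppler shift of an inaudible pilot tone: a hand moving toward the microphone raises the reflected frequency. The sketch below shows only that decision rule; the pilot frequency and threshold are assumptions, and AirLink's actual signal pipeline is not described here.

```python
# Sketch: infer hand-wave direction from the Doppler shift of a pilot
# tone played by a device's speaker. Frequencies and the threshold are
# made up for illustration; this is not AirLink's actual pipeline.

def wave_direction(observed_hz, pilot_hz=18000.0, min_shift=20.0):
    """Positive shift => reflector approaching; negative => receding."""
    shift = observed_hz - pilot_hz
    if shift > min_shift:
        return "toward"
    if shift < -min_shift:
        return "away"
    return "none"

print(wave_direction(18055.0))  # "toward"
```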

Human Behavior

Sunny Consolvo
A number of wearable 'lifelogging' camera devices have been released recently, allowing consumers to capture images and other sensor data continuously from a first-person perspective. Unlike traditional cameras that are used deliberately and sporadically, lifelogging devices are always 'on' and automatically capturing images. Such features may challenge users’ (and bystanders’) expectations about privacy and control of image gathering and dissemination. While lifelogging cameras are growing in popularity, little is known about privacy perceptions of these devices or what kinds of privacy challenges they are likely to create. To explore how people manage privacy in the context of lifelogging cameras, as well as which kinds of first-person images people consider 'sensitive,' we conducted an in situ user study (N = 36) in which participants wore a lifelogging device for a week, answered questionnaires about the collected images, and participated in an exit interview. Our findings indicate that: 1) some people may prefer to manage privacy through in situ physical control of image collection in order to avoid later burdensome review of all collected images; 2) a combination of factors including time, location, and the objects and people appearing in the photo determines its 'sensitivity;' and 3) people are concerned about the privacy of bystanders, despite reporting almost no opposition or concerns expressed by bystanders over the course of the study.
Roberto Hoyle, Robert Templeman, Steven Armes, Denise Anthony, David Crandall, Apu Kapadia
In the context of a myriad of mobile apps which collect personally identifiable information (PII) and a prospective market place of personal data, we investigate a user-centric monetary valuation of mobile PII. During a 6-week long user study in a living lab deployment with 60 participants, we collected their daily valuations of 4 categories of mobile PII (communication, e.g. phone calls made/received, applications, e.g. time spent on different apps, location and media, e.g. photos taken) at three levels of complexity (individual data points, aggregated statistics and processed, i.e. meaningful interpretations of the data). In order to obtain honest valuations, we employ a reverse second price auction mechanism. Our findings show that the most sensitive and valued category of personal information is location. We report statistically significant associations between actual mobile usage, personal dispositions, and bidding behavior. Finally, we outline key implications for the design of mobile services and future markets of personal data.
Jacopo Staiano, Nuria Oliver, Bruno Lepri, Rodrigo de Oliveira, Michele Caraviello, Nicu Sebe


Koji Yatani
Existing location-based social networks (LBSNs), e.g. Foursquare, depend mainly on GPS or network-based localization to infer users' locations. However, GPS is unavailable indoors and network-based localization provides coarse-grained accuracy. This limits the accuracy of current LBSNs in indoor environments, where people spend 89% of their time. This in turn affects the user experience, in terms of the accuracy of the ranked list of venues, especially for the small screens of mobile devices; misses business opportunities; and leads to reduced venue coverage. In this paper, we present CheckInside: a system that can provide a fine-grained indoor location-based social network. CheckInside leverages the crowd-sensed data collected from users' mobile devices during the check-in operation and knowledge extracted from current LBSNs to associate a place with its name and semantic fingerprint. This semantic fingerprint is used to obtain a more accurate list of nearby places as well as automatically detect new places with similar signatures. A novel algorithm for handling incorrect check-ins and inferring a semantically-enriched floorplan is proposed, as well as an algorithm for enhancing the system performance based on the user's implicit feedback. Evaluation of CheckInside in four malls over the course of six weeks with 20 participants shows that it can provide the actual user location within the top five venues 99% of the time. This is compared to only 17% in the case of current LBSNs. In addition, it can increase the coverage of current LBSNs by more than 25%.
Moustafa Elhamshary, Moustafa Youssef
'People-nearby applications' (PNAs) are a form of ubiquitous computing that connect users based on their physical location data. One example is Grindr, a popular PNA that facilitates connections among gay and bisexual men. Adopting a uses and gratifications approach, we conducted two studies. In study one, 63 users reported motivations for Grindr use through open-ended descriptions. In study two, those descriptions were coded into 26 items that were completed by 525 Grindr users. Factor analysis revealed six uses and gratifications: social inclusion, sex, friendship, entertainment, romantic relationships, and location-based search. Two additional analyses examine (1) the effects of geographic location (e.g., urban vs. suburban/rural) on men’s use of Grindr and (2) how Grindr use is related to self-disclosure of information. Results highlight how the mixed-mode nature of PNA technology may change the boundaries of online and offline space, and how gay and bisexual men navigate physical environments.
Chad Van De Wiele, Stephanie Tom Tong
Automatic check-in, which is to identify a user's visited points of interest (POIs) from his or her trajectories, is still an open problem because of positioning errors and the high POI density in small areas. In this study, we propose a probabilistic visited-POI identification method. The method uses a new hierarchical Bayesian model for identifying the latent visited-POI label of stay points, which are automatically extracted from trajectories. This model learns from labeled and unlabeled stay point data (i.e., semi-supervised learning) and takes into account personal preferences, stay locations including positioning errors, stay times for each category, and prior knowledge about typical user preferences and stay times. Experimental results with real user trajectories and POIs of Foursquare demonstrated that our method achieved statistically significant improvements in precision at 1 and recall at 3 over the nearest neighbor method and a conventional method that uses a supervised learning-to-rank algorithm.
Kyosuke Nishida, Hiroyuki Toda, Takeshi Kurashima, Yoshihiko Suhara
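A toy stand-in for the probabilistic identification described above: score each candidate POI for a stay point by combining a Gaussian distance likelihood (accounting for positioning error) with a per-user category-preference prior, then rank. The sigma, prior values, and default preference are assumptions; the paper's hierarchical Bayesian model is far richer.

```python
# Sketch: rank candidate POIs for a stay point by (distance likelihood
# x category-preference prior). A toy stand-in for the paper's
# hierarchical Bayesian model; all constants are illustrative.

import math

def score(stay, poi, prefs, sigma=30.0):
    """Gaussian likelihood of the stay location times a preference prior."""
    d = math.hypot(stay[0] - poi["x"], stay[1] - poi["y"])
    likelihood = math.exp(-(d * d) / (2 * sigma * sigma))
    prior = prefs.get(poi["category"], 0.1)  # default prior is assumed
    return likelihood * prior

pois = [{"x": 0, "y": 10, "category": "cafe"},
        {"x": 0, "y": 80, "category": "gym"}]
prefs = {"cafe": 0.6, "gym": 0.4}
best = max(pois, key=lambda p: score((0, 0), p, prefs))
print(best["category"])  # "cafe"
```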
The smartphone contact list has the potential to be a valuable source of data about personal relationships. To understand how we might data mine the information that people store in their contact lists, we collected the contact lists of 54 participants. Initially we found that the majority of contact list features were unused. However, a further examination of the 'name' field revealed a broad variety of contact-naming behaviors. We observed contact 'name' fields that included affiliations, relationship role labels, multiple names, phone types, and references to companies / services / places. People’s appropriation and usage of contact lists have implications for automated attempts to merge or mine contact lists that assume people use the features and structure of the contact list tool as intended. They also offer new opportunities for data mining to better describe relationships between users and their contacts.
Jason Wiese, Jason I. Hong, John Zimmerman
17:30-18:15 5:30
Town Hall Meeting
18:30-22:30 6:30
EMP Reception / ISWC Design Exhibition

The conference reception will be held at the Experience Music Project (EMP) Museum. For more information about the reception, click here.

The ISWC design exhibition includes original works of wearable technology and/or novel applications for new audiences using existing technologies. Submissions may comprise any type of wearable technology (electronic, mechanical, textile and garment-based, etc.). Awards for the best design will be given in three categories: Aesthetic, Functional, and Fiber Art.

For a list of ISWC design exhibitors, click here.

19:00 7pm
Registration / Help desk closes
End of day


Seattle 1 & 2
Seattle 3
Emerald 2
08:00 8am
Registration / Help desk opens
09:00-10:30 9am
Body Signals
Industry panel
Sensing the Crowd

Body Signals

Alanson Sample
Sleep quality plays a significant role in personal health. A great deal of effort has been devoted to designing sleep quality monitoring systems, providing services ranging from bedtime monitoring to sleep activity detection. However, as sleep quality is closely related to the distribution of sleep duration over different sleep stages, neither the bedtime nor the intensity of sleep activities is able to reflect sleep quality precisely. To this end, we present Sleep Hunter, a mobile service that provides fine-grained detection of sleep stage transitions for sleep quality monitoring and an intelligent wake-up call. The rationale is that each sleep stage is accompanied by specific yet distinguishable body movements and acoustic signals. Leveraging the built-in sensors on smartphones, Sleep Hunter integrates these physical activities with the sleep environment, inherent temporal relations and personal factors via a statistical model for fine-grained sleep stage detection. Based on the duration of each sleep stage, Sleep Hunter further provides a sleep quality report and a smart call service for users. Experimental results from over 30 sets of nocturnal sleep data show that our system is superior to existing actigraphy-based sleep quality monitoring systems, and achieves satisfactory detection accuracy compared with dedicated polysomnography-based devices.
Weixi Gu, Zheng Yang, Longfei Shangguan, Wei Sun, Kun Jin, Yunhao Liu
People interact with chairs frequently, making them a potential location to perform implicit health sensing that requires no additional effort by users. We surveyed 550 participants to understand how people sit in chairs and inform the design of a chair that detects heart and respiratory rate from the armrests and backrests of the chair respectively. In a laboratory study with 18 participants, we evaluated a range of common sitting positions to determine when heart rate and respiratory rate detection was possible (32% of the time for heart rate, 52% for respiratory rate) and evaluate the accuracy of the detected rate (83% for heart rate, 73% for respiratory rate). We discuss the challenges of moving this sensing to the wild by evaluating an in-situ study totaling 40 hours with 11 participants. We show that, as an implicit sensor, the chair can collect vital signs data from its occupant through natural interaction with the chair.
Erin Griffiths, T. Scott Saponas, A.J. Bernheim Brush
We often think of ourselves as individuals with steady capabilities. However, converging strands of research indicate that this is not the case. Our biochemistry varies significantly over the course of a 24 hour period. Consequently our levels of alertness, productivity, physical activity, and even sensitivity to pain fluctuate throughout the day. This offers a considerable opportunity for the UbiComp community to identify novel measurements and interventions that can leverage these daily variations. To illustrate this potential, we present results from an empirical study with 9 participants over 97 days investigating whether such variations manifest in low-level smartphone use, focusing on daily rhythms related to sleep. Our findings demonstrate that phone usage patterns can be used to detect and predict individual daily variations indicative of temporal preference, sleep duration, and deprivation. We also identify opportunities and challenges for measuring and enhancing well-being using these simple and effective markers of circadian rhythms.
Saeed Abdullah, Mark Matthews, Elizabeth L Murnane, Geri Gay, Tanzeem Choudhury
There is a growing demand for daily heart rate (HR) monitoring in the fields of healthcare, fitness, activity recognition, and entertainment. Although various HR monitoring systems have been proposed, most of these employ a wearable device, which may be a burden and disturb one's daily living. To achieve the goal of pervasive HR monitoring in daily living, we present an HR monitoring method that works through the surface of a drinkware. The proposed method employs the surface of a drinkware as a broad sensing region by expanding the principle of a basic photo-based HR sensor. The sensing surface works even with a curved shape, and it can be applied to various types of drinkware. This approach enables unobtrusive HR monitoring during beverage consumption. As a prototype, we implemented the proposed method on an ordinary transparent tumbler and evaluated its HR monitoring performance.
Hiroshi Chigira, Masayuki Ihara, Minoru Kobayashi, Akimichi Tanaka, Tomohiro Tanaka

Industry panel

Florian Michahelles, Siemens Corporation
Transitioning from ubiquitous and wearable computing research to business practice

The ubicomp and ISWC research communities anticipated trends such as wearable technologies, hands-free interfaces, bring-your-own-device, and navigation systems more than a decade ago. Prototypes once evaluated in user studies with 20 students are now downloaded as apps, activity recognition algorithms can be sourced in chipsets, and hardware prototyping platforms are becoming available as smart-thing infrastructures. While consumer-focused industries may benefit from these trends more directly, professional domains in industry are also starting to adopt ubicomp and wearable technologies.

It is the goal of this panel to collect use cases of successful transitions from research to practice, to understand the challenges of moving from research to product, and to identify open research questions yet to be solved.

The panel brings together a number of experts from these fields to discuss:

  • sample cases of ubiquitous and wearable computing research results taking effect in industry,
  • ingredients of successful transition from research to practice,
  • challenges of turning research prototypes into products,
  • how research generates impact on practice.

  Davide Vigano, CEO of Sensoria
  Mary Czerwinski, Microsoft Research
  Shwetak Patel, University of Washington, founder of SNUPI and Zensi
  Albrecht Schmidt, University of Stuttgart

Sensing the Crowd

Nic Lane
Crowdsensing technologies are rapidly evolving and are expected to be used in commercial applications such as location-based services. Crowdsensing collects sensory data from users' daily activities without burdening them, and the data size is expected to grow to a population scale. However, quality of service is difficult to ensure for commercial use. Incentive design in crowdsensing, using monetary rewards or gamification, is therefore attracting attention as a way to motivate participants to collect data and so increase data quantity. In contrast, we propose Steered Crowdsensing, which controls the incentives of users through game elements on location-based services to directly improve the quality of service rather than the data size. For a feasibility study of steered crowdsensing, we deployed a crowdsensing system focusing on application scenarios of building processes for wireless indoor localization systems. In the results, steered crowdsensing realized deployments faster than non-steered crowdsensing while using only half as much data.
Ryoma Kawajiri, Masamichi Shimosaka, Hisashi Kashima
This paper proposes a novel participant selection framework, named CrowdRecruiter, for mobile crowdsensing. CrowdRecruiter operates on top of the energy-efficient Piggyback Crowdsensing (PCS) task model and minimizes incentive payments by selecting a small number of participants while still satisfying a probabilistic coverage constraint. In order to achieve this objective when piggybacking crowdsensing tasks on phone calls, CrowdRecruiter first predicts the call and coverage probability of each mobile user based on historical records. It then efficiently computes the joint coverage probability of multiple users as a combined set and selects the near-minimal set of participants that meets the coverage ratio requirement in each sensing cycle of the PCS task. We evaluated CrowdRecruiter extensively using a large-scale real-world dataset and the results show that the proposed solution significantly outperforms three baseline algorithms by selecting 10.0% - 73.5% fewer participants on average under the same probabilistic coverage constraint.
Daqing Zhang, Haoyi Xiong, Leye Wang, Guanling Chen
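The participant-selection step described in the CrowdRecruiter abstract can be viewed as a greedy coverage heuristic. The following Python sketch is purely illustrative and is not the authors' code: all names, per-cell probabilities, and the coverage target are invented.

```python
# Illustrative sketch of greedy participant selection under a
# probabilistic coverage constraint, in the spirit of CrowdRecruiter.
# probs[user][cell] = predicted probability that the user reports
# sensor data from that cell during a sensing cycle.

def cell_coverage(selected, probs, cell):
    """Probability that at least one selected user covers `cell`."""
    miss = 1.0
    for user in selected:
        miss *= 1.0 - probs[user].get(cell, 0.0)
    return 1.0 - miss

def greedy_select(probs, cells, target):
    """Greedily add users until the mean cell coverage reaches `target`."""
    selected, remaining = [], set(probs)

    def mean_cov(users):
        return sum(cell_coverage(users, probs, c) for c in cells) / len(cells)

    while remaining and mean_cov(selected) < target:
        # pick the user with the largest marginal gain in joint coverage
        best = max(remaining, key=lambda u: mean_cov(selected + [u]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Adding the user with the largest marginal gain at each step is a standard heuristic for such coverage objectives; the paper's actual selection and probability models are more involved.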
Yu Zheng, Tong Liu, Yilun Wang, Yanmin Zhu, Yanchi Liu, Eric Chang
In this paper we argue the need for orchestration support for participatory campaigns to achieve campaign quality, and for automation of that support to achieve scalability, both of which contribute to stakeholder usability. This goes further than providing support for defining campaigns, an issue tackled in prior work. We provide a formal definition of a campaign by extracting commonalities from the state of the art and from expertise in organising noise mapping campaigns. Next, we formalise how to ensure campaigns end successfully, and translate this formal notion into an operational recipe for dynamic orchestration. We then present a framework for automating campaign definition, monitoring and orchestration which relies on workflow technology. The framework is validated by re-enacting several campaigns previously run through manual orchestration and quantifying the increased efficiency.
Ellie D'Hondt, Jesse Zaman, Eline Philips, Elisa Gonzalez Boix, Wolfgang De Meuter
10:30-11:00 10:30
11:00-12:30 11am
Assistive devices
UbiComp at Work


Alastair Beresford
Mohit Sethi, Elena Oat, Mario Di Francesco, Tuomas Aura
Activity-based social networks, where people upload and share information about their location-based activities (e.g., the routes of their activities), are increasingly popular. Such systems, however, raise privacy and security issues: The service providers know the exact locations of their users; the users can report fake location information in order to, for example, unduly brag about their performance. In this paper, we propose a secure privacy-preserving system for reporting location-based activity summaries (e.g., the total distance covered and the elevation gain). Our solution is based on a combination of cryptographic techniques and geometric algorithms, and it relies on existing Wi-Fi access-point networks deployed in urban areas. We evaluate our solution by using real data sets from the FON community networks and from the Garmin Connect activity-based social network, and we show that it can achieve tight (up to a median accuracy of 76%) verifiable lower bounds on the distance covered and the elevation gain, while protecting the location privacy of the users with respect to both the social network operator and the access-point network operator(s).
Anh Pham, Kévin Huguenin, Igor Bilogrevic, Jean-Pierre Hubaux
This paper presents Zero-Effort Payments (ZEP), a seamless mobile computing system designed to accept payments with no effort on the customer’s part beyond a one-time opt-in. With ZEP, customers need not present cards nor operate smartphones to convey their identities. ZEP uses three complementary identification technologies: face recognition, proximate device detection, and human assistance. We demonstrate that the combination of these technologies enables ZEP to scale to the level needed by our deployments. We designed and built ZEP, and demonstrated its usefulness across two real-world deployments spanning five months of continuous operation and serving 274 customers. The different nature of our deployments stressed different aspects of our system, and these challenges led to several system design changes to improve scalability and fault-tolerance.
Christopher Smowton, Jacob R Lorch, David Molnar, Stefan Saroiu, Alec Wolman
Touch-enabled user interfaces have become ubiquitous, such as on ATMs or portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: A lab-based security study involving 20 participants in attacking user-defined passwords, using high quality pictures of real smudge traces captured on a mobile phone display; and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. Results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.
Stefan Schneegass, Frank Steimle, Andreas Bulling, Florian Alt, Albrecht Schmidt

Assistive devices

Janet van der Linden
Passive Haptic Learning of Braille Typing
Passive Haptic Learning (PHL) is the acquisition of sensorimotor skills without active attention to learning. One method is to "teach" motor skills using vibration cues delivered by a wearable, tactile interface while the user is focusing on another, primary task. We have created a system for Passive Haptic Learning of typing skills. In a study with 16 participants, users demonstrated significantly reduced error typing a phrase in Braille after receiving passive instruction versus control (32.85% average decline in error vs. 2.73% increase in error). PHL users were also able to recognize and read more Braille letters from the phrase (72.5% vs. 22.4%). In a second study, with 8 participants thus far, we passively teach the full Braille alphabet over four sessions. Typing error reductions in participants receiving PHL were more rapid and consistent, with 75% of PHL vs. 0% of control users reaching zero typing error. By the end of the study, PHL participants were also able to recognize and read 93.3% of all Braille alphabet letters. These results suggest that Passive Haptic instruction facilitated by wearable computers may be a feasible method of teaching Braille typing and reading.
Caitlyn Seim, John Chandler, Kayla DesPortes, Siddharth Dhingra, Miru Park, Thad Starner
An Assistive EyeWear Prototype that interactively converts 3D Object Locations into Spatial Audio
We present an end-to-end prototype for an assistive EyeWear system aimed at Vision Impaired users. The system uses computer vision to detect objects on planar surfaces and sonifies their 3D locations using spatial audio. A key novelty of the system is that it operates in real time (15Hz), allowing the user to interactively affect the audio feedback by actively moving a headworn sensor. A quantitative user study was conducted on 12 blindfolded subjects performing an object localisation and placement task using our system. This detailed study of near field interactive spatial audio for users operating at around arm's length departs from existing studies focused on far-field audio and non-interactive systems. The object localisation accuracy achieved on naive users suggests that the EyeWear prototype has a lot of potential as a real world assistive device. User feedback collected from exit surveys and mathematical modelling of user errors provide several promising avenues to further improve system performance.
Titus J. J. Tang, Wai Ho Li
Crossroads are among the most dangerous outdoor areas for visually impaired people. Numerous studies have explored navigation systems for the visually impaired community, providing services ranging from block detection and route planning to real-time localization. However, none of them has addressed the safety issue at crossroads or integrated the three key factors necessary for a practical crossroad navigation system: detecting the crossroad, locating zebra patterns, and guiding the user within the zebra crossing while crossing the road. Our CrossNavi application responds to these needs, providing an integrated crossroad navigation service that incorporates all the essential functionalities mentioned above. The overall service is fulfilled by the collaboration of built-in sensors on commodity phones, and requires minimal human participation. We describe the technical aspects of its design, implementation and interface, and further improvements to make the system practical on a wider basis. Experimental results from three visually impaired volunteers show that the system exhibits promising behavior in both urban and rural areas.
Longfei Shangguan, Zheng Yang, Zimu Zhou, Xiaolong Zheng, Chenshu Wu, Yunhao Liu
Color blindness is a highly prevalent vision impairment that inhibits people's ability to understand colors. Although classified as a mild disability, color blindness has important effects on the daily activity of people, preventing them from performing their tasks in the most natural and effective ways. In order to address this issue we developed Chroma, a wearable augmented-reality system based on Google Glass that allows users to see a filtered image of the current scene in real-time. Chroma automatically adapts the scene-view based on the type of color blindness, and features dedicated algorithms for color saliency. Based on interviews with 23 people with color blindness we implemented four modes to help colorblind individuals distinguish colors they usually can't see. Although Glass still has important limitations, initial tests of Chroma in the lab show that colorblind individuals using Chroma can improve their color recognition in a variety of real-world activities. The deployment of Chroma on a wearable augmented-reality device makes it an effective digital aid with the potential to augment everyday activities, effectively providing access to different color dimensions for colorblind people.
Enrico Tanuwidjaja, Derek Huynh, Kirsten Koa, Calvin Nguyen, Churen Shao, Patrick Torbett, Colleen Emmenegger, Nadir Weibel

UbiComp at Work

Nigel Davies
The layouts of the buildings we live in shape our everyday lives. In office environments, building spaces affect employees' communication, which is crucial for productivity and innovation. However, accurate measurement of how spatial layouts affect interactions is a major challenge and traditional techniques may not give an objective view. We measure the impact of building spaces on social interactions using wearable sensing devices. We study a single organization that moved between two different buildings, affording a unique opportunity to examine how space alone can affect interactions. The analysis is based on two large-scale deployments of wireless sensing technologies: short-range, lightweight RFID tags capable of detecting face-to-face interactions. We analyze the traces to study the impact of the building change on social behavior, which represents a first example of using ubiquitous sensing technology to study how the physical design of two workplaces combines with organizational structure to shape contact patterns.
Chloe Brown, Christos Efstratiou, Ilias Leontiadis, Daniele Quercia, Cecilia Mascolo, James Scott, Peter Key
In this paper, we explore using large digital displays in combination with a personal mobile application to publicly and privately encourage people to make healthy choices. We designed, built, and deployed an experimental system called Lunch Line that promoted healthy eating. Lunch Line includes a public display that enables passersby to view the reported eating behavior of a group of people and take on daily "food challenges," and a mobile web application that allows users to record personal food choices, report challenge achievement, and compare their choices with other users and with USDA recommendations. Results from a 3-week field evaluation at a company cafeteria showed that our integrated system was effective in drawing public attention, delivering challenges, enabling self-tracking and self-reflection, and providing feedback on personal and group choices. We share lessons on how to design future systems that integrate situated public displays and personal mobile devices to encourage healthy choices.
Kerry Shih-Ping Chang, Catalina M Danis, Robert G Farrell
We describe a qualitative study of delegate engagement with technology in academic conferences through a large-scale deployment of prototype technologies. These deployments represent current themes in conference technologies, such as providing access to content and opportunities for socialising between delegates. We consider not just the use of individual technologies, but also the overall impact of an assemblage of interfaces, ranging from ambient to interactive and mobile to situated. Based on a two-week deployment followed by interviews and surveys of attendees, we discuss the ways in which delegates engaged with the prototypes and the implications this had for their experience of the conferences. From our findings, we draw three new themes to inform the development of future conference technologies.
Nick Taylor, Tom Bartindale, John Vines, Patrick Olivier
Longbiao Chen, Daqing Zhang, Gang Pan, Leye Wang, Xiaojuan Ma, Chao Chen, Shijian Li
12:30-14:00 12:30
14:00-15:30 2pm
Children's Therapy
Interruptability & Notifications
Cars & Driving

Children's Therapy

Hao-Hua Chu
Nonverbal children with communication disorders have difficulties communicating through oral language. To facilitate communication, Augmentative and Alternative Communication (AAC) is commonly used in intervention settings. Different forms of AAC have been used; however, one key aspect of AAC is that children have different preferences and needs in the intervention process: one particular AAC method does not necessarily work for all children. Although robots have been used in different applications, this is one of the first times robots have been used to improve communication in nonverbal children. In this work, we explore robot-based AAC through humanoid robots that assist therapists in interventions with nonverbal children. Through play activities, our study assessed changes in gestures, vocalization, speech, and verbal expression in children. Our initial results show that robot-based AAC intervention has a positive impact on the communication skills of nonverbal children.
Kyunghea Jeon, Seok Jeong Yeon, Young Tae Kim, SeokWoo Song, John Kim
This paper extends previous work on automatically detecting stereotypical motor movements (SMM) in individuals on the autism spectrum. Using three-axis accelerometer data obtained through wearable wireless sensors, we compare recognition results for two different classifiers, Support Vector Machine and Decision Tree, in combination with different feature sets based on time-frequency characteristics of accelerometer data. We use data collected from six individuals on the autism spectrum who participated in two different studies conducted three years apart in classroom settings, and observe an average accuracy across all participants over time ranging from 81.2% (TPR: 0.91; FPR: 0.21) to 99.1% (TPR: 0.99; FPR: 0.01) for all combinations of classifiers and feature sets. We also provide analyses of kinematic parameters associated with observed movements in an attempt to explain classifier-feature specific performance. Based on our results, we conclude that real-time, person-dependent, adaptive algorithms are needed in order to accurately and consistently measure SMM automatically in individuals on the autism spectrum over time in real-world settings.
Matthew S Goodwin, Marzieh Haghighi, Qu Tang, Murat Akcakaya, Deniz Erdogmus, Stephen S Intille
Multimodal and natural user interfaces offer an innovative approach to sensory integration therapies. We designed and developed SensoryPaint, a multimodal system that allows users to paint on a large display using physical objects, body-based interactions, and interactive audio. We evaluated the impact of SensoryPaint through two user studies: a lab-based study of 15 children with neurodevelopmental disorders in which they used the system for up to one hour, and a deployment study with four children with autism, during which the system was integrated into existing daily sensory therapy sessions. Our results demonstrate that a multimodal large display, using whole body interactions combined with tangible interactions and interactive audio feedback, balances children’s attention between their own bodies and sensory stimuli, augments existing therapies, and promotes socialization. These results offer implications for the design of other ubicomp systems for children with neurodevelopmental disorders and for their integration into therapeutic interventions.
Kathryn E Ringland, Rodrigo Zalapa, Megan Neal, Lizbeth Escobedo, Monica Tentori, Gillian R Hayes
Situated displays can support behavior management for children with behavioral challenges. However, existing tools are often static, rarely engaging, and tend to focus only on individual behavior. In this work, we designed and deployed a situated display to support teamwork and cooperation in children with behavioral challenges. We evaluated this tool in two classrooms of a public school specializing in behavioral interventions with 28 children over four weeks. The results of this work demonstrate that situated displays focused on collective behavioral performance can support reflection on individual performance, improve behavior for students with behavioral challenges, as well as encourage teamwork and cooperative behavior in classrooms. These results also indicate a variety of issues to be considered when designing situated displays for these environments, including considerations for the representation of ambiguity and failure as well as the relationship between novelty and engagement.
Aleksandar Matic, Gillian R Hayes, Monica Tentori, Maryam Abdullah, Sabrina Schuck

Interruptability & Notifications

Hans Gellersen
The mobile phone represents a unique platform for interactive applications that can harness the opportunity of an immediate contact with a user in order to increase the impact of the delivered information. However, this accessibility does not necessarily translate to reachability, as recipients might refuse an initiated contact or disfavor a message that comes in an inappropriate moment. In this paper we seek to answer whether, and how, suitable moments for interruption can be identified and utilized in a mobile system. We gather and analyze a real-world smartphone data trace and show that users' broader context, including their activity, location, time of day, emotions and engagement, determine different aspects of interruptibility. We then design and implement InterruptMe, an interruption management library for Android smartphones. An extensive experiment shows that, compared to a context-unaware approach, interruptions elicited through our library result in increased user satisfaction and shorter response times.
Veljko Pejovic, Mirco Musolesi
Wearable wireless sensors for health monitoring are enabling the design and delivery of just-in-time interventions (JITI). Critical to the success of JITI is to time its delivery so that the user is available to be engaged. We take a first step in modeling users' availability by analyzing 2,064 hours of physiological sensor data and 2,717 self-reports collected from 30 participants in a week-long field study. We use delay in responding to a prompt to objectively measure availability. We compute 99 features and identify 30 as most discriminating to train a machine learning model for predicting availability. We find that location, affect, activity type, stress, time, and day of the week, play significant roles in predicting availability. We find that users are least available at work and during driving, and most available when walking outside. Our model finally achieves an accuracy of 74.7% in 10-fold cross-validation and 77.9% with leave-one-subject-out.
Hillol Sarker, Moushumi Sharmin, Amin A Ali, Md Mahbubur Rahman, Rummana Bari, Syed Monowar Hossain, Santosh Kumar
Recently, Location-based Services (LBS) have become proactive by supporting smart notifications when the user enters or leaves a specific geographical area, a technique well known as Geofencing. However, different geofences cannot be temporally related to each other. We therefore introduce a novel method to formalize sophisticated geofencing scenarios as state- and transition-based geofence models. Such a model considers temporal relations between geofences as well as duration constraints on the time spent within a geofence or in transition between geofences. These two aspects are highly important for covering sophisticated scenarios in which a notification should be triggered only when the user crosses multiple geofences in a defined temporal order or leaves a geofence after a certain amount of time. As a proof of concept, we introduce a prototype of a suitable user interface for designing complex geofence models in conjunction with the corresponding proactive LBS.
Sandro Rodriguez Garzon, Bersant Deva
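The state- and transition-based geofence model described in this abstract can be pictured as a small state machine. The sketch below is a hypothetical illustration, not the authors' system: it fires only after the user has dwelt a minimum time in each fence, in the required order.

```python
# Minimal sketch of a transition-based geofence sequence with a
# dwell-time constraint. All names and parameters are illustrative;
# a real proactive LBS would also handle GPS noise, overlapping
# fences, and transition timeouts.

class GeofenceSequence:
    def __init__(self, order, min_dwell):
        self.order = order          # required fence order, e.g. ["home", "station"]
        self.min_dwell = min_dwell  # seconds the user must stay in each fence
        self.idx = 0                # index of the fence currently awaited
        self.entered_at = None      # timestamp of entering the awaited fence

    def update(self, fence, ts):
        """Feed (current fence name or None, timestamp); True when complete."""
        if self.idx >= len(self.order):
            return True
        target = self.order[self.idx]
        if fence == target:
            if self.entered_at is None:
                self.entered_at = ts
            elif ts - self.entered_at >= self.min_dwell:
                self.idx += 1           # dwell satisfied: advance to next fence
                self.entered_at = None
                if self.idx == len(self.order):
                    return True
        else:
            self.entered_at = None      # left the fence: dwell timer resets
        return False
```

Feeding location updates through `update` triggers the notification only when the fences are crossed in the defined temporal order, which is the behavior the abstract describes.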
We contribute evidence on the extent to which sensor and contextual information available on mobile phones can predict whether a user will pick up a call. Using an app publicly available for Android phones, we logged anonymous data from 31,311 calls of 418 different users. The data shows that information easily available on mobile phones, such as the time since the last call, the time since the last ringer mode change, or the device posture, can predict call availability with an accuracy of 83.2% (Kappa = .646). Personalized models can increase the accuracy to 87% on average. Features related to when the user was last active turned out to be strong predictors. This shows that simple contextual cues approximating user activity are worth investigating when designing context-aware ubiquitous communication systems.
Martin Pielot

Cars & Driving

Gerd Kortuem
We propose a method to estimate car-level train congestion using Bluetooth RSSI observed by passengers' mobile phones. Our approach employs a two-stage algorithm in which the car-level location of passengers is estimated in order to infer car-level train congestion. Through analysis of over 50,000 real Bluetooth samples, we have learned that Bluetooth signals attenuate due to passengers' bodies, distance, and the doors between cars. Based on this prior knowledge, our algorithm is designed as a Bayesian likelihood estimator, and is robust to changes in both passengers and congestion at stations. The car-level positions are useful for passengers' personal navigation inside stations, and car-level train congestion information helps passengers determine better strategies for taking trains. Through a field experiment, we have confirmed that the algorithm can estimate the location of 16 passengers with 83% accuracy and estimate train congestion with an F-measure of 0.82 on average.
Yuki Maekawa, Akira Uchiyama, Hirozumi Yamaguchi, Teruo Higashino
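The likelihood-estimation idea in this abstract, that RSSI attenuation reflects how many car boundaries separate two phones, can be sketched with a toy Gaussian model. The per-gap mean RSSI values and noise level below are invented for illustration and are not taken from the paper.

```python
# Toy maximum-likelihood estimator in the spirit of the Bayesian
# car-level localization described above. MEAN_RSSI maps the number
# of car boundaries between two phones to an assumed mean RSSI.
MEAN_RSSI = {0: -60.0, 1: -75.0, 2: -85.0}  # mean dBm per car gap (invented)
SIGMA = 6.0                                  # assumed Gaussian noise std (dB)

def log_likelihood(rssi_samples, gap):
    """Gaussian log-likelihood (up to a constant) of samples for a gap."""
    mu = MEAN_RSSI[gap]
    return sum(-((r - mu) ** 2) / (2 * SIGMA ** 2) for r in rssi_samples)

def estimate_gap(rssi_samples):
    """Most likely number of car boundaries between two phones."""
    return max(MEAN_RSSI, key=lambda g: log_likelihood(rssi_samples, g))
```

Averaging many such pairwise estimates over passengers is what lets a system of this kind recover car-level positions despite noisy individual readings; the paper's actual estimator additionally models bodies and congestion changes.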
Road latent cost, which quantifies how desirable each road is for traveling, is important information for enabling many smart-city applications such as route recommendation. Arguably, vehicle trajectories are a good source from which to learn these costs, as drivers intelligently incorporate them into their routing decisions. However, major past approaches misinterpret drivers' behaviors and suffer from the trajectory sparsity problem, mainly because they adopt an edge-centric perspective that fails to exploit the sequential information in entire trajectories. To address these shortcomings, we model drivers' routing decision process, which targets global path optimality, and present a framework to reliably discover these costs by exploiting entire trajectories while isolating the influence of heterogeneous destinations. Extensions are also made to address several practical issues. Extensive experiments on real-world data show that the road costs learned in this way significantly outperform past approaches in several urban computing tasks and require less data for learning.
Jiangchuan Zheng, Lionel Ni
Searching for parking spots generates frustration and pollution. To address these parking problems, we present PocketParker, a crowdsourcing system using smartphones to predict parking lot availability. PocketParker is an example of a subset of crowdsourcing we call pocketsourcing. Pocketsourcing applications require no explicit user input or additional infrastructure, running effectively without the phone leaving the user's pocket. PocketParker detects arrivals and departures by leveraging existing activity recognition algorithms. Detected events are used to maintain per-lot availability models and respond to queries. By estimating the number of drivers not using PocketParker, a small fraction of drivers can generate accurate predictions. Our evaluation shows that PocketParker quickly and correctly detects parking events and is robust to the presence of hidden drivers. Camera monitoring of several parking lots as 105 PocketParker users generated 10,827 events over 45 days shows that PocketParker was able to correctly predict lot availability 94% of the time.
Anandatirtha Nandugudi, Taeyeon Ki, Carl Nuessle, Geoffrey Challen
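The per-lot availability model in the PocketParker abstract can be illustrated with a simple counter that scales each observed event by the estimated fraction of drivers running the app. This is a hypothetical sketch, not the authors' implementation; capacity, penetration, and event handling are all simplified.

```python
# Toy per-lot availability model in the spirit of PocketParker:
# arrivals/departures detected on app users' phones are scaled up to
# account for the hidden drivers who do not run the app.

class LotModel:
    def __init__(self, capacity, penetration, occupied=0.0):
        self.capacity = capacity        # total spots in the lot
        self.penetration = penetration  # est. fraction of drivers with the app
        self.occupied = float(occupied) # current occupancy estimate

    def on_event(self, kind):
        # each observed event stands in for ~1/penetration real drivers
        delta = 1.0 / self.penetration
        if kind == "arrival":
            self.occupied = min(float(self.capacity), self.occupied + delta)
        elif kind == "departure":
            self.occupied = max(0.0, self.occupied - delta)

    def availability(self):
        """True if the model predicts at least one free spot."""
        return self.occupied < self.capacity
```

The key design point the abstract highlights, that a small fraction of participating drivers suffices, corresponds here to the 1/penetration scaling; the real system estimates the hidden-driver fraction rather than assuming it.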
Today people have the opportunity to opt-in to usage-based automotive insurances for reduced premiums by allowing companies to monitor their driving behavior. Several companies claim to measure only speed data to preserve privacy. With our elastic pathing algorithm, we show that drivers can be tracked by merely collecting their speed data and knowing their home location, which insurance companies do, with an accuracy that constitutes privacy intrusion. To demonstrate the algorithm's real-world applicability, we evaluated its performance with datasets from central New Jersey and Seattle, Washington, representing suburban and urban areas. Our algorithm predicted destinations with error within 250 meters for 14% of traces and within 500 meters for 24% of traces in the New Jersey dataset (254 traces). For the Seattle dataset (691 traces), we similarly predicted destinations with error within 250 and 500 meters for 13% and 26% of the traces respectively. Our work shows that these insurance schemes enable a substantial breach of privacy.
Xianyi Gao, Bernhard Firner, Shridatt Sugrim, Victor Kaiser-Pendergrast, Yulong Yang, Janne Lindqvist
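The intuition behind the elastic pathing result above is that a speed trace alone pins down the distance driven from a known starting point, which already rules out most candidate destinations. The sketch below is a drastic simplification for illustration only (the actual algorithm elastically matches the trace against road-network geometry); all names and distances are invented.

```python
# Highly simplified illustration of why speed traces leak location:
# integrating the speed trace gives the distance driven, which can be
# matched against candidate destinations' road distances from home.

def driven_distance(speeds_mps, dt=1.0):
    """Integrate a speed trace (m/s, sampled every dt seconds) to metres."""
    return sum(v * dt for v in speeds_mps)

def plausible_destinations(speeds_mps, road_dists, dt=1.0, tol=250.0):
    """Destinations whose road distance from home matches the trace."""
    d = driven_distance(speeds_mps, dt)
    return [name for name, dist in road_dists.items() if abs(dist - d) <= tol]
```

Even this crude filter shows how "speed only" telemetry combined with a known home location narrows down destinations, which is the privacy point the paper makes rigorously.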
15:30-16:00 3:30
16:00-17:30 4pm

Open Data Kit: Applications of Mobile Devices in the Developing World

Gaetano Borriello

Click here for more about this keynote.

Note that the keynote will take place in Seattle 1, 2 and 3.

End of day
18:00 6pm
Registration / Help desk closes