September 11–14, 2005 | Tokyo, Japan
The Seventh International Conference on Ubiquitous Computing

Seeing is Believing!

Call for Demos (Closed)

Demonstrations have become a staple of the Ubicomp conference series, providing a unique opportunity for conference attendees to experience firsthand what a future full of ubiquitous computing will actually feel like.

By seeing and interacting with the latest and greatest ubiquitous computing systems and technology, both delegates and presenters can significantly broaden their understanding of the field, going beyond the thorough theoretical analyses presented in the paper sessions. Trying out new interaction techniques, discussing programming and performance issues, getting practical advice on creating smart artifacts -- all this and more is what makes the Demo session such an essential part of any Ubicomp conference!

Invited Demo Session

Sponsor Demo Session


D1: A Mobile Context Reactive User Interface Augmented by Natural Language

Babak Hodjat, Siamak Hodjat, Nick Treadgold (iAnywhere Solutions)

The user interfaces of mobile devices generally have space and entry restrictions that limit the usability of mobile applications. The Context Reactive User Experience (CRUSE) is a framework enabling the delivery of applications and services to mobile users in a usable form. At any given time, CRUSE combines the available context, user preferences and user behavior to present the options most likely to be selected. Since such predictions cannot always be correct, the interface is augmented with a natural language text box.


D2: A Wearable System for Supporting Motorbike Races - Suzuka 8 hours World Endurance Championship Race in July, 2004

Masakazu Miyamae, Yasue Kishino, Tsutomu Terada, Shojiro Nishio, Masahiko Tsukamoto, Keisuke Hiraoka and Takahito Fukuda (Osaka University, Kobe University, Westunitis Co., Ltd.)

Motorbike racing is one of the most popular motorsports, and many people visit circuits to watch races. However, since audiences and pit crews receive only limited information, it is difficult for them to grasp the current race situation. In this demonstration, we describe an information browsing system that uses wearable computers to support pit crews and audiences.


D3: Digitally Augmented Collectibles

Christian Metzger, Matthias Lampe, Elgar Fleisch, Oliver Zweifel (ETH - Swiss Federal Institute of Technology)

The Digitally Augmented Collectibles system extends the functionality of collector's items beyond simple exhibition and integrates unobtrusively into a user's environment. It offers a simple interface for accessing item-specific information and establishes an emotional bond by playing item-related multimedia. The system identifies items through RFID and senses different combinations of collectibles. Digitally Augmented Collectibles is not limited to home applications; it also shows potential as a marketing tool.
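
The mapping logic is not detailed in the abstract; a minimal sketch of the core idea (resolving the set of RFID tags currently in range to item-specific media, with all tag IDs and file names hypothetical) might look like this:

```python
# Minimal sketch: map RFID tag IDs, and combinations of them, to
# item-specific media. All IDs, names, and file paths are hypothetical.

# Individual collectibles, keyed by their RFID tag ID.
ITEMS = {
    "04:A2:3F": ("model car", "car_history.mp4"),
    "04:B7:11": ("action figure", "figure_theme.mp3"),
}

# Exact combinations of collectibles that unlock extra content.
COMBOS = {
    frozenset({"04:A2:3F", "04:B7:11"}): "combined_story.mp4",
}

def media_for(tag_ids: set) -> list:
    """Return the media files to play for the tags currently in range."""
    playlist = [media for tag, (_, media) in ITEMS.items() if tag in tag_ids]
    combo = COMBOS.get(frozenset(tag_ids))
    if combo:
        playlist.append(combo)  # combination-specific content
    return playlist

print(media_for({"04:A2:3F", "04:B7:11"}))
```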


D4: Configurable Hotspots for Ubiquitous Interaction

Alejandro Jaimes, Jianyi Liu (FXPAL Japan, Fuji Xerox Co. Ltd.)

In this paper we describe our configurable hotspot framework for ubiquitous interaction. A camera points at a physical space (e.g., a desktop); the user defines interaction areas and designs gestures by combining hotspots (rectangles) that detect simple 2D motions (left_right, top_bottom, etc.).
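
The abstract does not spell out the detection method; a common way to implement such hotspots is frame differencing inside each rectangle. A minimal OpenCV sketch under that assumption (the hotspot coordinates and thresholds are illustrative):

```python
# Sketch of hotspot-style motion detection via frame differencing.
# Hotspot coordinates and the trigger threshold are illustrative only.
import cv2

HOTSPOT = (100, 80, 60, 40)  # x, y, width, height of one rectangle

def hotspot_motion(prev_gray, curr_gray, hotspot, threshold=25, min_pixels=50):
    """Return True if enough pixels changed inside the hotspot rectangle."""
    x, y, w, h = hotspot
    diff = cv2.absdiff(prev_gray[y:y+h, x:x+w], curr_gray[y:y+h, x:x+w])
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) >= min_pixels

cap = cv2.VideoCapture(0)
_, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for _ in range(300):  # a few seconds of frames
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if hotspot_motion(prev, curr, HOTSPOT):
        print("motion in hotspot")
    prev = curr
cap.release()
```

A directional gesture such as left_right could then be recognized by requiring two horizontally adjacent hotspots to fire in sequence.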


D5: Celadon: Infrastructure for Device Symbiosis

M. C. Lee, H. K. Jang, S. Y. Kim, Y. S. Paik, S. E. Jin, S. Lee, C. Narayanaswami, M. T. Raghunath, M. C. Rosu (IBM Ubiquitous Computing Lab, IBM T.J. Watson Research Center)

In this paper we present Celadon, an infrastructure enabling on-demand collaboration between heterogeneous mobile devices and environmental devices. We abstract the various capabilities of the mobile and environmental devices into a set of network-accessible services. The symbiotic mode of operation essentially consists of devices using services offered by other devices. We demonstrate a typical device-symbiosis use case involving a digital camera, a PDA, and a large flat-panel monitor of the kind that can be deployed in public spaces.


D6: GETA Sandals: Knowing Where You Walk To

Shun-yuan Yeh, Keng-hao Chang, Chon-in Wu, Okuda Kenji, Hao-hua Chu (National Taiwan University)

The GETA sandals are Japanese wooden sandals embedded with location tracking devices; when worn, they can track wherever a user walks, in both indoor and outdoor environments. The motivation for the GETA sandals is to create a location system that needs minimal infrastructural setup and support in the environment, making it easy to deploy in everyday settings. In our system, a user simply wears the GETA sandals to enable his or her location tracking. This is in contrast to most current indoor location systems based on WiFi or ultrasound, which require setting up access points and fixed transmitters and receivers in the environment. The GETA sandals track a user's location using a footprint-based method: location sensors installed underneath the sandals continuously measure the displacement vectors formed between the left and right sandals along a trail of advancing footprints. By progressively adding up these displacement vectors, the GETA sandals can calculate the user's current location anytime, anywhere. Although the footprint-based method has the advantage of being a mobile, wearable location tracker, it suffers from error that accumulates over the distance traveled. To address this problem, the footprint-based method is combined with a lightweight RFID infrastructure that corrects the positioning error at fixed intervals.
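
The abstract above describes the algorithm in prose; as a rough sketch (not the authors' code), the dead-reckoning core reduces to summing per-step displacement vectors and snapping to a surveyed coordinate whenever an RFID landmark is read:

```python
# Hypothetical sketch of footprint-based dead reckoning: each step yields
# a displacement vector between the left and right sandals; summing them
# gives the current position, and RFID landmarks cancel accumulated drift.
import numpy as np

RFID_LANDMARKS = {"tag_42": np.array([12.0, 3.5])}  # tag -> known (x, y); illustrative

class FootprintTracker:
    def __init__(self, origin=(0.0, 0.0)):
        self.position = np.array(origin, dtype=float)

    def on_step(self, displacement):
        """Add the displacement vector measured between successive footprints."""
        self.position += np.asarray(displacement, dtype=float)

    def on_rfid(self, tag_id):
        """Reset to a surveyed landmark position, discarding accumulated error."""
        if tag_id in RFID_LANDMARKS:
            self.position = RFID_LANDMARKS[tag_id].copy()

tracker = FootprintTracker()
for step in [(0.4, 0.1), (0.5, 0.0), (0.45, -0.05)]:
    tracker.on_step(step)
tracker.on_rfid("tag_42")  # drift correction at a known point
print(tracker.position)
```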


D7: Demonstration of Stable VoIP Communication over a Large Indoor Ad-hoc Network

Jun Hasegawa, Satoko Itaya, Akio Hasegawa, Peter Davis, Naoto Kadowki and Sadao Obana (ATR Adaptive Communication Research Labs.)

Good-quality voice communication has been achieved in a large multi-hop ad hoc network consisting of mobile PCs and PDAs. We show the methods used to achieve stable operation of a large ad hoc network supporting VoIP communication, including selection of relay nodes based on signal-strength criteria. In a large network there may be some nodes for which radio reception from other nodes is weak, i.e., "gray zones". A widely recognized problem is that when radio reception is weak, HELLO packets are received successfully, but data packets sent at higher bit rates experience high error rates. Various measures of link quality have been proposed to avoid routing multi-hop traffic through nodes in gray zones [4-5].


D8: µParts: Low Cost Sensor Networks at Scale

Michael Beigl, Christian Decker, Albert Krohn, Till Riedel, Tobias Zimmer (Telecooperation Office (TecO))

This paper presents the µPart wireless sensor system, designed especially for settings requiring a large population of sensors. Such settings arise in current research on indoor activity recognition and ambient intelligence, as well as in outdoor environmental monitoring. µParts are very small sensor nodes (10x10 mm) with wireless communication, enabling the setup of high-density networks at low cost and with a long lifetime. Basic configuration capabilities, such as sensor type and sampling rate, provide enough flexibility while keeping the system easy to deploy and affordable.


D9: HeadRacer: A Head Mounted Wearable Computer with Ultrasonic Position Sensing

Cliff Randell, Paul Duff, Mike McCarthy, Henk Muller (Department of Computer Science, University of Bristol)

This demonstration is in three parts. First, we show a wearable computer built into a cycle helmet, comprising a gumstix single-board computer, an ultrasonic receiver module and a lithium polymer battery. Second, the helmet interacts with an ultrasonic positioning system, receiving signals from transducers placed in the environment; this system requires no RF or infra-red synchronisation, enabling fast and accurate performance. Lastly, we provide an interactive game based on Tux Racer with which conference attendees can appreciate the performance of the overall system. A Bluetooth interface sends position data from the wearable to a laptop server running the game, where the data is translated into control signals, enabling the game to be controlled using head movements. The game is displayed so that both the user and an audience can enjoy the experience.


D10: DTMap Demo: Interactive Tabletop Maps for Ubiquitous Computing

Masakazu Furuichi, Yutaka Mihori, Fumiko Muraoka, Alan Esenther, Kathy Ryall (Mitsubishi Electric Corporation, Mitsubishi Electric Research Laboratories)

Computationally augmented tabletops are an increasingly common form factor for group collaboration. Visual data, such as maps, are particularly well suited to interactive tabletop applications. In this demonstration we present DTMap, a prototype application developed to illustrate the power of combining visual data (in this case, map data) with a multi-user tabletop environment. Our demonstration was developed for DiamondTouch [2], highlighting its unique ability to support input from multiple simultaneous users and to identify the owner of each touch. A multi-user interactive tabletop such as this facilitates direct manipulation of user interface elements, provides a shared focus of attention for collaborating users, and has the potential to make a strong contribution to ubiquitous computing environments.


D11: Ubiquitous Network for Building and Home Control with Ad-hoc Wireless and Plug & Play Mechanism

Masanori NAKATA, Noriyuki KUSHIRO, Toshiyasu HIGUMA, Naoyuki HIBARA (Mitsubishi Electric)

This proposal describes a ubiquitous network for building and home appliances with an ad-hoc wireless and plug & play mechanism. By utilizing ubiquitous technologies, we propose a network that requires neither communication lines nor installation engineering. Wireless communication is suitable for controlling the appliances within a space, and ZigBee™ is implemented as the ad-hoc wireless communication method. We extend its functionality with a plug & play mechanism, i.e., automatic detection of equipment locations, to reduce installation engineering. The walls that divide the spaces in buildings and homes prevent wireless signals from penetrating, so we developed a pipe-based communication system to extend the wireless communication through walls. We have built the network system, installed it in an actual building, and have been evaluating its capabilities for six months, since January 2005.


D12: Development of Smart Navigation System "Cochira" for Customers in Railway Stations

Takeshi Nakagawa, Fuminori Tsunoda, Koichi Wakasugi, Sayaka Isojima, Isao Saito (EAST JAPAN RAILWAY COMPANY, UCHIDA YOKO COMPANY LTD.)

This demonstration presents our Smart Navigation System, "Cochira", which guides customers to their destinations in railway stations simply by having them touch a Suica card to it. Cochira has a small touch-panel display and an arrow that bends like a robot to indicate which way to go. We believe customers will enjoy the comical reaction after touching their Suica to Cochira. In this paper, we describe the development of this new concept of smart navigation, personalized for each customer by utilizing the Suica ID.


D13: Push!Music: Intelligent Music Sharing on Mobile Devices

Mattias Jacobsson, Mattias Rost, Maria Hakansson, Lars Erik Holmquist (Future Applications Lab, Viktoria Institute)

Push!Music is a music sharing application that runs on mobile devices with wireless ad-hoc networking. Here, music files take the form of autonomous software agents that take advantage of metadata to build up a personal identity through the other agents they encounter. They then use this information to move autonomously between the devices of users in proximity, looking for the environment that suits them best. Users can also make active personal recommendations by collaboratively sharing, or "pushing", music to other users in the vicinity.
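
The agent logic is not specified here; as a loose, hypothetical illustration, a track "agent" could compare its metadata against a nearby device's library and migrate when the fit is good (the Jaccard measure and the threshold are assumptions):

```python
# Loose illustration of a Push!Music-style migration rule: a track "agent"
# moves to the nearby device whose library best matches its own metadata.
# The Jaccard similarity and the 0.3 threshold are assumptions.

def jaccard(a: set, b: set) -> float:
    """Overlap of two tag sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_destination(track_tags: set, nearby_libraries: dict, threshold=0.3):
    """Return the device whose library tags best fit the track, if any."""
    scored = {dev: jaccard(track_tags, tags)
              for dev, tags in nearby_libraries.items()}
    dev, score = max(scored.items(), key=lambda kv: kv[1])
    return dev if score >= threshold else None

libraries = {
    "alice_phone": {"indie", "rock", "90s"},
    "bob_pda": {"jazz", "bebop"},
}
print(best_destination({"rock", "indie", "lo-fi"}, libraries))  # -> alice_phone
```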


D14: Context Photography on Camera Phones

Mattias Rost, Lalya Gaye, Maria Hakansson, Sara Ljungblad, Lars Erik Holmquist (Future Applications Lab, Viktoria Institute)

Context photography uses sensors and image processing to create a picture that is visually affected by invisible factors in the environment, such as sound and movement. The system was previously implemented on a Tablet PC [3], but our newest prototype runs on standard camera phones. The program uses the phone's built-in microphone and real-time image analysis to create context photographs. Several effects are implemented, using different mappings from context data to the input parameters of the effects. The user can choose which effect he or she wants to use, and thus decide how the picture should be affected by the context, but the context determines the ultimate results. Having the system on a mass-market platform allows for large-scale user studies, since anyone with a compatible phone can now download and install the application.
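
As a toy example of such a context-to-parameter mapping (not the authors' actual effect set), ambient sound level could drive colour saturation at capture time:

```python
# Toy mapping from context data to an effect parameter (not the authors'
# effect set): louder surroundings produce a more saturated picture.
import numpy as np

def sound_to_saturation(mic_rms: float, quiet=0.01, loud=0.5) -> float:
    """Map a microphone RMS level to a saturation factor in [0.5, 2.0]."""
    t = np.clip((mic_rms - quiet) / (loud - quiet), 0.0, 1.0)
    return 0.5 + 1.5 * t

def apply_saturation(rgb: np.ndarray, factor: float) -> np.ndarray:
    """Scale each pixel's distance from its gray value by `factor`."""
    gray = rgb.mean(axis=2, keepdims=True)
    return np.clip(gray + (rgb - gray) * factor, 0, 255).astype(np.uint8)

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)  # stand-in photo
print(apply_saturation(img, sound_to_saturation(0.3)).shape)
```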


D15: eHome Specification, Configuration, and Deployment

Ulrich Norbisrath, Priit Salumaa, Adam Malik, Tim Schwerdtner (RWTH Aachen University)

Our demonstration includes the presentation of a miniature eHome and its ubiquitous environment at runtime: the eHomeDemonstrator. We also present a low-cost specification, configuration, and deployment process for eHome systems, along with related research results.


D16: Mixed Interaction Spaces - a new interaction technique for mobile devices

Thomas Riisgaard Hansen, Eva Eriksson, Andreas Lykke-Olesen (University of Aarhus, Interactive Spaces, Aarhus School of Architecture)

In this paper, we describe a new interaction technique for mobile devices, named Mixed Interaction Space, which uses the camera of the mobile device to track the position, size and rotation of a fixed point. In this demonstration we present a system that uses a hand-drawn circle, a colored object or a person's face as the fixed point to determine the location of the device. We use these features as a four-dimensional input vector for a set of different applications.
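
As an illustration of the idea rather than the authors' implementation, detecting a hand-drawn circle with OpenCV's Hough transform yields most of such a four-dimensional vector; rotation would additionally require tracking a marked point on the rim:

```python
# Illustrative sketch (not the authors' code): track a circle in the camera
# image and derive a 4D input vector (x, y, distance, rotation).
import cv2

def mixed_interaction_vector(gray):
    """Return (x, y, z, rot) or None; z is inferred from the circle radius."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                               param1=100, param2=40,
                               minRadius=10, maxRadius=200)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    z = 1.0 / r   # larger apparent radius = device closer to the circle
    rot = 0.0     # rotation would come from a marked point on the rim
    return (x, y, z, rot)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    print(mixed_interaction_vector(gray))
cap.release()
```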


D17: Abaris: Capture and Access for Structured One-on-One Educational Settings

Sebastian Boring, Julie A. Kientz, Gregory D. Abowd, Gillian R. Hayes (University of Munich, Georgia Institute of Technology)

We present Abaris, an automated capture and access application that provides access to details of particular sessions of discrete trial training (DTT), a highly structured intervention therapy for children with autism (CWA). Using Abaris, therapists can capture data manually while perception technologies automatically index into video of their sessions. These indices allow them to easily find particular scenes within the video that correspond to salient moments indicated by the manually collected data. Abaris includes an access interface that allows therapists to review sessions in a manner similar to their current practices while providing new features previously unavailable to them.


D18: User Profile Acquisition, Management and Its History Visualization via a Cell Phone

Daisuke Morikawa, Masaru Honjo, Akira Yamaguchi, Masayoshi Ohashi (KDDI Corporation, ATR)

In this paper, we describe a user profile acquisition method using a cell phone equipped with an active RFID transmitter/receiver, a user profile management server called the Profile Aggregator, and a Profile Blog server based on the aggregated user profiles. Time and location (i.e., longitude/latitude and indoor semantic location) are collected simultaneously via the cell phone. To realize the acquisition of these activities, we have designed and developed the system demonstrated here.


D19: Digitally Enhanced Home Activities with u-Textures

Ryo Ohsawa, Takuro Yonezawa, Yuki Matsukura, Naohiko Kohtake, Kazunori Takashio and Hideyuki Tokuda (Graduate School of Media and Governance, Keio University)

This paper introduces a novel way to allow non-expert users to create smart surroundings. To this end we have developed a panel-type smart material called "u-Texture", which has a built-in computer and sensors. u-Textures can be connected to each other to form various pieces of furniture that support home activities. The unique feature of u-Texture is its self-organizing function: when panels are physically assembled, each one autonomously changes its behavior by recognizing its location, its inclination, and the surrounding environment. This paper describes applications built with u-Textures, which we plan to demonstrate.


D20: Peer-to-Peer Positioning of Co-located Mobile Devices

Mike Hazas, Christian Kray, Henoc Agbota, Hans Gellersen, Gerd Kortuem (Computing Department, Lancaster University)

The Relate system provides fine-grained information on the spatial relations of co-located mobile devices. The system is based on Relate Dongles, sensor nodes that can be attached to mobile computing devices via USB. The dongles are able to measure the distances and relative orientation of devices in a true peer-to-peer fashion. This means that the spatial relations of a set of devices can be determined wherever they become co-located, without the need for any external infrastructure or instrumentation of the environment.


D21: A Position Detection Mechanism using Camera Images for Pin&Play

Yasue Kishino, Tsutomu Terada, Shojiro Nishio, Nicolas Villar and Hans Gellersen (Osaka University, Lancaster University)

We have implemented an augmented notice board and push-pin system using Pin&Play. Although the positions of inserted pins are important for most applications, they cannot be detected with the current technology. We therefore propose a position detection mechanism for pins using a camera placed in front of the board. The proposed method dynamically adjusts its image processing parameters to maintain accuracy even when the surrounding lighting conditions change. We also demonstrate the effectiveness of the proposed method through a performance evaluation.


D22: A Display Cube as a Tangible User Interface

Albrecht Schmidt, Dominik Schmidt, Paul Holleis, Matthias Kranz (University of Munich)

In this paper we introduce the design and development of a display cube as a novel tangible user interface. Using the common shape of a cube, we implemented a platform that supports input by gesture recognition and output through six displays mounted on the sides of the cube. Exploiting the physical affordances of the cube and augmenting it with embedded sensors and LCD displays, we enable applications such as a playful learning interface for children. Based on initial observations and users' experiences with the device, we argue that breaking conventions about how a computer has to look, and providing a playful interface, is a promising approach to embedding and integrating technology into people's everyday contexts and activities and enabling new forms of interaction. The cube is built on wireless sensor nodes. The hardware of each node comprises a communication board integrating a PIC18F6720 microcontroller, a TR1001 transceiver for 125 kbit/s data transfer on the 868 MHz band, a real-time clock (RTC), an additional 512 KB of Flash memory, and two LEDs together with a small speaker for basic notification functionality. Running on a single 1.2 V AAA rechargeable battery, the board consumes on average 40 mA with the communication and the LEDs active. This main board can be extended with additional sensor and actuator boards; the cube-specific hardware (display connections and sensing) is designed as an add-on board to the base platform (see [5] for detailed information, including schematics, board layout and soldering instructions).
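
A basic interaction primitive for such a cube is knowing which face points up, which the embedded accelerometers make straightforward. A minimal sketch (the axis conventions are an assumption):

```python
# Minimal sketch: infer which of the cube's six faces points up from a
# 3-axis accelerometer reading. Axis orientation is an assumption.

FACES = {  # (axis index, sign of gravity on that axis) -> face name
    (0, +1): "right", (0, -1): "left",
    (1, +1): "front", (1, -1): "back",
    (2, +1): "top",   (2, -1): "bottom",
}

def face_up(accel):
    """Return the face aligned with the dominant gravity component."""
    ax = max(range(3), key=lambda i: abs(accel[i]))
    sign = 1 if accel[ax] > 0 else -1
    return FACES[(ax, sign)]

print(face_up((0.1, -0.2, 9.7)))   # cube resting flat -> "top"
print(face_up((-9.8, 0.3, 0.2)))   # -> "left"
```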


D23: Displayed Connectivity

Paul Holleis, Matthias Kranz, Albrecht Schmidt (University of Munich)

In this demo we motivate applications that use a set of small displays as tangible communication devices. In particular, we investigate the effect of the chosen casing on use. The displays are capable of showing text and basic icons at a limited resolution and are connected wirelessly, either directly or over the Internet. Envisioned use cases are awareness scenarios, such as physical interfaces for setting status in instant messaging systems. Each device runs on a single 1.2 V rechargeable battery. The display board is designed as an add-on board to the base platform (see [1] for detailed information, including schematics, layout, soldering instructions and part lists). The board includes two accelerometers and can connect up to six displays; in this project, only one display channel per board is used.


D24: Graphic Shadow Wall

Yugo Minomo, Takeshi Naemura (The University of Tokyo)

Today we have displays everywhere, and sometimes feel a surfeit of information. The aim of this project is to provide a novel display system that changes its visibility depending on the presence of users. More concretely, the proposed system adds colorful information to a user's physical shadow. We call this system "Graphic Shadow Wall," and it uses two projectors to exploit the additivity of light. For example, while two colorful images are projected onto a wall, you see just a grayscale image as a result of additive color mixing of the two projector lights. When you occlude one of the projector lights, you see the colorful image from the other projector in your shadow. We will demonstrate some applications of the system.
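
The underlying arithmetic is simply that the two projected images must sum, per pixel, to a flat gray. A toy sketch of one way to split a colour image into two such complements (the split strategy is an assumption, not necessarily the authors' method):

```python
# Toy sketch of the additive-colour trick: image_a + image_b == flat gray,
# so the wall looks gray until a shadow removes one projector's contribution.
import numpy as np

def split_for_two_projectors(colour_img, gray_level=200):
    """Split an 8-bit RGB image into two images that sum to a uniform gray."""
    a = colour_img.astype(np.uint16)
    a = np.minimum(a, gray_level)   # keep the complement non-negative
    b = gray_level - a              # per-pixel complement
    return a.astype(np.uint8), b.astype(np.uint8)

img = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in image
proj_a, proj_b = split_for_two_projectors(img)
# Every pixel of the two projections sums to the same gray level:
assert np.all(proj_a.astype(int) + proj_b.astype(int) == 200)
```

In a shadow that blocks only projector B, the viewer sees proj_a alone, i.e., the colourful image.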


D25: Strino: STRain-based user Interface with tactile of elastic Natural Objects

Makoto IIDA, Shoji KAWAKAMI and Takeshi NAEMURA (The University of Tokyo)

The authors believe that a user interface with the tactile feel of natural materials could play an important role in achieving the invisibility of ubiquitous computing. For example, we will demonstrate a musical instrument made of leaves and a wooden touch panel. More generally, the goal of this project is to convert everyday natural objects into various kinds of ubiquitous user interfaces. For this purpose, the authors focus on the strain of objects: when external forces are applied to a stationary object, stress and strain result. The method determines the position and strength of forces in real time using strain measurement and FEM (Finite Element Method) analysis.
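
Under strong simplifying assumptions, the inverse step can be sketched as follows: if offline FEM analysis gives the strain pattern that a unit force at each candidate contact point would produce, live gauge readings can be matched by least squares (all matrix values below are made up):

```python
# Hypothetical sketch: locate a contact force from strain-gauge readings.
# Offline FEM analysis would give the strain each gauge reports for a unit
# force at each candidate point; here that matrix is made up.
import numpy as np

# rows = strain gauges, columns = candidate contact points (from FEM)
UNIT_STRAIN = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.2],
    [0.1, 0.3, 0.7],
])
CANDIDATE_POINTS = [(0.1, 0.2), (0.5, 0.5), (0.8, 0.3)]  # (x, y) on the object

def locate_force(strain_readings):
    """Pick the candidate point whose unit-strain column best explains the
    readings, and estimate the force magnitude by least squares."""
    best = None
    for idx, col in enumerate(UNIT_STRAIN.T):
        mag, residual, _, _ = np.linalg.lstsq(col[:, None], strain_readings,
                                              rcond=None)
        err = residual[0] if residual.size else 0.0
        if best is None or err < best[0]:
            best = (err, CANDIDATE_POINTS[idx], float(mag[0]))
    _, point, magnitude = best
    return point, magnitude

print(locate_force(np.array([0.27, 0.72, 0.27])))
```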


Demo Co-Chairs: Yasuto Nakanishi
Keio Univ., Japan
<naka@sfc.keio.ac.jp>
 
Marc Langheinrich
ETH Zurich, Switzerland
<langhein@inf.ethz.ch>
Formats accepted: Please use the SIGCHI conference publications format (see http://sigchi.org/chipubform/). The Demonstrations Supplement template can be downloaded from here.
Abstract Page limit: 2 pages (ACM SIGCHI conference publications format)
Submission Deadline: June 10, 2005
Acceptance Notification: July 22, 2005
Final Version Due: August 5, 2005

Gold Sponsors

Silver Sponsors

Bronze Sponsors

In Cooperation

Network Equipment Support