UbiComp / ISWC 2023
Tutorials

UbiComp/ISWC 2023 features four tutorials, running on Sunday, October 8 and Monday, October 9, before the start of the main conference.

A tutorial at UbiComp/ISWC is a specialized session led by experts in a particular field, aimed at providing attendees with in-depth knowledge and understanding of a specific topic. The primary goal of a tutorial is to educate and inform attendees about emerging trends, methods, techniques, or new developments in a discipline, often with a more practical or hands-on approach than traditional conference presentations.

*Tutorials, like every other track at UbiComp/ISWC 2023, will be in-person only.
More details on exceptions can be found here.

Registration is open for Tutorials: https://cvent.me/Wnn01G

Tutorials (Oct 8th, 2023)

Half-day Tutorial from 9am – 12:30pm

Organizers

  • Teodor Stoev (Institute for Data Science, University of Greifswald)
  • Emma L. Tonkin (University of Bristol)
  • Kristina Yordanova (Institute for Data Science, University of Greifswald)
  • Gregory J. L. Tourte (University of Bristol)

Abstract

Data annotation is key to a large number of fields, including ubiquitous computing. Documenting the quality and extent of annotation is increasingly recognised as an important aspect of understanding the validity, biases and limitations of systems built using this data: hence, it is also relevant to regulatory and compliance needs and outcomes. However, the process of annotation often receives little attention, and is characterised in the literature as ‘under-described’ and ‘invisible work’. In this tutorial, we bring together existing resources and methods to present a framework for the iterative development and evaluation of an annotation protocol: from requirements gathering, setting scope, development, documentation, piloting and evaluation, through to scaling up to a production annotation process. We also explore the potential of semi-supervised approaches and state-of-the-art methods such as the use of generative AI in supporting annotation workflows, and how such approaches are validated and their strengths and weaknesses characterised. This tutorial is designed to be suitable for people from a wide range of backgrounds, as annotation can be understood as a highly interdisciplinary task and often requires collaboration with subject matter experts from relevant fields. Participants will trial and evaluate a selection of annotation interfaces and walk through the process of evaluating the outcomes. By the end of the workshop, participants will develop a deeper understanding of the task of developing an annotation protocol and of the requirements and context which should be taken into account. Presentations and code from this event will be shared openly on a GitHub repository.

Half-day Tutorial from 2pm – 5:30pm

Organizers

  • Rina R. Wehbe (Dalhousie University)
  • Mayra Barrera Machuca (Dalhousie University)
  • Lizbeth Escobedo (Dalhousie University)

Abstract

Over the last couple of years, there has been a big push toward making immersive and mixed technologies available to the general public. Yet designing for these new technologies is challenging, as users need to position virtual objects in 3D space. The current state-of-the-art technologies used to access these virtual environments (e.g., head-mounted displays (HMDs)) also present additional challenges for designers when considering depth perception issues that affect user precision. Moreover, these challenges are exacerbated when designers consider the accessibility needs of special populations. To make new immersive and mixed technologies more accessible, we propose a tutorial at UbiComp / ISWC 2023 to discuss design strategies, research methodologies, and implementation practices with special populations using technologies across the physical-virtual reality spectrum. In this tutorial, participants will learn how to make these technologies more accessible by (1) teaching students of the tutorial how to design, prototype, and evaluate these technologies using empirical research. We aim to (2) bring together researchers, practitioners, and students who are interested in making immersive and mixed technologies more accessible and (3) identify common problems when designing new user interfaces.

Tutorials (Oct 9th, 2023)

Half-day Tutorial from 9am – 12:30pm

Organizers

  • Harish Haresamudram (Gatech)
  • Chi Ian Tang (Nokia Bell Labs and Cambridge)
  • Sungho Suh (DFKI and RPTU)
  • Paul Lukowicz (DFKI and RPTU)
  • Thomas Plötz (Gatech)

Abstract

Feature extraction lies at the core of Human Activity Recognition (HAR): the automated inference of what activity is being performed. Traditionally, the HAR community used statistical metrics and distribution-based representations to summarize the movement present in windows of sensor data into feature vectors. More recently, learned representations have been used successfully in lieu of such handcrafted and manually engineered features. In particular, the community has shown substantial interest in self-supervised methods, which leverage large-scale unlabeled data to first learn useful representations that are subsequently fine-tuned to the target applications. In this tutorial, we focus on representations for single-sensor and multi-modal setups, and go beyond the current de facto approach of learning representations. We also discuss the economic use of existing representations, specifically via transfer learning and domain adaptation. The proposed tutorial will introduce state-of-the-art methods for representation learning in HAR, and provide a forum for researchers from mobile and ubiquitous computing to not only discuss the current state of the field but also chart future directions for the field itself, including answering what it would take to finally solve the activity recognition problem.

Webpage: https://sites.google.com/view/soar-tutorial-ubicomp2023/home

Half-day Tutorial from 2pm – 5:30pm

Organizers

  • Agnes Grünerbl (RPTU and DFKI)
  • Kai Kunze (Keio University)
  • Thomas Lachmann (RPTU)
  • Jamie A Ward (Goldsmiths University of London)
  • Paul Lukowicz (RPTU and DFKI)

Abstract

A central research goal of UbiComp has always been the development of systems and methods that seamlessly support humans in accomplishing complex tasks in everyday life. In the wake of rapid advances in artificial intelligence (AI), topics such as “Human-Centered AI” and “Hybrid Human AI” reflect growing interest in this line of research, which puts humans and their needs at the center of artificial intelligence. While methods for augmenting the human body, and the impact of these augmentations on physical life, are being extensively researched, there has been very limited progress in evaluating their impact on human cognitive perception and on the overall outcome of augmentations to the human body. In this tutorial, we will address the question of how to evaluate the cognitive impact of human augmentation. We will cover the different levels of cognitive effects, how to measure which methods of augmentation have the best effect, and which cognitive measures have the greatest impact on augmentation, and we will give the audience the opportunity to test and evaluate cognitive human-augmentation systems themselves.

UbiComp / ISWC

Past Conferences

The ACM international joint conference on Pervasive and Ubiquitous Computing (UbiComp) is the result of a merger of the two most renowned conferences in the field: Pervasive and UbiComp. While it retains the name of the latter in recognition of the visionary work of Mark Weiser, its long name reflects the dual history of the new event.

The ACM International Symposium on Wearable Computing (ISWC) discusses novel results in all aspects of wearable computing, and has been colocated with UbiComp and Pervasive since 2013.

A complete list of UbiComp, Pervasive, and ISWC past conferences is provided below.