David Lindlbauer


I am an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, leading the Augmented Perception Lab.


I am hiring. I am looking for students to join my new lab at CMU. We will work at the intersection of perception, interaction, computation and Mixed Reality. Please reach out if you are interested.


My research focuses on understanding how humans perceive and interact with digital information, and on building technology that goes beyond the flat displays of PCs and smartphones to advance our capabilities when interacting with the digital world. To achieve this, I create and study enabling technologies and computational approaches that control when, where and how virtual content is displayed to increase the usability of AR and VR interfaces.


Before CMU, I was a postdoc at ETH Zurich in the AIT Lab of Otmar Hilliges. I completed my PhD at TU Berlin in the Computer Graphics group, advised by Marc Alexa. I have worked with Jörg Müller at TU Berlin, the Media Interaction Lab in Hagenberg, Austria, Stacey Scott and Mark Hancock at the University of Waterloo, and interned at Microsoft Research (Redmond) in the Perception & Interaction Group.



You can also find me on Twitter, LinkedIn, and Google Scholar, or contact me via davidlindlbauer[at]cmu.edu.


Download my cv here: cv_davidlindlbauer.pdf.


Check out the Augmented Perception Lab at CMU HCII.



Recent and upcoming publications


Omni: Volumetric Sensing and Actuation of Passive Magnetic Tools for Dynamic Haptic Feedback
T. Langerak, J. Zarate, D. Lindlbauer, C. Holz, O. Hilliges
ACM UIST 2020
More info here.

Optimal Control for Electromagnetic Haptic Guidance Systems
T. Langerak, J. Zarate, V. Vechev, D. Lindlbauer, D. Panozzo, O. Hilliges
ACM UIST 2020
More info here.

A Rapid Tapping Task on Commodity Smartphones to Assess Motor Fatigability
L. Barrios, P. Oldrati, D. Lindlbauer, M. Hilty, H. Hayward-Koennecke, C. Holz, A. Lutterotti
ACM CHI 2020
More info here.

Context-Aware Online Adaptation of Mixed Reality Interfaces
D. Lindlbauer, A. Feit, O. Hilliges
ACM UIST 2019
More info here.




Perception | Interaction | Mixed Reality
In the virtual world, changing properties of objects such as their color, size or shape is one of the main means of communication. I am interested in how these features can be brought into the real world by modifying the optical properties of objects and devices, and how this dynamic appearance influences interaction and behavior. The interplay of creating functional prototypes of interactive artifacts and devices and studying them in controlled experiments forms the basis of my research.


Context-Aware Online Adaptation of Mixed Reality Interfaces

We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary task interactions by 36%.

D. Lindlbauer, A. Feit, O. Hilliges, 2019. Context-Aware Online Adaptation of Mixed Reality Interfaces. UIST '19, New Orleans, LA, USA.
Project page / Full video (5 min) / talk recording from UIST '19
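
A minimal sketch of the underlying idea, assuming a toy setup: pick a level of detail (hidden, minimal, full) per application so that total utility is maximized within a cognitive-load budget. The applications, utility/cost numbers, and the brute-force solver below are illustrative assumptions, not the rule-based and combinatorial formulation used in the paper.

# Toy adaptation step: choose a level of detail (LoD) per MR application
# under a cognitive-load budget. All numbers are made up for illustration.
from itertools import product

apps = ["messages", "navigation", "music"]
# utility[app][lod]: usefulness of each LoD (0 = hidden, 1 = minimal, 2 = full)
utility = {"messages": [0, 2, 3], "navigation": [0, 4, 6], "music": [0, 1, 2]}
# cost[lod]: attention each LoD demands from the user
cost = [0, 1, 3]

def adapt(budget):
    """Pick one LoD per app, maximizing total utility within the budget."""
    best, best_utility = None, -1
    for lods in product(range(3), repeat=len(apps)):
        total_cost = sum(cost[l] for l in lods)
        total_utility = sum(utility[a][l] for a, l in zip(apps, lods))
        if total_cost <= budget and total_utility > best_utility:
            best, best_utility = dict(zip(apps, lods)), total_utility
    return best

print(adapt(budget=3))  # high cognitive load, small budget: less content shown
print(adapt(budget=7))  # low cognitive load, larger budget: more detail shown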

Remixed Reality: Manipulating Space and Time in Augmented Reality

We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.

D. Lindlbauer, A. Wilson, 2018. Remixed Reality: Manipulating Space and Time in Augmented Reality. CHI '18, Montreal, Canada.
Microsoft Research Blog / Full video (5 min)

Featured on: Shiropen (Seamless), VR Room, MSPowerUser, It's about VR.
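
As a rough illustration of the voxel-grid representation, here is a minimal sketch that stores per-voxel edit state (visibility and a translation) and applies it to a frame of live reconstructed points. The grid resolution, the translation-only "transform", and all names are assumptions for illustration; the method in the paper is more general.

import numpy as np

GRID, VOXEL = 64, 0.05  # 64^3 voxels, 5 cm edge length (made-up values)

visible = np.ones((GRID, GRID, GRID), dtype=bool)       # per-voxel visibility
offset = np.zeros((GRID, GRID, GRID, 3), dtype=float)   # per-voxel translation

def voxel_index(points):
    return np.clip((points / VOXEL).astype(int), 0, GRID - 1)

def remix(points):
    """Apply per-voxel edits to one frame of live reconstructed 3D points."""
    idx = voxel_index(points)
    keep = visible[idx[:, 0], idx[:, 1], idx[:, 2]]            # erase hidden voxels
    moved = points + offset[idx[:, 0], idx[:, 1], idx[:, 2]]   # move the rest
    return moved[keep]

# Example edits: erase one region ("delete an object"), shift another by 0.5 m.
visible[10:14, 10:14, 10:14] = False
offset[20:24, 20:24, 20:24] = [0.5, 0.0, 0.0]
frame = np.random.rand(1000, 3) * GRID * VOXEL   # stand-in for live depth data
print(remix(frame).shape)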

Changing the Appearance of Real-World Objects by Modifying Their Surroundings

We present an approach to alter the perceived appearance of physical objects by controlling their surrounding space. Many real-world objects cannot easily be equipped with displays or actuators in order to change their shape. While common approaches such as projection mapping enable changing the appearance of objects without modifying them, certain surface properties (e.g. highly reflective or transparent surfaces) can make employing these techniques difficult. In this work, we present a conceptual design exploration on how the appearance of an object can be changed by solely altering the space around it, rather than the object itself. In a proof-of-concept implementation, we place objects onto a tabletop display and track them together with users to display perspective-corrected 3D graphics for augmentation. This enables controlling properties such as the perceived size, color, or shape of objects. We characterize the design space of our approach and demonstrate potential applications. For example, we change the contour of a wallet to notify users when their bank account is debited. We envision our approach gaining importance with the increasing ubiquity of display surfaces.

D. Lindlbauer, J. Müller, M. Alexa, 2017. Changing the Appearance of Real-World Objects by Modifying Their Surroundings. CHI '17, Denver, CO, USA.
Full video (5 min)

Changing the Appearance of Physical Interfaces Through Controlled Transparency

We present physical interfaces that change their appearance through controlled transparency. These transparency-controlled physical interfaces are well suited for applications where communication through optical appearance is sufficient, such as ambient display scenarios. They transition between perceived shapes within milliseconds, require no mechanically moving parts and consume little energy. We build 3D physical interfaces with individually controllable parts by laser cutting and folding a single sheet of transparency-controlled material. Electrical connections are engraved in the surface, eliminating the need for wiring individual parts. We consider our work as complementary to current shape-changing interfaces. While our proposed interfaces do not exhibit dynamic tangible qualities, they have unique benefits such as the ability to create apparent holes or nesting of objects. We explore the benefits of transparency-controlled physical interfaces by characterizing their design space and showcase four physical prototypes: two activity indicators, a playful avatar, and a lamp shade with dynamic appearance.

D. Lindlbauer, J. Müller, M. Alexa, 2016. Changing the Appearance of Physical Interfaces Through Controlled Transparency. UIST '16, Tokyo, Japan. Project website / long video (5 min) / talk recording from UIST '16

Featured on: Fast Company Co.Design, Vice Motherboard, Futurism, prosthetic knowledge.

Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance

We propose combining shape-changing interfaces and spatial augmented reality for extending the space of appearances and interactions of actuated interfaces. While shape-changing interfaces can dynamically alter the physical appearance of objects, the integration of spatial augmented reality additionally allows for dynamically changing objects' optical appearance with high detail. This way, devices can render currently challenging features such as high frequency texture or fast motion. We frame this combination in the context of computer graphics with analogies to established techniques for increasing the realism of 3D objects such as bump mapping. This extensible framework helps us identify challenges of the two techniques and benefits of their combination. We utilize our prototype shape-changing device enriched with spatial augmented reality through projection mapping to demonstrate the concept. We present a novel mechanical distance-fields algorithm for real-time fitting of mechanically constrained shape-changing devices to arbitrary 3D graphics. Furthermore, we present a technique for increasing effective screen real estate for spatial augmented reality through view-dependent shape change.

D. Lindlbauer, J.E. Grønbæk, M. Birk, K. Halskov, M. Alexa, J. Müller, 2016. Combining Shape-Changing Interfaces and Spatial Augmented Reality Enables Extended Object Appearance. CHI '16, San Jose, CA, USA.
Project website / long video (5 min) / talk recording from CHI '16

Influence of Display Transparency on Background Awareness and Task Performance

It has been argued that transparent displays are beneficial for certain tasks by allowing users to simultaneously see on-screen content as well as the environment behind the display. However, it is yet unclear how much background awareness users gain and if performance suffers for tasks performed on the transparent display, since users are no longer shielded from distractions. Therefore, we investigate the influence of display transparency on task performance and background awareness in a dual-task scenario. We conducted an experiment comparing transparent displays with conventional displays in different horizontal and vertical configurations. Participants performed an attention-demanding primary task on the display while simultaneously observing the background for target stimuli. Our results show that transparent and horizontal displays increase the ability of participants to observe the background while keeping primary task performance constant.

D. Lindlbauer, K. Lilija, R. Walter, J. Müller, 2016. Influence of Display Transparency on Background Awareness and Task Performance. CHI '16, San Jose, CA, USA.
Full video (3 min)
ACM CHI 2016 Best Paper Honorable Mention Award

Tracs: Transparency Control for See-through Displays

Tracs is a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration of spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but introduce new issues of personal privacy, screen content privacy and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays, with a transparency-control layer, a backlight layer and a polarization adjustment layer in-between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control the transparency overall or just locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch between personal and collaborative work fast and easily and gives them full control of transparent regions on their display.

D. Lindlbauer, T. Aoki, R. Walter, Y. Uema, A. Höchtl, M. Haller, M. Inami, J. Müller, 2014. Tracs: Transparency Control for See-through Displays.
UIST '14, Honolulu, Hawaii, USA. long video (3 min)
also presented as demo at UIST'14

D. Lindlbauer, T. Aoki, Y. Uema, A. Höchtl, M. Haller, M. Inami, J. Müller, 2014. A Collaborative See-through Display Supporting On-demand Privacy,
Siggraph Emerging Technology '14, Vancouver, Canada. video

Featured on: Gizmodo

Novel devices and interactions
I work on a wide range of other topics, mostly in collaboration with colleagues and students, including projects in the fields of tactile feedback, human perception, and novel interaction techniques.


Omni: Volumetric Sensing and Actuation of Passive Magnetic Tools for Dynamic Haptic Feedback.

We present Omni, a self-contained 3D haptic feedback system that is capable of sensing and actuating an untethered, passive tool containing only a small embedded permanent magnet. Omni enriches AR, VR and desktop applications by providing an active haptic experience using a simple apparatus centered around an electromagnetic base. The spatial haptic capabilities of Omni are enabled by a novel gradient-based method to reconstruct the 3D position of the permanent magnet in midair using the measurements from eight off-the-shelf Hall sensors that are integrated into the base. Omni’s 3 DoF spherical electromagnet simultaneously exerts dynamic and precise radial and tangential forces in a volumetric space around the device. Since our system is fully integrated, contains no moving parts and requires no external tracking, it is easy and affordable to fabricate. We describe Omni’s hardware implementation, our 3D reconstruction algorithm, and evaluate the tracking and actuation performance in depth. Finally, we demonstrate its capabilities via a set of interactive usage scenarios.

T. Langerak, J. Zarate, D. Lindlbauer, C. Holz, O. Hilliges. 2020. Omni: Volumetric Sensing and Actuation of Passive Magnetic Tools for Dynamic Haptic Feedback.
ACM UIST '20.
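
A minimal sketch of the sensing side, under simplifying assumptions: each Hall sensor is reduced to a scalar reading that falls off with distance as k / r^3, and the magnet position is recovered by gradient-based least-squares fitting. The field model, sensor layout, and use of SciPy are illustrative assumptions; the paper uses a proper dipole model and its own reconstruction method.

import numpy as np
from scipy.optimize import least_squares

K = 1.0  # field constant, arbitrary units
# Eight sensor positions in the base (made-up layout, meters).
sensors = np.array([[x, y, 0.0] for x in (-0.05, 0.05) for y in (-0.05, 0.05)]
                   + [[x, y, 0.01] for x in (-0.05, 0.05) for y in (-0.05, 0.05)])

def field_magnitude(pos):
    """Simplified field strength at each sensor for a magnet at `pos`."""
    r = np.linalg.norm(sensors - pos, axis=1)
    return K / r**3

def residuals(pos, measured):
    return field_magnitude(pos) - measured

true_pos = np.array([0.02, -0.01, 0.08])
measured = field_magnitude(true_pos) + np.random.normal(0, 1e-3, len(sensors))

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.05]), args=(measured,))
print("estimated magnet position:", fit.x)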

Optimal Control for Electromagnetic Haptic Guidance Systems.

We introduce an optimal control method for electromagnetic haptic guidance systems. Our real-time approach assists users in pen-based tasks such as drawing, sketching or designing. The key to our control method is that it guides users, yet does not take away agency. Existing approaches force the stylus to a continuously advancing setpoint on a target trajectory, leading to undesirable behavior such as loss of haptic guidance or unintended snapping. Our control approach, in contrast, gently pulls users towards the target trajectory, allowing them to always easily override the system to adapt their input spontaneously and draw at their own speed. To achieve this flexible guidance, our optimization iteratively predicts the motion of an input device such as a pen, and adjusts the position and strength of an underlying dynamic electromagnetic actuator accordingly. To enable real-time computation, we additionally introduce a novel and fast approximate model of an electromagnet. We demonstrate the applicability of our approach by implementing it on a prototypical hardware platform based on an electromagnet moving on a bi-axial linear stage, as well as a set of applications. Experimental results show that our approach is more accurate and preferred by users compared to open-loop and time-dependent closed-loop approaches.

T. Langerak, J. Zarate, V. Vechev, D. Lindlbauer, D. Panozzo, O. Hilliges. 2020. Optimal Control for Electromagnetic Haptic Guidance Systems.
ACM UIST '20.
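
A stripped-down sketch of the guidance loop, assuming a simple proportional pull instead of the model-predictive formulation from the paper: predict where the pen is heading, find the closest point on the target trajectory, and command the electromagnet to pull gently toward it with bounded strength. Gains, time step, and the circular target path are made up.

import numpy as np

DT, GAIN, MAX_FORCE = 0.01, 4.0, 1.0  # made-up control parameters

# Target trajectory: a densely sampled circle (stand-in for a drawing template).
t = np.linspace(0, 2 * np.pi, 500)
trajectory = np.stack([0.1 * np.cos(t), 0.1 * np.sin(t)], axis=1)

def control_step(pen_pos, pen_vel):
    """Return electromagnet target position and pull strength for one time step."""
    predicted = pen_pos + pen_vel * DT                       # simple motion prediction
    dists = np.linalg.norm(trajectory - predicted, axis=1)
    target = trajectory[np.argmin(dists)]                    # closest trajectory point
    error = target - predicted
    strength = min(GAIN * np.linalg.norm(error), MAX_FORCE)  # gentle, bounded pull
    return target, strength

pen_pos, pen_vel = np.array([0.12, 0.01]), np.array([0.0, 0.05])
print(control_step(pen_pos, pen_vel))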

A Rapid Tapping Task on Commodity Smartphones to Assess Motor Fatigability.

Fatigue is a common debilitating symptom of many autoimmune diseases, including multiple sclerosis. It negatively impacts patients' everyday lives and productivity. Despite its prevalence, fatigue is still poorly understood. Its subjective nature makes quantification challenging and it is mainly assessed by questionnaires, which capture the magnitude of fatigue insufficiently. Motor fatigability, the objective decline of performance during a motor task, is an underrated aspect in this regard. Currently, motor fatigability is assessed using a handgrip dynamometer. This approach has been proven valid and accurate but requires special equipment and trained personnel. We propose a technique to objectively quantify motor fatigability using a commodity smartphone. The method comprises a simple exertion task requiring rapid alternating tapping. Our study with 20 multiple sclerosis patients and 35 healthy participants showed a correlation of rho = 0.8 with the baseline handgrip method. This smartphone-based approach is a first step towards ubiquitous, more frequent, and remote monitoring of fatigability and disease progression.

L. Barrios, P. Oldrati, D. Lindlbauer, M. Hilty, H. Hayward-Koennecke, C. Holz, A. Lutterotti. 2020. A Rapid Tapping Task on Commodity Smartphones to Assess Motor Fatigability.
ACM CHI '20.
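
One way to picture the analysis, as a hedged sketch: compute a fatigability index as the relative drop in tapping rate from the beginning to the end of the task, then relate it to the handgrip-based reference with Spearman's rho. The index definition, window length, and the numbers below are assumptions for illustration, not the paper's exact metric.

import numpy as np
from scipy.stats import spearmanr

def fatigability_index(tap_times, window=5.0):
    """Relative drop in taps/s between the first and last `window` seconds.
    Assumes timestamps in seconds, sorted ascending."""
    tap_times = np.asarray(tap_times)
    start_rate = np.sum(tap_times < tap_times[0] + window) / window
    end_rate = np.sum(tap_times > tap_times[-1] - window) / window
    return (start_rate - end_rate) / start_rate

# Synthetic tapping session whose inter-tap interval slowly grows (fatigue).
taps = np.cumsum(np.linspace(0.15, 0.25, 120))
print(f"fatigability index: {fatigability_index(taps):.2f}")

# Relating per-participant indices to a handgrip-based reference (made-up data).
tapping_index = [0.32, 0.18, 0.41, 0.10, 0.27]
handgrip_index = [0.35, 0.20, 0.38, 0.12, 0.30]
rho, p = spearmanr(tapping_index, handgrip_index)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")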

Understanding Metamaterial Mechanisms.

In this paper, we establish the underlying foundations of mechanisms that are composed of cell structures---known as metamaterial mechanisms. Such metamaterial mechanisms were previously shown to implement complete mechanisms in the cell structure of a 3D printed material, without the need for assembly. However, their design is highly challenging. A mechanism consists of many cells that are interconnected and impose constraints on each other. This leads to unobvious and non-linear behavior of the mechanism, which impedes user design. In this work, we investigate the underlying topological constraints of such cell structures and their influence on the resulting mechanism. Based on these findings, we contribute a computational design tool that automatically creates a metamaterial mechanism from user-defined motion paths. This tool is only feasible because our novel abstract representation of the global constraints highly reduces the search space of possible cell arrangements.

A. Ion, D. Lindlbauer, P. Herholz, M. Alexa, P. Baudisch. 2019.
Understanding Metamaterial Mechanisms.
ACM CHI '19, Glasgow, Scotland, UK.

The Mental Image Revealed by Gaze Tracking.

Humans involuntarily move their eyes when retrieving an image from memory. This motion is often similar to actually observing the image. We suggest to exploit this behavior as a new modality in human computer interaction, using the motion of the eyes as a descriptor of the image. Interaction requires the user's eyes to be tracked but no voluntary physical activity. We perform a controlled experiment and develop matching techniques using machine learning to investigate if images can be discriminated based on the gaze patterns recorded while users merely think about an image. Our results indicate that image retrieval is possible with an accuracy significantly above chance. We also show that this result generalizes to images not used during training of the classifier and extends to uncontrolled settings in a realistic scenario.

X. Wang, A. Ley, S. Koch, D. Lindlbauer, J. Hays, K. Holmqvist, M. Alexa. 2019. The Mental Image Revealed by Gaze Tracking.
ACM CHI '19, Glasgow, Scotland, UK.

Featured on Shiropen (Seamless)

TacTiles: Dual-mode Low-power Electromagnetic Actuators for Rendering Continuous Contact and Spatial Haptic Patterns in VR.

We introduce TacTiles, light (1.8 g), low-power (130 mW), and small form-factor (1 cm³) electromagnetic actuators that can form a flexible haptic array to provide localized tactile feedback. A novel hardware design uses a custom-designed 8-layer PCB, dampening materials to reduce recoil, and an asymmetric latching mechanism that enables two distinct modes of actuation. We leverage these modes in Virtual Reality (VR) to render touch with objects and surface textures when moving over them. We conducted quantitative and qualitative experiments to evaluate system performance and experiences in VR. Our results indicate that TacTiles are suitable for rendering a variety of surface textures, can convincingly render continuous touch with virtual objects, and enable users to discriminate objects from textured surfaces even without looking at them.

V. Vechev, J. Zarate, D. Lindlbauer, R. Hinchet, H. Shea, O. Hilliges. 2019. TacTiles: Dual-mode Low-power Electromagnetic Actuators for Rendering Continuous Contact and Spatial Haptic Patterns in VR.
IEEE VR '19, Osaka, Japan.

Featured on Shiropen (Seamless)

HeatSpace: Automatic Placement of Displays by Empirical Analysis of User Behavior.

We present HeatSpace, a system that records and empirically analyzes user behavior in a space and automatically suggests positions and sizes for new displays. The system uses depth cameras to capture 3D geometry and users’ perspectives over time. To derive possible display placements, it calculates volumetric heatmaps describing geometric persistence and planarity of structures inside the space. It evaluates visibility of display poses by calculating a volumetric heatmap describing occlusions, position within users’ field of view, and viewing angle. Optimal display size is calculated through a heatmap of average viewing distance. Based on the heatmaps and user constraints we sample the space of valid display placements and jointly optimize their positions. This can be useful when installing displays in multi-display environments such as meeting rooms, offices, and train stations.

A. Fender, D. Lindlbauer, P. Herholz, M. Alexa, J. Müller, 2017. HeatSpace: Automatic Placement of Displays by Empirical Analysis of User Behavior. UIST '17, Quebec City, Canada.
Full video (5 min) / Andreas' talk recording from UIST '17
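
A rough sketch of one ingredient, under strong simplifications: score a candidate display position by the fraction of recorded viewpoints from which it falls inside the user's field of view. The stand-in viewpoint data, the cone-based field-of-view test, and the scoring are illustrative assumptions; HeatSpace combines several volumetric heatmaps (persistence, planarity, occlusion, viewing distance).

import numpy as np

def visibility_score(candidate, head_positions, view_dirs, fov_deg=60.0):
    """Fraction of recorded viewpoints that have the candidate within their FoV."""
    to_candidate = candidate - head_positions
    to_candidate /= np.linalg.norm(to_candidate, axis=1, keepdims=True)
    cos_angle = np.sum(to_candidate * view_dirs, axis=1)
    return np.mean(cos_angle > np.cos(np.radians(fov_deg / 2)))

# Recorded user behavior (stand-in data): 200 head positions in a 4 m x 4 m room,
# all roughly facing the +y wall, at 1.6 m eye height.
heads = np.random.rand(200, 3) * [4, 4, 0] + [0, 0, 1.6]
dirs = np.tile([0.0, 1.0, 0.0], (200, 1))

print(visibility_score(np.array([2.0, 4.0, 1.5]), heads, dirs))  # facing wall: high
print(visibility_score(np.array([2.0, 0.0, 1.5]), heads, dirs))  # behind users: low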

GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel

GelTouch is a gel-based layer that can selectively transition between soft and stiff to provide tactile multi-touch feedback. It is flexible, transparent when not activated, and contains no mechanical, electromagnetic, or hydraulic components, resulting in a compact form factor (a 2 mm thin touchscreen layer for our prototype). The activated areas can be morphed freely and continuously, without being limited to fixed, predefined shapes. GelTouch consists of a poly(N-isopropylacrylamide) gel layer which alters its viscoelasticity when activated by applying heat (>32 °C). We present three different activation techniques: 1) Indium Tin Oxide (ITO) as a heating element that enables tactile feedback through individually addressable taxels; 2) predefined tactile areas of engraved ITO, that can be layered and combined; 3) complex arrangements of resistance wire that create thin tactile edges. We present a tablet with 6x4 tactile areas, enabling a tactile numpad, slider, and thumbstick. We show that the gel is up to 25 times stiffer when activated and that users detect tactile features reliably (94.8%).

V. Miruchna, R. Walter, D. Lindlbauer, M. Lehmann, R. von Klitzing, J. Müller, 2015. GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel. UIST '15, Charlotte, NC, USA.
ACM UIST 2015 Best Paper Award Honorable Mention
Viktor's talk from UIST '15

Featured on: MIT Technology Review, Engadget, Wired DE, El País.

Measuring Visual Salience of 3D Printed Objects

We investigate human viewing behavior when participants are presented with physical realizations of 3D objects by gathering fixations on the surface of the presented stimuli. This data is used to validate assumptions regarding visual saliency so far only experimentally analyzed using flat stimuli. We provide a way to compare fixation sequences from different subjects as well as a model for generating test sequences of fixations unrelated to the stimuli. This way we can show that human observers agree in their fixations for the same object under similar viewing conditions — as expected based on similar results for flat stimuli. We also develop a simple procedure to validate computational models for visual saliency of 3D objects and use it to show that popular models of mesh salience based on center-surround patterns fail to predict fixations.

X. Wang, D. Lindlbauer, C. Lessig, M. Maertens, M. Alexa, 2016. Measuring Visual Salience of 3D Printed Objects. IEEE Computer Graphics and Applications, Special Issue on Quality Assessment and Perception. Vol. 36 / 4, 2016.
Project website, IEEE Xplore

Accuracy of Monocular Gaze Tracking on 3D Geometry

Many applications in visualization benefit from accurate knowledge of where a person is looking. We present a system for accurately tracking gaze positions on a three-dimensional object using a monocular head-mounted eye tracker. We accomplish this by 1) using digital manufacturing to create stimuli with accurately known geometry, 2) embedding fiducial markers directly into the manufactured objects to reliably estimate the rigid transformation of the object, and 3) using a perspective model to relate pupil positions to 3D locations. This combination enables the efficient and accurate computation of gaze position on an object from measured pupil positions. We validate the accuracy of our system experimentally, achieving an angular resolution of 0.8° and a 1.5% depth error using a simple calibration procedure with 11 points.

X. Wang, D. Lindlbauer, C. Lessig, M. Alexa, 2015.
Accuracy of Monocular Gaze Tracking on 3D Geometry.
ETVIS Workshop '15 (in conjunction with IEEE VIS '15), Chicago, IL, USA.

X. Wang, D. Lindlbauer, C. Lessig, M. Alexa, 2015. Accuracy of Monocular Gaze Tracking on 3D Geometry. In: Eye Tracking and Visualization: Foundations, Techniques, and Applications (ETVIS 2015). Springer International Publishing, 2017. M. Burch, L. Chuang, B. Fisher, A. Schmidt and D. Weiskopf (Eds.).
Project website
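
A minimal sketch of the final step, assuming the calibration is already done: once the object's pose is known from the embedded fiducial markers and the calibrated gaze ray is expressed in the same coordinate frame, the fixated surface point follows from a ray intersection. A plane stands in for the known 3D geometry here; all values are made up.

import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Return the 3D point where a gaze ray hits a plane (None if it misses)."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None

eye = np.array([0.0, 0.0, 0.0])       # eye position in the tracker frame
gaze = np.array([0.1, 0.05, 1.0])     # calibrated gaze direction
gaze /= np.linalg.norm(gaze)
# Object surface posed via fiducial markers: a plane 0.5 m in front of the eye.
print(intersect_ray_plane(eye, gaze, np.array([0.0, 0.0, 0.5]), np.array([0.0, 0.0, -1.0])))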

Analyzing Visual Attention During Whole Body Interaction with Public Displays

While whole body interaction can enrich user experience on public displays, it remains unclear how common visualizations of user representations impact users’ ability to perceive content on the display. In this work we use a head-mounted eye tracker to record visual behavior of 25 users interacting with a public display game that uses a silhouette user representation, mirroring the users’ movements. Results from visual attention analysis as well as post-hoc recall and recognition tasks on display contents reveal that visual attention is mostly on users’ silhouette while peripheral screen elements remain largely unattended. In our experiment, content attached to the user representation attracted significantly more attention than other screen contents, while content placed at the top and bottom of the screen attracted significantly less. Screen contents attached to the user representation were also significantly better remembered than those at the top and bottom of the screen.

R. Walter, A. Bulling, D. Lindlbauer, M. Schüssler, J. Müller, 2015.
Analyzing Visual Attention During Whole Body Interaction with Public Displays. UBICOMP '15, Osaka, Japan.

Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements

Creature Teacher is a performance-based animation system for creating cyclic movements. Users directly manipulate body parts of a virtual character by using their hands. Creature Teacher’s generic approach makes it possible to animate rigged 3D models with nearly arbitrary topology (e.g., non-humanoid) without requiring specialized user-to-character mappings or predefined movements. We use a bimanual interaction paradigm, allowing users to select parts of the model with one hand and manipulate them with the other hand. Cyclic movements of body parts during manipulation are detected and repeatedly played back - also while animating other body parts. Our approach of taking cyclic movements as an input makes mode switching between recording and playback obsolete and allows for fast and seamless creation of animations. We show that novice users with no animation background were able to create expressive cyclic animations for initially static virtual 3D creatures.

A. Fender, J. Müller, D. Lindlbauer, 2015.
Creature Teacher: A Performance-Based Animation System for Creating Cyclic Movements. SUI '15, Los Angeles, CA, USA.
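
As a hedged illustration of detecting a cyclic movement so it can be looped: estimate the period of a recorded motion signal via autocorrelation. The autocorrelation approach, sampling rate, and thresholds are assumptions for illustration, not Creature Teacher's actual detection method.

import numpy as np

def detect_cycle_period(signal, min_lag=10):
    """Return the lag (in samples) at which the signal best repeats itself."""
    signal = np.asarray(signal) - np.mean(signal)
    autocorr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    autocorr /= autocorr[0]                        # normalize to 1 at lag 0
    return min_lag + int(np.argmax(autocorr[min_lag:]))

# Example: a hand waving at ~2 Hz, sampled at 60 Hz (true period ~30 samples).
t = np.arange(0, 5, 1 / 60)
wave = np.sin(2 * np.pi * 2 * t) + 0.05 * np.random.randn(len(t))
print(detect_cycle_period(wave))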

A Chair as Ubiquitous Input Device: Exploring Semaphoric Chair Gestures for Focused and Peripheral Interaction

During everyday office work we are used to controlling our computers with keyboard and mouse, while the majority of our body remains unchallenged and the physical workspace around us stays largely unattended. Addressing this untapped potential, we explore the concept of turning a flexible office chair into a ubiquitous input device. To facilitate daily desktop work, we propose the utilization of semaphoric chair gestures that can be assigned to specific application functionalities. The exploration of two usage scenarios in the context of focused and peripheral interaction demonstrates the high potential of chair gestures as an additional input modality for opportunistic, hands-free interaction.

K. Probst, D. Lindlbauer, M. Haller, B. Schwartz, A. Schrempf, 2014. A Chair as Ubiquitous Input Device: Exploring Semaphoric Chair Gestures for Focused and Peripheral Interaction. CHI '14, Toronto, Canada.

K. Probst, D. Lindlbauer, M. Haller, B. Schwartz, A. Schrempf, 2014.
Exploring the Potential of Peripheral Interaction through Smart Furniture.
Workshop on Peripheral Interaction: Shaping the Research and Design Space at CHI '14, Toronto, Canada.

K. Probst, D. Lindlbauer, P. Greindl, M. Trapp, M. Haller, B. Schwartz, and A. Schrempf, 2013. Rotating, Tilting, Bouncing: Using an Interactive Chair to Promote Activity in Office Environments. CHI EA ’13, Paris, France.

Perceptual Grouping: Selection Assistance for Digital Sketching

Modifying a digital sketch may require multiple selections before a particular editing tool can be applied. Especially on large interactive surfaces, such interactions can be fatiguing. Accordingly, we propose a method, called Suggero, to facilitate the selection process of digital ink. Suggero identifies groups of perceptually related drawing objects. These “perceptual groups” are used to suggest possible extensions in response to a person’s initial selection. Two studies were conducted. First, a background study investigated participants’ expectations of such a selection assistance tool. Then, an empirical study compared the effectiveness of Suggero with an existing manual technique. The results revealed that Suggero required fewer pen interactions and less pen movement, suggesting that Suggero minimizes fatigue during digital sketching.

D. Lindlbauer, M. Haller, M. Hancock, S. D. Scott, and W. Stuerzlinger, 2013. Perceptual Grouping: Selection Assistance for Digital Sketching.
ITS ’13, St. Andrews, Scotland.

D. Lindlbauer, 2012
Perceptual Grouping of Digital Sketches.
Master’s thesis (supervised by Prof. Michael Haller)
University of Applied Sciences Upper Austria, Hagenberg
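
A small sketch of one plausible ingredient of perceptual grouping: clustering strokes purely by spatial proximity with single-link clustering. Suggero combines more cues than proximity; the centroid representation, distance threshold, and clustering choice are illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Each stroke reduced to its centroid (x, y); made-up sketch data.
stroke_centroids = np.array([
    [10, 12], [12, 14], [11, 10],   # three nearby strokes
    [80, 82], [83, 85],             # two strokes far away
    [45, 50],                       # an isolated stroke
], dtype=float)

# Single-link clustering, cut at a proximity threshold of 10 units.
groups = fcluster(linkage(stroke_centroids, method="single"), t=10, criterion="distance")
print(groups)  # strokes sharing a label would be suggested as one selection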

Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI

In this paper we present the results of a study of human preferences in using mid-air gestures for directing other humans. Rather than contributing a specific set of gestures, we contribute a set of gesture types, which together cover the core actions needed to complete any of our six chosen tasks in the domain of human-to-human gestural communication without the speech channel. We observed 12 participants, cooperating to accomplish different tasks only using hand gestures to communicate. We analyzed 5,500 gestures in terms of hand usage and gesture type, using a novel classification scheme which combines three existing taxonomies in order to better capture this interaction space. Our findings indicate that, depending on the meaning of the gesture, there are preferences for certain gesture types, such as pointing, pantomimic acting, direct manipulation, semaphoric, or iconic gestures. These results can be used as guidelines to design purely gesture-driven interfaces for interactive environments and surfaces.

R. Aigner, D. Wigdor, H. Benko, M. Haller, D. Lindlbauer, A. Ion, S. Zhao, and J.T.K.V. Koh, 2012. Understanding Mid-Air Hand Gestures: A Study of Human Preferences in Usage of Gesture Types for HCI.
Microsoft Tech Report, Redmond, WA, USA. MSR-TR-2012-11.

Exploring the Use of Distributed Multiple Monitors Within an Activity-Promoting Sit-and-Stand Office Workspace

Nowadays sedentary behaviors such as prolonged sitting have become a predominant element of our lives. Particularly in the office environment, many people spend the majority of their working day seated in front of a computer. In this paper, we investigate the adoption of a physically active work process within an activity-promoting office workspace design that is composed of a sitting and a standing workstation. Making use of multiple distributed monitors, this environment introduces diversity into the office workflow through the facilitation of transitions between different work-related tasks, workstations, and work postures. We conducted a background study to get a better understanding of how people are performing their daily work within this novel workspace. Our findings identify different work patterns and basic approaches for physical activity integration, which indicate a number of challenges for software design. Based on the results of the study, we provide design implications and highlight new directions in the field of HCI design to support seamless alternation between different postures while working in such an environment.

K. Probst, D. Lindlbauer, F. Perteneder, M. Haller, B. Schwartz, and A. Schrempf, 2013. Exploring the Use of Distributed Multiple Monitors Within an Activity-Promoting Sit-and-Stand Office Workspace.
Interact ’13, Cape Town, South Africa.

Professional activity, awards & talks


Program committee and editorial boards
Program Committee member for UIST 2021
Program Committee member for CHI 2021
Associate Editor for ISS 2021 (ACM PACM HCI journal)
Guest Editor for Frontiers in VR - Training in XR
Program Committee member for UIST 2020
Associate Editor for ISS 2020 (ACM PACM HCI journal)
Program Committee member for CHI 2020
Program Committee member for CHI 2019
Program Committee member for UIST 2018
Program Committee member for ISS 2017

Organizing committee
CHI 2022 Interactivity co-chair
SIGCHI operations committee (2016 - 2021)
UIST 2020 Virtual Experience and Operations co-chair
CHI 2016 - 2020 Video capture chair
UIST 2019 Student Innovation Contest co-chair
UIST 2018 Student Innovation Contest co-chair
UIST 2018 Best Paper Committee member
UIST 2016 Student Volunteer co-chair
UIST 2015 Documentation chair
Pervasive Displays 2016 Poster chair

Reviewing & other activity
2021 CHI*, UIST, TOCHI, SIGGRAPH, Frontiers in VR, ISMAR, IEEE VR, TEI
2020 CHI*, UIST, TOCHI, DIS, MobileHCI, IEEE VR, GI, ISS*
2019 CHI*, UIST, TOCHI, SIGGRAPH, Computer Graphics Forum, IEEE MultiMedia
2018 CHI*, UIST, TEI, IEEE VR, DIS, TOCHI
2017 CHI*, UIST*, IMWUT, MobileHCI, DIS, DESFORM
2016 CHI*, UIST*, ISS, ICMI, SUI, AH, IJHCI
2015 CHI, ITS, ICMI, SUI, PerDis, PERCOMP Journal
2014 CHI, UIST*, ICMI, SUI, NordiCHI
(*) received special recognition for reviewing

Poster committee for ISS 2016 & 2017, MUM 2016
Student volunteer for ITS 2014, UIST 2014, CHI 2015

Grants & fellowships
NSF Grant - Student Innovation Challenge at UIST ($15,900, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. Grant provides funding for 5 teams
from underrepresented minorities to participate in the contest and attend the conference.
SIGCHI Grant - Student Innovation Challenge at UIST ($18,330, Co-writer, 2019)
Increasing diversity & inclusiveness at UIST. Grant provides funding for 2 non-US teams
from underrepresented minorities to participate in the contest and attend the conference,
and covers registration for 5 US-based teams.
ETH Zurich Postdoctoral Fellowships (CHF 229,600 / $229,068, Principal Investigator, 2018)
A Computational Framework for Increasing the Usability of Augmented Reality and Virtual Reality
Shapeways Educational Grant ($1,000, Contributor, 2015)
Exploring Visual Saliency of 3D Objects
Performance scholarship of FH Hagenberg (€750 / $850, Awardee, 2011)
One of twelve awardees for scholarship by FH Hagenberg (Leistungsstipendium)

Awards
CHI 2016 Best Paper Honorable Mention Award for
Influence of Display Transparency on Background Awareness and Task Performance.

UIST 2015 Best Paper Honorable Mention Award for
GelTouch: Localized Tactile Feedback Through Thin, Programmable Gel

Invited talks
2020/03/25 Carnegie Mellon University
2020/03/12 Aalto University
2020/03/02 University of Chicago
2020/02/27 University of Illinois at Chicago
2020/02/24 Boston University
2020/02/05 Facebook Reality Labs
2019/12/17 Aalto University
2019/10/28 University of Chicago
2019/08/09 Google Interaction Lab
2019/08/08 UC Berkeley
2019/08/07 Stanford University
2019/08/02 UCLA
2019/07/10 MIT Media Lab - Tangible Media Group
2019/07/10 MIT CSAIL
2019/07/08 Columbia University
2019/06/15 Swiss Society of Virtual and Augmented Reality, Meetup #HOMIXR
2018/05/22 Interact Lab - University of Sussex
2018/03/02 IST Austria
2018/02/21 DGP – University of Toronto
2017/12/15 ETH Zurich
2017/12/14 Disney Research Zurich
2017/12/12 INRIA Bordeaux
2017/10/05 Aarhus University


You can download my cv here: cv_davidlindlbauer.pdf.





Older projects


AEC Facade Visualisation [2011]

This project is a visualization on the interactive facade of the Ars Electronica Center, Linz. Users can play the game Breakout (or "bricks"). The platform is controlled with the player's body movement: the system tracks the player in front of the building with a camera and positions the platform accordingly by sending network commands to the AEC facade interface. Collaboration with Alexandra Ion. Hagenberg, 2011.

Kontrollwerk [2009]

Kontrollwerk is a multitouch MIDI controller software for surface platforms, created in collaboration with Alexandra Ion and Stefan Wasserbauer. KontrollWerk lets users build their own user interface from different types of MIDI controls. The output can be directed to any software or to any kind of internal or external MIDI device. With gesture recognition and a blob menu, the application offers intuitive handling, and it is well suited for DJs and VJs controlling several devices during a live performance. Hagenberg, 2009.

The Witness [2009]

The iPhone app “The Witness” is an interactive real-life game with multiple components, realized during my time at Interactive Pioneers. Depending on the level of the game and their location, players have to complete different tasks to uncover more information and finish the game. Set in Berlin, players were guided to multiple locations through the software. After reaching a location, they used the app to watch videos of the story, fulfill tasks such as finding QR codes, and communicate with actors who were part of the game. The project was realized with Jung von Matt Spree (advertising agency, concept) and 13th Street (client). Aachen, at Interactive Pioneers, 2009.