My research interests lie in Human-Computer Interaction, with a special focus on AR, accessibility, and creativity support.
I'm currently exploring and building AI+AR tools that help people sense and modify their surrounding environment, discover hidden creative opportunities, and improve knowledge gain and task performance in everyday life.
Jun 2023 - Sep 2023, San Jose, CA
Jun 2020 - Mar 2021, Beijing, China
Here are some recent projects I lead or participate in at UW CSE
May 2022 - Now, at UW CSE
Feb 2020 - Apr 2022, at UW CSE
Here are some recent publications from projects I lead or participate in at UW CSE and Tsinghua
In this demo paper, we introduce RASSAR, a mobile AR application for semi-automatically identifying, localizing, and visualizing indoor accessibility and safety issues using LiDAR and real-time computer vision. Our prototype supports four classes of detection problems: inaccessible object dimensions (e.g., table height), inaccessible object positions (e.g., a light switch out of reach), the presence of unsafe items (e.g., scissors), and the lack of proper assistive devices (e.g., grab bars). RASSAR's design was informed by a formative interview study with 18 participants from five key stakeholder groups, including wheelchair users, blind and low vision participants, families with young children, and caregivers. Our envisioned use cases include vacation rental hosts, new caregivers, or people with disabilities themselves documenting issues in their homes or rental spaces and planning renovations. We present key findings from our formative interviews, the design of RASSAR, and results from an initial performance evaluation.
PDF
We introduce RASSAR (Room Accessibility and Safety Scanning in Augmented Reality), a new proof-of-concept prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues using LiDAR + camera data, machine learning, and AR. We present an overview of the current RASSAR prototype and a preliminary evaluation in a single home.
PDF DOI
We present Kinergy—an interactive design tool for creating self-propelled motion by harnessing the energy stored in 3D-printable springs. Kinergy allows the user to create motion-enabled 3D models by embedding kinetic units, customize output motion characteristics by parameterizing embedded springs and kinematic elements, control energy by operating the specialized lock, and preview the resulting motion in an interactive environment.
PDF
We present an exploration of automating interior layout design. We put forward a set of representation rules that turn interior scene photos into structured scene graphs. With representation rules containing both categorical and spatial information, we establish an interior scene graph dataset by annotating well-designed interior scene photos downloaded from online photo-sharing sites. Using this dataset, which contains over 400 valid interior scene graphs, we train a graph generative model and render its output as reconstructed scenes.
PDF DOI
We introduce an interactive evolutionary computation (IEC) design experiment that deals with a simplified interior design task and has been tested on 230 subjects. Using the data gathered during the experiment, we conduct data analysis and visualization with methods including holistic color intervals and k-means clustering to reveal categories and processes in design. Additionally, we train a content-based recommendation system on the experiment data to capture user preferences and make the IEC system more efficient and intelligent.
PDF DOI
In this study, we propose a new command-object graph data structure retrieved from 3D modeling event logs. It reflects the cause-and-effect relationships between commands and objects to accommodate the 3D modeling process. A case study was conducted on 110 students' event logs generated from solving a well-defined façade modeling task. As a result, an average trial-and-error ratio of 0.3357 was calculated based on the original and simplified graphs. Four main types of 'modeling bubble' were summarized based on unit-tag graphs. Key commands were identified by graph centrality. Auto-prediction of same-group objects was developed with a mis-group rate of less than 0.9%.
PDF DOI
185 E Stevens Way NE, Seattle, WA 98195
xiasu@cs.washington.edu
HCI PhD student at UW CSE, doing AR + Accessibility + Creativity research