36th UIST 2023: San Francisco, CA, USA - Adjunct Volume
- Sean Follmer, Jeff Han, Jürgen Steimle, Nathalie Henry Riche:
Adjunct Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, UIST 2023, San Francisco, CA, USA, 29 October 2023 - 1 November 2023. ACM 2023
Session: Posters
- Chang Xiao:
AutoSurveyGPT: GPT-Enhanced Automated Literature Discovery. 1:1-1:3
- Hongning Shi, Jiajia Li, Lian Xue, Yajing Song:
OperAR: Using an Augmented Reality Agent to Enhance Children's Interactive Intangible Cultural Heritage Experience of the Peking Opera. 2:1-2:3
- Liwen He, Yifan Li, Mingming Fan, Liang He, Yuhang Zhao:
A Multi-modal Toolkit to Support DIY Assistive Technology Creation for Blind and Low Vision People. 3:1-3:3
- Ian Arawjo, Priyan Vaithilingam, Martin Wattenberg, Elena L. Glassman:
ChainForge: An open-source visual programming environment for prompt engineering. 4:1-4:3
- Junxian Li, Yanan Wang, Hebo Gong, Zhitong Cui:
AwakenFlora: Exploring Proactive Smell Experience in Virtual Reality through Mid-Air Gestures. 5:1-5:3
- Yuya Aikawa, Ryoya Tamura, Chunchen Xu, Xiao Ge, Daigo Misaki:
Introducing Augmented Post-it: An AR Prototype for Engaging Body Movements in Online GPT-Supported Brainstorming. 6:1-6:3
- Ruishi Zou, Zi Ye, Chen Ye:
iTutor: A Generative Tutorial System for Teaching the Elders to Use Smartphone Applications. 7:1-7:3
- Jongik Jeon, Chang Hee Lee:
SoundMist: Novel Interface for Spatial Auditory Experience. 8:1-8:3
- Tomoki Takahashi, Yusuke Sakai, Lana Sinapayen:
Sacriface: A Simple and Versatile Support Structure for 3D Printing. 9:1-9:3
- Austin Mac, Misha Sra:
Sonic Storyteller: Augmenting Oral Storytelling with Spatial Sound Effects. 10:1-10:3
- Shunta Ito, Yasuto Nakanishi:
AmplifiedCoaster: Virtual Roller Coaster Experience using Motorized Ramps and Personal Mobility Vehicle. 11:1-11:3
- Amr Gomaa, Robin Zitt, Guillermo Reyes, Antonio Krüger:
SynthoGestures: A Novel Framework for Synthetic Dynamic Hand Gesture Generation for Driving Scenarios. 12:1-12:3
- Gabriel J. Serfaty, Virgil O. Barnard, Joseph P. Salisbury:
Generative Facial Expressions and Eye Gaze Behavior from Prompts for Multi-Human-Robot Interaction. 13:1-13:3
- Hye-Young Jo, Chan Hu Wie, Yejin Jang, Dong-Uk Kim, Yurim Son, Yoonji Kim:
TrainerTap: Weightlifting Support System Prototype Simulating Personal Trainer's Tactile and Auditory Guidance. 14:1-14:3
- Kazuki Koyama, Koya Narumi, Ken Takaki, Yasushi Kawase, Ari Hautasaari, Yoshihiro Kawahara:
Reusing Cardboard for Packaging Boxes with a Computational Design System. 15:1-15:3
- Xiang Fei, Yujing Tian, Yanan Wang:
Laseroma: A Small-Sized, Light-Weight, and Low-Cost Olfactory Display Releasing Multiple Odors through Pointed Heating. 16:1-16:3
- Xiaohang Tang, Xi Chen, Sam Wong, Yan Chen:
VizPI: A Real-Time Visualization Tool for Enhancing Peer Instruction in Large-Scale Programming Lectures. 17:1-17:3
- Joshua Gorniak, Jacob Ottiger, Donglai Wei, Nam Wook Kim:
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction. 18:1-18:3
- Junhan Kong, Tianyuan Cai, Zoya Bylinskii:
Improving Mobile Reading Experiences While Walking Through Automatic Adaptations and Prompted Customization. 19:1-19:3
- Donghyeok Ma, Joon Hyub Lee, Junwoo Yoon, Taegyu Jin, Seok-Hyung Bae:
SketchingRelatedWork: Finding and Organizing Papers through Inking a Node-Link Diagram. 20:1-20:3
- Takegi Yoshimoto, Shuto Murakami, Homei Miyashita:
Edible Lenticular Lens Design System. 21:1-21:3
- Donghan Hu, Joseph Bae, Sol Ie Lim, Sang Won Lee:
Context-Aware Sit-Stand Desk for Promoting Healthy and Productive Behaviors. 22:1-22:3
- Atieh Taheri, Purav Bhardwaj, Arthur Caetano, Alice Zhong, Misha Sra:
Virtual Buddy: Redefining Conversational AI Interactions for Individuals with Hand Motor Disabilities. 23:1-23:3
- Tatsuya Kagemoto, Kentaro Takemura:
Event-Based Pupil Tracking Using Bright and Dark Pupil Effect. 24:1-24:3
- Takashi Murayama, Shu Sugita, Hiroyuki Saegusa, Junichiro Kadomoto, Hidetsugu Irie, Shuichi Sakai:
iKnowde: Interactive Learning Path Generation System Based on Knowledge Dependency Graphs. 25:1-25:3
- Seung Hyeon Han, Woohun Lee:
Sketchnote: Sketch-Based Visualization of Problem Decomposition in Block-Based Programming. 26:1-26:3
- Miku Fukaike, Homei Miyashita:
How To Eat Garlic Without Causing Bad Breath: Taste reproduction using a taste sensor and presentation of taste and aroma using a fork device. 27:1-27:3
- Daniel Lohn, Tobias Höllerer, Misha Sra:
Augmented Photogrammetry: 3D Object Scanning and Appearance Editing in Mobile Augmented Reality. 28:1-28:3
- Chizu Nishimori, Tomohiko Mukai:
On-the-fly Editing of Emoji Elements for Mobile Messaging. 29:1-29:2
- Seung-Jun Lee, Taegyu Jin, Joon Hyub Lee, Seok-Hyung Bae:
An Interactive System for Drawing Cars in Perspective. 30:1-30:3
- Yousang Kwon, Seonuk Kim, Taeyoung Ko, Juhyeok Yoon, Kyungho Lee:
Representing the Timbre of Traditional Musical Instruments Based On Contemporary Instrumental Samples Using DDSP. 31:1-31:3
- Homei Miyashita, Yoshinobu Kaji, Ai Sato:
Electric Salt: Tableware Design for Enhancing Taste of Low-Salt Foods. 32:1-32:3
- Donghyeok Ma, Joon Hyub Lee, Taegyu Jin, Sang-Hyun Lee, Ho Min Kim, Seok-Hyung Bae:
Sketching Proteins with Bare Hands in VR. 33:1-33:3
- Taegyu Jin, Seung-Jun Lee, Joon Hyub Lee, Seok-Hyung Bae:
Touch'n'Draw: Rapid 3D Sketching with Fluent Bimanual Coordination. 34:1-34:3
- Avinash Ajit Nargund, Alejandro Aponte, Arthur Caetano, Misha Sra:
ModBand: Design of a Modular Headband for Multimodal Data Collection and Inference. 35:1-35:3
- Radha Kumaran, Viral Niraj Doshi, Sherry X. Chen, Avinash Ajit Nargund, Tobias Höllerer, Misha Sra:
EChat: An Emotion-Aware Adaptive UI for a Messaging App. 36:1-36:3
- Qiming Sun, I-Han Hsiao:
Waste Genie: A Web-Based Educational Technology for Sustainable Waste Management. 37:1-37:3
- Benjamin Nuernberger, Corey J. Cochrane, Justin Williams, Lyle Klyne, Andreas Gottscholl, Hannes Kraus, Angelo Ryan Soriano, Pablo S. Narvaez, Chi-Chien Nelson Huang, Katherine Dang, Edward C. Gonzales, Neil Murphy, Carol A. Raymond:
Visualizing Spacecraft Magnetic Fields on the Web and in VR. 38:1-38:3
- Wenqi Zheng, Emma Walquist, Isha Datey, Xiangyu Zhou, Kelly Berishaj, Melissa Mcdonald, Michele Parkhill, Dongxiao Zhu, Douglas Zytko:
Towards Trauma-Informed Data Donation of Sexual Experience in Online Dating to Improve Sexual Risk Detection AI. 39:1-39:3
- Aryan Saini, Srihari Sridhar, Aarushi Raheja, Rakesh Patibanda, Nathalie Overdevest, Po-Yao (Cosmos) Wang, Elise van den Hoven, Florian 'Floyd' Mueller:
Pneunocchio: A playful nose augmentation for facilitating embodied representation. 40:1-40:3
- Daniel Vargas-Diaz, Sulakna Karunaratna, Jisun Kim, Sang Won Lee, Koeun Choi:
TaleMate: Collaborating with Voice Agents for Parent-Child Joint Reading Experiences. 41:1-41:3
- Ashwini G. Naik, Andrew E. Johnson:
Using Personal Situated Analytics (PSA) to Interpret Recorded Meetings. 42:1-42:3
- Lauryn Anderson, Lora Oehlberg, Wesley Willett:
FlavourFrame: Visualizing Tasting Experiences. 43:1-43:3
- James Lin, Tiffany Knearem, Kristopher Giesing:
Relay: A collaborative UI model for design handoff. 44:1-44:3
- Xin Yue Amanda Li, Jason Wu, Jeffrey P. Bigham:
Using LLMs to Customize the UI of Webpages. 45:1-45:3
- Shanna Li Ching Hollingworth, Wesley Willett:
FluencyAR: Augmented Reality Language Immersion. 46:1-46:3
- Gabriel Lipkowitz, Eric S. G. Shaqfeh, Joseph M. DeSimone:
Palette-PrintAR: an augmented reality fluidic design tool for multicolor resin 3D printing. 47:1-47:3
- Vivian Hsinyueh Chan, Chiao Fang, Yukai Hung, Jing-Yuan Huang, Lung-Pan Cheng:
Chandelier: Interaction Design With Surrounding Mid-Air Tangible Interface. 48:1-48:3
- Eduard De Vidal Flores, Caglar Yildirim, D. Fox Harrell:
E4UnityIntegration-MIT: An Open-Source Unity Plug-in for Collecting Physiological Data using Empatica E4 during Gameplay. 49:1-49:3
- Yifan Yan, Mingyi Yuan, Yanan Wang, Qi Wang, Xinyi Liao, Xinyan Li:
ScentCarving: Fabricating Thin, Multi-layered and Paper-Based Scent Release through Laser Printing. 50:1-50:3
Session: Demos
- Chenyang Zhang, Tiansu Chen, Rohan Russel Nedungadi, Eric Shaffer, Elahe Soltanaghai:
FocusFlow: Leveraging Focal Depth for Gaze Interaction in Virtual Reality. 51:1-51:4
- Peiling Jiang, Li Feng, Fuling Sun, Haijun Xia, Can Liu:
Demonstrating 1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture. 52:1-52:3
- Mustafa Doga Dogan, Raul Garcia-Martin, Patrick William Haertel, Jamison John O'Keefe, Raul Sánchez-Reillo, Stefanie Mueller:
Demonstrating BrightMarkers: Fluorescent Tracking Markers Embedded in 3D Printed Objects. 53:1-53:3
- Masatoshi Hamanaka:
Melody Slot Machine on iPhone: Dial-type Interface for Morphed Melody. 54:1-54:3
- Yuning Su, Yonghao Shi, Da-Yuan Huang, Xing-Dong Yang:
Laser-Powered Vibrotactile Rendering. 55:1-55:3
- Tatsuya Kawasaki, Hiroyuki Manabe:
LensTouch: Touch Input on Lens Surfaces of Smart Glasses. 56:1-56:3
- Yair Herbst, Alon Wolf, Lihi Zelnik-Manor:
Demonstrating HUGO, a High-Resolution Tactile Emulator for Complex Surfaces. 57:1-57:4
- Takahiro Kusabuka, Hiroshi Chigira, Takayoshi Mochizuki:
LayerShift: Reconfigurable Layer Expression Using Robotic Transparent Displays. 58:1-58:3
- Kentaro Yasu:
Demonstrating SuperMagneShape: Interactive Usage of a Passive Pin-Based Shape-Changing Display. 59:1-59:3
- Bryan Min, Matthew T. Beaudouin-Lafon, Sangho Suh, Haijun Xia:
Demonstration of Masonview: Content-Driven Viewport Management. 60:1-60:3
- Kenta Takagi, Yasuto Nakanishi:
Projectoroid: A Mobile Robot-Based SAR Display Approach. 61:1-61:3
- Kosei Kamata, Haruki Takahashi, Koji Tsukada:
Conductive, Ferromagnetic and Bendable 3D Printed Hair for Designing Interactive Objects. 62:1-62:3
- Ayaka Ishii, Kentaro Yasu:
FluxTangible: Simple and Dynamic Haptic Tangible with Bumps and Vibrations. 63:1-63:3
- Kyungeun Jung, Sang Ho Yoon:
Mo2Hap: Rendering VR Performance Motion Flow to Upper-body Vibrotactile Haptic Feedback. 64:1-64:3
- Rui Sheng, Leni Yang, Haotian Li, Yan Luo, Ziyang Xu, Zhilan Zhou, David Gotz, Huamin Qu:
Knowledge Compass: A Question Answering System Guiding Students with Follow-Up Question Recommendations. 65:1-65:4
- Zeyu Yan, Hsuanling Lee, Liang He, Huaishu Peng:
Demonstration of 3D Printed Magnetophoretic Displays. 66:1-66:3
- Fangzheng Liu, Joseph A. Paradiso:
Printed Circuit Board (PCB) Probe Tester (PCBPT) - a Compact Desktop System that Helps with Automatic PCB Debugging. 67:1-67:3
- Kyunghwan Kim, Geehyuk Lee:
Virtual Rolling Temple: Expanding the Vertical Input Space of a Smart Glasses Touchpad. 68:1-68:3
- Catherine Grevet Delcourt, Zhixin Jin, Sofia Kobayashi, Quan Gu, Christine Bassem:
Demonstration of A Figma Plugin to Simulate A Large-Scale Network for Prototyping Social Systems. 69:1-69:3
- Peitong Duan, Jeremy Warner, Bjoern Hartmann:
Towards Generating UI Design Feedback with LLMs. 70:1-70:3
- Changyo Han, Yosuke Nakagawa, Takeshi Naemura:
Demonstrating Swarm Robots Capable of Cooperative Transitioning between Table and Wall. 71:1-71:4
- Magnus Frisk, Mads Vejrup, Frederik Kjaer Soerensen, Michael Wessely:
ChromaNails: Re-Programmable Multi-Colored High-Resolution On-Body Interfaces using Photochromic Nail Polish. 72:1-72:5
- Yunyi Zhu, Cedric Honnet, Yixiao Kang, Junyi Zhu, Angelina J. Zheng, Kyle Heinz, Grace Tang, Luca Musk, Michael Wessely, Stefanie Mueller:
Demonstration of ChromoCloth: Re-Programmable Multi-Color Textures through Flexible and Portable Light Source. 73:1-73:3
- Zhihao Yao, Qirui Sun, Beituo Liu, Yao Lu, Guanhong Liu, Haipeng Mi:
InkBrush: A Flexible and Controllable Authoring Tool for 3D Ink Painting. 74:1-74:3
- Artem Dementyev, Dimitri Kanevsky, Samuel J. Yang, Mathieu Parvaix, Chiong Lai, Alex Olwal:
LiveLocalizer: Augmenting Mobile Speech-to-Text with Microphone Arrays, Optimized Localization and Beamforming. 75:1-75:3
- Ruofei Du, Na Li, Jing Jin, Michelle Carney, Xiuxiu Yuan, Kristen Wright, Mark Sherwood, Jason Mayes, Lin Chen, Jun Jiang, Jingtao Zhou, Zhongyi Zhou, Ping Yu, Adarsh Kowdle, Ram Iyengar, Alex Olwal:
Experiencing Visual Blocks for ML: Visual Prototyping of AI Pipelines. 76:1-76:3
- Timothy J. Aveni, Armando Fox, Björn Hartmann:
Bringing Context-Aware Completion Suggestions to Arbitrary Text Entry Interfaces. 77:1-77:3
- Sofia Kobayashi, Yuehe Mao, Christine Bassem, Catherine Grevet Delcourt:
Snap'N'Go: An Extendable Framework for Evaluating Mechanisms in Spatial Crowdsourcing. 78:1-78:3
- Anandghan Waghmare, Jiexin Ding, Ishan Chatterjee, Shwetak N. Patel:
Demo of Z-Ring: Context-Aware Subtle Input Using Single-Point Bio-Impedance Sensing. 79:1-79:3
- Jingying Wang, Vitaliy Popov, Xu Wang:
SketchSearch: Fine-tuning Reference Maps to Create Exercises In Support of Video-based Learning for Surgeons. 80:1-80:3
- Jaewook Lee, Devesh P. Sarda, Eujean Lee, Amy Lee, Jun Wang, Adrian Rodriguez, Jon E. Froehlich:
Towards Real-time Computer Vision and Augmented Reality to Support Low Vision Sports: A Demonstration of ARTennis. 81:1-81:3
- J. D. Zamfirescu-Pereira, Shm Garanganao Almeda, Kyu Won Kim, Bjoern Hartmann:
Towards Image Design Space Exploration in Spreadsheets with LLM Formulae. 82:1-82:3
- Ruei-Che Chang, Chia-Sheng Hung, Dhruv Jain, Anhong Guo:
SoundBlender: Manipulating Sounds for Accessible Mixed-Reality Awareness. 83:1-83:4
- Julien Blanchet, Megan E. Hillis, Yeongji Lee, Qijia Shao, Xia Zhou, David J. M. Kraemer, Devin J. Balkcom:
LearnThatDance: Augmenting TikTok Dance Challenge Videos with an Interactive Practice Support System Powered by Automatically Generated Lesson Plans. 84:1-84:4
- Xingyu Bruce Liu, Vladimir Kirilyuk, Xiuxiu Yuan, Peggy Chi, Alex Olwal, Xiang 'Anthony' Chen, Ruofei Du:
Experiencing Visual Captions: Augmented Communication with Real-time Visuals using Large Language Models. 85:1-85:4
- Joyce E. Passananti, Ana Cárdenas Gasca, Jennifer Jacobs, Tobias Höllerer:
SculptAR: Direct Manipulations of Machine Toolpaths in Augmented Reality for 3D Clay Printing. 86:1-86:3
- Qiuyu Lu, Semina Yi, Tianyu Yu, Yuran Ding, Haipeng Mi, Lining Yao:
Demonstrating Sustainflatable: Harvesting, Storing and Utilizing Ambient Energy for Pneumatic Morphing Interfaces. 87:1-87:6
- Angela Vujic, Ashley Martin, Shreyas Nisal, Manaal Mohammed, Pattie Maes:
Demonstration of Joie: A Joy-based Brain-Computer Interface (BCI) with Wearable Skin Conformal Polymer Electrodes. 88:1-88:3
- Gregory Thomas Croisdale, John Joon Young Chung, Emily Huang, Gage Birchmeier, Xu Wang, Anhong Guo:
DeckFlow: A Card Game Interface for Exploring Generative Model Flows. 89:1-89:3
- Yijie Guo, Zhenhan Huang, Ruhan Wang, Chih-Heng Li, Ruoyu Wu, Qirui Sun, Zhihao Yao, Haipeng Mi, Yu Peng:
Sparkybot: An Embodied AI Agent-Powered Robot with Customizable Characters and Interaction Behavior for Children. 90:1-90:3
- Jiajing Guo, Andrew Benton, Nan Tian, William Ma, Nicholas Feffer, Zhengyu Zhou, Liu Ren:
EyeClick: A Robust Two-Step Eye-Hand Interaction for Text Entry in Augmented Reality Glasses. 91:1-91:4
- Jaewook Lee, Jun Wang, Elizabeth Brown, Liam Chu, Sebastian S. Rodriguez, Jon E. Froehlich:
Towards Designing a Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation: A Demonstration of GazePointAR. 92:1-92:3
- Samira Pulatova, Jiadi Luo, Janghyeon Lee, Veronika Domova, Yuqi Yao, Parsa Rajabi, Lawrence H. Kim:
SwarmFidget: Exploring Programmable Actuated Fidgeting with Swarm Robots. 93:1-93:4
- Takekazu Kitagishi, Yuichi Hiroi, Yuna Watanabe, Yuta Itoh, Jun Rekimoto:
Telextiles: End-to-end Remote Transmission of Fabric Tactile Sensation. 94:1-94:3
- Shuyi Sun, Gabriela Vega, Erkin Seker, Krystle L. Reagan, Katia Vega:
PURRtentio: Implementing a Smart Litter Box for Feline Urinalysis with Electrochemical Biosensors. 95:1-95:3
- Harrison Goldstein, Benjamin C. Pierce, Andrew Head:
Tyche: In Situ Analysis of Random Testing Effectiveness. 96:1-96:3
- Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil:
Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents. 97:1-97:3
- Faraz Faruqi, Ahmed Katary, Tarik Hasic, Amira Abdel-Rahman, Nayeemur Rahman, Leandra Tejedor, Mackenzie Leake, Megan Hofmann, Stefanie Mueller:
Demonstration of Style2Fab: Functionality-Aware Segmentation for Fabricating Personalized 3D Models with Generative AI. 98:1-98:5
Session: Doctoral Symposium
- Jasper Tran O'Leary:
Physical-Digital Programming. 99:1-99:5
- Nikhita Joshi:
User Interface Constraints to Influence User Behaviour when Reading and Writing. 100:1-100:5
- Yiyue Luo:
Intelligent Textiles for Physical Human-Environment Interactions. 101:1-101:5
- Bogoan Kim:
Supporting Independence of Autistic Adults through Mobile and Virtual Reality Technologies. 102:1-102:5
- Jas Brooks:
Chemical interfaces: new methods for interfacing with the human senses. 103:1-103:7
- Katherine W. Song:
Decomposable Interactive Systems. 104:1-104:5
- Bryan Wang:
Democratizing Content Creation and Consumption through Human-AI Copilot Systems. 105:1-105:4
- Samuelle Bourgault:
Developing Action-Oriented Systems for Manual-Computational Craft Workflows. 106:1-106:5
Session: Workshops
- Michael S. Bernstein, Joon Sung Park, Meredith Ringel Morris, Saleema Amershi, Lydia B. Chilton, Mitchell L. Gordon:
Architecting Novel Interactions with Generative AI Models. 107:1-107:3
- Ryo Suzuki, Mar González-Franco, Misha Sra, David Lindlbauer:
XR and AI: AI-Enabled Virtual, Augmented, and Mixed Reality. 108:1-108:3
- Zeyu Yan, Tingyu Cheng, Jasmine Lu, Pedro Lopes, Huaishu Peng:
Future Paradigms for Sustainable Making. 109:1-109:3
- Daniel Leithinger, Ran Zhou, Eric Acome, Ahad Mujtaba Rauf, Teng Han, Craig D. Shultz, Joe Mullenbach:
Electro-actuated Materials for Future Haptic Interfaces. 110:1-110:3
Session: Visions
- Ivan Poupyrev:
The Ultimate Interface. 111:1-111:2
- Meredith Ringel Morris:
AGI is Coming... Is HCI Ready? 112:1
Session: Student Innovation Contest
- Yuto Nagao, Soichiro Fukuda:
4-Frame Manga Drawing Support System. 113:1-113:3
- Toma Itagaki, Richard Li:
Smart-Pikachu: Extending Interactivity of Stuffed Animals with Large Language Models. 114:1-114:2
- Yumeng Ma, Jiahao Ren:
ProactiveAgent: Personalized Context-Aware Reminder System. 115:1-115:3
- Wei-En Tsai, Yi-Chun Liu:
Aisen - Web-Based Gaze-Tracking Assistive Communication Interface with Word Cards Generated by LLMs. 116:1-116:3
- Jaidev Shriram, Sanjayan Pradeep Kumar Sreekala:
ZINify: Transforming Research Papers into Engaging Zines with Large Language Models. 117:1-117:3
- Julien Blanchet, Sixuan Han:
Integrating a LLM into an Automatic Dance Practice Support System: Breathing Life Into The Virtual Coach. 118:1-118:2
- Yubo Zhao, Xiying Bao:
Narratron: Collaborative Writing and Shadow-playing of Children Stories with Large Language Models. 119:1-119:6
- Olivia Seow:
LingoLand: An AI-Assisted Immersive Game for Language Learning. 120:1-120:3
- Yihao Zhu, Qinyi Zhou:
Docent: Digital Operation-Centric Elicitation of Novice-friendly Tutorials. 121:1-121:3
- Jeongeon Park, Daeun Choi:
AudiLens: Configurable LLM-Generated Audiences for Public Speech Practice. 122:1-122:3