Maki Sugimoto
2020 – today
- 2024
- [j27] Hisako Tomita, Naoto Ienaga, Hiroki Kajita, Tetsu Hayashida, Maki Sugimoto: An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos. Int. J. Comput. Assist. Radiol. Surg. 19(11): 2195-2202 (2024)
- [j26] Yuki Mashiyama, Ryota Kondo, Masaaki Fukuoka, Theophilus Teo, Maki Sugimoto: Investigating body perception of multiple virtual hands in synchronized and asynchronized conditions. Frontiers Virtual Real. 5 (2024)
- [j25] Masaaki Fukuoka, Fumihiko Nakamura, Adrien Verhulst, Masahiko Inami, Michiteru Kitazaki, Maki Sugimoto: Sensory Attenuation With a Virtual Robotic Arm Controlled Using Facial Movements. IEEE Trans. Vis. Comput. Graph. 30(7): 4023-4038 (2024)
- [j24] Ulrich Eck, Maki Sugimoto, Misha Sra, Markus Tatzgern, Jeanine K. Stefanucci, Ian Williams: Message from the ISMAR 2024 Science and Technology Program Chairs and TVCG Guest Editors. IEEE Trans. Vis. Comput. Graph. 30(11): vii (2024)
- [c171] Theophilus Teo, Kuniharu Sakurada, Maki Sugimoto, Gun A. Lee, Mark Billinghurst: Evaluations of Parallel Views for Sequential VR Search Tasks. AHs 2024: 148-156
- [c170] Katsutoshi Masai, Keisuke Morita, Maki Sugimoto: Analysis of Co-viewing with Virtual Agent Towards Affectively Immersive Interaction in Virtual Spaces. ISMAR-Adjunct 2024: 355-356
- [c169] Katsutoshi Masai, Maki Sugimoto, Brian Iwana: Facial Gesture Classification with Few-shot Learning Using Limited Calibration Data from Photo-reflective Sensors on Smart Eyewear. MUM 2024: 432-438
- [c168] Hiroo Yamamura, Ryota Kondo, Maki Sugimoto: Necomimi illusion: Generating Ownership of Cat Ears through Haptic Feedback via Hair. SIGGRAPH Asia XR 2024: 13:1-13:2
- [c167] Tatsuki Arai, Mariko Isogawa, Kuniharu Sakurada, Maki Sugimoto: 3D Human Pose Estimation Using Ultra-low Resolution Thermal Images. SIGGRAPH Asia Posters 2024: 53:1-53:2
- [c166] Tomoya Matsubara, Maki Sugimoto, Hideo Saito: Change Detection for Constantly Maintaining Up-to-date Metaverse Maps. VR Workshops 2024: 559-564
- [c165] Ryota Kondo, Maki Sugimoto: Teleporting Split Body: Can Portals Extend Self-Location? VR Workshops 2024: 608-612
- [c164] Ryota Kondo, Maki Sugimoto, Hideo Saito: Effects on Size Perception by Changing Dynamic Invisible Body Size. VR Workshops 2024: 733-734
- [e5] Ulrich Eck, Misha Sra, Jeanine K. Stefanucci, Maki Sugimoto, Markus Tatzgern, Ian Williams: IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2024, Bellevue, WA, USA, October 21-25, 2024. IEEE 2024, ISBN 979-8-3315-1647-5 [contents]
- [e4] Ulrich Eck, Misha Sra, Jeanine K. Stefanucci, Maki Sugimoto, Markus Tatzgern, Ian Williams: IEEE International Symposium on Mixed and Augmented Reality Adjunct, ISMAR 2024, Bellevue, WA, USA, October 21-25, 2024. IEEE 2024, ISBN 979-8-3315-0691-9 [contents]
- 2023
- [j23] Yasuhiro Nitta, Mariko Isogawa, Ryo Yonetani, Maki Sugimoto: Importance Rank-Learning of Objects in Urban Scenes for Assisting Visually Impaired People. IEEE Access 11: 62932-62941 (2023)
- [j22] Kuniharu Sakurada, Ryota Kondo, Fumihiko Nakamura, Michiteru Kitazaki, Maki Sugimoto: Investigating the perceptual attribution of a virtual robotic limb synchronizing with hand and foot simultaneously. Frontiers Virtual Real. 4 (2023)
- [j21] Fumihiko Nakamura, Masaaki Murakami, Katsuhiro Suzuki, Masaaki Fukuoka, Katsutoshi Masai, Maki Sugimoto: Analyzing the Effect of Diverse Gaze and Head Direction on Facial Expression Recognition With Photo-Reflective Sensors Embedded in a Head-Mounted Display. IEEE Trans. Vis. Comput. Graph. 29(10): 4124-4139 (2023)
- [c163] Yuki Nakabayashi, Fumihiko Nakamura, Maki Sugimoto: Facial Expression Recognition by Photo-Reflective Sensors Considering Time Series and Head Posture. APCC 2023: 358-363
- [c162] Masayasu Niwa, Katsutoshi Masai, Shigeo Yoshida, Maki Sugimoto: Investigating Effects of Facial Self-Similarity Levels on the Impression of Virtual Agents in Serious/Non-Serious Contexts. AHs 2023: 221-230
- [c161] Ryota Kondo, Maki Sugimoto: Effects of the Number of Bodies on Ownership of Multiple Bodies. AHs 2023: 314-316
- [c160] Fumihiko Nakamura, Maki Sugimoto: Exploring the Effect of Transfer Learning on Facial Expression Recognition using Photo-Reflective Sensors embedded into a Head-Mounted Display. AHs 2023: 317-319
- [c159] Jean-Marie Normand, Maki Sugimoto, Veronica Sundstedt: ICAT-EGVE 2023 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments. ICAT-EGVE 2023: i-xii
- [c158] Koki Kawamura, Shunichi Kasahara, Masaaki Fukuoka, Katsutoshi Masai, Ryota Kondo, Maki Sugimoto: SyncArms: Gaze-Driven Target Object-oriented Manipulation for Parallel Operation of Robot Arms in Distributed Physical Environments. SIGGRAPH Emerging Technologies 2023: 18:1-18:2
- [c157] Ryota Kondo, Maki Sugimoto: Dynamic Split Body: Changing Body Perception and Self-Location by Manipulating Half-Body Position. SIGGRAPH Asia Emerging Technologies 2023: 7:1-7:2
- [c156] Theophilus Teo, Maki Sugimoto, Gun A. Lee, Mark Billinghurst: NinjaHeads: Gaze-Oriented Parallel View System for Asynchronous Tasks. SIGGRAPH Asia XR 2023: 21:1-21:2
- [c155] Ryota Kondo, Maki Sugimoto: Investigating the Minimal Condition of the Dynamic Invisible Body Illusion. VR Workshops 2023: 601-602
- [c154] Theophilus Teo, Kuniharu Sakurada, Maki Sugimoto: Exploring Enhancements towards Gaze Oriented Parallel Views in Immersive Tasks. VR 2023: 620-630
- [c153] Yuhi Tani, Akemi Kobayashi, Katsutoshi Masai, Takehiro Fukuda, Maki Sugimoto, Toshitaka Kimura: Assessing Individual Decision-Making Skill by Manipulating Predictive and Unpredictive Cues in a Virtual Baseball Batting Environment. VR Workshops 2023: 775-776
- [e3] Jean-Marie Normand, Maki Sugimoto, Veronica Sundstedt: Virtual Environments 2023, ICAT-EGVE, 33rd International Conference on Artificial Reality and Telexistence, 28th Eurographics Symposium on Virtual Environments, Trinity College Dublin, Ireland, December 6-8, 2023. Eurographics Association 2023, ISBN 978-3-03868-218-9 [contents]
- 2022
- [j20] Kosuke Kikui, Katsutoshi Masai, Tomoya Sasaki, Masahiko Inami, Maki Sugimoto: AnkleSens: Foot Posture Prediction Using Photo Reflective Sensors on Ankle. IEEE Access 10: 33111-33122 (2022)
- [j19] Ferran Argelaguet, Ryan P. McMahan, Maki Sugimoto: Foreword to the Special Section on the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments (ICAT-EGVE 2020). Comput. Graph. 103: 5- (2022)
- [j18] Fumihiko Nakamura, Adrien Verhulst, Kuniharu Sakurada, Masaaki Fukuoka, Maki Sugimoto: Evaluation of Spatial Directional Guidance Using Cheek Haptic Stimulation in a Virtual Environment. Frontiers Comput. Sci. 4: 733844 (2022)
- [j17] Reiji Miura, Shunichi Kasahara, Michiteru Kitazaki, Adrien Verhulst, Masahiko Inami, Maki Sugimoto: MultiSoma: Motor and Gaze Analysis on Distributed Embodiment With Synchronized Behavior and Perception. Frontiers Comput. Sci. 4: 788014 (2022)
- [j16] Ryota Kondo, Maki Sugimoto: Split body: Extending self-location by splitting a body left and right. Frontiers Virtual Real. 3 (2022)
- [j15] Keitaro Yoshida, Ryo Hachiuma, Hisako Tomita, Jingjing Pan, Kris Kitani, Hiroki Kajita, Tetsu Hayashida, Maki Sugimoto: Spatiotemporal Video Highlight by Neural Network Considering Gaze and Hands of Surgeon in Egocentric Surgical Videos. J. Medical Robotics Res. 7(1): 2141001:1-2141001:18 (2022)
- [j14] Yuta Itabashi, Fumihiko Nakamura, Hiroki Kajita, Hideo Saito, Maki Sugimoto: Smart Surgical Light: Identification of Surgical Field States Using Time of Flight Sensors. J. Medical Robotics Res. 7(1): 2141002:1-2141002:11 (2022)
- [c152] Katsutoshi Masai, Monica Perusquía-Hernández, Maki Sugimoto, Shiro Kumano, Toshitaka Kimura: Consistent Smile Intensity Estimation from Wearable Optical Sensors. ACII 2022: 1-8
- [c151] Kye Shimizu, Naoto Ienaga, Kazuma Takada, Maki Sugimoto, Shunichi Kasahara: Human Latent Metrics: Perceptual and Cognitive Response Correlates to Distance in GAN Latent Space for Facial Images. SAP 2022: 3:1-3:10
- [c150] Yukiya Nakanishi, Masaaki Fukuoka, Shunichi Kasahara, Maki Sugimoto: Synchronous and Asynchronous Manipulation Switching of Multiple Robotic Embodiment Using EMG and Eye Gaze. AHs 2022: 94-103
- [c149] Kuniharu Sakurada, Ryota Kondo, Fumihiko Nakamura, Masaaki Fukuoka, Michiteru Kitazaki, Maki Sugimoto: The Reference Frame of Robotic Limbs Contributes to the Sense of Embodiment and Motor Control Process. AHs 2022: 104-115
- [c148] Nonoka Nishida, Yukiko Iwasaki, Theophilus Teo, Masaaki Fukuoka, Maki Sugimoto, Po-Han Chen, Fumihiro Kato, Michiteru Kitazaki, Hiroyasu Iwata: Analysis and Observation of Behavioral Factors Contributing to Improvement of Embodiment to a Supernumerary Limb. AHs 2022: 116-120
- [c147] Ryota Kondo, Maki Sugimoto: Effects of Body Duplication and Split on Body Schema. AHs 2022: 320-322
- [c146] Tohru Takechi, Fumihiko Nakamura, Masaaki Fukuoka, Naoto Ienaga, Maki Sugimoto: The Sense of Agency, Sense of Body Ownership with a Semi-autonomous Telexistence Robot under Shared / Unshared Intention Conditions. ICAT-EGVE (Posters and Demos) 2022: 11-12
- [c145] Theophilus Teo, Kuniharu Sakurada, Masaaki Fukuoka, Maki Sugimoto: Evaluating Techniques to Share Hand Gestures for Remote Collaboration using Top-Down Projection in a Virtual Environment. ICAT-EGVE 2022: 17-25
- [c144] Takumi Komori, Hiroki Ishimoto, Gowrishankar Ganesh, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki: Sense of Ownership, Self-location, and Gaze Responses in Virtual Rubber Hand Illusion. ICAT-EGVE (Posters and Demos) 2022: 29-30
- [c143] Yuki Mashiyama, Masaaki Fukuoka, Theophilus Teo, Ryota Kondo, Maki Sugimoto: Investigating Perception of Multiple Virtual Hands using Startle Response. ICAT-EGVE (Posters and Demos) 2022: 53-54
- [c142] Yuki Nakabayashi, Tatsuki Arai, Miruku Ozaki, Koki Kawamura, Koki Tsuboji, Maho Nakagawa, Yasuhiro Nitta, Ryotaro Haba, Sho Hamano, Yuki Matsutani, Yuki Mashiyama, Maki Sugimoto: Mingle with an Amoeba. ICAT-EGVE (Posters and Demos) 2022: 55-56
- [c141] Koki Watanabe, Fumihiko Nakamura, Kuniharu Sakurada, Theophilus Teo, Maki Sugimoto: An Integrated Ducted Fan-Based Multi-Directional Force Feedback with a Head Mounted Display. ICAT-EGVE 2022: 55-63
- [c140] Hitoshi Yoshihashi, Naoto Ienaga, Maki Sugimoto: A Quantitative and Qualitative Analysis on a GAN-Based Face Mask Removal on Masked Images and Videos. VISIGRAPP (Revised Selected Papers) 2022: 51-65
- [c139] Theophilus Teo, Kuniharu Sakurada, Masaaki Fukuoka, Maki Sugimoto: Techniques for using VRChat to Replace On-site Experiments. ISMAR Adjunct 2022: 249-253
- [c138] Harin Hapuarachchi, Hiroki Ishimoto, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki: Embodiment of an Avatar with Unnatural Arm Movements. ISMAR Adjunct 2022: 772-773
- [c137] Katsutoshi Masai, Takuma Kajiyama, Tadashi Muramatsu, Maki Sugimoto, Toshitaka Kimura: Virtual Reality Sonification Training System Can Improve a Novice's Forehand Return of Serve in Tennis. ISMAR Adjunct 2022: 845-849
- [c136] Hitoshi Yoshihashi, Naoto Ienaga, Maki Sugimoto: GAN-based Face Mask Removal using Facial Landmarks and Pixel Errors in Masked Region. VISIGRAPP (5: VISAPP) 2022: 125-133
- 2021
- [c135] Reiji Miura, Shunichi Kasahara, Michiteru Kitazaki, Adrien Verhulst, Masahiko Inami, Maki Sugimoto: MultiSoma: Distributed Embodiment with Synchronized Behavior and Perception. AHs 2021: 1-9
- [c134] Ryo Takizawa, Takayoshi Hagiwara, Adrien Verhulst, Masaaki Fukuoka, Michiteru Kitazaki, Maki Sugimoto: Dynamic Shared Limbs: An Adaptive Shared Body Control Method Using EMG Sensors. AHs 2021: 10-18
- [c133] Fumihiko Nakamura, Adrien Verhulst, Kuniharu Sakurada, Maki Sugimoto: Virtual Whiskers: Spatial Directional Guidance using Cheek Haptic Stimulation in a Virtual Environment. AHs 2021: 141-151
- [c132] Edouard Ferrand, Adrien Verhulst, Masahiko Inami, Maki Sugimoto: Exploring a Dynamic Change of Muscle Perception in VR, Based on Muscle Electrical Activity and/or Joint Angle. AHs 2021: 298-300
- [c131] Thibault Fabre, Adrien Verhulst, Alfonso Balandra, Maki Sugimoto, Masahiko Inami: Investigating Textual Visual Sound Effects in a Virtual Environment and their impacts on Object Perception and Sound Perception. ISMAR 2021: 320-328
- [c130] Fumihiko Nakamura, Adrien Verhulst, Kuniharu Sakurada, Masaaki Fukuoka, Maki Sugimoto: Virtual Whiskers: Cheek Haptic-Based Spatial Directional Guidance in a Virtual Space. SIGGRAPH Asia XR 2021: 17:1-17:2
- [c129] Koki Watanabe, Fumihiko Nakamura, Kuniharu Sakurada, Theophilus Teo, Maki Sugimoto: X-Wing: Propeller-Based Force Feedback to Head in a Virtual Environment. SIGGRAPH Asia XR 2021: 19:1-19:2
- [c128] Yuki Kato, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki: Communications in Virtual Environment Improve Interpersonal Impression. VR Workshops 2021: 613-614
- 2020
- [j13] Ryota Kondo, Yamato Tani, Maki Sugimoto, Kouta Minamizawa, Masahiko Inami, Michiteru Kitazaki: Re-association of Body Parts: Illusory Ownership of a Virtual Arm Associated With the Contralateral Real Finger by Visuo-Motor Synchrony. Frontiers Robotics AI 7: 26 (2020)
- [c127] Katsutoshi Masai, Kai Kunze, Maki Sugimoto: Eye-based Interaction Using Embedded Optical Sensors on an Eyewear Device for Facial Expression Recognition. AHs 2020: 1:1-1:10
- [c126] Akino Umezawa, Yoshinari Takegawa, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto, Yutaka Tokuda, Diego Martínez Plasencia, Sriram Subramanian, Masafumi Takahashi, Hiroaki Taka, Keiji Hirata: e2-MaskZ: a Mask-type Display with Facial Expression Identification using Embedded Photo Reflective Sensors. AHs 2020: 36:1-36:3
- [c125] Theophilus Teo, Fumihiko Nakamura, Maki Sugimoto, Adrien Verhulst, Gun A. Lee, Mark Billinghurst, Matt Adcock: WeightSync: Proprioceptive and Haptic Stimulation for Virtual Physical Perception. ICAT-EGVE 2020: 1-9
- [c124] Kazuya Nagamachi, Yuki Kato, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki: Pseudo Physical Contact and Communication in VRChat: A Study with Survey Method in Japanese Users. ICAT-EGVE (Posters and Demos) 2020: 13-14
- [c123] Yoshinari Takegawa, Yutaka Tokuda, Akino Umezawa, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto, Diego Martínez Plasencia, Sriram Subramanian, Keiji Hirata: Digital Full-Face Mask Display with Expression Recognition using Embedded Photo Reflective Sensor Arrays. ISMAR 2020: 101-108
- [c122] Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, Maki Sugimoto: Face Commands - User-Defined Facial Gestures for Smart Glasses. ISMAR 2020: 374-386
- [c121] Theophilus Teo, Fumihiko Nakamura, Maki Sugimoto, Adrien Verhulst, Gun A. Lee, Mark Billinghurst, Matt Adcock: Feel it: Using Proprioceptive and Haptic Feedback for Interaction with Virtual Embodiment. SIGGRAPH Emerging Technologies 2020: 2:1-2:2
- [c120] Chisa Saito, Katsutoshi Masai, Maki Sugimoto: Classification of Spontaneous and Posed Smiles by Photo-reflective Sensors Embedded with Smart Eyewear. TEI 2020: 45-52
- [c119] Adrien Verhulst, Wanqi Zhao, Fumihiko Nakamura, Masaaki Fukuoka, Maki Sugimoto, Masahiko Inami: Impact of Fake News in VR compared to Fake News on Social Media, a pilot study. VR Workshops 2020: 577-578
- [e2] Ferran Argelaguet, Ryan P. McMahan, Maki Sugimoto: 30th International Conference on Artificial Reality and Telexistence, 25th Eurographics Symposium on Virtual Environments, ICAT-EGVE 2020, Virtual Event, USA, December 2-4, 2020. Eurographics Association 2020, ISBN 978-3-03868-111-3 [contents]
2010 – 2019
- 2019
- [c118] Ryo Takizawa, Adrien Verhulst, Katie Seaborn, Masaaki Fukuoka, Atsushi Hiyama, Michiteru Kitazaki, Masahiko Inami, Maki Sugimoto: Exploring Perspective Dependency in a Shared Body with Virtual Supernumerary Robotic Arms. AIVR 2019: 25-32
- [c117] Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display. ICAT-EGVE 2019: 9-16
- [c116] Masaaki Fukuoka, Adrien Verhulst, Fumihiko Nakamura, Ryo Takizawa, Katsutoshi Masai, Maki Sugimoto: FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms. ICAT-EGVE 2019: 17-24
- [c115] Adam Drogemuller, Adrien Verhulst, Benjamin Volmer, Bruce H. Thomas, Masahiko Inami, Maki Sugimoto: Real Time Remapping of a Third Arm in Virtual Reality. ICAT-EGVE 2019: 57-64
- [c114] Ryohei Suzuki, Katsutoshi Masai, Maki Sugimoto: ReallifeEngine: A Mixed Reality-Based Visual Programming System for SmartHomes. ICAT-EGVE 2019: 105-112
- [c113] Irshad Abibouraguimane, Kakeru Hagihara, Keita Higuchi, Yuta Itoh, Yoichi Sato, Tetsu Hayashida, Maki Sugimoto: CoSummary: adaptive fast-forwarding for surgical videos by detecting collaborative scenes using hand regions and gaze positions. IUI 2019: 580-590
- [c112] Masaaki Murakami, Kosuke Kikui, Katsuhiro Suzuki, Fumihiko Nakamura, Masaaki Fukuoka, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: AffectiveHMD: facial expression recognition in head mounted display using embedded photo reflective sensors. SIGGRAPH Emerging Technologies 2019: 7:1-7:2
- [c111] Masaaki Fukuoka, Adrien Verhulst, Fumihiko Nakamura, Ryo Takizawa, Katsutoshi Masai, Maki Sugimoto: FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms. SIGGRAPH Asia XR 2019: 9-10
- [c110] Adam Drogemuller, Adrien Verhulst, Masahiko Inami, Benjamin Volmer, Maki Sugimoto, Bruce H. Thomas: Remapping a Third Arm in Virtual Reality. VR 2019: 898-899
- [c109] Takayoshi Hagiwara, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki: Shared Body by Action Integration of Two Persons: Body Ownership, Sense of Agency and Task Performance. VR 2019: 954-955
- [c108] Ryota Kondo, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki: Scrambled Body: A Method to Compare Full Body Illusion and Illusory Body Ownership of Body Parts. VR 2019: 1028-1029
- [c107] Benjamin Volmer, Adrien Verhulst, Masahiko Inami, Adam Drogemuller, Maki Sugimoto, Bruce H. Thomas: Towards Robot Arm Training in Virtual Reality Using Partial Least Squares Regression. VR 2019: 1209-1210
- [c106] Kazuya Nagamachi, Sachiyo Ueda, Maki Sugimoto, Masahiko Inami, Michiteru Kitazaki: Virtual Avatar Automatically Enhances Human Perspective Taking. VRCAI 2019: 58:1-58:2
- 2018
- [j12] Adrien Verhulst, Jean-Marie Normand, Cindy Lombart, Maki Sugimoto, Guillaume Moreau: Influence of Being Embodied in an Obese Virtual Body on Shopping Behavior and Products Perception in VR. Frontiers Robotics AI 5: 113 (2018)
- [j11] Takumi Hamasaki, Yuta Itoh, Yuichi Hiroi, Daisuke Iwai, Maki Sugimoto: HySAR: Hybrid Material Rendering by an Optical See-Through Head-Mounted Display with Spatial Augmented Reality Projection. IEEE Trans. Vis. Comput. Graph. 24(4): 1457-1466 (2018)
- [c105] Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: FaceRubbing: Input Technique by Rubbing Face using Optical Sensors on Smart Eyewear for Facial Expression Recognition. AH 2018: 23:1-23:5
- [c104] Kakeru Hagihara, Keiichiro Taniguchi, Irshad Abibouraguimane, Yuta Itoh, Keita Higuchi, Jiu Otsuka, Maki Sugimoto, Yoichi Sato: Object-wise 3D Gaze Mapping in Physical Workspace. AH 2018: 25:1-25:5
- [c103] Ryota Kondo, Sachiyo Ueda, Maki Sugimoto, Kouta Minamizawa, Masahiko Inami, Michiteru Kitazaki: Invisible Long Arm Illusion: Illusory Body Ownership by Synchronous Movement of Hands and Feet. ICAT-EGVE 2018: 21-28
- [c102] Kosuke Kikui, Yuta Itoh, Makoto Yamada, Yuta Sugiura, Maki Sugimoto: Intra-/inter-user adaptation framework for wearable gesture sensing device. UbiComp 2018: 21-24
- [c101] Katsutoshi Masai, Kai Kunze, Yuta Sugiura, Maki Sugimoto: Mapping Natural Facial Expressions Using Unsupervised Learning and Optical Sensors on Smart Eyewear. UbiComp/ISWC Adjunct 2018: 158-161
- [c100] Kei Saito, Katsutoshi Masai, Yuta Sugiura, Toshitaka Kimura, Maki Sugimoto: Development of a Virtual Environment for Motion Analysis of Tennis Service Returns. MMSports@MM 2018: 59-66
- [c99] Nao Asano, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: 3D facial geometry analysis and estimation using embedded optical sensors on smart eyewear. SIGGRAPH Posters 2018: 45:1-45:2
- [c98] Ryota Kondo, Maki Sugimoto, Kouta Minamizawa, Masahiko Inami, Michiteru Kitazaki, Yamato Tani: Illusory Body Ownership Between Different Body Parts: Synchronization of Right Thumb and Right Arm. VR 2018: 611-612
- 2017
- [j10] Katsutoshi Masai, Kai Kunze, Yuta Sugiura, Masa Ogata, Masahiko Inami, Maki Sugimoto: Evaluation of Facial Expression Recognition by a Smart Eyewear for Facial Direction Changes, Repeatability, and Positional Drift. ACM Trans. Interact. Intell. Syst. 7(4): 15:1-15:23 (2017)
- [j9] Yuta Itoh, Takumi Hamasaki, Maki Sugimoto: Occlusion Leak Compensation for Optical See-Through Displays Using a Single-Layer Transmissive Spatial Light Modulator. IEEE Trans. Vis. Comput. Graph. 23(11): 2463-2473 (2017)
- [c97] Yuichi Hiroi, Yuta Itoh, Takumi Hamasaki, Maki Sugimoto: AdaptiVisor: assisting eye adaptation via occlusive optical see-through head-mounted displays. AH 2017: 9
- [c96] Arashi Shimazaki, Yuta Sugiura, Dan Mikami, Toshitaka Kimura, Maki Sugimoto: MuscleVR: detecting muscle shape deformation using a full body suit. AH 2017: 15
- [c95] Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: ACTUATE racket: designing intervention of user's performance through controlling angle of racket surface. AH 2017: 31
- [c94] Nao Asano, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto: Facial Performance Capture by Embedded Photo Reflective Sensors on A Smart Eyewear. ICAT-EGVE 2017: 21-28
- [c93] Wakaba Kuno, Yuta Sugiura, Nao Asano, Wataru Kawai, Maki Sugimoto: Estimation of 3D Finger Postures with wearable device measuring Skin Deformation on Back of Hand. ICAT-EGVE (Posters and Demos) 2017: 41-42
- [c92] Wakaba Kuno, Yuta Sugiura, Nao Asano, Wataru Kawai, Maki Sugimoto: 3D Reconstruction of Hand Postures by Measuring Skin Deformation on Back Hand. ICAT-EGVE 2017: 221-228
- [c91] Katsutoshi Masai, Yuta Sugiura, Michita Imai, Maki Sugimoto: RacketAvatar that Expresses Intention of Avatar and User. HRI (Companion) 2017: 44
- [c90] Naomi Furui, Katsuhiro Suzuki, Yuta Sugiura, Maki Sugimoto: SofTouch: Turning Soft Objects into Touch Interfaces Using Detachable Photo Sensor Modules. ICEC 2017: 47-58
- [c89] Takuro Watanabe, Yuta Sugiura, Natsuki Miyata, Koji Fujita, Akimoto Nimura, Maki Sugimoto: DanceDanceThumb: Tablet App for Rehabilitation for Carpal Tunnel Syndrome. ICEC 2017: 473-476
- [c88] Koki Yamashita, Yuta Sugiura, Takashi Kikuchi, Maki Sugimoto: DecoTouch: Turning the Forehead as Input Surface for Head Mounted Display. ICEC 2017: 481-484
- [c87] Takashi Kikuchi, Yuta Sugiura, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas: EarTouch: turning the ear into an input surface. MobileHCI 2017: 27:1-27:6
- [c86] Shigo Ko, Yuta Itoh, Yuta Sugiura, Takayuki Hoshi, Maki Sugimoto: Spatial Calibration of Airborne Ultrasound Tactile Display and Projector-Camera System Using Fur Material. TEI 2017: 583-588
- [c85] Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Recognition and mapping of facial expressions to avatar by embedded photo reflective sensors in head mounted display. VR 2017: 177-185
- [c84] Yuichi Hiroi, Yuta Itoh, Takumi Hamasaki, Daisuke Iwai, Maki Sugimoto: HySAR: Hybrid material rendering by an optical see-through head-mounted display with spatial augmented reality projection. VR 2017: 211-212
- [c83] Yuta Itoh, Jason Orlosky, Kiyoshi Kiyokawa, Toshiyuki Amano, Maki Sugimoto: Monocular focus estimation method for a freely-orienting eye using Purkinje-Sanson images. VR 2017: 213-214
- [c82] Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, Yuta Sugiura: CheekInput: turning your cheek into an input surface by embedded optical sensors on a head-mounted display. VRST 2017: 19:1-19:8
- [c81] Alan Transon, Adrien Verhulst, Jean-Marie Normand, Guillaume Moreau, Maki Sugimoto: Evaluation of facial expressions as an interaction mechanism and their impact on affect, workload and usability in an AR game. VSMM 2017: 1-8
- [c80] Naoaki Kashiwagi, Yuta Sugiura, Natsuki Miyata, Mitsunori Tada, Maki Sugimoto, Hideo Saito: Measuring Grasp Posture Using an Embedded Camera. WACV Workshops 2017: 42-47
- 2016
- [j8] Hayeon Jeong, Daniel Saakes, Uichin Lee, Augusto Esteves, Eduardo Velloso, Andreas Bulling, Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, Maki Sugimoto, Anura Rathnayake, Tilak Dias: Demo hour. Interactions 23(1): 8-11 (2016)
- [c79] Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Analysis of Multiple Users' Experience in Daily Life Using Wearable Device for Facial Expression Recognition. ACE 2016: 52:1-52:5
- [c78] Katsutoshi Masai, Kai Kunze, Maki Sugimoto, Mark Billinghurst: Empathy Glasses. CHI Extended Abstracts 2016: 1257-1263
- [c77] Youngho Lee, Katsutoshi Masai, Kai Kunze, Maki Sugimoto, Mark Billinghurst: A Remote Collaboration System with Empathy Glasses. ISMAR Adjunct 2016: 342-343
- [c76] Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, Maki Sugimoto: Facial Expression Recognition in Daily Life by Embedded Photo Reflective Sensors on Smart Eyewear. IUI 2016: 317-326
- [c75] Yuta Itoh, Yuichi Hiroi, Jiu Otsuka, Maki Sugimoto, Jason Orlosky, Kiyoshi Kiyokawa, Gudrun Klinker: Laplacian vision: augmenting motion prediction via optical see-through head-mounted displays and projectors. SIGGRAPH Emerging Technologies 2016: 13:1
- [c74] Takashi Kikuchi, Yuichi Hiroi, Ross T. Smith, Bruce H. Thomas, Maki Sugimoto: MARCut: Marker-based Laser Cutting for Personal Fabrication on Existing Objects. TEI 2016: 468-474
- [c73] Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto: Facial Expression Mapping inside Head Mounted Display by Embedded Optical Sensors. UIST (Adjunct Volume) 2016: 91-92
- [p2] Mark Billinghurst, Kunal Gupta, Masai Katsutoshi, Youngho Lee, Gun A. Lee, Kai Kunze, Maki Sugimoto: Is It in Your Eyes? Explorations in Using Gaze Cues for Remote Collaboration. Collaboration Meets Interactive Spaces 2016: 177-199
- [e1] Eduardo E. Veas, Tobias Langlotz, José Martínez-Carranza, Raphaël Grasset, Maki Sugimoto, Alejandro Martín: 2016 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2016 Adjunct, Merida, Yucatan, Mexico, September 19-23, 2016. IEEE Computer Society 2016, ISBN 978-1-5090-3740-7 [contents]
- 2015
- [j7] Sho Shimamura, Motoko Kanegae, Jun Morita, Yuji Uema, Maiko Takahashi, Masahiko Inami, Tetsu Hayashida, Hideo Saito, Maki Sugimoto: Virtual Slicer: Visualizer for Tomographic Medical Images Corresponding Handheld Device to Patient. Int. J. Virtual Real. 15(1): 10-17 (2015)
- [c72] Ryota Koshiyama, Takashi Kikuchi, Jun Morita, Maki Sugimoto: VolRec: haptic display of virtual inner volume in consideration of angular moment. Advances in Computer Entertainment 2015: 32:1-32:4
- [c71] Katsutoshi Masai, Yuta Sugiura, Katsuhiro Suzuki, Sho Shimamura, Kai Kunze, Masa Ogata, Masahiko Inami, Maki Sugimoto: AffectiveWear: towards recognizing affect in real life. UbiComp/ISWC Adjunct 2015: 357-360
- [c70] Yuichi Hiroi, Kei Obata, Katsuhiro Suzuki, Naoto Ienaga, Maki Sugimoto, Hideo Saito, Tadashi Takamaru: Remote Welding Robot Manipulation Using Multi-view Images. ISMAR 2015: 128-131
- [c69] Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, Sho Shimamura, Kai Kunze, Masahiko Inami, Maki Sugimoto: AffectiveWear: toward recognizing facial expression. SIGGRAPH Emerging Technologies 2015: 4:1
- [c68] Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, Sho Shimamura, Kai Kunze, Masahiko Inami, Maki Sugimoto: AffectiveWear: toward recognizing facial expression. SIGGRAPH Posters 2015: 16:1
- [c67] Kazuki Matsumoto, Wataru Nakagawa, Hideo Saito, Maki Sugimoto, Takashi Shibata, Shoji Yachida: AR Visualization of Thermal 3D Model by Hand-held Cameras. VISAPP (3) 2015: 480-487
- [c66] Motoko Kanegae, Jun Morita, Sho Shimamura, Yuji Uema, Maiko Takahashi, Masahiko Inami, Tetsu Hayashida, Maki Sugimoto: Registration and projection method of tumor region projection for breast cancer surgery. VR 2015: 201-202
- [c65] Tatsuya Kodera, Maki Sugimoto, Ross T. Smith, Bruce H. Thomas: 3D position measurement of planar photo detector using gradient patterns. VR 2015: 211-212
- [c64] Jun Morita, Sho Shimamura, Motoko Kanegae, Yuji Uema, Maiko Takahashi, Masahiko Inami, Tetsu Hayashida, Maki Sugimoto: MRI overlay system using optical see-through for marking assistance. VR 2015: 239-240
- 2014
- [c63] Wataru Nakagawa, Kazuki Matsumoto, François de Sorbier, Maki Sugimoto, Hideo Saito, Shuji Senda, Takashi Shibata, Akihiko Iketani: Visualization of Temperature Change Using RGB-D Camera and Thermal Camera. ECCV Workshops (1) 2014: 386-400
- [c62] Naoya Maeda, Jun Morita, Maki Sugimoto: Pathfinder Vision: Tele-operation Robot Interface in Consideration of Geometry for Supporting Future Prediction. ICAT-EGVE 2014: 29-35
- [c61] Sho Shimamura, Kazuki Matsumoto, Naoya Maeda, Tatsuya Kodera, Wataru Nakagawa, Yukiko Shinozuka, Maki Sugimoto, Hideo Saito: Smart Fan: Self-contained Mobile Robot that Performs Human Detection and Tracking using Thermal Camera. ICAT-EGVE (Posters and Demos) 2014
- [c60] Naoto Tani, Tatsuya Kodera, Maki Sugimoto: Remote Control System for Home Appliances using Spherical Image. ICAT-EGVE (Posters and Demos) 2014
- [c59] Ryo Yamamoto, Yuji Uema, Maki Sugimoto, Masahiko Inami: Study on Visual Display for Stereoscopic Vision Based on Binocular Parallax Using Retro-reflective Projection Technology. ICAT-EGVE (Posters and Demos) 2014
- [c58] Kazuki Matsumoto, Wataru Nakagawa, François de Sorbier, Maki Sugimoto, Hideo Saito, Shuji Senda, Takashi Shibata, Akihiko Iketani: RGB-D-T camera system for AR display of temperature change. ISMAR 2014: 357-358
- [c57] Naoya Maeda, Maki Sugimoto: Pathfinder vision: tele-operation robot interface for supporting future prediction using stored past images. SIGGRAPH Posters 2014: 52:1
- [c56] Kathrin Probst, Michael Haller, Kentaro Yasu, Maki Sugimoto, Masahiko Inami: Move-it sticky notes providing active physical feedback through motion. TEI 2014: 29-36
- [c55] Philipp Stefan, Patrick Wucherer, Yuji Oyamada, Meng Ma, Alexander Schoch, Motoko Kanegae, Naoki Shimizu, Tatsuya Kodera, Sebastien Cahier, Matthias Weigl, Maki Sugimoto, Pascal Fallavollita, Hideo Saito, Nassir Navab: An AR edutainment system supporting bone anatomy learning. VR 2014: 113-114
- [c54] Sho Shimamura, Motoko Kanegae, Jun Morita, Yuji Uema, Masahiko Inami, Tetsu Hayashida, Hideo Saito, Maki Sugimoto: Virtual slicer: interactive visualizer for tomographic medical images based on position and orientation of handheld device. VRIC 2014: 12:1-12:8
- [c53] Tatsuya Kodera, Naoto Tani, Jun Morita, Naoya Maeda, Kazuna Tsuboi, Motoko Kanegae, Yukiko Shinozuka, Sho Shimamura, Kadoki Kubo, Yusuke Nakayama, Jaejun Lee, Maxime Pruneau, Hideo Saito, Maki Sugimoto: Virtual rope slider. VRIC 2014: 36:1-36:4
- 2013
- [c52] Tsubasa Yamamoto, Yuta Sugiura, Suzanne Low, Koki Toda, Kouta Minamizawa, Maki Sugimoto, Masahiko Inami: PukaPuCam: Enhance Travel Logging Experience through Third-Person View Camera Attached to Balloons. Advances in Computer Entertainment 2013: 428-439
- [c51] Kazuna Tsuboi, Yuji Oyamada, Maki Sugimoto, Hideo Saito: 3D Object Surface Tracking Using Partial Shape Templates Trained from a Depth Camera for Spatial Augmented Reality Environments. AUIC 2013: 125-126
- [c50] Sho Shimamura, Motoko Kanegae, Yuji Uema, Masahiko Inami, Tetsu Hayashida, Hideo Saito, Maki Sugimoto: Virtual Slicer: Development of Interactive Visualizer for Tomographic Medical Images Based on Position and Orientation of Handheld Device. CW 2013: 383
- [c49] Ross T. Smith, Maki Sugimoto, Raphaël Grasset: Demo chairs. ISMAR 2013: 1
- [c48] Sandy Martedi, Mai Otsuki, Hideo Saito, Maki Sugimoto, Asako Kimura, Fumihisa Shibata: A Tracking Method for 2D Canvas in MR-Based Interactive Painting System. SITIS 2013: 790-794
- 2012
- [c47] Naoya Koizumi, Maki Sugimoto, Naohisa Nagaya, Masahiko Inami, Masahiro Furukawa: Stop motion goggle: augmented visual perception by subtraction method using high speed liquid crystal. AH 2012: 14
- [c46] Takuya Ikeda, Yuji Oyamada, Maki Sugimoto, Hideo Saito: Illumination estimation from shadow and incomplete object shape captured by an RGB-D camera. ICPR 2012: 165-169
- [c45] Yuji Uema, Naoya Koizumi, Shian Wei Chang, Kouta Minamizawa, Maki Sugimoto, Masahiko Inami: Optical camouflage III: Auto-stereoscopic and multiple-view display system using retro-reflective projection technology. VR 2012: 57-58
- 2011
- [c44] Masato Takahashi, Charith Lasantha Fernando, Yuto Kumon, Shuhey Takeda, Hideaki Nii, Takuji Tokiwa, Maki Sugimoto, Masahiko Inami: Earthlings Attack!: a ball game using human body communication. AH 2011: 17
- [c43] Kathrin Probst, Thomas Seifried, Michael Haller, Kentaro Yasu, Maki Sugimoto, Masahiko Inami: Move-it: interactive sticky notes actuated by shape memory alloys. CHI Extended Abstracts 2011: 1393-1398
- [c42] Michael Haller, Christoph Richter, Peter Brandl, Sabine Gross, Gerold Schossleitner, Andreas Schrempf, Hideaki Nii, Maki Sugimoto, Masahiko Inami: Finding the Right Way for Interrupting People Improving Their Sitting Posture. INTERACT (2) 2011: 1-17
- [c41] Kakehi Gota, Yuta Sugiura, Anusha I. Withana, Calista Lee, Naohisa Nagaya, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi: FuwaFuwa: detecting shape deformation of soft objects using directional photoreflectivity measurement. SIGGRAPH Emerging Technologies 2011: 5
- [c40] Anusha I. Withana, Yuta Sugiura, Charith Lasantha Fernando, Yuji Uema, Yasutoshi Makino, Maki Sugimoto, Masahiko Inami: ImpAct: haptic stylus for shallow depth surface interaction. SIGGRAPH Asia Emerging Technologies 2011: 10:1
- [c39] Yuta Sugiura, Kakehi Gota, Anusha I. Withana, Calista Lee, Daisuke Sakamoto, Maki Sugimoto, Masahiko Inami, Takeo Igarashi: Detecting shape deformation of soft objects using directional photoreflectivity measurement. UIST 2011: 509-516
- 2010
- [j6] Naoya Koizumi, Kentaro Yasu, Angela Liu, Maki Sugimoto, Masahiko Inami: Animated paper: A toolkit for building moving toys. Comput. Entertain. 8(2): 7:1-7:16 (2010)
- [j5] Anusha I. Withana, Yasutoshi Makino, Makoto Kondo, Maki Sugimoto, Kakehi Gota, Masahiko Inami: ImpAct: Immersive haptic stylus to enable direct touch and manipulation for surface computing. Comput. Entertain. 8(2): 9:1-9:16 (2010)
- [c38] Masahiro Furukawa, Yuji Uema, Maki Sugimoto, Masahiko Inami: Fur interface with bristling effect induced by vibration. AH 2010: 17
- [c37] Anusha I. Withana, Rika Matsui, Maki Sugimoto, Kentaro Harada, Masa Inakage: Narrative image composition using objective and subjective tagging. SIGGRAPH Posters 2010: 92:1
- [c36] Yu Ebihara, Chihiro Kondo, Maki Sugimoto, Satoru Tokuhisa, Takuji Tokiwa, Kentaro Harada, Hiroaki Miyasho, Toshitugu Yasaka, Anusha I. Withana, Masa Inakage: Composing sounds and images for public display using correlated KANSEI information. SIGGRAPH ASIA (Posters) 2010: 29:1
- [c35] Takuo Imbe, Fumitaka Ozaki, Shin Kiyasu, Yusuke Mizukami, Shuichi Ishibashi, Masa Inakage, Naohito Okude, Adrian David Cheok, Masahiko Inami, Maki Sugimoto: Myglobe: a navigation service based on cognitive maps. TEI 2010: 189-192
- [c34] Naoya Koizumi, Kentaro Yasu, Angela Liu, Maki Sugimoto, Masahiko Inami: Animated paper: a moving prototyping platform. UIST (Adjunct Volume) 2010: 389-390
- [c33] Anusha I. Withana, Makoto Kondo, Kakehi Gota, Yasutoshi Makino, Maki Sugimoto, Masahiko Inami: ImpAct: enabling direct touch and manipulation for surface computing. UIST (Adjunct Volume) 2010: 411-412
- [p1] Masahiko Inami, Maki Sugimoto, Bruce H. Thomas, Jan Richter: Active Tangible Interactions. Tabletops 2010: 171-187
2000 – 2009
- 2009
- [j4] Jakob Leitner, Michael Haller, Kyungdahm Yun, Woontack Woo, Maki Sugimoto, Masahiko Inami, Adrian David Cheok, Henry Been-Lirn Duh: Physical interfaces for tabletop games. Comput. Entertain. 7(4): 61:1-61:21 (2009)
- [c32] Izumi Yagi, Yu Ebihara, Tamaki Inada, Yoshiki Tanaka, Maki Sugimoto, Masahiko Inami, Adrian David Cheok, Naohito Okude, Masahiko Inakage: Yaminabe YAMMY: an interactive cooking pot that uses feeling as spices. Advances in Computer Entertainment Technology 2009: 419-420
- [c31] Takayuki Miyauchi, Ami Yao, Takahiro Nemoto, Masahiko Inami, Masahiko Inakage, Naohito Okude, Adrian David Cheok, Maki Sugimoto: Urban treasure: new approach for collaborative local recommendation engine. Advances in Computer Entertainment Technology 2009: 460
- [c30] Takuji Tokiwa, Masashi Yoshidzumi, Hideaki Nii, Maki Sugimoto, Masahiko Inami: Motion Capture System Using an Optical Resolver. HCI (3) 2009: 536-543
- [c29] Fumitaka Ozaki, Takuo Imbe, Shin Kiyasu, Yuta Sugiura, Yusuke Mizukami, Shuichi Ishibashi, Maki Sugimoto, Masahiko Inami, Adrian David Cheok, Naohito Okude, Masahiko Inakage: MYGLOBE: cognitive map as communication media. SIGGRAPH Posters 2009
- [c28] Yuta Sugiura, Takeo Igarashi, Hiroki Takahashi, Tabare Akim Gowon, Charith Lasantha Fernando, Maki Sugimoto, Masahiko Inami: Graphical instruction for a garment folding robot. SIGGRAPH Emerging Technologies 2009: 12:1
- [c27] Masahiro Furukawa, Naohisa Nagaya, Takuji Tokiwa, Masahiko Inami, Atsushi Okoshi, Maki Sugimoto, Yuta Sugiura, Yuji Uema: Fur display. SIGGRAPH ASIA Art Gallery & Emerging Technologies 2009: 70
- [c26] Charith Lasantha Fernando, Takeo Igarashi, Masahiko Inami, Maki Sugimoto, Yuta Sugiura, Anusha Indrajith Withana, Kakehi Gota: An operating method for a bipedal walking robot for entertainment. SIGGRAPH ASIA Art Gallery & Emerging Technologies 2009: 79
- 2008
- [c25] Jakob Leitner, Michael Haller, Kyungdahm Yun, Woontack Woo, Maki Sugimoto, Masahiko Inami: IncreTable, a mixed reality tabletop game experience. Advances in Computer Entertainment Technology 2008: 9-16
- [c24] Noriyoshi Shimizu, Maki Sugimoto, Dairoku Sekiguchi, Shoichi Hasegawa, Masahiko Inami: Mixed reality robotic user interface: virtual kinematics to enhance robot motion. Advances in Computer Entertainment Technology 2008: 166-169
- [c23] Tomoko Fujii, Hideaki Nii, Takuji Tokiwa, Maki Sugimoto, Masahiko Inami: Motion capture system using single-track gray code. Advances in Computer Entertainment Technology 2008: 426
- [c22] Jakob Leitner, Peter Brandl, Thomas Seifried, Michael Haller, Kyungdahm Yun, Woontack Woo, Maki Sugimoto, Masahiko Inami: IncreTable, bridging the gap between real and virtual worlds. SIGGRAPH New Tech Demos 2008: 19
- [c21] Naohisa Nagaya, Fabian Foo Chuan Siang, Masahiro Furukawa, Takuji Tokiwa, Maki Sugimoto, Masahiko Inami: Stop motion goggle. SIGGRAPH New Tech Demos 2008: 35
- [c20] Hiroshi Sakasai, Hiroshi Kato, Takako Igarashi, Miho Ishii, Maki Sugimoto, Masahiko Inami, Masahiko Inakage, Naohito Okude: Flaneur: digital see-through telescope. SIGGRAPH ASIA Art Gallery & Emerging Technologies 2008: 43
- 2007
- [c19] Maki Sugimoto, Kazuki Kodama, Akihiro Nakamura, Minoru Kojima, Masahiko Inami: A Display-Based Tracking System: Display-Based Computing for Measurement Systems. ICAT 2007: 31-38
- [c18] Masahiro Furukawa, Mitsunori Ohta, Satoru Miyajima, Maki Sugimoto, Shoichi Hasegawa, Masahiko Inami: Support System for Micro Operation using a Haptic Display Device. ICAT 2007: 310-311
- [c17] Erika Sawada, Shinya Ida, Tatsuhito Awaji, Keisuke Morishita, Tomohisa Aruga, Ryuta Takeichi, Tomoko Fujii, Hidetoshi Kimura, Toshinari Nakamura, Masahiro Furukawa, Noriyoshi Shimizu, Takuji Tokiwa, Hideaki Nii, Maki Sugimoto, Masahiko Inami: BYU-BYU-View: a wind communication interface. SIGGRAPH Emerging Technologies 2007: 1
- [c16] Maki Sugimoto, Masahiro Tomita, Tomohisa Aruga, Naohisa Nagaya, Noriyoshi Shimizu, Masahiko Inami: Tiny dancing robots: Display-based computing for multi-robot control systems. SIGGRAPH Posters 2007: 148
- [c15] Jan Richter, Bruce H. Thomas, Maki Sugimoto, Masahiko Inami: Remote active tangible interactions. TEI 2007: 39-42
- 2006
- [j3] Noriyoshi Shimizu, Naoya Koizumi, Maki Sugimoto, Hideaki Nii, Dairoku Sekiguchi, Masahiko Inami: A teddy-bear-based robotic user interface. Comput. Entertain. 4(3): 8 (2006)
- [c14] Naohisa Nagaya, Masashi Yoshidzumi, Maki Sugimoto, Hideaki Nii, Taro Maeda, Michiteru Kitazaki, Masahiko Inami: Gravity jockey: a novel music experience with galvanic vestibular stimulation. Advances in Computer Entertainment Technology 2006: 41
- [c13] Naohisa Nagaya, Masashi Yoshidzumi, Maki Sugimoto, Hideaki Nii, Taro Maeda, Michiteru Kitazaki, Masahiko Inami: Gravity Jockey: a novel music experience with galvanic vestibular stimulation. Advances in Computer Entertainment Technology 2006: 49
- [c12] Minoru Kojima, Maki Sugimoto, Akihiro Nakamura, Masahiro Tomita, Masahiko Inami, Hideaki Nii: Augmented Coliseum: An Augmented Game Environment with Small Vehicles. Tabletop 2006: 3-8
- 2005
- [j2] Maki Sugimoto, Georges Kagotani, Hideaki Nii, Naoji Shiroma, Masahiko Inami, Fumitoshi Matsuno: Time Follower's Vision: A Teleoperation Interface with Past Images. IEEE Computer Graphics and Applications 25(1): 54-63 (2005)
- [c11] Noriyoshi Shimizu, Naoya Koizumi, Maki Sugimoto, Hideaki Nii, Dairoku Sekiguchi, Masahiko Inami: Teddy-bear based robotic user interface. Advances in Computer Entertainment Technology 2005: 75-82
- [c10] Noriyoshi Shimizu, Naoya Koizumi, Maki Sugimoto, Hideaki Nii, Dairoku Sekiguchi, Masahiko Inami: Teddy-bear based robotic user interface for interactive entertainment. Advances in Computer Entertainment Technology 2005: 389-390
- [c9] Hideaki Nii, Maki Sugimoto, Masahiko Inami: Smart Light-Ultra High Speed Projector for Spatial Multiplexing Optical Transmission. CVPR Workshops 2005: 95
- [c8] Naoji Shiroma, Hirokazu Nagai, Maki Sugimoto, Masahiko Inami, Fumitoshi Matsuno: Synthesized Scene Recollection for Robot Teleoperation. FSR 2005: 403-414
- [c7] Naohisa Nagaya, Maki Sugimoto, Hideaki Nii, Michiteru Kitazaki, Masahiko Inami: Visual perception modulated by galvanic vestibular stimulation. ICAT 2005: 78-84
- [c6] Maki Sugimoto, Georges Kagotani, Minoru Kojima, Hideaki Nii, Akihiro Nakamura, Masahiko Inami: Augmented coliseum: display-based computing for augmented reality inspiration computing robot. SIGGRAPH Emerging Technologies 2005: 1
- [c5] Taro Maeda, Hideyuki Ando, Tomohiro Amemiya, Naohisa Nagaya, Maki Sugimoto, Masahiko Inami: Shaking the world: galvanic vestibular stimulation as a novel sensation interface. SIGGRAPH Emerging Technologies 2005: 17
- [c4] Taro Maeda, Hideyuki Ando, Maki Sugimoto: Virtual Acceleration with Galvanic Vestibular Stimulation in Virtual Reality Environment. VR 2005: 289-290
- 2004
- [j1] Hideyuki Ando, Maki Sugimoto, Taro Maeda: Wearable Moment Display Device for Nonverbal Communications. IEICE Trans. Inf. Syst. 87-D(6): 1354-1360 (2004)
- [c3] Naoji Shiroma, Georges Kagotani, Maki Sugimoto, Masahiko Inami, Fumitoshi Matsuno: A Novel Teleoperation Method for a Mobile Robot Using Real Image Data Records. ROBIO 2004: 233-238
- [c2] Maki Sugimoto, Georges Kagotani, Hideaki Nii, Naoji Shiroma, Masahiko Inami, Fumitoshi Matsuno: Time follower's vision. SIGGRAPH Emerging Technologies 2004: 29
- 2002
- [c1] Taro Maeda, Hideyuki Ando, Maki Sugimoto, Junji Watanabe, Takeshi Miki: Wearable Robotics as a Behavioral Interface - The Study of the Parasitic Humanoid. ISWC 2002: 145-151