
Momo Hanawa

M.S. Student

Hi. I am a Master's student at Ishiguro Laboratory, III/GSII, The University of Tokyo.
My research interests include human-agent interaction, robotics, and AI.

About Me

I was born in Osaka and earned my bachelor's degree at Nagoya University, where I studied the fundamentals of computer science with a major in robotics. Currently, I am pursuing a master's degree at The University of Tokyo, and my research is focused on technologies that enhance human interaction. I specialize in Human-Computer Interaction (HCI), exploring the interaction between humans and agents, including robots and virtual agents.

Personal Interests

Curiosity drives me, and I cherish every chance to connect with people and experience new cultures. I love traveling overseas, getting lost in a good movie, grooving to music, and even checking out Japanese comedy shows. Plus, I'm a huge cat lover!

Certifications

Feb 2025 - IELTS 6.5
Apr 2023 - TOEIC 875
Dec 2022 - Applied Information Technology Engineer Examination
Nov 2021 - Fundamental Information Technology Engineer Examination


Background

B.S. Nagoya University

Apr 2020 - Mar 2024

# Education

Computer Science | Nagao Laboratory

CuriousU

Aug 2022

# Global Education

Summer School | University of Twente | the Netherlands

Women in Cybersecurity Program

Sep 2023

# Global Education

The University of North Carolina at Chapel Hill | the U.S.

M.S. The University of Tokyo

Apr 2024 - Current

# Education

HCI | Ishiguro Laboratory

Robot Showcase Team at Matsuo & Iwasawa Laboratory

Jul - Dec 2024

# Work

Sim2Real | Reinforcement Learning | Quadruped Robot

Research

Publications

ParaTalk: A Real-Time Paralinguistic Dialogue System for Human-Agent Interaction (Momo Hanawa, Yoshio Ishiguro), 2025.

IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)

Related Project: Paralinguistic Dialogue System

Leash as a Cue: Visual Indicators for Third-Party Acceptance Across Resistance Levels (Momo Hanawa, Satomi Tokida, Yoshio Ishiguro), 2025.

IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)

Related Project: Accompanying Robot

Working Papers

hogehoge20xx.

Projects

Wheelchair Robot

Project

Apr 2023 - Dec 2023, Finished

As part of a project in the Nagao Laboratory at Nagoya University, I participated in the Tsukuba Challenge, a technical trial for autonomous mobile robots navigating outdoor environments. In this challenge, we redeveloped a wheelchair-type robot called WHILL to autonomously traverse pedestrian paths and urban areas. My contributions included developing an iOS application for object detection and semantic segmentation, as well as controlling the WHILL's movements using ROS.

ROS | Python | Swift

Accompanying Robot

Research

Apr 2024 - Current, Ongoing

We are exploring how people perceive and accept robots they encounter in public spaces. Specifically, our ongoing work examines how visual design elements that express the relationship between an accompanying robot and its user may affect impressions, especially for those who are initially less comfortable with robots.

Python | ROS

Parkour Simulation

Project

Jul 2024 - Dec 2024, Finished

We used the high-performance simulation environment Isaac Gym to train a quadruped robot through reinforcement learning, enabling it to perform complex parkour-like movements. Because the simulator generates large amounts of training data quickly, we were able to learn these behaviors in a short time and obtain models that generalize and adapt to real-world environments.

ROS | Python | Simulation

Paralinguistic Dialogue System

Research

Sep 2024 - Current, Ongoing

Have you ever wished you could talk to R2-D2, not through words, but through sounds? While voice-based agents are becoming more common, relying solely on spoken language in human–agent interaction presents real challenges: it can be mentally demanding and is often impractical in noisy or hands-free situations. To address this, we explored the potential of non-verbal communication using Semantic-Free Utterances (SFUs), meaningless yet expressive sounds that can lighten cognitive load. In this project, we introduce ParaTalk, a paralinguistic dialogue system that interprets user speech using a large language model and responds in real time with Paralinguistic Utterances (PUs), a type of SFU. By focusing on the balance between verbal and non-verbal expression, ParaTalk opens up new possibilities for designing more intuitive and emotionally resonant agent communication.

Python | Unity

Remora Barrette

Work

Dec 2025, Finished

Have you ever felt uncomfortable when store staff approach you while shopping? We created Remora Barrette, an agent designed to support shy or socially anxious individuals. Inspired by the unspoken signals of headphones and wired earphones, often used as a subtle "do not disturb" sign, this wearable device gently expresses a desire not to be approached. When a staff member's device comes near, the barrette softly glows, offering a warm and non-verbal way to say "not right now."

Arduino | Python