HCI researcher at The University of Tokyo (Tsukada Lab), working on how people interact with autonomous vehicles, robots, and the smart cities they move through. My doctoral work centers on the Smart Pole Interaction Unit (SPIU), which moves intent signaling from the car to the curb so pedestrians have time to act.
Part-time at Tier IV, contributing to Autoware — the open-source self-driving stack.
🏆 CHI 2026: Two Honorable Mention awards (top 5%) — SPIU paper · VLM Personas paper
→ More at vish0012.github.io — full publications, talks, and more.

| Venue | Paper |
|---|---|
| CHI 2026 🏆 | Don't Worry, Just Follow Me — SPIU In-the-Wild Evaluation |
| CHI 2026 🏆 | Peeking Ahead of the Field Study — VLM Personas in HCI |
| VRST 2025 | A Silent Negotiator? Cross-cultural VR Evaluation of SPIU |
| IJHCS 2025 | Pedestrian–AV Interaction: Human Perception vs. LLM Insights |

| Project | Description |
|---|---|
| Smartpole-VR-AWSIM | Unity/AWSIM testbed for pedestrian–AV (eHMI) research |
| EvenDemoApp | Flutter audio recorder + Whisper for AR headset prototype |
| Autoware upstream | Merged CI/scenario-test reliability fixes |