My research goals focus on developing robust methods for training robots in a generalizable manner. I aim to build a system in which a robot can learn broad skills from minimal demonstration. While this vision is conceptually simple, today's systems require lengthy procedures with many demonstrations to learn very narrow skills. I seek to explore new procedures for quickly learning more general, robust policies from limited demonstrations.
Publications
Huaxiaoyue Wang, Nathaniel Chin, Gonzalo Gonzalez-Pumariega, Xiangwan Sun, Neha Sunkara, Maximus Adrian Pace, Jeannette Bohg, Sanjiban Choudhury. APRICOT: Active Preference Learning and Constraint-Aware Task Planning with LLMs. 8th Annual Conference on Robot Learning (CoRL), 2024. [OpenReview]
Huaxiaoyue Wang, Kushal Kedia, Juntao Ren, Rahma Abdullah, Atiksh Bhardwaj, Angela Chao, Kelly Y Chen, Nathaniel Chin, Prithwish Dan, Xinyi Fan, Gonzalo Gonzalez-Pumariega, Aditya Kompella, Maximus Adrian Pace, Yash Sharma, Xiangwan Sun, Neha Sunkara, Sanjiban Choudhury. MOSAIC: A Modular System for Assistive and Interactive Cooking. 8th Annual Conference on Robot Learning (CoRL), 2024. [arXiv]
Yash Sharma, Yuki Wang, Kelly Chen, Maximus Pace, Sanjiban Choudhury. Video2Demo: Grounding Videos in State-Action Demonstrations. [OpenReview]
Visuomotor Imitation Learning Research
I joined the People and Robots Teaching and Learning (PoRTaL) lab in March 2023 and have since been conducting research in visuomotor policy learning.
My primary focus in imitation learning has been training a robust grasping policy on the mobile manipulator Stretch.
After collecting hundreds of teleoperated demonstrations, I experimented with training various imitation learning models, including sequence-based policies and diffusion policies. I also compared regression and classification models over continuous and discretized action spaces, and tested different methods for selecting which action the robot should execute given the model's output.
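For a classification model over a discretized action space, the action-selection step comes down to how you turn the model's output into a single action bin. The sketch below is purely illustrative (the function names, bin counts, and logits are hypothetical, not from my actual system); it contrasts the two common choices of greedy argmax versus sampling from the predicted categorical distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def select_action(logits, mode="argmax"):
    """Turn a model's output logits into a discrete action index.

    mode="argmax" -- deterministic; always picks the highest-probability bin.
    mode="sample" -- stochastic; draws from the categorical distribution,
                     so lower-probability bins are still occasionally chosen.
    """
    probs = softmax(logits)
    if mode == "argmax":
        return int(np.argmax(probs))
    return int(rng.choice(len(probs), p=probs))

# Toy output for a 3-bin action space (hypothetical values).
logits = np.array([2.0, 0.5, 1.5])
greedy = select_action(logits, mode="argmax")
sampled = select_action(logits, mode="sample")
```

Argmax gives repeatable rollouts, while sampling trades determinism for the chance to take actions the policy assigns lower probability, which matters when the "right" action is not the most likely one.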
I also encountered many of the challenges surrounding imitation learning. In particular, I worked extensively with a bimodal action distribution: extending the arm was nearly always a valid action choice according to the data, but at a certain point the robot needed to correctively rotate its wrist toward the target, even though that action always had lower probability. Through weighted loss functions, categorical distribution sampling, added recovery data, and eventually separating action prediction from goal-location prediction to simplify each step, I trained successful grasping policies while gaining a strong understanding of the fundamentals of imitation learning.
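One of the tools mentioned above, a weighted loss function, can be sketched as a class-weighted cross-entropy that amplifies the gradient of rare corrective actions so the dominant mode does not drown them out. This is a minimal illustration; the action names, weights, and probabilities are hypothetical and do not reflect the values used in my project.

```python
import numpy as np

def weighted_cross_entropy(probs, target, class_weights):
    """Cross-entropy for one sample, scaled by the target class's weight.

    probs         -- predicted probabilities over action bins (sums to 1)
    target        -- index of the demonstrated action
    class_weights -- per-class weights; rare corrective actions get larger
                     weights so their training signal is not overwhelmed
                     by the dominant action
    """
    return -class_weights[target] * np.log(probs[target] + 1e-12)

# Bin 0: dominant "extend arm" mode; bin 1: rare "rotate wrist" correction.
probs = np.array([0.9, 0.1])       # model heavily favors the dominant mode
weights = np.array([1.0, 5.0])     # upweight the rare corrective action

loss_extend = weighted_cross_entropy(probs, 0, weights)
loss_rotate = weighted_cross_entropy(probs, 1, weights)
```

With this weighting, a mispredicted corrective action incurs a much larger loss than a mispredicted dominant action, pushing the model to allocate probability mass to the rare mode instead of collapsing onto the common one.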