---
layout: about
title: about
permalink: /
subtitle:

profile:
  align: right
  image: prof_pic.jpg
  image_circular: false # crops the image to make it circular
  more_info:

news: true # includes a list of news items
selected_papers: true
social: true # includes social icons at the bottom of the page
latest_posts: false # disabled blog posts section
---
I am currently a first-year PhD student in the Mobility Lab at UCLA, working under the guidance of Prof. Jiaqi Ma and Prof. Wei Wang. My research focuses on vision-language-action (VLA) systems and embodied intelligence, with an emphasis on enabling agents to perceive, reason, and act effectively in real-world environments.
My work lies at the intersection of robotics, artificial intelligence, and mobility. I am particularly interested in developing methods that allow physical AI systems to integrate visual and linguistic understanding with action, maintain structured memory over time, and perform reliable decision-making in long-horizon tasks. My research spans topics including navigation, manipulation, and memory-driven reasoning, with the goal of building robust and adaptable embodied agents.
I am also an Amazon Trainium Fellow, with support for my research on large-scale vision and action learning for embodied intelligence.