Patrick Yin

I am a PhD student at the University of Washington advised by Professor Abhishek Gupta. I'm broadly interested in embodied AI and building intelligent robots.

Previously, I did my undergrad at UC Berkeley where I worked with Professors Sergey Levine and Kuan Fang in the Berkeley Artificial Intelligence Research (BAIR) Lab.

Email  |  CV  |  Scholar  |  GitHub  |  LinkedIn  |  Twitter  |  ML Notes

Publications
DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset
Alexander Khazatsky*, Karl Pertsch*, ..., Patrick Yin, ..., Sergey Levine, Chelsea Finn
RSS 2024
project page / arXiv

A large, diverse robot manipulation dataset with 76k demonstration trajectories.

ASID: Active Exploration for System Identification and Reconstruction in Robotic Manipulation
Marius Memmel, Chuning Zhu, Andrew Wagenmaker, Patrick Yin, Dieter Fox, Abhishek Gupta
ICLR 2024 (Oral Presentation)
project page / arXiv

We propose a learning system that can leverage a small amount of real-world data to autonomously refine a simulation model, enabling sim-to-real transfer for real-world robotic manipulation tasks.

Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
Chongyi Zheng, Benjamin Eysenbach, Homer Rich Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine
ICLR 2024 (Spotlight Talk)
project page / arXiv

We discover that a shallow and wide architecture can boost the performance of contrastive RL approaches on simulated benchmarks. Additionally, we demonstrate that contrastive approaches can solve real-world robotic manipulation tasks.

Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Open X-Embodiment Collaboration, ..., Patrick Yin, ...
ICRA 2024 (Best Paper)
project page / arXiv

A large, open-source real robot dataset with 1M+ real robot trajectories spanning 22 robot embodiments, from single robot arms to bimanual robots and quadrupeds.

Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks
Kuan Fang, Patrick Yin, Ashvin Nair, Homer Rich Walke, Gengchan Yan, Sergey Levine
CoRL 2022 (Oral Presentation)
project page / arXiv

We propose Fine-Tuning with Lossy Affordance Planner (FLAP), a framework that leverages diverse offline data for learning representations, goal-conditioned policies, and affordance models that enable rapid fine-tuning to new tasks in target scenes.

Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space
Kuan Fang*, Patrick Yin*, Ashvin Nair, Sergey Levine (* indicates equal contribution)
IROS 2022
project page / arXiv

We propose Planning to Practice (PTP), a method that makes it practical to train goal-conditioned policies for long-horizon tasks.

Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning
Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine
ICML 2022
project page / arXiv

We propose a new form of state abstraction called goal-conditioned bisimulation that captures functional equivariance, allowing for the reuse of skills to achieve new goals in goal-conditioned reinforcement learning.

Miscellaneous from Undergrad
Notes that I took on machine learning, math, and books during undergrad

Coursework that I took as an undergrad

Coding projects from when I was first learning to code :)

Website template from Jon Barron.