Patrick Yin

I am a first-year PhD student in CSE at the University of Washington, advised by Professor Abhishek Gupta. I'm broadly interested in embodied AI and building intelligent robots.

Before UW, I completed my undergrad at UC Berkeley where I worked with Professors Sergey Levine and Kuan Fang in the Berkeley Artificial Intelligence Research (BAIR) Lab.

Email  |  CV  |  Scholar  |  GitHub  |  LinkedIn  |  Twitter

Publications
ASID: Active Exploration for System Identification and Reconstruction in Robotic Manipulation
Marius Memmel, Chuning Zhu, Andrew Wagenmaker, Patrick Yin, Dieter Fox, Abhishek Gupta
ICLR 2024 (Oral Presentation)
OpenReview

We propose a learning system that leverages a small amount of real-world data to autonomously refine a simulation model, enabling sim-to-real transfer for robotic manipulation tasks.

Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
Chongyi Zheng, Benjamin Eysenbach, Homer Rich Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine
ICLR 2024 (Spotlight Talk)
project page / arXiv

We discover that a shallow and wide architecture can boost the performance of contrastive RL approaches on simulated benchmarks, and we demonstrate that contrastive approaches can solve real-world robotic manipulation tasks.

Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks
Kuan Fang, Patrick Yin, Ashvin Nair, Homer Rich Walke, Gengchan Yan, Sergey Levine
CoRL 2022 (Oral Presentation)
project page / arXiv

We propose Fine-Tuning with Lossy Affordance Planner (FLAP), a framework that leverages diverse offline data to learn representations, goal-conditioned policies, and affordance models, enabling rapid fine-tuning to new tasks in target scenes.

Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space
Kuan Fang*, Patrick Yin*, Ashvin Nair, Sergey Levine (* indicates equal contribution)
IROS 2022
project page / arXiv

We propose Planning to Practice (PTP), a method that makes it practical to train goal-conditioned policies for long-horizon tasks by composing subgoals in latent space.

Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning
Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine
ICML 2022
project page / arXiv

We propose a new form of state abstraction called goal-conditioned bisimulation that captures functional equivariance, allowing for the reuse of skills to achieve new goals in goal-conditioned reinforcement learning.

Miscellaneous from Undergrad
Notes I took on machine learning, math, etc.

Coursework that I took as an undergrad

Coding projects from when I was first learning to code :)

Website template from Jon Barron.