PAIR Lab: PKU Alignment and Interaction Research Lab
Yifan Zhong
Latest
Falcon: Fast Visuomotor Policies via Partial Denoising
Panacea: Pareto Alignment via Preference Adaptation for LLMs
Off-Agent Trust Region Policy Optimization
In-Context Editing: Learning Knowledge from Self-Induced Distributions
Heterogeneous-Agent Reinforcement Learning
CivRealm: A Learning and Reasoning Odyssey in Civilization for Decision-Making Agents
Maximum Entropy Heterogeneous-Agent Reinforcement Learning
Safety Gymnasium: A Unified Safe Reinforcement Learning Benchmark