PAIR Lab: PKU Alignment and Interaction Research Lab
Taiye Chen
Latest
Mitigating Reward Over-Optimization in RLHF via Behavior-Supported Regularization
SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset