Publications
A Practical Analysis of Human Alignment with *PO
Abstract
At the forefront of state-of-the-art human alignment methods are preference optimization methods (*PO). Prior research has often concentrated on identifying the best-performing method, typically involving a grid search over hyperparameters, which can be impractical for general practitioners. In this paper, we examine the robustness of existing state-of-the-art methods to varying hyperparameters in a realistic out-of-distribution (OOD) scenario that mirrors real-world applications of human alignment. Our goal is to empirically identify the method most likely to achieve good results, evaluated through the lens of various metrics such as KL divergence and response length. We also introduce LN-DPO, a simple length-normalized version of DPO that is more stable across hyperparameters, effectively reduces the average response length, and improves performance. Our analysis of state-of-the-art reference-free (i.e., SimPO) and reference-dependent (i.e., DPO and LN-DPO) methods reveals that they perform similarly at their peak (i.e., in the best possible scenario). However, we find that how performance changes varies greatly across methods as hyperparameters move away from that best possible scenario.
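The abstract does not spell out the LN-DPO objective, but a plausible reading of "length-normalized DPO" is that each sequence's summed log-probability is divided by its response length before the standard DPO loss is applied (retaining the reference model, unlike reference-free methods such as SimPO). The sketch below illustrates that form only; the function name, argument layout, and `beta` default are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


def ln_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps,
                chosen_lens, rejected_lens, beta=0.1):
    """Hypothetical length-normalized DPO loss over a batch of preference pairs.

    All *_logps arguments are summed per-sequence log-probabilities
    (shape: [batch]); *_lens are the corresponding response lengths.
    """
    # Normalize summed log-probs by response length so that longer
    # responses are not implicitly favored by the objective
    # (an assumption about LN-DPO, not taken from the paper).
    pi_w = policy_chosen_logps / chosen_lens
    pi_l = policy_rejected_logps / rejected_lens
    ref_w = ref_chosen_logps / chosen_lens
    ref_l = ref_rejected_logps / rejected_lens

    # Standard DPO margin, computed on the length-normalized quantities;
    # the reference model is kept, unlike in reference-free methods.
    margin = (pi_w - ref_w) - (pi_l - ref_l)
    return -F.logsigmoid(beta * margin).mean()


# Toy usage with random per-sequence log-probabilities.
if __name__ == "__main__":
    b = 4
    logps = [-torch.rand(b) * 50 for _ in range(4)]
    lens = [torch.randint(10, 100, (b,)) for _ in range(2)]
    print(ln_dpo_loss(*logps, *lens).item())
```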
- Date
- 2025
- Authors
- Kian Ahrabian, Xihui Lin, Barun Patra, Vishrav Chaudhary, Alon Benhaim, Jay Pujara, Xia Song
- Conference
- Findings of the Association for Computational Linguistics: NAACL 2025
- Pages
- 8013-8021