Personalizing LLMs with Binary Feedback: A Preference-Corrected Optimization Framework

2026-05-11 · Computation and Language · Artificial Intelligence
AI summary

The authors propose C-BPO, a method for personalizing large language models using simple yes/no (binary) signals about user preferences, rather than relying on each user's history in isolation. The approach also treats other users' data as implicit negative examples, which helps isolate what makes each user's preferences distinct. To avoid mistakenly penalizing shared knowledge that many users find useful, the negative signals are purified with a correction drawn from Positive-Unlabeled (PU) learning. Experiments show the method outperforms prior approaches across different tasks and backbone models by effectively capturing differences between users.

Large Language Models · Personalization · Positive-Unlabeled Learning · Binary Signals · User Preferences · Inter-user Differences · Model Alignment · Implicit Negative Signals · Preference Overlap · Machine Learning
Authors
Xilai Ma, Liye Zhao, Weijun Yao, Haibing Di, Wenya Wang, Jing Li
Abstract
Large Language Model (LLM) personalization aims to align model behaviors with individual user preferences. Existing methods often focus on isolated user histories, neglecting the essential role of inter-user differences. We propose C-BPO, a framework that personalizes LLMs via preference-calibrated binary signals. By treating target user data as positive feedback and other users' data as an auxiliary set of implicit negative signals, C-BPO captures distinct inter-user differences. To mitigate the preference overlap issue, where shared task knowledge is erroneously penalized, we derive an objective grounded in Positive-Unlabeled (PU) learning theory. This approach purifies negative signals by subtracting "positive bias", ensuring alignment with unique idiosyncrasies without compromising general helpfulness. Empirical experiments across various personalization tasks and backbone LLMs show C-BPO consistently outperforms baselines, demonstrating the efficacy of preference-calibrated binary signals in modeling inter-user differences.
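The "subtracting positive bias" idea the abstract describes is the core move of standard PU-learning risk estimators: data from other users is treated as *unlabeled* rather than truly negative, and the expected contamination by positives is subtracted out, with the corrected term clamped at zero (as in non-negative PU learning). The sketch below illustrates that generic correction on raw preference scores. The paper's actual C-BPO objective is not reproduced here; the surrogate loss, function names, and the class-prior parameter are illustrative assumptions.

```python
import numpy as np

def sigmoid_loss(z):
    # Surrogate loss l(z) = sigmoid(-z): small when the score z is large.
    return 1.0 / (1.0 + np.exp(z))

def pu_corrected_risk(scores_pos, scores_unlabeled, prior):
    """Non-negative PU risk estimate (a generic sketch, not the paper's
    exact objective).

    scores_pos       -- preference scores on the target user's data
                        (labeled positives)
    scores_unlabeled -- scores on other users' data, treated as an
                        unlabeled mixture of shared and negative signals
    prior            -- assumed fraction of positive-like (shared) content
                        in the unlabeled set
    """
    # Risk on labeled positives.
    risk_pos = prior * np.mean(sigmoid_loss(scores_pos))
    # Naive negative risk on unlabeled data, minus the expected
    # "positive bias" contributed by shared preferences.
    risk_neg = (np.mean(sigmoid_loss(-scores_unlabeled))
                - prior * np.mean(sigmoid_loss(-scores_pos)))
    # Clamp at zero so over-correction cannot drive the risk negative.
    return risk_pos + max(0.0, risk_neg)

# Toy usage: confident positives, confident negatives, 50% assumed overlap.
risk = pu_corrected_risk(np.array([2.0]), np.array([-2.0]), prior=0.5)
```

With strongly separated scores, the bias-corrected negative term goes negative and is clamped, so only the positive risk remains; with an overlap-heavy unlabeled set, the subtraction prevents shared preferences from being penalized as if they were unique negatives.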