Rethinking Safety in LLM Fine-tuning: An Optimization Perspective

Abstract

Fine-tuning language models is commonly believed to inevitably harm their safety, even on harmless datasets, and thus to require additional safety measures. We challenge this belief through systematic testing, showing that poor optimization choices—not inherent trade-offs—often cause safety problems, measured as harmful responses to adversarial prompts. By properly selecting key training hyperparameters—learning rate, batch size, and number of gradient steps—we reduce unsafe model responses from 16% to approximately 5%, as measured by keyword matching and GPT-4 evaluation, while maintaining utility performance. Based on this observation, we propose a simple exponential moving average (EMA) momentum technique in parameter space that preserves safety by creating a stable optimization path retaining the original model's safety properties. Our experiments on Llama model families across multiple datasets (Dolly, Alpaca, ORCA) demonstrate that safety problems during fine-tuning can be largely avoided without specialized interventions. Our approach outperforms existing methods that require additional safety data, while offering practical guidelines for maintaining both model performance and safety during adaptation.
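The EMA momentum idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decay value and the plain-list representation of parameters are illustrative assumptions, and in practice the update would run over a model's weight tensors after each optimizer step.

```python
def ema_update(ema_params, new_params, decay=0.999):
    """One EMA step in parameter space.

    Blends freshly fine-tuned weights into a slowly moving copy:
        theta_ema <- decay * theta_ema + (1 - decay) * theta_new
    A decay close to 1 keeps the EMA copy near the original (safety-aligned)
    initialization, yielding a more stable optimization path.
    """
    return [decay * e + (1.0 - decay) * p
            for e, p in zip(ema_params, new_params)]

# Illustrative usage: the EMA copy starts at the original model's weights
# and drifts only slowly toward the fine-tuned weights.
ema = [1.0, 1.0]            # original (aligned) weights, hypothetical values
finetuned = [0.0, 2.0]      # weights after a fine-tuning step, hypothetical
ema = ema_update(ema, finetuned, decay=0.9)
print(ema)  # [0.9, 1.1] — still close to the original weights
```

The EMA copy, rather than the raw fine-tuned weights, would then serve as the deployed model, trading a small amount of adaptation speed for retained safety behavior.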

Publication
Conference on Language Modeling (COLM 2025)