gradient_hacking

Does the llm preserve its goal under the pressure of optimization (fine-tuning)