Protecting Language Models Against Unauthorized Distillation through Trace Rewriting
arXiv:2602.15143v1 Announce Type: new
Abstract: Knowledge distillation is a widely adopted technique for transferring capabilities from LLMs to smaller, more efficient student models. However, unauthorized …
Xinhang Ma, William Yeoh, Ning Zhang, Yevgeniy Vorobeychik