Securing LLM Fine-tuning in Distributed Settings

Privacy-preserving technique using Function Secret Sharing

PriFFT introduces a novel approach to protect sensitive data while fine-tuning large language models across distributed devices.
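
Function secret sharing (FSS) splits a function into two keys such that either key alone is indistinguishable from random, while the two parties' local evaluations add up to the function's true output. As a rough illustration (a minimal sketch, not PriFFT's construction; fss_gen, fss_eval, alpha, and beta are hypothetical names), the naive scheme below realizes FSS for a point function by handing each party one additive share of the entire truth table:

```python
import numpy as np

MOD = 2**32  # additive shares live in the ring Z_{2^32}

def fss_gen(alpha, beta, domain_size):
    """Toy key generation for the point function
    f(x) = beta if x == alpha, else 0.

    The truth table is split into two additive shares. Each key
    alone is uniformly random, so a single party learns nothing
    about (alpha, beta).
    """
    rng = np.random.default_rng()
    table = np.zeros(domain_size, dtype=np.uint64)
    table[alpha] = beta % MOD
    key0 = rng.integers(0, MOD, size=domain_size, dtype=np.uint64)
    key1 = (table - key0) % MOD
    return key0, key1

def fss_eval(key, x):
    """Local evaluation: the two parties' outputs are additive
    shares of f(x)."""
    return key[x]

k0, k1 = fss_gen(alpha=5, beta=42, domain_size=16)
for x in (5, 9):
    print(x, int((fss_eval(k0, x) + fss_eval(k1, x)) % MOD))
# 5 -> 42, 9 -> 0: f is evaluated without either party learning alpha
```

Practical schemes such as distributed point functions offer the same Gen/Eval interface with keys logarithmic in the domain size, which is what makes FSS workable at model scale.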

  • Combines federated learning with function secret sharing to prevent exposure of both training data and model parameters (see the sketch after this list)
  • Preserves privacy by keeping training samples on local devices while blocking inference attacks that exploit shared model updates
  • Maintains model utility, with minimal performance degradation relative to centralized fine-tuning
  • Provides security guarantees covering both user data and the model owner's intellectual property
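
To make the federated side concrete, the sketch below shows clients additively secret-sharing their model updates between two non-colluding aggregation servers, the deployment model FSS-based protocols typically assume. It is an illustration under those assumptions rather than PriFFT's actual protocol; encode, decode, and share are hypothetical helpers, and the fixed-point parameters are arbitrary:

```python
import numpy as np

MOD = 2**32    # shares live in the ring Z_{2^32}
SCALE = 2**16  # fixed-point scale for encoding float updates

def encode(update):
    """Map a float update into Z_{2^32} via fixed-point encoding."""
    return (np.round(np.asarray(update) * SCALE).astype(np.int64) % MOD).astype(np.uint64)

def decode(x):
    """Map back to floats, reading values >= MOD/2 as negative."""
    signed = np.where(x >= MOD // 2, x.astype(np.int64) - MOD, x.astype(np.int64))
    return signed / SCALE

def share(x):
    """Split an encoded update into two additive shares; each share
    alone is uniform, so a single server sees only random noise."""
    rng = np.random.default_rng()
    a = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    return a, (x - a) % MOD

# Each client shares its update between two non-colluding servers.
client_updates = [np.array([0.5, -1.25]), np.array([2.0, 0.75])]
shares = [share(encode(u)) for u in client_updates]

# Each server sums only the shares it holds; recombining the two
# aggregate shares reveals the sum, never an individual update.
sum0 = sum(s[0] for s in shares) % MOD
sum1 = sum(s[1] for s in shares) % MOD
print(decode((sum0 + sum1) % MOD))  # -> [ 2.5 -0.5]
```

Each server only ever sees uniformly random shares, so individual updates stay hidden even from the aggregators; FSS primitives like the one sketched earlier then allow further computation directly on such shares.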

This research is critical for organizations that need to improve domain-specific LLM performance while meeting strict privacy requirements and protecting proprietary models.

PriFFT: Privacy-preserving Federated Fine-tuning of Large Language Models via Function Secret Sharing
