Proteomics benefits from large pre-trained protein language models, but fine-tuning these models for specific downstream tasks becomes increasingly challenging as model size grows.
This work introduces parameter-efficient fine-tuning (PEFT) to proteomics, adapting techniques that have proven successful in natural language processing.
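As a hedged illustration of one such technique, the sketch below applies LoRA-style low-rank adapters (a widely used PEFT method) to a single linear layer; the rank, scaling factor, and initialization here are illustrative assumptions, not this work's exact configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with trainable low-rank adapters (LoRA).

    Only lora_A and lora_B receive gradients; the pre-trained base weight
    stays frozen, so trainable parameters scale with the rank r rather
    than with the full weight matrix.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        # Low-rank update delta_W = B @ A, scaled by alpha / r.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
```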
Surprisingly, PEFT models outperform traditional full fine-tuning on protein-protein interaction (PPI) prediction while training far fewer parameters. Freezing the language model entirely and training only a classification head also proves effective, outperforming state-of-the-art PPI prediction methods at substantially reduced computational cost.
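A minimal sketch of the frozen-encoder approach follows, assuming mean-pooled per-protein embeddings from a frozen protein language model; the two-layer MLP head and the embedding dimension are hypothetical placeholders, not the method's exact design.

```python
import torch
import torch.nn as nn

class PPIHead(nn.Module):
    """Classification head for protein-protein interaction (PPI) prediction.

    The protein language model is kept frozen and only this small head is
    trained. `embed_dim` and `hidden` are illustrative assumptions.
    """

    def __init__(self, embed_dim: int = 1280, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two pooled protein embeddings and score the pair;
        # a sigmoid over this logit gives the interaction probability.
        return self.mlp(torch.cat([emb_a, emb_b], dim=-1))


def trainable_params(model: nn.Module) -> int:
    """Count only the parameters that will receive gradient updates."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```

Because the encoder contributes no trainable parameters under this setup, `trainable_params` reports only the head's weights, which is what makes the approach cheap to train.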