EVALUATING THE EFFECTIVENESS OF TRANSFER LEARNING IN FEW-SHOT LEARNING SCENARIOS FOR NATURAL LANGUAGE PROCESSING TASKS

Authors

  • Dr. Naeem Fatima
  • Nisar Ahmed Memon
  • Muhateer Muhammad
  • Muhammad Saeed Ahmad

Keywords:

Transfer Learning, Few-Shot Learning, Natural Language Processing, BERT, RoBERTa, T5, Fine-tuning

Abstract

This study investigates the effectiveness of transfer learning in few-shot learning scenarios across a range of natural language processing tasks. The research systematically evaluates three pre-trained language models (BERT, RoBERTa, and T5) on five NLP tasks with limited training data. Through experiments varying the training-set size from 10 to 100 examples, the study demonstrates that transfer learning substantially improves performance in data-scarce settings compared with models trained from scratch. Results indicate that RoBERTa outperforms the other models on most tasks, with performance gains becoming more pronounced as the number of training examples grows from 10 to 100. Task-specific analysis reveals that sentiment analysis and text classification benefit more from transfer learning than do more complex tasks such as summarization. The research also identifies a performance plateau, where gains diminish beyond certain data thresholds, suggesting opportunities for more efficient fine-tuning strategies. These findings provide practical guidance for practitioners implementing NLP solutions under data constraints and contribute to a broader understanding of transfer learning dynamics in few-shot contexts.
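
The experimental setup summarized above corresponds to a standard few-shot fine-tuning workflow. The following is a minimal sketch of such a workflow using the Hugging Face transformers and datasets libraries; the model (roberta-base), the dataset (SST-2 standing in for a sentiment-analysis task), and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

    # Minimal few-shot fine-tuning sketch (illustrative; not the paper's
    # exact setup). Assumes: pip install transformers datasets.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    MODEL_NAME = "roberta-base"   # one of the model families evaluated
    N_TRAIN = 100                 # few-shot budget (the study varies 10-100)

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2)

    # SST-2 serves as a stand-in sentiment task with scarce labels:
    # keep only N_TRAIN randomly sampled training examples.
    dataset = load_dataset("glue", "sst2")
    train_small = dataset["train"].shuffle(seed=42).select(range(N_TRAIN))

    def tokenize(batch):
        return tokenizer(batch["sentence"], truncation=True,
                         padding="max_length", max_length=128)

    train_small = train_small.map(tokenize, batched=True)
    eval_set = dataset["validation"].map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="fewshot-roberta",
        num_train_epochs=10,          # extra epochs offset the tiny dataset
        per_device_train_batch_size=8,
        learning_rate=2e-5,
    )

    Trainer(model=model, args=args, train_dataset=train_small,
            eval_dataset=eval_set).train()

Repeating this loop over several training-set sizes (e.g., 10, 25, 50, 100 examples) and comparing against a randomly initialized baseline is one way to reproduce the kind of data-scarcity analysis the abstract describes.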

Published

2025-05-20

How to Cite

Dr. Naeem Fatima, Nisar Ahmed Memon, Muhateer Muhammad, & Muhammad Saeed Ahmad. (2025). EVALUATING THE EFFECTIVENESS OF TRANSFER LEARNING IN FEW-SHOT LEARNING SCENARIOS FOR NATURAL LANGUAGE PROCESSING TASKS. Spectrum of Engineering Sciences, 3(5), 551–563. Retrieved from https://sesjournal.com/index.php/1/article/view/390