LLMs - Open-source
Meta AI introduces LIMA, a model that delivers outstanding performance even with minimal training data
Meta AI has unveiled LIMA, a fine-tuned version of its 65B LLaMA model built on the premise that high-quality training data matters more than sheer volume. Despite being fine-tuned on only about 1,000 carefully curated prompt-response pairs, the model shows strong generalization across a wide variety of tasks.
- Based on the 65-billion parameter LLaMA model
- Produces responses rated comparable to Google's Bard in human evaluations
- Demonstrates strong generalization on unseen tasks with very little fine-tuning data
- Suggests that data quality is more critical than quantity for instruction tuning
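The "quality over quantity" idea above can be illustrated with a toy curation step: before supervised fine-tuning, a small instruction set is filtered and rendered into training strings. The prompt template, the filtering heuristic, and all names below are illustrative assumptions for this sketch, not LIMA's actual pipeline.

```python
# Hedged sketch: toy data curation in the spirit of LIMA's
# "quality over quantity" finding. The template and the length
# heuristic are illustrative assumptions, not the paper's method.

CURATED_EXAMPLES = [
    {
        "instruction": "Explain why the sky appears blue.",
        "response": (
            "Sunlight is scattered by air molecules; shorter (blue) "
            "wavelengths scatter more strongly, so the sky looks blue."
        ),
    },
    {
        "instruction": "Say hi.",
        "response": "Hi.",  # too terse -- filtered out below
    },
]

def to_training_text(example):
    """Render one instruction/response pair into a single SFT string."""
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['response']}"
    )

def build_dataset(examples, min_response_chars=50):
    """Keep only detailed responses -- a toy proxy for manual curation."""
    return [
        to_training_text(e)
        for e in examples
        if len(e["response"]) >= min_response_chars
    ]

dataset = build_dataset(CURATED_EXAMPLES)
print(len(dataset))  # 1: only the detailed example survives filtering
```

In LIMA's setting the analogous curation was done by hand, selecting roughly a thousand diverse, high-quality examples rather than relying on automatic heuristics like the one sketched here.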
Why does it matter?
LIMA demonstrates that instruction-following models can reach expert levels of performance without massive fine-tuning datasets, potentially lowering the cost and resource requirements for training advanced AI.