Fine-tuning
Continuing to train an existing model on your own data so it specializes for your task.
Fine-tuning takes a pretrained foundation model and continues its training on a smaller, task-specific dataset. The result is a model that's better at your specific use case, but it is no longer the same model the original provider ships.
In 2026 most teams use fine-tuning sparingly. RAG is usually a better first move for adding knowledge, and good prompt engineering plus tool use covers a lot of ground. Fine-tuning shines when you need a specific writing style, structured output, or to make a smaller (cheaper) model behave like a larger one for a narrow task.
LoRA and QLoRA fine-tuning have made the process much cheaper: you train low-rank adapters instead of updating every weight. Tools like Hugging Face's PEFT, Modal, and Together AI have made fine-tuning accessible to small teams.
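The core LoRA idea is simple enough to sketch directly. Instead of updating a full weight matrix W, you learn a low-rank update B @ A with rank r much smaller than W's dimensions, so the trainable parameter count drops dramatically. A minimal numpy illustration of the math (the shapes, names, and scaling here are illustrative, not the PEFT API):

```python
import numpy as np

d, k, r = 512, 512, 8  # pretrained weight is d x k; adapter rank r

W = np.random.randn(d, k)          # frozen pretrained weight (not updated)
A = np.random.randn(r, k) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))               # zero-initialized so the adapter starts as a no-op
alpha = 16                         # scaling hyperparameter (common LoRA convention)

def forward(x):
    # LoRA forward pass: frozen path plus scaled low-rank update.
    # Only A and B receive gradients during fine-tuning.
    return x @ (W + (alpha / r) * (B @ A)).T

# Trainable parameters shrink from d*k to r*(d+k)
full_params = d * k          # 262144
lora_params = r * (d + k)    # 8192 — about 3% of the original
```

In practice you would use a library like PEFT rather than hand-rolling this, but the parameter arithmetic above is why adapter fine-tuning fits on modest GPUs; QLoRA goes further by also quantizing the frozen weights W to 4-bit.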