October 29, 2024
Local deployment of Llama 3.2 3B vs. GPT-4o: A deep dive into fine-tuning for specialized content creation
We’re testing a locally deployed Llama 3.2 3B model for generating UEFA Champions League match reports from play-by-play commentary. Starting from the pre-trained 3B model served with Ollama, we’ll apply LoRA fine-tuning with Unsloth, compare the result against GPT-4o, and discuss performance, hardware requirements, and the key tuning parameters.
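As a preview, here is a minimal Python sketch of the kind of Unsloth LoRA setup the article walks through; the checkpoint name, rank, sequence length, and other hyperparameters below are illustrative assumptions, not the article's final configuration.

# Minimal sketch: load Llama 3.2 3B with Unsloth and attach LoRA adapters.
# Checkpoint name and hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # assumed Unsloth-hosted checkpoint
    max_seq_length=4096,   # long enough for a full play-by-play commentary input
    load_in_4bit=True,     # 4-bit quantization keeps VRAM requirements modest
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # LoRA rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    use_gradient_checkpointing="unsloth",
)
# Fine-tuning on commentary -> match-report pairs would then proceed with a
# standard trainer (e.g. TRL's SFTTrainer), after which the merged model can
# be exported for local serving, e.g. in Ollama.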
Read Article
