Local Model Training & Fine-Tuning Guide#

What Was Established#

Guide for fine-tuning local LLMs (DeepSeek) using Hugging Face transformers, with emphasis on VRAM-efficient techniques for single-GPU setups.

Key Decisions#

  • Framework: Hugging Face transformers + Trainer API for fine-tuning
  • Model: deepseek-ai/deepseek-llm-7b-base (example model)
  • Efficiency: LoRA (Low-Rank Adaptation) + 4-bit quantization via bitsandbytes to fit large models on consumer GPUs (see the sketch after this list)
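
A minimal loading sketch that combines these decisions (package installation follows in the Setup section). The model ID, LoRA rank, and target module names are illustrative assumptions, not verified settings:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "deepseek-ai/deepseek-llm-7b-base"  # assumed base variant

# 4-bit NF4 quantization keeps the 7B weights within consumer-GPU VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only the small low-rank matrices are trained
lora_config = LoraConfig(
    r=16,                  # assumed rank; tune per task
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable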

Setup#

pip install torch transformers datasets accelerate peft bitsandbytes

Verify GPU access with nvidia-smi; CUDA 11.8 or newer is required.
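
With the model and tokenizer from the sketch above, a Trainer run might look like the following. The dataset file, hyperparameters, and output directory are placeholders:

from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

# hypothetical plain-text training file; substitute your own dataset
dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="deepseek-lora-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # effective batch of 8 on a single GPU
    num_train_epochs=1,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()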

Troubleshooting DeepSeek Language Switching#

What Was Established#

Local DeepSeek models may intermittently switch from English to Chinese mid-response. This is typically caused by training-data bias (a large share of Chinese text in the pretraining corpus), loss of context in long conversations, or mixed-language input prompts.

Key Decisions#

To maintain English-only responses, apply the following parameters and prompting strategies (a combined generation sketch follows the list):

  • Explicit Instruction: Always include a system-level or initial prompt instruction to respond exclusively in English.
  • Temperature Control: Use lower temperature settings (e.g., 0.3) to make the model more deterministic and less likely to drift.
  • Repetition Penalty: Implement a repetition_penalty (e.g., 1.2) to discourage the model from falling into repetitive patterns that might trigger language switching.
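
A generation sketch combining all three measures, assuming transformers inference and the chat variant of the model; the model ID and prompts are illustrative:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-chat"  # assumed chat variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Respond exclusively in English."},
    {"role": "user", "content": "Explain LoRA fine-tuning in two sentences."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,         # low temperature: fewer sampling-driven drifts
    repetition_penalty=1.2,  # discourage repetitive loops that precede switching
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))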

Current Configuration#

System Message Pattern#

When using APIs or local inference engines that support system roles:
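
For example (the exact wording is a suggestion, not a tested prompt):

messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Respond exclusively in English, "
            "even if the user's input contains other languages."
        ),
    },
    {"role": "user", "content": "..."},  # user prompt goes here
]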