<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>AI on homelab</title>
    <link>https://homelab.nbkelley.com/docs/ai/</link>
    <description>Recent content in AI on homelab</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Sat, 02 May 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://homelab.nbkelley.com/docs/ai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Wiki Pipeline Scripts</title>
      <link>https://homelab.nbkelley.com/docs/ai/wiki-pipeline-scripts/</link>
      <pubDate>Sat, 02 May 2026 00:00:00 +0000</pubDate>
      <guid>https://homelab.nbkelley.com/docs/ai/wiki-pipeline-scripts/</guid>
<description>&lt;h1 id=&#34;wiki-pipeline-scripts&#34;&gt;Wiki Pipeline Scripts&lt;a class=&#34;anchor&#34; href=&#34;#wiki-pipeline-scripts&#34;&gt;#&lt;/a&gt;&lt;/h1&gt;&#xA;&lt;h2 id=&#34;what-was-established&#34;&gt;What Was Established&lt;a class=&#34;anchor&#34; href=&#34;#what-was-established&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;Eight Python scripts in &lt;code&gt;/opt/wiki/homelab/scripts/&lt;/code&gt; implement the full wiki pipeline: file conversion, document ingestion, conversation crystallization (standard, DeepSeek, and Claude formats), shared LLM infrastructure, wiki health-checking, and knowledge-graph integration. All scripts were ported from the work wiki pipeline (itself developed 2026-04-21 → 2026-04-26) with homelab-specific infrastructure baked in.&lt;/p&gt;&#xA;&lt;p&gt;&lt;code&gt;crystallize.py&lt;/code&gt; (Claude format) uses a two-step LLM approach: gemma4:e2b cleans, qwen3.6:35b crystallizes. &lt;code&gt;crystallize_deepseek.py&lt;/code&gt; skips gemma — JSON parsing is handled deterministically in Python (&lt;code&gt;load_conversation&lt;/code&gt; + &lt;code&gt;_clean_text&lt;/code&gt;), so only qwen is needed.&lt;/p&gt;
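&lt;p&gt;A minimal sketch of that two-step flow, assuming an Ollama-style &lt;code&gt;/api/generate&lt;/code&gt; endpoint on localhost; the function names and prompts here are illustrative, not the actual script interfaces:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import requests

OLLAMA_URL = &#34;http://localhost:11434/api/generate&#34;  # assumed local endpoint

def run_model(model, prompt):
    # One non-streaming completion against the local Ollama API
    resp = requests.post(OLLAMA_URL, json={&#34;model&#34;: model, &#34;prompt&#34;: prompt, &#34;stream&#34;: False})
    resp.raise_for_status()
    return resp.json()[&#34;response&#34;]

def crystallize_claude(raw_transcript):
    # Step 1: gemma4:e2b cleans the raw export; step 2: qwen3.6:35b crystallizes it
    cleaned = run_model(&#34;gemma4:e2b&#34;, &#34;Clean this transcript:\n&#34; + raw_transcript)
    return run_model(&#34;qwen3.6:35b&#34;, &#34;Crystallize the cleaned transcript into a wiki page:\n&#34; + cleaned)
&lt;/code&gt;&lt;/pre&gt;</description>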
    </item>
    <item>
      <title>Wiki System - Architecture</title>
      <link>https://homelab.nbkelley.com/docs/ai/wiki-system/</link>
      <pubDate>Sat, 02 May 2026 00:00:00 +0000</pubDate>
      <guid>https://homelab.nbkelley.com/docs/ai/wiki-system/</guid>
<description>&lt;h1 id=&#34;wiki-system---architecture&#34;&gt;Wiki System - Architecture&lt;a class=&#34;anchor&#34; href=&#34;#wiki-system---architecture&#34;&gt;#&lt;/a&gt;&lt;/h1&gt;&#xA;&lt;h2 id=&#34;what-was-established&#34;&gt;What Was Established&lt;a class=&#34;anchor&#34; href=&#34;#what-was-established&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;The wiki system is designed around the LLM wiki pattern (Karpathy): raw sources (chat transcripts, notes, docs) are crystallized into structured markdown pages, embedded into pgvector, and retrieved semantically by agents in future sessions. A dedicated LXC (&lt;code&gt;nk-wiki&lt;/code&gt;) will host the wiki VM, separating wiki infrastructure from other services.&lt;/p&gt;&#xA;&lt;h2 id=&#34;multi-wiki-namespace-design&#34;&gt;Multi-Wiki Namespace Design&lt;a class=&#34;anchor&#34; href=&#34;#multi-wiki-namespace-design&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;Three wikis are planned, each with its own namespace in pgvector:&lt;/p&gt;
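&lt;p&gt;As a sketch of how namespace scoping could work at query time, assuming a hypothetical &lt;code&gt;wiki_pages&lt;/code&gt; table with &lt;code&gt;namespace&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;body&lt;/code&gt;, and pgvector &lt;code&gt;embedding&lt;/code&gt; columns:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import psycopg2

def search_wiki(conn, namespace, query_embedding, k=5):
    # Cosine-distance search scoped to a single wiki namespace (hypothetical schema)
    with conn.cursor() as cur:
        cur.execute(
            &#34;&#34;&#34;
            SELECT title, body
            FROM wiki_pages
            WHERE namespace = %s
            ORDER BY embedding &amp;lt;=&amp;gt; %s::vector
            LIMIT %s
            &#34;&#34;&#34;,
            (namespace, str(query_embedding), k),
        )
        return cur.fetchall()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All wikis can then share one database while retrieval stays isolated per namespace.&lt;/p&gt;</description>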
    </item>
    <item>
      <title>Local Model Training &amp; Fine-Tuning Guide</title>
      <link>https://homelab.nbkelley.com/docs/ai/local-model-training/</link>
      <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://homelab.nbkelley.com/docs/ai/local-model-training/</guid>
<description>&lt;h1 id=&#34;local-model-training--fine-tuning-guide&#34;&gt;Local Model Training &amp;amp; Fine-Tuning Guide&lt;a class=&#34;anchor&#34; href=&#34;#local-model-training--fine-tuning-guide&#34;&gt;#&lt;/a&gt;&lt;/h1&gt;&#xA;&lt;h2 id=&#34;what-was-established&#34;&gt;What Was Established&lt;a class=&#34;anchor&#34; href=&#34;#what-was-established&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;Guide for fine-tuning local LLMs (DeepSeek) using Hugging Face &lt;code&gt;transformers&lt;/code&gt;, with emphasis on VRAM-efficient techniques for single-GPU setups.&lt;/p&gt;&#xA;&lt;h2 id=&#34;key-decisions&#34;&gt;Key Decisions&lt;a class=&#34;anchor&#34; href=&#34;#key-decisions&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Framework&lt;/strong&gt;: Hugging Face &lt;code&gt;transformers&lt;/code&gt; + &lt;code&gt;Trainer&lt;/code&gt; API for fine-tuning&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Model&lt;/strong&gt;: &lt;code&gt;deepseek-ai/deepseek-llm-7b&lt;/code&gt; (example model)&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Efficiency&lt;/strong&gt;: LoRA (Low-Rank Adaptation) + 4-bit quantization via &lt;code&gt;bitsandbytes&lt;/code&gt; to fit large models on consumer GPUs&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;setup&#34;&gt;Setup&lt;a class=&#34;anchor&#34; href=&#34;#setup&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; style=&#34;color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;&#34;&gt;&lt;code class=&#34;language-bash&#34; data-lang=&#34;bash&#34;&gt;&lt;span style=&#34;display:flex;&#34;&gt;&lt;span&gt;pip install torch transformers datasets accelerate peft bitsandbytes&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Verify the GPU is visible with &lt;code&gt;nvidia-smi&lt;/code&gt;; CUDA 11.8 or newer is required.&lt;/p&gt;
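&lt;p&gt;A minimal sketch of the 4-bit + LoRA setup under these decisions; the &lt;code&gt;target_modules&lt;/code&gt; and LoRA hyperparameters below are illustrative assumptions, not values from the guide:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

MODEL_ID = &#34;deepseek-ai/deepseek-llm-7b&#34;  # example model from the guide

# 4-bit NF4 quantization so the 7B model fits in consumer VRAM
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type=&#34;nf4&#34;,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, quantization_config=bnb, device_map=&#34;auto&#34;
)

# LoRA adapters on the attention projections; r/alpha are illustrative
lora = LoraConfig(r=16, lora_alpha=32, target_modules=[&#34;q_proj&#34;, &#34;v_proj&#34;], task_type=&#34;CAUSAL_LM&#34;)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small adapter weights train
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The wrapped model then trains through the usual &lt;code&gt;Trainer&lt;/code&gt; API.&lt;/p&gt;</description>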
    </item>
    <item>
      <title>Troubleshooting DeepSeek Language Switching</title>
      <link>https://homelab.nbkelley.com/docs/ai/deepseek_language_switching/</link>
      <pubDate>Tue, 25 Mar 2025 00:00:00 +0000</pubDate>
      <guid>https://homelab.nbkelley.com/docs/ai/deepseek_language_switching/</guid>
<description>&lt;h1 id=&#34;troubleshooting-deepseek-language-switching&#34;&gt;Troubleshooting DeepSeek Language Switching&lt;a class=&#34;anchor&#34; href=&#34;#troubleshooting-deepseek-language-switching&#34;&gt;#&lt;/a&gt;&lt;/h1&gt;&#xA;&lt;h2 id=&#34;what-was-established&#34;&gt;What Was Established&lt;a class=&#34;anchor&#34; href=&#34;#what-was-established&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;Local DeepSeek models may intermittently switch from English to Chinese mid-response. This is typically caused by training bias (heavy Chinese dataset influence), loss of context during long conversations, or mixed-language input prompts.&lt;/p&gt;&#xA;&lt;h2 id=&#34;key-decisions&#34;&gt;Key Decisions&lt;a class=&#34;anchor&#34; href=&#34;#key-decisions&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;p&gt;To maintain English-only responses, the following parameters and prompting strategies should be applied:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Explicit Instruction&lt;/strong&gt;: Always include a system-level or initial prompt instruction to respond exclusively in English.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Temperature Control&lt;/strong&gt;: Use lower temperature settings (e.g., &lt;code&gt;0.3&lt;/code&gt;) to make the model more deterministic and less likely to drift.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Repetition Penalty&lt;/strong&gt;: Apply a &lt;code&gt;repetition_penalty&lt;/code&gt; (e.g., &lt;code&gt;1.2&lt;/code&gt;) to discourage the model from falling into repetitive patterns that might trigger language switching.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;h2 id=&#34;current-configuration&#34;&gt;Current Configuration&lt;a class=&#34;anchor&#34; href=&#34;#current-configuration&#34;&gt;#&lt;/a&gt;&lt;/h2&gt;&#xA;&lt;h3 id=&#34;system-message-pattern&#34;&gt;System Message Pattern&lt;a class=&#34;anchor&#34; href=&#34;#system-message-pattern&#34;&gt;#&lt;/a&gt;&lt;/h3&gt;&#xA;&lt;p&gt;When using APIs or local inference engines that support system roles:&lt;/p&gt;
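&lt;p&gt;The pattern below is a sketch in the OpenAI-compatible chat format; the endpoint URL and model tag are assumptions, and &lt;code&gt;repetition_penalty&lt;/code&gt; is an engine-specific extension that not every server honors:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;import requests

payload = {
    &#34;model&#34;: &#34;deepseek-llm:7b&#34;,  # assumed local model tag
    &#34;messages&#34;: [
        # Explicit English-only instruction at the system level
        {&#34;role&#34;: &#34;system&#34;, &#34;content&#34;: &#34;You are a helpful assistant. Respond exclusively in English.&#34;},
        {&#34;role&#34;: &#34;user&#34;, &#34;content&#34;: &#34;Summarize the backup strategy.&#34;},
    ],
    &#34;temperature&#34;: 0.3,         # more deterministic, less drift
    &#34;repetition_penalty&#34;: 1.2,  # engine-specific; discourages loops that precede switching
}
resp = requests.post(&#34;http://localhost:11434/v1/chat/completions&#34;, json=payload)
print(resp.json()[&#34;choices&#34;][0][&#34;message&#34;][&#34;content&#34;])
&lt;/code&gt;&lt;/pre&gt;</description>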
    </item>
  </channel>
</rss>
