
Homelab Dashboard#

What Was Established#

A Node.js + Express dashboard is deployed at https://status.nbkelley.com, serving homelab monitoring data. All API calls are server-side — no internal IPs, credentials, or raw API responses reach the browser.
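The server-side-only pattern can be sketched as follows (in Python for brevity; the field names are hypothetical, not the dashboard's actual schema): the server fetches from internal APIs, then forwards only derived, browser-safe fields.

```python
# Hypothetical sketch of the "nothing internal reaches the browser" pattern.
# The real app is Node.js + Express; this only illustrates the shape.

def sanitize(raw: dict) -> dict:
    """Return only browser-safe fields from a raw internal API response.
    Internal IPs, tokens, and raw payloads are deliberately dropped."""
    return {
        "name": raw.get("name"),
        "status": raw.get("status"),
        "uptime_pct": raw.get("uptime_pct"),
    }

raw = {"name": "proxmox", "status": "up", "uptime_pct": 99.9,
       "internal_ip": "192.168.1.10", "api_token": "secret"}
print(sanitize(raw))
```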

Deployment#

| Detail | Value |
| --- | --- |
| Public URL | https://status.nbkelley.com |
| Host | proxy VM (192.168.1.222) |
| Port | 3002 |
| Runtime | Node.js + Express, Docker container |
| compose.yaml location | /home/iluvatar/compose.yaml on proxy VM |
| App directory | /opt/homelab-dashboard/ on proxy VM |
| Routing | Cloudflare Tunnel → 127.0.0.1:3002 |

File Structure#

/opt/homelab-dashboard/
  server.js         ← Express app, all API logic
  package.json
  package-lock.json
  dockerfile
  start.sh
  node_modules/
  public/
    index.html
    styles.css
    app.js          ← frontend render engine

compose.yaml Entry#

homelab-dashboard:
  build: /opt/homelab-dashboard
  container_name: homelab-dashboard
  restart: unless-stopped
  network_mode: host
  environment:
    - PORT=3002

With network_mode: host, the app binds directly to host port 3002 rather than going through Docker's bridge network. Any ports: mapping is ignored under host networking, which is why none appears in the entry above.
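If host networking were ever dropped, the equivalent bridge-mode entry would need an explicit port mapping (a sketch, not the deployed config; binding to 127.0.0.1 keeps the port tunnel-only):

```yaml
homelab-dashboard:
  build: /opt/homelab-dashboard
  container_name: homelab-dashboard
  restart: unless-stopped
  ports:
    - "127.0.0.1:3002:3002"
  environment:
    - PORT=3002
```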


Uptime Kuma - Configuration & Integrations#

What Was Established#

  • Aruba Instant On Switch Monitoring: Confirmed that Aruba Instant On switches (specifically 1930 and 1960 series) support SNMP monitoring when configured via their local web interface, despite being primarily cloud-managed.
  • Microsoft Teams Webhook Integration: Established the correct method for routing Uptime Kuma alerts to Microsoft Teams using Power Automate Workflows, noting the deprecation of legacy webhook.office.com connectors.

Key Decisions#

  • SNMP over Ping: While basic ping monitoring is viable for switch reachability, SNMP is preferred for detailed port status and traffic metrics on supported Aruba models.
  • Power Automate for Teams Alerts: Migrated away from deprecated Office 365 Connectors (webhook.office.com) to Power Automate Workflows to ensure long-term compatibility with Uptime Kuma webhooks.

Current Configuration#

Aruba Instant On Switches (1930/1960 Series)#

  1. Network Setup: Assign a static IP address to the switch via the Aruba Instant On cloud portal or local interface.
  2. SNMP Enablement:
    • Access the switch’s local web interface.
    • Navigate to SNMP settings (typically under “Switching” or “Management”).
    • Enable the SNMP agent.
    • Configure SNMPv2c (read-only community string) or SNMPv3 (username/password with optional encryption). Restrict access by IP for security.
    • Reboot the switch if required for settings to apply.
  3. Uptime Kuma Monitor:
    • Add a new monitor of type SNMP.
    • Input the switch’s static IP, selected SNMP version, and credentials.
    • Customize OIDs if monitoring specific interface traffic or port status beyond system uptime.
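For custom OIDs, the standard IF-MIB (RFC 2863) table columns cover per-port status and traffic on most SNMP-capable switches. These are generic OIDs, not Aruba-specific; verify them against the 1930/1960 with an snmpwalk before relying on them. A small helper for building per-port OIDs:

```python
# Standard IF-MIB OIDs commonly used for switch port monitoring.
# Generic SNMP, not confirmed against Aruba Instant On specifically.
IF_OPER_STATUS = "1.3.6.1.2.1.2.2.1.8"   # ifOperStatus (1=up, 2=down)
IF_IN_OCTETS   = "1.3.6.1.2.1.2.2.1.10"  # ifInOctets
IF_OUT_OCTETS  = "1.3.6.1.2.1.2.2.1.16"  # ifOutOctets
SYS_UPTIME     = "1.3.6.1.2.1.1.3.0"     # sysUpTime.0

def port_oid(base: str, if_index: int) -> str:
    """Append an interface index to a table-column OID."""
    return f"{base}.{if_index}"

print(port_oid(IF_OPER_STATUS, 3))  # → 1.3.6.1.2.1.2.2.1.8.3
```

Note that interface indexes do not always map 1:1 to front-panel port numbers; walk the ifDescr column (1.3.6.1.2.1.2.2.1.2) to confirm the mapping.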

Microsoft Teams Webhook Notifications#

  • Notification Type: Webhook (Generic Webhook).
  • Post URL: Power Automate Workflow endpoint URL (e.g., https://prod-05.westus.logic.azure.com/...).
  • Content Type: application/json.
  • Testing: Use the “Test” button in Uptime Kuma notification settings to verify connectivity. A 400 error typically indicates a malformed payload or deprecated connector URL.
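The payload Power Automate receives from Uptime Kuma's generic webhook looks roughly like the sketch below (hedged: the exact field set varies by Uptime Kuma version, but "heartbeat", "monitor", and "msg" are the core keys the Workflow's JSON parsing step should expect; the monitor name and values are illustrative):

```python
import json

# Approximate shape of Uptime Kuma's generic-webhook body. The Power Automate
# trigger only parses this correctly when Content-Type is application/json.
payload = {
    "heartbeat": {"status": 0, "time": "2025-01-01 12:00:00", "msg": "timeout"},
    "monitor": {"name": "aruba-1930", "url": "192.168.1.5"},
    "msg": "[aruba-1930] [DOWN] timeout",
}

body = json.dumps(payload)
print(body)
```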

Historical Notes#

  • Legacy Connector Deprecation: Microsoft retired the old “Office 365 Connectors” on October 1, 2024. Any Uptime Kuma setups using webhook.office.com URLs will fail and require migration to Power Automate.
  • SNMP Availability: Early guidance incorrectly stated Aruba Instant On switches lack SNMP support. It was later confirmed that local management interfaces on 1930/1960 models fully support SNMPv2c/v3.

Open Questions#

  • Are there specific OIDs recommended for Aruba Instant On 1930/1960 switches to monitor port-level errors or traffic thresholds?
  • Has the Uptime Kuma version been verified against known bugs with Power Automate JSON payloads?


AI-Driven Monitoring Pipeline#

What Was Established#

The monitoring pipeline is fully operational and runs hourly. It collects rich structured data from four sources (Prometheus with 7 metrics, Uptime Kuma, UniFi, Synology), runs four parallel Ollama summarization calls, synthesizes a final status report, and writes everything to Postgres. Hourly snapshots of raw UniFi and Prometheus data are stored in dedicated tables for delta computation. End-to-end runtime is ~13 minutes using gemma3n:e4b CPU-only on the Pavilion; this is accepted as-is pending the Mac Studio.
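The pipeline's fan-out/fan-in shape can be sketched as below. The summarize() stub stands in for the Ollama API call, and the source names and synthesis step are illustrative, not the workflow's actual prompts or table writes:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(source: str, data: dict) -> str:
    # Stub for the per-source Ollama summarization call.
    return f"{source}: {len(data)} fields collected"

def run_pipeline(snapshots: dict) -> str:
    # Fan out: one summarization call per data source, run in parallel.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {s: pool.submit(summarize, s, d) for s, d in snapshots.items()}
        summaries = [f.result() for f in futures.values()]
    # Fan in: a final synthesis step combines the per-source summaries.
    return " | ".join(summaries)

snapshots = {"prometheus": {"cpu": 1, "ram": 2}, "uptime_kuma": {"up": 30},
             "unifi": {"clients": 42}, "synology": {"vol_used": 0.6}}
print(run_pipeline(snapshots))
```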


n8n#

What Was Established#

n8n is the automation and orchestration hub for the homelab. It runs as an LXC on Proxmox, connected to PostgreSQL for persistent workflow state and execution history. Community edition is sufficient for current use.

Deployment#

| Detail | Value |
| --- | --- |
| LXC host | n8n |
| IP | 192.168.1.169 |
| Port | 5678 |
| URL | http://192.168.1.169:5678 |
| Version | 2.15.1 (Self Hosted) |
| Installed via | tteck Proxmox helper scripts |
| OS | Debian 13 (unprivileged LXC) |
| Config file | /opt/n8n.env |

Configuration (/opt/n8n.env)#

N8N_SECURE_COOKIE=false
N8N_PORT=5678
N8N_PROTOCOL=http
N8N_HOST=192.168.1.169
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=192.168.1.57
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=homelab
DB_POSTGRESDB_USER=homelab
DB_POSTGRESDB_PASSWORD=<password>

After editing: systemctl restart n8n


Node Exporter Deployment#

What Was Established#

node_exporter is used to expose system-level metrics (CPU, RAM, Disk, Network) from Linux hosts to the central Prometheus instance at 192.168.1.167.

Monitored Hosts#

| Host | IP | Port | Prometheus job | VLAN |
| --- | --- | --- | --- | --- |
| nk-celebrimbor (Pavilion) | 192.168.2.192 | 9100 | pavilion | mithrandir |
| wiki-llm | 192.168.1.206 | 9100 | wiki | gandalf |
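The matching scrape configuration on the Prometheus host (192.168.1.167) would look roughly like the fragment below, inferred from the table above; the actual prometheus.yml may differ in layout or scrape intervals:

```yaml
scrape_configs:
  - job_name: pavilion
    static_configs:
      - targets: ["192.168.2.192:9100"]
  - job_name: wiki
    static_configs:
      - targets: ["192.168.1.206:9100"]
```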

Installation#

Method A: apt (Ubuntu) — preferred for Ubuntu hosts#

sudo apt install prometheus-node-exporter
# Service auto-enabled; runs on port 9100

Method B: Manual binary (other distros)#

wget https://github.com/prometheus/node_exporter/releases/download/v1.11.1/node_exporter-1.11.1.linux-amd64.tar.gz
tar xvf node_exporter-1.11.1.linux-amd64.tar.gz
sudo cp node_exporter-1.11.1.linux-amd64/node_exporter /usr/local/bin/

Create /etc/systemd/system/node_exporter.service: