//nbkelley /homelab


Hinterflix Help Site - Cloudflare Deployment#

What Was Established#

The Hinterflix help site (help.hinterflix.com) is deployed as a static Hugo site on Cloudflare Workers. The domain is managed within the same Cloudflare account.

Key Decisions#

  • Hosting: Cloudflare Workers (static asset hosting).
  • Domain: help.hinterflix.com (root subdomain of hinterflix.com).
  • DNS: CNAME record pointing to the Cloudflare Workers subdomain (*.workers.dev). Proxy status set to Proxied (orange cloud).
  • SSL/TLS: Automatically provisioned by Cloudflare. “Always Use HTTPS” enabled in SSL/TLS settings.
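The Workers side of the decisions above can be captured in a wrangler.toml. A minimal sketch, assuming the Workers Static Assets feature; the name and compatibility date are illustrative, not the actual project config:

```toml
# Hypothetical wrangler.toml serving Hugo's build output as static assets
name = "hinterflix-help"
compatibility_date = "2024-09-01"

[assets]
directory = "./public"   # Hugo's build output

# Route the custom domain to this Worker
routes = [
  { pattern = "help.hinterflix.com", custom_domain = true }
]
```

With a custom-domain route, Cloudflare manages the DNS record and certificate for the hostname automatically, which matches the CNAME + proxied setup above.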

Configuration Steps#

  1. Cloudflare DNS Setup:


Hugo Deployment to Cloudflare Pages - Troubleshooting#

What Was Established#

Patterns for resolving missing assets (favicons, CSS, styling) and build failures when deploying Hugo-generated static sites to Cloudflare Pages.

Key Decisions#

  • Build Configuration: Set build command to hugo, output directory to public, and explicitly match the local Hugo version in Cloudflare Pages settings.
  • Static Asset Placement: Ensure all static files (e.g., favicon.ico, CSS) reside in the static/ directory root or theme-specific static folders.
  • Rebuild Enforcement: Use hugo --cleanDestinationDir or manually remove the public/ directory to force Hugo to regenerate all assets and detect changes.
  • Cache Management: Clear both Cloudflare Pages deployment cache and browser cache to prevent stale asset delivery.
  • Verification Workflow: Validate locally via hugo server, inspect the generated public/ directory, review Cloudflare deployment logs, and confirm full Git commits.

Current Configuration#

  • Build Command: hugo
  • Output Directory: public
  • Static Directory: static/
  • Config File: config.toml / config.yaml (verify baseURL matches target domain)
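The baseURL check above is the most common cause of missing CSS and favicons, since Hugo renders absolute asset URLs from it. A sketch of the relevant config.toml fragment, with placeholder values:

```toml
# config.toml — baseURL must match the domain Cloudflare Pages serves,
# otherwise absolute asset URLs (CSS, favicon) point at the wrong host
baseURL = "https://help.example.com/"
languageCode = "en-us"
title = "Example Help Site"
```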

Obsidian Integration for Hugo Date Format#

Hugo expects ISO 8601 dates with timezone offset: 2025-11-22T23:11:12-05:00
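A timestamp in exactly that shape can be generated with a small script (a sketch for use in an Obsidian template or front-matter generator, not part of the site itself):

```javascript
// Format a Date as ISO 8601 with the local UTC offset,
// e.g. 2025-11-22T23:11:12-05:00 — the shape Hugo expects
function hugoDate(d = new Date()) {
  const pad = (n) => String(Math.abs(n)).padStart(2, "0");
  const off = -d.getTimezoneOffset();          // minutes east of UTC
  const sign = off >= 0 ? "+" : "-";
  return (
    d.getFullYear() +
    "-" + pad(d.getMonth() + 1) +
    "-" + pad(d.getDate()) +
    "T" + pad(d.getHours()) +
    ":" + pad(d.getMinutes()) +
    ":" + pad(d.getSeconds()) +
    sign + pad(Math.trunc(off / 60)) +
    ":" + pad(off % 60)
  );
}

console.log(hugoDate());
```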


MBTA Dashboard - Kiosk Mode#

What Was Established#

The MBTA dashboard runs as a 24/7 kiosk display on a Raspberry Pi 3B+ using Anthias (formerly Screenly OSE) in Docker. The display is portrait 1080x1920. The Pi has severe memory constraints (788MB total) requiring aggressive optimization.

Anthias Deployment#

Hardware#

  • Device: Raspberry Pi 3B+ (788MB RAM)
  • Display: 1080x1920 portrait screen
  • Software: Anthias (Docker-based digital signage)

Display Configuration#

  • Dashboard page set as primary asset
  • Splash page appears for 1 minute every 11 hours 59 minutes as a refresh cycle
  • Page refreshes via cron job to prevent memory leaks

Cron Refresh#

# Restart the Anthias viewer container every 6 hours to clear leaked memory
0 */6 * * * docker restart screenly-anthias-viewer-1

Qt WebEngine Quirks#

Anthias uses Qt WebEngine for rendering, which differs from desktop Chrome:


MBTA Dashboard - Setup#

What Was Established#

Office transit dashboard deployed on a self-hosted Debian VM (PLT-MBTADisplay, 192.168.168.42). Nginx serves static files from /var/www/MBTADisplay/public and proxies /api/ requests to a Node/Express caching proxy on port 3000. API keys are stored server-side and never exposed to the browser. Process managed via pm2 with a systemd service.

Architecture#

Browser (Anthias/Desktop)
    → Nginx (:80)
        /     → static files (/var/www/MBTADisplay/public)
        /api/ → Node/Express proxy (:3000)
                    → MBTA v3 API
                    → OpenWeatherMap API
                    → RSS feeds
                    → caches responses
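The "caches responses" step can be sketched as a small in-memory TTL cache, so repeated dashboard polls don't hammer the upstream APIs. This is a simplified sketch; the actual server.js may implement caching differently:

```javascript
// Minimal in-memory TTL cache for upstream API responses
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expires }
  }
  get(key, now = Date.now()) {
    const e = this.entries.get(key);
    if (!e || e.expires <= now) return undefined; // missing or stale
    return e.value;
  }
  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, expires: now + this.ttlMs });
  }
}

// Usage sketch: consult the cache before fetching upstream
const cache = new TtlCache(60_000); // keep responses for 60s
async function cachedFetchJson(url) {
  const hit = cache.get(url);
  if (hit !== undefined) return hit;
  const res = await fetch(url);     // global fetch on Node 18+
  const body = await res.json();
  cache.set(url, body);
  return body;
}
```

A 60-second TTL is a reasonable default for transit predictions; weather and RSS could use a longer one.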

Nginx Configuration#

server {
    listen 80;
    server_name transit.intra.plgt.com 192.168.168.42;

    root /var/www/MBTADisplay/public;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Node/Express Proxy#

Setup#

mkdir -p /opt/mbta-proxy
cd /opt/mbta-proxy
npm init -y
npm install express node-fetch

API Key Management#

  • API keys stored in /opt/mbta-proxy/.env
  • Loaded via process.env.MBTA_API_KEY in server.js
  • pm2 started with --env flag to load .env file
  • Critical: API key must survive server.js overwrites from GitHub syncs

pm2 Process Manager#

pm2 start server.js --name mbta-proxy
pm2 save
pm2 startup systemd

systemd Service (/etc/systemd/system/pm2-administrator.service)#

[Unit]
Description=PM2 process manager
After=network.target

[Service]
Type=forking
User=administrator
ExecStart=/usr/local/bin/pm2 resurrect
ExecReload=/usr/local/bin/pm2 reload all
ExecStop=/usr/local/bin/pm2 kill
Restart=on-failure

[Install]
WantedBy=multi-user.target

GitHub Deployment#

Repository#

  • Repo: https://github.com/bich-nguyen/MBTADisplay.git
  • Cloned to /var/www/MBTADisplay
  • Static files in public/ subdirectory
  • Server files in /opt/mbta-proxy/ (separate from web root)

Ownership#

sudo chown -R administrator:administrator /var/www/MBTADisplay

Note: www-data ownership breaks git operations run as the administrator user.