
VERI Administration Guide

Platform operations, security, and infrastructure reference

Version 2026.3.28 — For platform operators and system administrators
Veritaseum Intelligence Global IP Patent Portfolio
US11196566 · US11895246 · US12231579 · JP6813477B2 · JP7204231 · JP7533974 · JP7533983 · JP7736305 · EP/UK/HK/UP 4 148 642

1. Infrastructure Overview

Hardware

Machine      | Type       | Memory | Role                                   | Network
Mac Studio A | M3 Ultra   | 96 GB  | Primary inference, gateway, dashboard  | 127.0.0.1 / Tailscale
Mac Studio B | M3 Ultra   | 96 GB  | Secondary inference (Llama, GLM-Flash) | 10.0.0.2 (Thunderbolt 5)
DGX Spark 1  | GB10 Grace | 128 GB | GLM-4.7-358B (vLLM)                    | 192.168.5.246
DGX Spark 2  | GB10 Grace | 128 GB | GLM-4.7-358B (vLLM, redundant)         | 192.168.5.255
DGX Spark 3  | GB10 Grace | 128 GB | MiniMax-M2.5 (vLLM)                    | 192.168.6.1

Total pooled memory: 576 GB across 5 nodes.

[Screenshot: System overview showing gateway status, fleet nodes, models, and compute fleet]

Network Topology

Key Ports

Port  | Service            | Description
8800  | MLX-LM server      | Local model inference
8801  | MLX proxy          | Model routing, context escalation, circuit breaker
18789 | OpenClaw gateway   | Core API and WebSocket
18790 | VERI Dashboard     | Web UI (Tailscale funnel)
18796 | Agent Engine       | Tool-use agent loop (auto-started)
3210  | Activity Dashboard | Event store and billing
8900  | VGI API            | Sovereign risk intelligence

2. Service Management

All services run as launchd agents in ~/Library/LaunchAgents/. They auto-start at boot and auto-restart on crash.
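Each agent is described by a launchd plist. The sketch below shows the shape of such a file for the gateway; the label matches the Core Services table, but the program path is a placeholder, not the actual install location:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>ai.openclaw.gateway</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/openclaw-gateway</string>  <!-- placeholder path -->
    </array>
    <key>RunAtLoad</key>
    <true/>   <!-- auto-start at boot -->
    <key>KeepAlive</key>
    <true/>   <!-- auto-restart on crash -->
</dict>
</plist>
```

RunAtLoad and KeepAlive are the two keys that give every service the boot-time start and crash-restart behavior described above.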

[Screenshot: System health diagnostics with component status checks]

Common Commands

# List all VERI services
launchctl list | grep "^[0-9]" | grep ai.

# Restart a service
launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway

# Stop a service
launchctl bootout gui/$(id -u)/ai.openclaw.gateway

# Start a service
launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/ai.openclaw.gateway.plist

# View service details
launchctl print gui/$(id -u)/ai.openclaw.gateway

Core Services

Label                 | Purpose              | Logs
ai.openclaw.gateway   | API gateway          | ~/.openclaw/logs/gateway.log
ai.openclaw.node      | Agent node           | ~/.openclaw/logs/node.log
ai.openclaw.dashboard | Web UI server        | ~/openclaw-ui/dashboard.err
ai.agent-engine       | Tool-use agent       | ~/.openclaw/logs/agent-engine.log
ai.mlx-lm.server      | MLX inference        | ~/.mlx-bench/server.log
ai.mlx-lm.proxy       | Model routing        | ~/.mlx-bench/proxy.log
ai.bastion.daemon     | Security supervisor  | ~/.openclaw/logs/bastion.log
ai.sentinel.daemon    | Opportunity scanning | ~/.openclaw/logs/sentinel.log
ai.vgi.server         | Sovereign risk API   | ~/.openclaw/logs/vgi.log

After Code Changes

The dashboard server (Node.js) must be restarted to pick up changes to server.js:

cd ~/openclaw-ui && npx vite build
launchctl kickstart -k gui/$(id -u)/ai.openclaw.dashboard

The agent engine (Python) auto-restarts via launchd — just save the file.

3. Agent Engine

The Agent Engine provides a tool-use agent loop. Any model the router selects can power it.

How It Works

Users describe a task in plain English. The engine then loops: it sends the task to the model; the model picks tools (bash, read/write/edit files, glob, grep); the engine executes them with security checks and feeds the results back, repeating until the task is done.
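The loop above can be sketched in Python. The model interface, the tool dispatch, and the guard check here are simplified stand-ins, not the engine's real API:

```python
# Minimal sketch of the Agent Engine loop. model_call() and the tool set
# are illustrative stand-ins, not the engine's actual interfaces.
import subprocess

MAX_TURNS = 30  # mirrors the AGENT_MAX_TURNS default


def is_blocked(command):
    # Placeholder for the exec-guard pattern check
    return "rm -rf /" in command


def run_tool(name, args):
    """Execute a tool with a security check, as the engine does."""
    if name == "bash":
        if is_blocked(args["command"]):
            return {"blocked": True, "output": ""}
        out = subprocess.run(args["command"], shell=True,
                             capture_output=True, text=True)
        return {"blocked": False, "output": out.stdout}
    raise ValueError(f"unknown tool: {name}")


def agent_loop(task, model_call):
    """Loop: ask the model, run the tool it picks, feed results back."""
    history = [{"role": "user", "content": task}]
    for _ in range(MAX_TURNS):
        reply = model_call(history)
        if reply.get("done"):            # model signals completion
            return reply["answer"]
        result = run_tool(reply["tool"], reply["args"])
        history.append({"role": "tool", "content": result["output"]})
    return None  # gave up after MAX_TURNS iterations
```

The real engine adds streaming, per-session audit logging, and the full tool set, but the control flow is the same: model decides, engine executes, results feed the next turn.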

Endpoints

Dashboard Access

Agent Mode is the default in the Chat view. Users toggle between Agent Mode (tools) and Chat Mode (conversation only). A folder input lets users point the agent at a specific project.

[Screenshot: Chat view with Agent Mode toggle and project folder selector]

Configuration

Environment variables (all optional — auto-discovered from nodes.json):

Variable          | Default        | Description
AGENT_PORT        | 18796          | Server port
AGENT_WORKING_DIR | home directory | Default sandbox root
AGENT_MAX_TURNS   | 30             | Max tool iterations per task
AGENT_TIMEOUT_SEC | 180            | Model call timeout (seconds)
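A sketch of how the engine's settings could be read; the names and defaults match the table above (AGENT_WORKING_DIR falls back to the home directory):

```python
# Read Agent Engine settings from the environment, falling back to the
# documented defaults when a variable is unset.
import os


def load_agent_config(env=os.environ):
    return {
        "port": int(env.get("AGENT_PORT", "18796")),
        "working_dir": env.get("AGENT_WORKING_DIR", os.path.expanduser("~")),
        "max_turns": int(env.get("AGENT_MAX_TURNS", "30")),
        "timeout_sec": int(env.get("AGENT_TIMEOUT_SEC", "180")),
    }
```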

Audit Trail

All tool executions log to ~/.openclaw/logs/agent-engine/tool_audit.jsonl:

{"ts": "2026-03-27T20:03:11-0400", "session_id": "abc123", "tool_name": "bash", "args": "{'command': 'ls'}", "duration_ms": 12.9, "blocked": false}

4. Model Fleet

Node Registry

The model fleet is defined in ~/.mlx-bench/nodes.json (operator-managed, mode 600).
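Because the registry is operator-managed with mode 600, a startup check can refuse to run if the permissions have drifted. A minimal sketch, assuming only the documented mode requirement:

```python
# Verify the node registry is readable/writable by the owner only
# (mode 600), as required for ~/.mlx-bench/nodes.json.
import os
import stat


def check_registry_mode(path):
    """Return True if the file's permission bits are exactly 0o600."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600
```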

[Screenshot: Local model manager with HuggingFace browsing and quantization options]
Node                       | Type | Models         | Context Limit
local (Mac A)              | MLX  | qwen3-next:80b | 64K
node-b (Mac B)             | MLX  | llama-3.3:70b  | 100K
node-b-glm-flash           | MLX  | glm-4.7-flash  | 160K
cluster1-glm (Spark 1)     | vLLM | glm-4.7-358b   | 26K
cluster2-glm (Spark 2)     | vLLM | glm-4.7-358b   | 26K
cluster3-minimax (Spark 3) | vLLM | minimax-m2.5   | 64K

Escalation Chain

When a model cannot handle the request (context too large, node down), the proxy escalates:

glm-4.7-flash → qwen3-next:80b → llama-3.3:70b → minimax-m2.5 → glm-4.7-358b
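The selection rule can be sketched as a walk down the chain, skipping any model whose node is unhealthy or whose context limit (from the registry table above) is too small; the function below is an illustration, not the proxy's actual code:

```python
# Sketch of escalation: try each model in chain order and return the
# first one that is healthy and can fit the prompt. Context limits
# mirror the node registry.
ESCALATION_CHAIN = [
    ("glm-4.7-flash", 160_000),
    ("qwen3-next:80b", 64_000),
    ("llama-3.3:70b", 100_000),
    ("minimax-m2.5", 64_000),
    ("glm-4.7-358b", 26_000),
]


def pick_model(prompt_tokens, healthy):
    """Return the first chain entry that is healthy and fits the context."""
    for model, ctx_limit in ESCALATION_CHAIN:
        if model in healthy and prompt_tokens <= ctx_limit:
            return model
    return None  # no node can serve this request
```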

Quarantine

Nodes with 10+ consecutive health failures are auto-quarantined. They are reprobed every 300 seconds and auto-reinstated on recovery.
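The policy reduces to a small state machine per node: a failure counter, a quarantine flag, and a reprobe timer. A minimal sketch using the documented thresholds:

```python
# Quarantine state machine: 10 consecutive health failures quarantine a
# node; quarantined nodes are reprobed every 300 s and reinstated on a
# successful probe.
import time

FAILURE_THRESHOLD = 10
REPROBE_INTERVAL = 300  # seconds


class NodeHealth:
    def __init__(self):
        self.failures = 0
        self.quarantined = False
        self.last_probe = 0.0

    def record(self, ok, now=None):
        """Record a health-probe result."""
        now = time.time() if now is None else now
        if ok:
            self.failures = 0
            self.quarantined = False  # auto-reinstate on recovery
        else:
            self.failures += 1
            if self.failures >= FAILURE_THRESHOLD:
                self.quarantined = True
        self.last_probe = now

    def due_for_reprobe(self, now=None):
        now = time.time() if now is None else now
        return self.quarantined and now - self.last_probe >= REPROBE_INTERVAL
```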

Memory Pressure

When available memory on Mac A drops below 16 GB, requests route to remote nodes automatically.
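The routing rule itself is simple; this sketch (an illustration, not the router's code) applies the 16 GB threshold:

```python
# Memory-pressure routing: below 16 GB free on the local machine,
# prefer a remote node if one is available.
LOW_MEMORY_GB = 16


def route(available_gb, remote_nodes):
    """Return the target node for the next request."""
    if available_gb < LOW_MEMORY_GB and remote_nodes:
        return remote_nodes[0]
    return "local"
```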

5. Tenant Administration

Dashboard Admin Panel

Access via the Dashboard at /admin. Requires admin role.

[Screenshot: Tenant management showing active tenants, plans, and user table]

User management:

Plans

Plan       | Compute        | Storage | Price
Free       | 1M tokens/mo   | 0.5 GB  | $0
Pro        | 50M tokens/mo  | 5 GB    | $49/mo
Enterprise | 500M tokens/mo | 50 GB   | $199/mo

API Keys

Tenants create API keys via the Dashboard or API. Keys are per-tenant and scoped to their sandbox.

# Create key for a tenant (admin)
curl -X POST http://localhost:18790/api/tenant/keys \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"label": "production"}'

Tenant Isolation

Each tenant gets:

6. Security Operations

[Screenshot: Security controls, trust zones, and Bastion monitoring]

Four Defense Layers

1. Exec-Guard Hook

Blocks 24 dangerous command patterns before execution

2. Bastion Daemon

24/7 security supervisor (30s/5m/1h/24h scan cycles)

3. Sandbox Isolation

Docker containers with restricted egress

4. Approval Gateway

Human-in-the-loop for sensitive operations
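The first layer's check amounts to matching each command against a denylist before execution. A minimal sketch: the two patterns shown are illustrative, while the real hook blocks 24 of them:

```python
# Exec-guard style check: deny a command if it matches any dangerous
# pattern. The patterns here are illustrative examples only.
import re

BLOCKED_PATTERNS = [
    r"rm\s+-rf\s+/(\s|$)",     # recursive delete of the filesystem root
    r"curl[^|]*\|\s*(ba)?sh",  # piping a download straight into a shell
]


def exec_guard(command):
    """Return True if the command is allowed, False if blocked."""
    return not any(re.search(p, command) for p in BLOCKED_PATTERNS)
```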

Incident Response

Security incidents are:

  1. Logged to ~/.openclaw/incidents/<id>.json
  2. Broadcast to all agents via VPM mailbox
  3. Alerted to the operator via WhatsApp

Critical incidents auto-freeze agent wallets.
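Step 1 of the pipeline can be sketched as follows; the record fields and the `record_incident` helper are illustrative (the broadcast and WhatsApp steps are omitted), but the path layout and the critical-severity freeze rule follow the text above:

```python
# Sketch of incident recording: write the record to <base_dir>/<id>.json
# and flag wallet freezing for critical incidents. Field names are
# illustrative, not the platform's actual schema.
import json
import os
import uuid


def record_incident(kind, details, severity, base_dir):
    incident = {
        "id": uuid.uuid4().hex,
        "kind": kind,
        "details": details,
        "severity": severity,
        "freeze_wallets": severity == "critical",  # auto-freeze rule
    }
    os.makedirs(base_dir, exist_ok=True)
    path = os.path.join(base_dir, f"{incident['id']}.json")
    with open(path, "w") as f:
        json.dump(incident, f)
    return incident, path
```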

Audit Logs

Log          | Location                                       | Content
Exec-guard   | ~/.openclaw/logs/exec-guard.log                | Allowed/blocked commands
Agent engine | ~/.openclaw/logs/agent-engine/tool_audit.jsonl | Tool executions
Incidents    | ~/.openclaw/incidents/                         | Security incident records
Bastion      | ~/.openclaw/logs/bastion.log                   | Security scan results

7. Monitoring & Logs

[Screenshot: Real-time log stream with severity levels]

Health Check

# Quick health
curl -s http://127.0.0.1:18790/api/health/deep | python3.14 -m json.tool

# Agent engine health
curl -s http://127.0.0.1:18796/v1/agent/health

# MLX proxy status
curl -s http://127.0.0.1:8801/v1/models

Log Locations

All logs are in ~/.openclaw/logs/ unless noted otherwise:

# Real-time monitoring
tail -f ~/.openclaw/logs/gateway.log
tail -f ~/.openclaw/logs/agent-engine.log
tail -f ~/.mlx-bench/proxy.log

# Agent engine audit trail
tail -f ~/.openclaw/logs/agent-engine/tool_audit.jsonl

Billing Data

8. Backup & Recovery

[Screenshot: Backup management interface]

Encrypted backups run automatically. For manual operations:

# Run backup
~/openclaw-workspace/automation/backup-recovery/backup.sh

# Restore
~/openclaw-workspace/automation/backup-recovery/restore.sh <backup-file>

Critical Files to Protect

9. Social Media (Amplify)

[Screenshot: Amplify dashboard showing connected platforms and content management]

The Amplify agent manages social media across 8 platforms. All content is draft-only — nothing posts without human approval.

Connected Platforms

Access via the Amplify tab in the Dashboard. Manage OAuth connections to:

Content Approval Flow

  1. Agent drafts content based on user request or scheduled task
  2. Draft appears in the Approval Queue
  3. Admin reviews, edits if needed, approves or rejects
  4. Only approved content is published
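The four steps above reduce to a small state machine per draft. A sketch, assuming only what the flow states (the class and method names are illustrative, not the Amplify backend's API):

```python
# Draft-only approval queue: drafts enter the queue and only explicitly
# approved items are eligible for publishing.
class ApprovalQueue:
    def __init__(self):
        self.items = {}   # id -> {"content": ..., "status": ...}
        self._next = 0

    def draft(self, content):
        """Step 1-2: agent drafts content; it lands in the queue."""
        self._next += 1
        self.items[self._next] = {"content": content, "status": "draft"}
        return self._next

    def review(self, item_id, approve, edited=None):
        """Step 3: admin reviews, optionally edits, approves or rejects."""
        item = self.items[item_id]
        if edited is not None:
            item["content"] = edited
        item["status"] = "approved" if approve else "rejected"

    def publishable(self):
        """Step 4: only approved content is eligible to publish."""
        return [i for i in self.items.values() if i["status"] == "approved"]
```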

Backend

10. Agent-to-Agent Communication

[Screenshot: Agent Comms showing VACP identity and peer discovery]

VACP Service

The VACP (Verifiable Agent Communication Protocol) service runs on port 8940.

Endpoint                   | Description
GET /api/vacp/identity     | Agent DID and capabilities
GET /api/vacp/discover     | Find agents by capability
GET /api/vacp/negotiations | Active negotiations
POST /api/vacp/propose     | Start negotiation
POST /api/vacp/respond     | Accept/counter/reject
GET /api/vacp/trust        | Trust network graph
POST /api/vacp/trust/vouch | Endorse an agent

Contra Protocol (P2P)

The Contra bridge runs on port 8920 for P2P intent discovery via Waku.

Agent Marketplace

The Agent LinkedIn marketplace (/api/marketplace/*) enables:

11. Troubleshooting

Service Won't Start

# Check error log
tail -20 ~/.openclaw/logs/<service>.err.log

# Check if port is in use
lsof -i :<port>

# Force restart
launchctl kickstart -k gui/$(id -u)/ai.<service-name>

Agent Engine Returns Errors

  1. Check health: curl http://127.0.0.1:18796/v1/agent/health
  2. Check logs: tail ~/.openclaw/logs/agent-engine.err.log
  3. Verify MLX proxy: curl http://127.0.0.1:8801/v1/models
  4. Restart: launchctl kickstart -k gui/$(id -u)/ai.agent-engine

Dashboard Shows Old Code

The Node.js server caches server.js in memory. After editing:

cd ~/openclaw-ui && npx vite build
launchctl kickstart -k gui/$(id -u)/ai.openclaw.dashboard

Model Returns Empty Responses

  1. Check MLX proxy logs: tail ~/.mlx-bench/proxy.log
  2. Verify node is healthy: curl http://<node-ip>:<port>/v1/models
  3. Check quarantine state: cat ~/openclaw-workspace/projects/llm-router/registry/quarantine_state.json
  4. Check memory pressure: sysctl hw.memsize and vm_stat