Monitor Usage

Monitor your Snippbot instance’s token consumption, agent activity, and system health to stay within budget and catch issues early.

Open Monitor in the Admin section of the sidebar to see:

  • Usage overview — tokens consumed, cost estimate, tasks completed/failed
  • Usage chart — timeline with hourly or daily granularity
  • By-model breakdown — which LLMs are consuming most
  • Activity feed — live stream of system events
  • System health — subsystem status at a glance
  • Alerts — configured threshold alerts

See Monitor UI for the full interface walkthrough.

Snippbot calculates estimated costs based on published model pricing. The Monitor dashboard shows a by-model breakdown including token counts and estimated cost for each model used, along with a daily total.
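The arithmetic behind such an estimate is straightforward: multiply each model's input and output token counts by its per-token price, then sum. A minimal sketch, with placeholder prices in USD per million tokens (not Snippbot's actual pricing table — consult published model pricing for real figures):

```python
# Placeholder per-million-token prices (USD); illustrative only.
PRICING = {
    "claude-haiku-4-5": {"input": 1.00, "output": 5.00},
    "claude-sonnet-4-5": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one model's token usage."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# The daily total is the sum of per-model estimates:
usage = [
    ("claude-haiku-4-5", 2_000_000, 400_000),   # (model, input, output)
    ("claude-sonnet-4-5", 500_000, 100_000),
]
daily_total = sum(estimate_cost(m, i, o) for m, i, o in usage)
```

With the placeholder rates above, the haiku usage costs $4.00, the sonnet usage $3.00, for a $7.00 daily total.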

You can set up alerts to get notified before you exceed your budget. Use the Hooks page to create an HTTP hook that fires when costs cross a threshold:

  1. Open Hooks from the sidebar

  2. Click "+ Create Hook" and select HTTP as the Hook Type

  3. Set the trigger to a task- or job-completion event and add a condition that checks your cost threshold

  4. Enter the webhook URL (e.g., a Slack incoming webhook)

  5. Save the hook
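On the receiving side, a Slack incoming webhook expects a JSON body with a `text` field. A minimal sketch of turning a hook payload into that body — the field names `cost_usd` and `agent` are hypothetical, not Snippbot's actual payload schema, so check what your instance sends:

```python
import json

def format_slack_alert(hook_payload: dict, threshold: float) -> str:
    """Build a Slack incoming-webhook message body from a cost-alert payload.

    Field names (cost_usd, agent) are illustrative; verify them against the
    payload your hook actually delivers.
    """
    cost = hook_payload["cost_usd"]
    text = (f":warning: Snippbot spend alert: ${cost:.2f} "
            f"(threshold ${threshold:.2f}) on agent {hook_payload['agent']}")
    # Slack incoming webhooks accept {"text": "..."} as the message body.
    return json.dumps({"text": text})

body = format_slack_alert({"cost_usd": 12.5, "agent": "researcher"}, 10.0)
```

You would POST `body` to the webhook URL configured in step 4 with a `Content-Type: application/json` header.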

You can also watch spending directly on the Monitor page, which shows real-time cost breakdowns by model and agent alongside token usage summaries, usage-over-time charts, activity summaries, and the full analytics dashboards.

Check system health from the Monitor page, which shows subsystem status at a glance. The health panel displays:

  • Whether the daemon is running and reachable
  • Database accessibility
  • Docker availability (for sandbox)
  • API key configuration status
  • Disk space
  • Scheduler engine status
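A few of these checks can be approximated locally with generic stdlib probes. This sketch mirrors the spirit of the panel, not Snippbot's actual implementation; in particular, the `ANTHROPIC_API_KEY` environment variable name is an assumption:

```python
import os
import shutil

def quick_health_checks() -> dict:
    """Rough local equivalents of a few Monitor health checks."""
    total, used, free = shutil.disk_usage("/")
    return {
        # Is the docker CLI on PATH? (proxy for sandbox availability)
        "docker_available": shutil.which("docker") is not None,
        # Assumed env var name -- your instance may configure keys elsewhere.
        "api_key_configured": bool(os.environ.get("ANTHROPIC_API_KEY")),
        # Free disk space in gigabytes.
        "disk_free_gb": round(free / 1e9, 1),
    }

status = quick_health_checks()
```

Checks like daemon reachability, database access, and scheduler status need Snippbot-specific endpoints, which is why the Monitor page (or the Troubleshoot page) is the authoritative source.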

You can also use the Troubleshoot page for a comprehensive health check of all subsystems.

Configure log verbosity in Settings under the system section. Available log levels: debug, info, warning, error.
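The four level names follow the standard severity ordering: a configured level suppresses everything below it, so debug is the most verbose setting and error the quietest. A generic illustration using Python's logging constants (not Snippbot-specific code):

```python
import logging

# Increasing severity, left to right.
LEVELS = {"debug": logging.DEBUG, "info": logging.INFO,
          "warning": logging.WARNING, "error": logging.ERROR}

def passes(configured: str, message_level: str) -> bool:
    """A message is emitted when its level is at or above the configured one."""
    return LEVELS[message_level] >= LEVELS[configured]
```

For example, with verbosity set to warning, error messages still appear but info messages are dropped.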

Practical tips to manage LLM spend:

  • Use cheaper models for routine tasks — set claude-haiku-4-5 as the default for simpler agents
  • Set per-agent budgets — limit token spend per agent session
  • Configure sub-agent budgets — prevent runaway sub-agent chains from consuming large budgets
  • Prune old memory — large memory stores increase context, increasing input token count
  • Review scheduler jobs — disable or reduce frequency of expensive jobs
  • Check cost by agent — identify which agents are highest spenders
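Identifying the highest spenders is a simple aggregation over per-task cost records. A sketch assuming records like those shown on the Monitor page, with hypothetical `agent` and `cost_usd` fields:

```python
from collections import defaultdict

# Hypothetical per-task cost records; field names are illustrative.
records = [
    {"agent": "researcher", "cost_usd": 0.42},
    {"agent": "coder", "cost_usd": 1.10},
    {"agent": "researcher", "cost_usd": 0.58},
]

def top_spenders(rows):
    """Sum cost per agent and return (agent, total) pairs, highest first."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["agent"]] += r["cost_usd"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = top_spenders(records)
```

With the sample records above, `coder` ranks first at $1.10 and `researcher` second at $1.00, pointing you at where budget caps or cheaper models would help most.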