
Databases

Snippbot uses SQLite for all persistence. No external database server is required. This page covers file locations, backup, and maintenance. For schema details, see Database Architecture.

All databases live under the data directory (~/.snippbot/ by default):

| File | Contents |
| --- | --- |
| main.db | Agents, projects, tasks, chat sessions |
| scheduler.db | Scheduled jobs, runs, chains, activity patterns |
| workflows.db | Workflow definitions, runs, templates, schedules |
| hooks.db | Event hooks, deliveries, webhook endpoints |
| channels.db | Platform credentials (encrypted), bindings, access rules |
| skills.db | Tool registry, MCP server configurations |
| devices.db | Device registrations, pairing codes, execution history |
| profile_settings.db | User profile (name, avatar, timezone) |
| agents/{id}/memory.db | Per-agent episodic memory and knowledge graph |

```sh
# Via environment variable
export SNIPPBOT_DATA_DIR=/data/snippbot
snippbot start

# Via config file
snippbot config set data_dir /data/snippbot
```

All database paths are relative to the data directory, so changing it points Snippbot at the new location. Existing data is not migrated automatically; you'll need to copy it over manually.
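The manual copy can be sketched as follows. The paths here are throwaway examples created on the fly (in practice the source would be `~/.snippbot` and the destination your new data directory), and the daemon should be stopped before copying:

```sh
# Sketch of a manual migration, using throwaway example paths.
old_dir="$(mktemp -d)"            # stand-in for ~/.snippbot
new_dir="$(mktemp -d)/snippbot"   # stand-in for /data/snippbot

# placeholder database files standing in for the real ones
touch "$old_dir/main.db" "$old_dir/scheduler.db"

mkdir -p "$new_dir"
# -a preserves permissions and timestamps; the trailing /. copies contents
cp -a "$old_dir"/. "$new_dir"/
ls "$new_dir"
```

After copying, point Snippbot at the new location (via `SNIPPBOT_DATA_DIR` or `snippbot config set data_dir`) before starting the daemon again.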

Snippbot creates automatic backups before running database migrations:

```
~/.snippbot/backups/
├── pre-upgrade-2026-03-01T10-00-00/
│   ├── main.db
│   ├── scheduler.db
│   └── ...
└── pre-upgrade-2026-02-15T08-30-00/
    └── ...
```
```sh
# List existing backups
snippbot reset --list-backups

# Or copy the directory directly
cp -r ~/.snippbot ~/.snippbot.bak
```
```sh
# List available backups
snippbot reset --list-backups

# Restore a backup
snippbot stop
snippbot reset --restore pre-upgrade-2026-03-01T10-00-00
snippbot start
```

Use a system cron job to back up regularly:

```sh
# Back up daily at 2am (note: % must be escaped as \% inside crontab entries)
0 2 * * * cp -r ~/.snippbot ~/.snippbot.backup.$(date +\%Y\%m\%d) 2>/dev/null
# Keep only the last 7 backups
0 3 * * * ls -dt ~/.snippbot.backup.* | tail -n +8 | xargs rm -rf 2>/dev/null
```
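A plain `cp` of a database that is being written can capture an inconsistent snapshot. SQLite's built-in `.backup` command takes a consistent copy even while the database is in use; a minimal sketch on a throwaway database (the paths and table are made up for illustration):

```sh
# Consistent online copy with sqlite3's .backup command.
# Throwaway example paths -- the real source would be e.g. ~/.snippbot/main.db.
src="$(mktemp -d)/main.db"
dst="$(mktemp -d)/main.db.bak"

sqlite3 "$src" "CREATE TABLE demo(x); INSERT INTO demo VALUES (42);"
sqlite3 "$src" ".backup '$dst'"

# The copy is a complete, standalone database
sqlite3 "$dst" "SELECT x FROM demo;"
```

For scripted backups, swapping the `cp -r` in the cron job for a per-file `.backup` loop avoids the live-copy risk at the cost of a slightly longer crontab entry.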

SQLite files do not shrink when rows are deleted; the freed pages stay inside the file for reuse. Vacuum the databases periodically to reclaim the space:

```sh
sqlite3 ~/.snippbot/main.db "VACUUM;"
sqlite3 ~/.snippbot/scheduler.db "VACUUM;"
```

This compacts the files and reclaims disk space. Database maintenance such as pruning old data is handled automatically by Snippbot.
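The effect can be demonstrated on a throwaway database: deleting rows leaves free pages inside the file (visible via the standard `freelist_count` pragma), and `VACUUM` releases them:

```sh
# Demonstration on a throwaway database.
db="$(mktemp -d)/demo.db"

# Insert 200 rows of ~4 KB each, then delete them all
sqlite3 "$db" "
  CREATE TABLE t(x);
  WITH RECURSIVE n(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM n WHERE i < 200)
  INSERT INTO t SELECT randomblob(4096) FROM n;
  DELETE FROM t;
"
before=$(sqlite3 "$db" "PRAGMA freelist_count;")   # pages held for reuse
sqlite3 "$db" "VACUUM;"
after=$(sqlite3 "$db" "PRAGMA freelist_count;")    # reclaimed
echo "free pages: $before -> $after"
```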

All databases use SQLite WAL (Write-Ahead Logging) for performance:

  • Concurrent access is non-blocking (readers don’t block writers, and writers don’t block readers)
  • Writes are fast (no full journal flush on each write)
  • Data integrity is maintained on crash/power loss

The sidecar files WAL mode creates alongside each database (.db-wal, the write-ahead log, and .db-shm, its shared-memory index) are normal and should be included in backups.
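Before copying files for a backup, it can also help to force a checkpoint so pending -wal contents are folded into the main database file first. A minimal sketch on a throwaway database, using SQLite's standard `wal_checkpoint` pragma:

```sh
# Demonstration on a throwaway database: enable WAL, write a row,
# then force a TRUNCATE checkpoint (folds the -wal file back into
# the main database and truncates it).
db="$(mktemp -d)/demo.db"

sqlite3 "$db" "
  PRAGMA journal_mode=WAL;
  CREATE TABLE t(x);
  INSERT INTO t VALUES (1);
  PRAGMA wal_checkpoint(TRUNCATE);
"

# WAL mode is persistent: it is still in effect on the next connection
sqlite3 "$db" "PRAGMA journal_mode;"
```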

Change the data directory to a mounted volume for Docker or NFS:

```sh
snippbot config set data_dir /mnt/data/snippbot
```

Make sure the directory is writable by the user running the daemon.

For Docker, the typical configuration mounts the host path:

```yaml
volumes:
  - ~/.snippbot:/home/snippbot/.snippbot
```

See Docker deployment for the full compose configuration.

All databases are standard SQLite files. You can query them with any SQLite tool:

```sh
# Using the sqlite3 CLI
sqlite3 ~/.snippbot/scheduler.db "SELECT name, status, run_count FROM scheduled_jobs;"

# Using DB Browser for SQLite (GUI):
# open any .db file from ~/.snippbot/
```