
Installation

Prerequisites

1. Python 3.10+

python3 --version
# Python 3.10.0 or higher required

2. Ollama

OWL Watch uses Ollama for local AI inference.

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama service
ollama serve

# Pull a model (in a new terminal)
ollama pull llama3.2

Verify Ollama is running:

curl http://localhost:11434/api/tags
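If you want to script this check, the `/api/tags` response can be parsed for the model you pulled. A minimal sketch (the response shape shown in `sample` is an assumption; verify it against your Ollama version):

```python
import json

def model_available(tags_json: str, model: str) -> bool:
    """Return True if `model` appears in an Ollama /api/tags response body."""
    data = json.loads(tags_json)
    # Tag entries are typically "name:tag", e.g. "llama3.2:latest"
    return any(m["name"].split(":")[0] == model for m in data.get("models", []))

# Assumed response shape for illustration:
sample = '{"models": [{"name": "llama3.2:latest"}]}'
print(model_available(sample, "llama3.2"))  # True
```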

Install OWL Watch

Option 1: pip

pip install owl-watch

Option 2: From Source

git clone https://github.com/anthropics/owl-watch.git
cd owl-watch
pip install -e .

Option 3: pipx (Isolated Environment)

pipx install owl-watch

Verify Installation

owl-watch --help

You should see:

usage: owl-watch [-h] [--project PROJECT] [--port PORT] [--model MODEL]
                 [--no-server] [-v]
                 {study} ...

Configuration

OWL Watch stores configuration in ~/.owl-watch/config.json:

{
  "ollama": {
    "host": "http://localhost:11434",
    "model": "llama3.2"
  },
  "server": {
    "port": 8080,
    "buffer_timeout": 2.0
  },
  "alerting": {
    "webhook": null,
    "severities": ["critical", "high"],
    "cooldown": 300
  }
}
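Any key you omit falls back to its documented default. A sketch of how such a per-section merge could work (illustrative only, not OWL Watch's actual loader):

```python
import json
from pathlib import Path

# Defaults as documented on this page
DEFAULTS = {
    "ollama": {"host": "http://localhost:11434", "model": "llama3.2"},
    "server": {"port": 8080, "buffer_timeout": 2.0},
    "alerting": {"webhook": None, "severities": ["critical", "high"], "cooldown": 300},
}

def load_config(path=Path.home() / ".owl-watch" / "config.json") -> dict:
    """Overlay user settings on the defaults, section by section."""
    config = {section: dict(values) for section, values in DEFAULTS.items()}
    path = Path(path)
    if path.exists():
        for section, values in json.loads(path.read_text()).items():
            config.setdefault(section, {}).update(values)
    return config
```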

Configuration Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| ollama.host | string | http://localhost:11434 | Ollama server URL |
| ollama.model | string | llama3.2 | LLM model to use |
| server.port | number | 8080 | Dashboard port |
| server.buffer_timeout | number | 2.0 | Seconds to wait for complete stack traces |
| alerting.webhook | string | null | Webhook URL for notifications |
| alerting.severities | array | ["critical", "high"] | Severity levels to alert on |
| alerting.cooldown | number | 300 | Seconds between alerts for same error |

Webhook Configuration

To send alerts to Slack, Discord, or other services:

{
  "alerting": {
    "webhook": "https://hooks.slack.com/services/YOUR/WEBHOOK/URL",
    "severities": ["critical", "high"],
    "cooldown": 300
  }
}

Data Storage

OWL Watch stores data in ~/.owl-watch/:

| File | Purpose |
| --- | --- |
| config.json | User configuration |
| investigations.json | Investigation history |
| profiles/<name>.json | Project profiles |
| debug.log | Debug logs for troubleshooting |

Troubleshooting

Ollama not running

# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama
ollama serve

Model not found

# List available models
ollama list

# Pull the required model
ollama pull llama3.2

Port already in use

Use a different port:

owl-watch app.log --port 9000

Or configure in ~/.owl-watch/config.json:

{
  "server": {
    "port": 9000
  }
}
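To check whether a port is free before picking one, you can attempt to bind it. A small stdlib helper (illustrative; not part of OWL Watch):

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is listening on `port`, by trying to bind it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```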

Next Steps