Troubleshooting¶
Analysis Jobs Failing¶
Most Common Issue: No Analyzers Configured
If all analysis jobs fail immediately, you likely haven't deployed any analyzer services.
"No URL configured for analyzer: analyzer-nlp"¶
Cause: SPOT requires external analyzer services, which must be deployed separately.
Fix:
- Install at least one analyzer plugin from the dashboard catalog (see Installing plugins).
- The orchestrator re-reads core/config/spot.yaml per job, so no restart is required after install.
"Only 0 analyzers succeeded, minimum required: N"¶
Cause: The workflow requires one or more analyzers, but none are installed or all are disabled.
Fix: Same as above; install the required analyzer plugin(s), or check that existing entries in plugins.analyzers have enabled: true and a reachable url (a sketch of such an entry follows).
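For orientation, a minimal sketch of what such an entry might look like; the exact schema and the analyzer-nlp URL are assumptions, so check your own core/config/spot.yaml:

```yaml
# core/config/spot.yaml -- hypothetical analyzer entry
plugins:
  analyzers:
    analyzer-nlp:
      enabled: true                   # must be true or the orchestrator skips it
      url: http://analyzer-nlp:8000   # must be reachable from the orchestrator container
```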
Analysis Jobs Show "Failed" Status¶
Since v1.0.0, failed jobs appear in the dashboard with their error messages.
- Check the job detail page for the specific error
- Common causes:
    - No analyzers configured (see above)
    - Analyzer service is down
    - Network connectivity issues between SPOT and analyzers
- Check analyzer-orchestrator logs:
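For example (the container name spot-analyzer-orchestrator is an assumption based on the spot-* naming used elsewhere on this page):

```bash
docker logs --tail 100 spot-analyzer-orchestrator   # hypothetical container name
# or via compose:
docker compose logs --tail 100 analyzer-orchestrator
```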
Knowledge Store / RAG layer¶
Dashboard shows "Knowledge Store is not fully operational"¶
The /knowledge page lists which component is missing. Match the failing row below.
Embedding backend unreachable¶
Symptom. The banner shows a red cross next to Embedding backend, typically with http://ollama:11434. A sync attempt surfaces a connection error, and docker logs spot-knowledge shows httpx.ConnectError: No address associated with hostname.
Cause. OLLAMA_URL points at a hostname the knowledge container can't resolve, or the Ollama instance isn't running.
Fix. Two options; pick one (a sketch of both .env variants follows the list):
- Enable the bundled side-car in /opt/spot/.env.
- Or point at an existing Ollama instance. Make sure the model is pulled on that host: ollama pull bge-m3.
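A sketch of the two .env variants. COMPOSE_PROFILES controlling the side-car is an assumption inferred from the --profile ollama flag used below; OLLAMA_URL is the variable named above:

```bash
# /opt/spot/.env -- Option 1: bundled side-car (assumed profile name)
COMPOSE_PROFILES=ollama
OLLAMA_URL=http://ollama:11434

# /opt/spot/.env -- Option 2: existing Ollama (hypothetical host)
OLLAMA_URL=http://my-ollama-host:11434
```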
Embedding model missing¶
The server is up but the model isn't pulled. Ollama answers /api/version but /api/embeddings returns 404 for the model. Fix:
```bash
docker exec spot-ollama ollama pull bge-m3
# or, if you pinned a different EMBEDDING_MODEL:
docker exec spot-ollama ollama pull "$EMBEDDING_MODEL"
```
If the pull itself fails with dial tcp ...:443: i/o timeout or a DNS error, the container has no path to registry.ollama.ai. Two causes: egress blocked, or the host is behind a proxy that isn't being forwarded into the container.
Docker does not propagate the host's HTTP_PROXY / HTTPS_PROXY into containers automatically. Configure the Docker client instead, so every container on the host gets the proxy env auto-injected at creation time, with no per-service compose edits. Put this in ~/.docker/config.json (for the user that runs docker) or /root/.docker/config.json (if you run sudo docker):
```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1,0.0.0.0,postgres,redis,rabbitmq,ollama,knowledge,api-gateway,analyzer-orchestrator,mail-orchestrator,web-dashboard,traefik"
    }
  }
}
```
Then recreate the side-car so it gets the fresh env:
```bash
docker compose --profile ollama up -d --force-recreate ollama ollama-init
docker compose logs -f ollama-init   # watch the model pull succeed
```
Verify the vars reached the container (no -e needed; Docker injects them from config.json automatically):
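For example:

```bash
docker exec spot-ollama env | grep -i proxy
# expect HTTP_PROXY / HTTPS_PROXY / NO_PROXY matching config.json
```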
One gotcha worth calling out: the ollama/ollama image sets OLLAMA_HOST=0.0.0.0 so the server listens on all interfaces. The CLI in the same container reads the same env and preflights its own server at http://0.0.0.0:11434/. If 0.0.0.0 is missing from noProxy, Go routes that loopback call through the proxy, Squid returns TCP_DENIED/403 (it has no business connecting to 0.0.0.0), and the CLI aborts with the unhelpful Error: something went wrong, please see the ollama server logs for details before the pull ever reaches the server. Always include 0.0.0.0,localhost,127.0.0.1 in noProxy.
The Docker daemon also needs proxy access to pull the ollama/ollama image itself. That's configured separately on the host, typically in /etc/systemd/system/docker.service.d/http-proxy.conf. If the daemon can pull SPOT's own images, this step is already done.
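For reference, the standard systemd drop-in for daemon proxy access (proxy values are placeholders):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

Then reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker.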
No context providers enabled¶
The RAG layer has nothing feeding it. Install a context_provider plugin from the Plugins page (e.g. provider-employee-dir) and run a sync. The banner turns green once at least one is enabled.
/readiness for scripts¶
GET /api/v1/knowledge/readiness on the api-gateway (admin auth) returns the same data the banner uses. Useful for monitoring:
```json
{
  "ready": false,
  "components": {
    "knowledge_service": true,
    "embedding_backend": false,
    "context_providers": false
  },
  "embedding": {
    "url": "http://ollama:11434",
    "model": "bge-m3",
    "reachable": false,
    "service_reachable": true
  },
  "context_providers": {"installed": 0, "enabled": []}
}
```
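A hedged curl sketch for scripting against it; the host-side port and the bearer-token auth form are assumptions, so adjust for your deployment:

```bash
curl -s -H "Authorization: Bearer $ADMIN_TOKEN" \
  http://localhost:8000/api/v1/knowledge/readiness | jq .ready
```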
Services Not Starting¶
Check container status:
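For example:

```bash
docker compose ps   # every service should show running / healthy
```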
Check startup logs:
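E.g. for a single service:

```bash
docker compose logs --tail 100 <service>   # e.g. api-gateway, knowledge
```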
Database Connection Failed¶
Verify postgres is healthy:
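For example (pg_isready exits 0 when the server accepts connections; the container name and user are assumptions):

```bash
docker exec spot-postgres pg_isready -U postgres   # hypothetical container name
docker compose ps postgres                         # healthcheck status
```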
Check postgres logs:
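For example:

```bash
docker compose logs --tail 100 postgres
```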
API Gateway Returning 500¶
Check API Gateway logs:
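For example:

```bash
docker logs --tail 100 spot-api-gateway
```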
Verify environment:
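A sketch; which variables matter depends on your .env, so the grep pattern below is only a guess:

```bash
# variable names are assumptions; adjust to your .env
docker exec spot-api-gateway env | grep -iE 'database|encryption|rabbitmq'
```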
Port Already in Use¶
Find process using the port:
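For example (8000 is a placeholder; use the port from the error message):

```bash
sudo lsof -i :8000      # or: ss -ltnp | grep :8000
```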
Change port in .env:
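A sketch; the variable name is hypothetical and depends on which service collides, so check your .env for the real knob:

```bash
# .env -- hypothetical example
DASHBOARD_PORT=8081
```

Then recreate the affected service with docker compose up -d.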
Out of Memory¶
Check container memory:
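For example:

```bash
docker stats --no-stream   # one-shot snapshot of per-container memory/CPU
```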
Reduce Redis memory in .env:
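A sketch; the variable name is an assumption, so check your .env for the actual setting:

```bash
# .env -- hypothetical example
REDIS_MAXMEMORY=256mb
```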
Dashboard Not Loading¶
Check dashboard logs:
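For example (container name assumed from the spot-* convention):

```bash
docker logs --tail 100 spot-web-dashboard   # hypothetical container name
```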
Verify API Gateway is healthy:
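For example, reusing the in-container health check shown later on this page:

```bash
docker exec spot-api-gateway wget -qO- http://localhost:8000/health
```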
RabbitMQ Connection Issues¶
Check RabbitMQ logs:
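For example (container name assumed from the spot-* convention):

```bash
docker logs --tail 100 spot-rabbitmq   # hypothetical container name
```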
Access management UI:
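RabbitMQ's management UI listens on port 15672 by default; whether SPOT publishes it to the host depends on your compose file:

```bash
docker compose ps rabbitmq   # check which host port (if any) maps to 15672
# then browse to http://localhost:15672 (default RabbitMQ management port)
```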
"ENCRYPTION_KEY is required"¶
The api-gateway and analyzer-orchestrator refuse to start when ENCRYPTION_KEY is unset: email persistence is mandatory and silent fallback is not allowed.
Fix (a combined sketch of all four steps follows the list):
- Generate an encryption key.
- Add it to .env.
- Recreate the affected services (restart is not enough).
- Verify the services are healthy.
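A combined sketch of the four steps, using the compose service names that appear elsewhere on this page:

```bash
# 1. Generate a key (Fernet, per the next section; needs the cryptography package)
python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

# 2. Add the output to .env:
#    ENCRYPTION_KEY=<key from step 1>

# 3. Recreate the affected services (a plain restart keeps the old env)
docker compose up -d --force-recreate api-gateway analyzer-orchestrator

# 4. Verify
docker compose ps api-gateway analyzer-orchestrator   # both should report healthy
```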
"Fernet key must be 32 url-safe base64-encoded bytes"¶
This error appears in analyzer-orchestrator logs when ENCRYPTION_KEY is invalid.
Symptoms:
- analyzer-orchestrator shows as "unhealthy"
- Dashboard is very slow (10+ second timeouts)
- "Failed to load dashboard data" error
- API returns 503 for /api/v1/workflows
Cause: The ENCRYPTION_KEY in .env is malformed (wrong length, missing padding, or invalid characters).
Fix (see the sketch below):
- Generate a valid Fernet key (must be exactly 44 characters, ending with =).
- Update .env with the new key.
- Recreate the analyzer-orchestrator.
- Verify it's healthy.
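A sketch, following the same steps as the previous section:

```bash
# Generate a valid 44-character Fernet key (needs the cryptography package)
python3 -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

# After updating ENCRYPTION_KEY in .env, recreate the service
docker compose up -d --force-recreate analyzer-orchestrator
docker compose ps analyzer-orchestrator   # should report healthy again
```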
"Message broker disconnected"¶
This appears when services lose their RabbitMQ connection (e.g., after RabbitMQ restarts).
Fix:
Restart the affected services:
Or restart all services:
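Sketches of both options, with service names assumed from the noProxy list above:

```bash
# Restart only the affected services
docker compose restart api-gateway analyzer-orchestrator mail-orchestrator

# Or restart everything
docker compose restart
```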
Verify connections are restored:
```bash
docker exec spot-api-gateway wget -qO- http://localhost:8000/health | grep message_broker
# Should show: "connected": true
```
Reset Everything¶
Stop and remove all containers and volumes:
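A sketch, assuming the stack is managed with docker compose from the install directory:

```bash
docker compose down -v   # -v also removes named volumes, i.e. the database
```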
Start fresh:
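Then bring the stack back up:

```bash
docker compose up -d
```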
WARNING: This deletes all data, including the database.