Monitoring
Log filtering, evaluation states, retry queue inspection, and debugging commands for the ORO validator.
Viewing Logs
All validator output streams to Docker logs:
# Follow logs in real time
docker compose --profile validator logs -f validator
# Last 200 lines
docker compose --profile validator logs --tail=200 validator
# Logs since last 30 minutes
docker compose --profile validator logs --since=30m validator

If running as a systemd service:

journalctl -u oro-validator -f

Evaluation Lifecycle
A healthy evaluation cycle produces this log sequence:
Claiming work from Backend...
Claimed work: <eval_run_id>
Downloading agent from <url> for eval_run <eval_run_id>
Running sandbox for eval_run <eval_run_id>
[ProgressReporter] Problem 1/N scored: 0.8542 (query: Looking for a toner...)
[ProgressReporter] Problem 2/N scored: 0.0000 (query: Show me supplements...)
...
[ProgressReporter] Aggregate score: gt_rate=0.167, success_rate=0.333 (N scored)
Sandbox completed successfully for eval_run <eval_run_id>
Uploaded logs to <s3-key>
Completed <eval_run_id>: SUCCESS

Key Log Messages
| Message | Meaning |
|---|---|
| No work available, sleeping Ns | No pending evaluations; normal idle state |
| Claimed work: <eval_run_id> | Started evaluating an agent |
| Problem N/T scored: X.XXXX | Per-problem score during evaluation |
| Sandbox completed successfully | Agent execution finished cleanly |
| Aggregate score: gt_rate=X.XXX | Final evaluation score |
| Successfully set weight for top miner only | On-chain weight update succeeded |
| Backend unavailable | Transient error; retried with backoff |
| Lease expired | Heartbeat arrived too late; the evaluation is forfeited |
| At capacity | Already running the maximum number of concurrent evaluations |
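The messages above lend themselves to simple automated checks. As an illustrative sketch (the function name and regex patterns here are our own approximations of the messages listed above, not part of the validator), a Python helper that tallies key events from raw log output:

```python
import re
from collections import Counter

# Regexes approximating the key log messages in the table above;
# adjust them if your validator version formats these lines differently.
PATTERNS = {
    "claimed": re.compile(r"Claimed work: \S+"),
    "scored": re.compile(r"Problem \d+/\d+ scored: [\d.]+"),
    "completed": re.compile(r"Completed \S+: SUCCESS"),
    "lease_expired": re.compile(r"Lease expired"),
    "backend_unavailable": re.compile(r"Backend unavailable"),
}

def tally_events(log_text: str) -> Counter:
    """Count occurrences of each key message in raw log text."""
    counts = Counter()
    for line in log_text.splitlines():
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1
    return counts
```

Piping `docker compose --profile validator logs validator` through a script built around this makes it easy to spot, for example, a rising lease-expired count.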
Filtering Logs
Filter log output to isolate specific events:
# Evaluation results only
docker compose --profile validator logs validator | grep "scored\|Aggregate\|Completed"
# Errors and failures
docker compose --profile validator logs validator | grep -i "error\|fail\|timeout\|expired"
# Heartbeat health
docker compose --profile validator logs validator | grep "Heartbeat"
# Weight setting activity
docker compose --profile validator logs validator | grep "weight"
# Sandbox stderr (agent crashes, import errors)
docker compose --profile validator logs validator | grep -A 20 "Sandbox stderr"

Retry Queue
If the Backend is unavailable when the validator reports results, completions are queued to ~/.validator/retry_queue.json and retried automatically. No manual intervention is needed.
Inspect the queue:
# Inside the validator container
docker compose exec validator cat /root/.validator/retry_queue.json | python -m json.tool
# From the host (if the volume is mounted)
cat ~/.validator/retry_queue.json 2>/dev/null | python -m json.tool

An empty file or a "No such file" error means the queue is clear.
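For scripted checks, the same inspection can be done in a few lines of Python. This sketch assumes the queue file holds a JSON array of pending completions (the on-disk schema is not documented here, so treat the shape as an assumption):

```python
import json
from pathlib import Path

QUEUE_PATH = Path.home() / ".validator" / "retry_queue.json"

def queue_depth(path: Path = QUEUE_PATH) -> int:
    """Return the number of queued completions, or 0 if the queue is clear."""
    # A missing or empty file means nothing is waiting to be retried.
    if not path.exists() or path.stat().st_size == 0:
        return 0
    data = json.loads(path.read_text())
    # Assumes a JSON array of pending entries; a dict of entries
    # keyed by eval_run_id would also be counted correctly by len().
    return len(data)
```

A non-zero depth that persists across several retry cycles is worth investigating; a zero depth matches the "queue is clear" condition above.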
Public API
Check your validator's status using the public API (no authentication required):
# List all validators and their status
curl https://api.oroagents.com/v1/public/validators
# See currently running evaluations
curl https://api.oroagents.com/v1/public/evaluations/running

Service Health
Check the status of all Docker services in the stack:
docker compose --profile validator ps

Inspect individual service logs when a specific service is unhealthy:
docker compose logs search-server
docker compose logs proxy
docker compose logs validator
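The `ps` check above can also be scripted. Compose v2 can emit `docker compose ps --format json` as newline-delimited JSON; the sketch below parses that output and flags services that are not running cleanly. The `State`/`Health`/`Service` field names follow Compose v2's JSON output and may differ across Compose versions, so verify them against your installation:

```python
import json

def unhealthy_services(ps_json_lines: str) -> list[str]:
    """Parse newline-delimited JSON from `docker compose ps --format json`
    and return the names of services that are stopped or unhealthy."""
    flagged = []
    for line in ps_json_lines.strip().splitlines():
        if not line:
            continue
        svc = json.loads(line)
        state = svc.get("State", "")
        health = svc.get("Health", "") or ""
        if state != "running" or health == "unhealthy":
            flagged.append(svc.get("Service", "?"))
    return flagged
```

Wired into a cron job or alerting hook, an empty result means the stack is healthy; anything else names the service whose logs to inspect with the commands above.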