Testing Guide¶
Quick reference for testing the SPOT Platform.
Quick Start¶
# Unit tests (fast, no services required)
make test:unit
# Integration tests (full stack)
make test:integration
# All tests
make test
# Test specific service
make test:unit SERVICE=api-gateway
# Coverage report
make test:coverage
Test Types¶
Unit Tests¶
- Location: services/{service}/tests/
- Purpose: Test individual components in isolation
- Speed: Fast (< 1 minute)
- Dependencies: None (run in containers)
Integration Tests¶
- Location: tests/integration/
- Purpose: Test service interaction and workflows
- Speed: Moderate (2-5 minutes)
- Dependencies: Full stack (postgres, redis, rabbitmq)
Test Environment¶
Tests automatically run in APP_ENV=test with isolated resources:
- Database: spot_test (separate from dev/prod)
- Redis: database index 10 (separate)
- RabbitMQ: spot_test vhost (separate)
You don't need to manually set APP_ENV=test - the test targets handle it automatically.
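A guard enforcing this could look roughly like the snippet below. This is a minimal sketch: the fixture name and exit message are assumptions, not the platform's actual conftest code.
# Hypothetical session guard in tests/conftest.py (sketch only)
import os

import pytest


@pytest.fixture(scope="session", autouse=True)
def require_test_env():
    # Fail fast if someone bypasses the make targets and runs pytest
    # directly against a dev or prod environment
    if os.environ.get("APP_ENV") != "test":
        pytest.exit("APP_ENV must be 'test'; run tests via the make targets")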
Test Structure¶
The test infrastructure uses modular fixtures for better organization:
tests/
├── conftest.py # Pytest configuration
├── fixtures/ # Modular test fixtures
│ ├── config.py # Test configuration (immutable)
│ ├── cleanup.py # Resource cleanup
│ ├── clients.py # HTTP client factories
│ ├── data.py # Test data generators
│ └── helpers.py # Test helper functions
└── integration/ # Integration test suites
Fixture Design¶
Fixtures are organized by purpose:
- Each fixture module has a focused responsibility
- Easy to extend without modifying existing code
- Fixtures are interchangeable and reusable
- Split by concern (config, cleanup, clients, data, helpers)
- Tests depend on fixture abstractions
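As an illustration of this layout, the authenticated_client fixture used in the integration example below might live in fixtures/clients.py along these lines. The base URL, login endpoint, credentials, and token field are assumptions, not the real API.
# tests/fixtures/clients.py, illustrative sketch only
import httpx
import pytest


@pytest.fixture
async def authenticated_client():
    # Assumes pytest-asyncio (or anyio) is configured for async fixtures.
    # One client per test keeps state isolated, so cleanup fixtures can
    # reset server-side resources independently.
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        resp = await client.post(
            "/api/v1/auth/login",
            json={"username": "test-user", "password": "test-password"},
        )
        resp.raise_for_status()
        token = resp.json()["access_token"]
        client.headers["Authorization"] = f"Bearer {token}"
        yield client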
Writing Tests¶
Unit Test Example¶
# services/api-gateway/tests/test_analyzer.py
import pytest

from src.analyzer import analyze_email


def test_analyze_legitimate_email():
    email = {
        "headers": {
            "sender": "manager@company.com",
            "subject": "Team Meeting"
        },
        "body_text": "Meeting at 10am tomorrow"
    }

    result = analyze_email(email)

    assert result.threat_level == "safe"
    assert result.confidence_score > 0.9
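Run it with the service-scoped target from the Quick Start:
make test:unit SERVICE=api-gateway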
Integration Test Example¶
# tests/integration/test_analysis_workflow.py
import asyncio

import pytest


@pytest.mark.asyncio  # requires an async pytest plugin such as pytest-asyncio
async def test_full_analysis_workflow(authenticated_client, legitimate_email):
    # Submit analysis
    response = await authenticated_client.post(
        "/api/v1/analyze",
        json={"email": legitimate_email, "workflow_id": "default-workflow"}
    )
    assert response.status_code == 200
    job_id = response.json()["job_id"]

    # Poll until the job finishes; checking status immediately after
    # submission races against the worker and fails intermittently
    for _ in range(30):
        status_response = await authenticated_client.get(
            f"/api/v1/analyze/{job_id}"
        )
        if status_response.json()["status"] == "completed":
            break
        await asyncio.sleep(1)

    assert status_response.json()["status"] == "completed"
Test Coverage¶
Generate Coverage Report¶
# Run all tests with coverage
make test:coverage
# View HTML report
open services/api-gateway/htmlcov/index.html
Coverage Files¶
- HTML: services/{service}/htmlcov/index.html (detailed)
- XML: services/{service}/coverage.xml (CI integration)
- Terminal: summary shown after the test run
CI Testing¶
Local CI Pipeline¶
Test your changes before pushing:
# Run full CI pipeline locally
make ci:local FULL=1
# Run specific test job
make ci:local JOB=test-services
# Clean up
make dev:clean
See CI-CD.md for complete CI/CD documentation.
Common Issues¶
Tests Failing with "APP_ENV must be test"¶
Solution: Use the CLI commands (they set APP_ENV automatically):
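# For example, the targets from the Quick Start:
make test:unit
make test:integration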
Database Connection Errors¶
Solution: Ensure the full stack (postgres, redis, rabbitmq) is running before invoking integration tests:
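# Hypothetical: dev:up is an assumed target name, check the Makefile
# for the project's actual startup target
make dev:up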
Test Cleanup Issues¶
Solution: Clean test resources:
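# Same cleanup target used in the CI section above
make dev:clean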
Best Practices¶
- Always use CLI commands for tests - They handle environment setup
- Write focused unit tests - Test one thing at a time
- Use fixtures for test data - Keep tests DRY
- Mock external services - Unit tests should be isolated (see the sketch after this list)
- Add assertions - Verify expected behavior explicitly
- Run tests before committing - Catch issues early
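For the mocking guideline, a unit test might stub an outbound dependency roughly like this. The reputation_client module, its lookup function, and the extra threat levels are hypothetical, not documented platform behavior.
# Hypothetical example of isolating a unit test from an external service
from unittest.mock import patch

from src.analyzer import analyze_email


def test_analysis_does_not_call_external_service():
    email = {
        "headers": {"sender": "unknown@example.com", "subject": "Hello"},
        "body_text": "Click this link",
    }
    # Stub the assumed external reputation lookup so the test is fast,
    # deterministic, and runs without network access
    with patch(
        "src.analyzer.reputation_client.lookup", return_value={"score": 0.1}
    ):
        result = analyze_email(email)
    assert result.threat_level in {"safe", "suspicious", "malicious"}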
Testing Custom Analyzers¶
If you're developing a custom analyzer using spot-sdk, analyzer testing is separate from platform testing.
Analyzer Unit Tests¶
Custom analyzers should have their own test suites; a minimal smoke-test sketch follows the list below.
What to test:
- Analyzer endpoints (health, analyze, capabilities)
- Email parsing and analysis logic
- AnalysisResult generation
- Error handling and edge cases
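A minimal smoke test for those endpoint checks might look like this. The analyzer URL, port, and response fields are assumptions about your analyzer rather than spot-sdk guarantees; the SDK resources linked below cover the actual patterns.
# Hypothetical smoke test for a custom analyzer's HTTP surface
import httpx

ANALYZER_URL = "http://localhost:9000"  # assumed local analyzer address


def test_health_endpoint():
    resp = httpx.get(f"{ANALYZER_URL}/health")
    assert resp.status_code == 200


def test_analyze_returns_result():
    email = {
        "headers": {"sender": "manager@company.com", "subject": "Team Meeting"},
        "body_text": "Meeting at 10am tomorrow",
    }
    resp = httpx.post(f"{ANALYZER_URL}/analyze", json={"email": email})
    assert resp.status_code == 200
    # AnalysisResult field mirrored from the platform examples above (assumed)
    assert "threat_level" in resp.json()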
Integration with Platform¶
Once your analyzer passes its own tests, test it with the platform:
- Register the analyzer in the platform configuration (config/spot.yaml)
- Run the platform integration tests: make test:integration
Platform tests verify:
- Analyzer registration and discovery
- Orchestrator communication with analyzer
- Result aggregation and workflow execution
- Timeout and error handling
Analyzer Development Resources¶
- Analyzer Development Guide - Complete analyzer development guide
- SDK Testing Patterns - SDK testing best practices
- Python SDK Documentation - SDK API reference
Related Documentation¶
- guides/DEVELOPER-GUIDE.md - Development workflows
- CI-CD.md - CI/CD pipeline
- reference/ENVIRONMENT.md - Environment management
- DATABASE.md - Database testing with ORM