Testing Guide

Quick reference for testing the SPOT Platform.

Quick Start

# Unit tests (fast, no services required)
make test:unit

# Integration tests (full stack)
make test:integration

# All tests
make test

# Test specific service
make test:unit SERVICE=api-gateway

# Coverage report
make test:coverage

Test Types

Unit Tests

  • Location: services/{service}/tests/
  • Purpose: Test individual components in isolation
  • Speed: Fast (< 1 minute)
  • Dependencies: None (run in containers)
make test:unit SERVICE=api-gateway

Integration Tests

  • Location: tests/integration/
  • Purpose: Test service interaction and workflows
  • Speed: Moderate (2-5 minutes)
  • Dependencies: Full stack (postgres, redis, rabbitmq)
make test:integration

Test Environment

Tests automatically run in APP_ENV=test with isolated resources:

  • Database: spot_test (separate from dev/prod)
  • Redis: Database index 10 (separate)
  • RabbitMQ: spot_test vhost (separate)

You don't need to set APP_ENV=test manually; the test targets handle it automatically.
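As an illustration of this isolation, a configuration helper might select resources by environment. This is a minimal sketch: the helper name and the non-test defaults are hypothetical, and the real values live in the test configuration under tests/fixtures/config.py.

```python
import os

def resources_for_env(env: str = "") -> dict:
    """Return isolated resource names for an APP_ENV (illustrative helper)."""
    env = env or os.environ.get("APP_ENV", "development")
    if env == "test":
        # Matches the isolated resources listed above.
        return {"database": "spot_test", "redis_db": 10, "rabbitmq_vhost": "spot_test"}
    # Non-test defaults here are placeholders, not the platform's real settings.
    return {"database": "spot", "redis_db": 0, "rabbitmq_vhost": "/"}
```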

Test Structure

The test infrastructure uses modular fixtures for better organization:

tests/
├── conftest.py           # Pytest configuration
├── fixtures/             # Modular test fixtures
│   ├── config.py        # Test configuration (immutable)
│   ├── cleanup.py       # Resource cleanup
│   ├── clients.py       # HTTP client factories
│   ├── data.py          # Test data generators
│   └── helpers.py       # Test helper functions
└── integration/          # Integration test suites

Fixture Design

Fixtures are organized by purpose:

  • Each fixture module has a single, focused responsibility
  • Modules are split by concern (config, cleanup, clients, data, helpers)
  • Fixtures are reusable across test suites and can be extended without modifying existing code
  • Tests depend on fixture abstractions rather than on concrete setup details
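For instance, a data-generator module might look like the sketch below. This is illustrative, not the actual module: the real fixtures/data.py wraps such factories with @pytest.fixture so tests can request them by name (e.g. legitimate_email).

```python
# Illustrative sketch of a generator in tests/fixtures/data.py
import uuid

def make_legitimate_email() -> dict:
    """Build a benign email payload with a unique message ID per call."""
    return {
        "headers": {
            "sender": "manager@company.com",
            "subject": "Team Meeting",
            "message_id": str(uuid.uuid4()),
        },
        "body_text": "Meeting at 10am tomorrow",
    }
```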

Writing Tests

Unit Test Example

# services/api-gateway/tests/test_analyzer.py
import pytest
from src.analyzer import analyze_email

def test_analyze_legitimate_email():
    email = {
        "headers": {
            "sender": "manager@company.com",
            "subject": "Team Meeting"
        },
        "body_text": "Meeting at 10am tomorrow"
    }

    result = analyze_email(email)

    assert result.threat_level == "safe"
    assert result.confidence_score > 0.9

Integration Test Example

# tests/integration/test_analysis_workflow.py
import asyncio

import pytest

@pytest.mark.asyncio
async def test_full_analysis_workflow(authenticated_client, legitimate_email):
    # Submit analysis
    response = await authenticated_client.post(
        "/api/v1/analyze",
        json={"email": legitimate_email, "workflow_id": "default-workflow"}
    )

    assert response.status_code == 200
    job_id = response.json()["job_id"]

    # Analysis runs asynchronously, so poll until the job finishes
    for _ in range(30):
        status_response = await authenticated_client.get(
            f"/api/v1/analyze/{job_id}"
        )
        if status_response.json()["status"] == "completed":
            break
        await asyncio.sleep(1)

    assert status_response.json()["status"] == "completed"

Test Coverage

Generate Coverage Report

# Run all tests with coverage
make test:coverage

# View HTML report
open services/api-gateway/htmlcov/index.html

Coverage Files

  • HTML: services/{service}/htmlcov/index.html (detailed)
  • XML: services/{service}/coverage.xml (CI integration)
  • Terminal: Shown after test run (summary)

CI Testing

Local CI Pipeline

Test your changes before pushing:

# Run full CI pipeline locally
make ci:local FULL=1

# Run specific test job
make ci:local JOB=test-services

# Clean up
make dev:clean

See CI-CD.md for complete CI/CD documentation.

Common Issues

Tests Failing with "APP_ENV must be test"

Solution: Use the CLI commands (they set APP_ENV automatically):

# Correct
make test:integration

# Wrong (will fail)
pytest tests/integration/
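The error comes from an environment guard. Conceptually it behaves like this sketch; the actual check in the platform's conftest may differ:

```python
import os

def require_test_env() -> None:
    """Refuse to run outside the isolated test environment (illustrative)."""
    if os.environ.get("APP_ENV") != "test":
        raise RuntimeError("APP_ENV must be test")
```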

Database Connection Errors

Solution: Ensure services are running:

make service:status
make test:integration  # Starts services automatically

Test Cleanup Issues

Solution: Clean test resources:

# Remove test containers and volumes
docker compose down -v

# Restart tests
make test:integration

Best Practices

  1. Always use CLI commands for tests - They handle environment setup
  2. Write focused unit tests - Test one thing at a time
  3. Use fixtures for test data - Keep tests DRY
  4. Mock external services - Unit tests should be isolated
  5. Add assertions - Verify expected behavior explicitly
  6. Run tests before committing - Catch issues early
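As a sketch of practice 4, an external lookup can be patched out so the unit test never touches the network. The names check_reputation and classify are hypothetical, not platform APIs:

```python
from unittest.mock import patch

def check_reputation(sender: str) -> str:
    # Stands in for a real network call, which unit tests must avoid.
    raise RuntimeError("network call not allowed in unit tests")

def classify(sender: str) -> str:
    return "safe" if check_reputation(sender) == "good" else "suspicious"

def test_classify_with_mocked_reputation():
    # Patch the lookup so no I/O happens and the result is deterministic.
    with patch(__name__ + ".check_reputation", return_value="good"):
        assert classify("manager@company.com") == "safe"
```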

Testing Custom Analyzers

If you're developing a custom analyzer using spot-sdk, analyzer testing is separate from platform testing:

Analyzer Unit Tests

Custom analyzers should have their own test suites:

# In your analyzer repository
pytest tests/ -v

What to test:

  • Analyzer endpoints (health, analyze, capabilities)
  • Email parsing and analysis logic
  • AnalysisResult generation
  • Error handling and edge cases
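A test of result generation and edge cases might look like this sketch. The dataclass fields mirror the unit-test example earlier (threat_level, confidence_score) but are not the exact spot-sdk types, and the heuristic is a toy stand-in:

```python
from dataclasses import dataclass

@dataclass
class AnalysisResult:
    threat_level: str
    confidence_score: float

def analyze(email: dict) -> AnalysisResult:
    # Toy heuristic standing in for real analysis logic.
    suspicious = "urgent" in email.get("body_text", "").lower()
    return AnalysisResult("suspicious" if suspicious else "safe", 0.95)

def test_analyze_edge_case_empty_body():
    # Edge case from the checklist: the analyzer must not crash on missing fields.
    result = analyze({})
    assert result.threat_level in {"safe", "suspicious"}
```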

Integration with Platform

Once your analyzer passes its own tests, test it with the platform:

  1. Register analyzer in platform configuration (config/spot.yaml):
analyzers:
  my-analyzer:
    enabled: true
    url: "http://my-analyzer:8000"
    settings: {}
  2. Run platform integration tests:
make test:integration

Platform tests verify:

  • Analyzer registration and discovery
  • Orchestrator communication with analyzer
  • Result aggregation and workflow execution
  • Timeout and error handling

Analyzer Development Resources