# Testing Guide
This guide covers how to run tests effectively while developing and contributing to Gaspatchio.
## Quick Start

The project uses `uv` for package management and `pytest` for testing. All commands should be run from the `gaspatchio-core/bindings/python` directory.

```bash
# Run all tests (excluding performance benchmarks by default)
uv run pytest

# Run tests in a specific directory
uv run pytest tests/assumptions/

# Run a specific test file
uv run pytest tests/assumptions/test_curve.py

# Run tests matching a pattern
uv run pytest -k "test_load_curve"
```
## Performance Tests

Performance tests use `pytest-benchmark` and can be slow to run. By default, performance tests are skipped to keep development cycles fast.

### Running Performance Tests

```bash
# Run ONLY performance/benchmark tests
uv run pytest -m benchmark

# Run all tests INCLUDING performance tests
uv run pytest -m ""

# Run specific performance test groups
uv run pytest -m benchmark -k "lookup_scalar"
uv run pytest -m benchmark -k "load_curve"

# Run performance tests for a specific function
uv run pytest -m benchmark -k "assumption_lookup"
```
### Understanding Performance Test Markers

Performance tests are marked with `@pytest.mark.benchmark(group="test_group")`; a sketch of how a test declares its group follows the list below. The available groups include:
- Loading Tests: `load_curve_small`, `load_curve_large`, `load_wide_basic`, `load_wide_overflow`
- Lookup Tests: `lookup_scalar`, `lookup_small_batch`, `lookup_medium_batch`, `lookup_large_batch`
- Multi-key Tests: `lookup_multi_key`, `lookup_missing_keys`, `lookup_repeated_keys`
- Memory Tests: `memory_large_table`, `memory_wide_expansion`, `concurrent_lookups`
- Real-world Tests: `actuarial_projection`, `assumption_updates`
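As a minimal sketch of how a test ends up in one of these groups, assuming hypothetical `build_small_table` and `run_scalar_lookup` fixtures (the real tests live under `tests/`):

```python
# Sketch only: the fixtures below are hypothetical stand-ins for whatever
# builds the table or performs the lookup in the real test suite.
import pytest


@pytest.mark.benchmark(group="load_curve_small")
def test_load_curve_small(benchmark, build_small_table):
    # The benchmark fixture repeatedly times the callable it is given.
    benchmark(build_small_table)


@pytest.mark.benchmark(group="lookup_scalar")
def test_lookup_scalar(benchmark, run_scalar_lookup):
    benchmark(run_scalar_lookup)
```

The `-k` selections shown above match substrings of test names, so naming tests after their benchmark group keeps both filters aligned.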
### Environment-Aware Testing

Performance tests automatically adjust their scale based on the environment:

- CI Environment: Smaller datasets to ensure tests complete within time limits
- Local Development: Full-scale testing with larger datasets for realistic benchmarks

The tests detect CI environments via the `CI` environment variable.
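A minimal sketch of the idea, assuming a hypothetical `dataset_rows` fixture and illustrative row counts:

```python
# conftest.py (sketch): scale dataset size by environment. The fixture name
# and the row counts are illustrative, not the project's actual values.
import os

import pytest


@pytest.fixture
def dataset_rows() -> int:
    # CI runners get a smaller dataset so benchmarks finish within time limits;
    # local runs use a larger one for more realistic numbers.
    if os.environ.get("CI"):
        return 10_000
    return 1_000_000
```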
## Test Categories

### Unit Tests

Fast, focused tests that validate individual functions and components.

```bash
# Run all unit tests (default, excludes benchmarks)
uv run pytest

# Run specific unit test categories
uv run pytest tests/assumptions/test_curve.py
uv run pytest tests/assumptions/test_wide_basic.py
uv run pytest tests/assumptions/test_errors.py
```
### Integration Tests

Tests that validate end-to-end workflows and component interactions.

```bash
# Run integration tests
uv run pytest tests/assumptions/test_advanced.py
uv run pytest tests/assumptions/test_overflow.py
```

### Performance Tests

Comprehensive benchmarks that validate performance characteristics.

```bash
# Run all performance tests
uv run pytest -m benchmark

# Run performance tests with output
uv run pytest -m benchmark --benchmark-only --benchmark-verbose
```
## Test Development

### Writing New Tests

Follow these patterns when adding tests:

Unit Tests:

```python
import pytest
import polars as pl

import gaspatchio_core as gs


class TestNewFeature:
    def test_basic_functionality(self):
        """Test the basic happy path."""
        # Arrange
        df = pl.DataFrame({"age": [20, 21], "qx": [0.001, 0.002]})

        # Act
        result = gs.load_assumptions("test_table", df)

        # Assert
        assert len(result) == 2
        assert "age" in result.columns
```
Performance Tests:

```python
import pytest

import gaspatchio_core as gs


class TestNewFeaturePerformance:
    @pytest.mark.benchmark(group="new_feature")
    def test_new_feature_performance(self, benchmark):
        """Benchmark new feature performance."""
        data = ...  # Build the input for the operation under test

        def operation_to_benchmark():
            # Your operation here
            return gs.some_operation(data)

        result = benchmark(operation_to_benchmark)

        # Assertions about results and performance
        assert len(result) > 0
        assert benchmark.stats.stats.mean < 1.0  # Max 1 second
```
### Test Markers

Use appropriate markers for your tests:

```python
@pytest.mark.benchmark(group="operation_name")  # Performance tests
@pytest.mark.slow                               # Slow-running tests
@pytest.mark.skip(reason="Not implemented")     # Skip temporarily
```
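Custom markers such as `slow` need to be registered so pytest does not warn about unknown marks (the `benchmark` marker is registered by the `pytest-benchmark` plugin). A sketch of one way to do this, assuming it is not already handled in the project's pytest configuration:

```python
# conftest.py (sketch): register custom markers so pytest recognises them.
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: slow-running tests")
```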
## Debugging Tests

### Verbose Output

```bash
# Run with verbose output
uv run pytest -v

# Show all output (including print statements)
uv run pytest -s

# Stop on first failure
uv run pytest -x

# Run last failed tests only
uv run pytest --lf
```

### Performance Test Output

```bash
# Show benchmark statistics
uv run pytest -m benchmark --benchmark-only --benchmark-verbose

# Save benchmark results to file
uv run pytest -m benchmark --benchmark-json=results.json

# Compare benchmark results
uv run pytest -m benchmark --benchmark-compare=baseline.json
```
## Environment Setup

### Skip Performance Tests Permanently

If you want to skip performance tests for an entire development session:

```bash
# Set environment variable
export SKIP_PERFORMANCE_TESTS=true
uv run pytest  # Will skip all performance tests

# Or run once
SKIP_PERFORMANCE_TESTS=true uv run pytest
```
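How the variable is honoured is an implementation detail of the suite's `conftest.py`; a minimal sketch of one way to wire it up, assuming the skip applies to everything carrying the `benchmark` marker:

```python
# conftest.py (sketch): skip benchmark-marked tests when
# SKIP_PERFORMANCE_TESTS is set. Illustrative only; the suite's actual
# mechanism may differ.
import os

import pytest


def pytest_collection_modifyitems(config, items):
    if not os.environ.get("SKIP_PERFORMANCE_TESTS"):
        return
    skip_perf = pytest.mark.skip(reason="SKIP_PERFORMANCE_TESTS is set")
    for item in items:
        if "benchmark" in item.keywords:
            item.add_marker(skip_perf)
```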
### Shell Aliases

Add these to your shell profile (`.zshrc`, `.bashrc`) for convenience:

```bash
# Fast development testing (no benchmarks)
alias test-fast="uv run pytest"

# Performance testing only
alias test-perf="uv run pytest -m benchmark"

# All tests including performance
alias test-all="uv run pytest -m ''"

# Verbose testing for debugging
alias test-debug="uv run pytest -v -s"
```
## Continuous Integration

In CI environments, the test configuration automatically:

- Uses smaller datasets for performance tests
- Adjusts time limits for different machine capabilities
- Skips performance tests by default (same as local development)
- Can run performance tests explicitly when needed for benchmarking

To run performance tests in CI:

```bash
# In CI scripts
uv run pytest -m benchmark  # Explicit performance testing
```
## Troubleshooting

### Common Issues

**Tests running slowly:** Make sure you're not accidentally running performance tests:

```bash
# Check what tests are selected
uv run pytest --collect-only -q
```

**Performance tests failing:** Check if you're in a resource-constrained environment:

```bash
# Run with relaxed performance requirements
CI=true uv run pytest -m benchmark
```

**Import errors:** Make sure you're in the correct directory and have built the package:

```bash
cd gaspatchio-core/bindings/python
uv run pytest
```
### Getting Help

- Check test output with `-v` for verbose information
- Use `--tb=short` for shorter tracebacks
- Run `uv run pytest --markers` to see all available test markers
- Check the test files in `tests/` for examples of similar functionality