Table Sensitivities

This guide covers the Python API for applying shocks to assumption tables. For the declarative JSON config format, see What-If Analysis.

Core Concept: Shock Existing Tables

Instead of creating separate assumption files for each scenario, apply shocks that transform your base tables at runtime:

from gaspatchio_core.assumptions import Table
from gaspatchio_core.scenarios.shocks import MultiplicativeShock

# Load your existing base table
mortality = Table(
    name="mortality",
    source="assumptions/mortality.parquet",
    dimensions={"age": "age", "duration": "duration"},
    value="qx"
)

# Create a shocked version (original unchanged)
stressed_mortality = mortality.with_shock(MultiplicativeShock(factor=1.2))

# Use in model
af.mort_rate = stressed_mortality.lookup(age=af.age, duration=af.duration)

The with_shock() method returns a new Table; the original is never modified.
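The copy-on-shock behavior can be sketched with a frozen dataclass (a minimal illustration of the pattern, not the library's actual implementation — SimpleTable is a hypothetical stand-in):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SimpleTable:
    """Hypothetical stand-in for Table, showing copy-on-shock."""
    name: str
    shocks: tuple = ()

    def with_shock(self, shock, name=None):
        # Return a new object with the shock appended; self is untouched.
        return replace(
            self,
            name=name or self.name,
            shocks=self.shocks + (shock,),
        )

base = SimpleTable(name="mortality")
stressed = base.with_shock("x1.2", name="mortality_stressed")

assert base.shocks == ()              # original unchanged
assert stressed.shocks == ("x1.2",)   # new object carries the shock
```

Because each call returns a fresh object, the same base table can safely feed many scenarios in one run.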


Shock Classes

Gaspatchio provides three shock types:

MultiplicativeShock

Scales values by a factor. Use for percentage changes.

from gaspatchio_core.scenarios.shocks import MultiplicativeShock

# 20% increase (multiply by 1.2)
shock = MultiplicativeShock(factor=1.2, table="mortality")

# 10% decrease (multiply by 0.9)
shock = MultiplicativeShock(factor=0.9, table="lapse")

# Target specific column within table
shock = MultiplicativeShock(factor=1.2, table="mortality", column="qx")

Parameter   Type    Description
factor      float   Multiplicative factor (1.2 = +20%, 0.8 = -20%)
table       str     Target table name
column      str     Optional: specific column to shock

AdditiveShock

Adds a constant delta. Use for basis point shifts.

from gaspatchio_core.scenarios.shocks import AdditiveShock

# Add 50 basis points
shock = AdditiveShock(delta=0.005, table="discount_rates")

# Subtract 100 basis points
shock = AdditiveShock(delta=-0.01, table="discount_rates")

Parameter   Type    Description
delta       float   Additive constant (+0.01 = +100bps)
table       str     Target table name
column      str     Optional: specific column to shock

OverrideShock

Replaces all values with a constant. Use for extreme scenarios.

from gaspatchio_core.scenarios.shocks import OverrideShock

# Zero out all lapses
shock = OverrideShock(value=0.0, table="lapse")

# Flat 5% discount rate
shock = OverrideShock(value=0.05, table="discount_rates")

Parameter   Type    Description
value       Any     Constant replacement value
table       str     Target table name
column      str     Optional: specific column to shock
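Conceptually, the three shock types transform each table value as follows (a plain-Python sketch of the semantics, not the library's implementation):

```python
def multiplicative(value, factor):
    # Scale by a factor: 1.2 means +20%.
    return value * factor

def additive(value, delta):
    # Shift by a constant: 0.005 means +50bps.
    return value + delta

def override(value, constant):
    # Ignore the original value entirely.
    return constant

assert abs(multiplicative(0.001, 1.2) - 0.0012) < 1e-12  # qx up 20%
assert abs(additive(0.03, 0.005) - 0.035) < 1e-12        # rate +50bps
assert override(0.04, 0.0) == 0.0                        # zero out
```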

Applying Shocks to Tables

Single Shock

# Apply one shock
stressed = base_table.with_shock(shock)

# With custom name
stressed = base_table.with_shock(shock, name="mortality_stressed")

Chaining Multiple Shocks

Shocks apply sequentially:

# First multiply by 2, then add 0.001
double_shocked = mortality.with_shock(
    MultiplicativeShock(factor=2.0)
).with_shock(
    AdditiveShock(delta=0.001)
)
# Original value 0.001 becomes: 0.001 * 2 + 0.001 = 0.003
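Sequential application amounts to a left fold over the shock list. A sketch, assuming each shock reduces to a value-to-value function:

```python
from functools import reduce

# Each shock as a value -> value function, applied left to right.
shocks = [
    lambda v: v * 2.0,    # MultiplicativeShock(factor=2.0)
    lambda v: v + 0.001,  # AdditiveShock(delta=0.001)
]

def apply_all(value, shocks):
    return reduce(lambda v, shock: shock(v), shocks, value)

result = apply_all(0.001, shocks)
assert abs(result - 0.003) < 1e-12  # 0.001 * 2 + 0.001
```

Note that order matters: adding first and multiplying second would give 0.004 instead.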

Batch Creation with from_shocks()

Create multiple shocked tables at once:

from gaspatchio_core.assumptions import Table
from gaspatchio_core.scenarios.shocks import MultiplicativeShock

shock_specs = {
    "BASE": [],  # Empty = no shocks
    "UP_20": [MultiplicativeShock(factor=1.2)],
    "DOWN_20": [MultiplicativeShock(factor=0.8)],
}

# Returns dict of Tables
tables = Table.from_shocks(base_mortality, shock_specs)
# tables["BASE"], tables["UP_20"], tables["DOWN_20"]

Config Parsing Functions

parse_scenario_config()

Converts JSON config to shock objects:

from gaspatchio_core.scenarios import parse_scenario_config

config = [
    {"id": "BASE"},
    {"id": "MORT_UP_20", "shocks": [{"table": "mortality", "multiply": 1.2}]},
    {"id": "RATES_DOWN", "shocks": [{"table": "discount_rates", "add": -0.005}]},
]

scenarios = parse_scenario_config(config)
# Returns: dict[str, list[Shock]]
# {
#     "BASE": [],
#     "MORT_UP_20": [MultiplicativeShock(factor=1.2, table="mortality")],
#     "RATES_DOWN": [AdditiveShock(delta=-0.005, table="discount_rates")],
# }
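The key mapping the parser performs can be sketched in plain Python ("multiply" selects a multiplicative shock, "add" an additive one). This is an illustration of the mapping, not the library's parser; shock objects are shown as plain dicts:

```python
def parse_entry(entry):
    """Map one config entry to (scenario_id, list of shock descriptors)."""
    shocks = []
    for spec in entry.get("shocks", []):
        if "multiply" in spec:
            shocks.append({"type": "multiplicative",
                           "factor": spec["multiply"],
                           "table": spec["table"]})
        elif "add" in spec:
            shocks.append({"type": "additive",
                           "delta": spec["add"],
                           "table": spec["table"]})
    return entry["id"], shocks

config = [
    {"id": "BASE"},
    {"id": "MORT_UP_20", "shocks": [{"table": "mortality", "multiply": 1.2}]},
]
scenarios = dict(parse_entry(e) for e in config)
assert scenarios["BASE"] == []
assert scenarios["MORT_UP_20"][0]["factor"] == 1.2
```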

describe_scenarios()

Generates human-readable descriptions for audit:

from gaspatchio_core.scenarios import describe_scenarios

print(describe_scenarios(scenarios, output_format="text"))

Output:

Scenario Configuration

Scenario: BASE
  (no shocks)

Scenario: MORT_UP_20
  - multiply mortality by 1.2

Scenario: RATES_DOWN
  - add -0.005 to discount_rates


Sensitivity Analysis Helper

For parameter sweeps, use sensitivity_analysis():

from gaspatchio_core.scenarios import sensitivity_analysis

# Mortality sensitivity sweep
scenarios = sensitivity_analysis(
    table="mortality",
    shock_type="multiplicative",
    values=[0.8, 0.9, 1.0, 1.1, 1.2],
    include_base=True
)
# Returns: {"BASE": [], "mortality_0.8": [...], "mortality_0.9": [...], ...}

# Interest rate parallel shifts
scenarios = sensitivity_analysis(
    table="discount_rates",
    shock_type="additive",
    values=[-0.01, -0.005, 0.0, 0.005, 0.01]
)

# Custom naming
scenarios = sensitivity_analysis(
    table="mortality",
    shock_type="multiplicative",
    values=[0.9, 1.1],
    scenario_format="MORT_{value}"
)
# Returns: {"MORT_0.9": [...], "MORT_1.1": [...]}
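The scenario dict such a sweep produces can be sketched as follows (shock descriptors shown as plain dicts; this is not the library's sensitivity_analysis implementation):

```python
def sweep(table, values, include_base=False, fmt="{table}_{value}"):
    """Build a scenario dict: one entry per shock value, named via fmt."""
    scenarios = {"BASE": []} if include_base else {}
    for v in values:
        name = fmt.format(table=table, value=v)
        scenarios[name] = [{"table": table, "factor": v}]
    return scenarios

s = sweep("mortality", [0.9, 1.1], include_base=True)
assert list(s) == ["BASE", "mortality_0.9", "mortality_1.1"]
assert s["mortality_1.1"] == [{"table": "mortality", "factor": 1.1}]
```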

Complete Workflow Example

Step 1: Load Base Tables

import polars as pl
from gaspatchio_core.assumptions import Table

base_tables = {
    "mortality": Table(
        name="mortality",
        source=pl.read_parquet("assumptions/mortality.parquet"),
        dimensions={"age": "age", "duration": "duration"},
        value="qx"
    ),
    "lapse": Table(
        name="lapse",
        source=pl.read_parquet("assumptions/lapse.parquet"),
        dimensions={"duration": "duration"},
        value="lapse_rate"
    ),
    "discount_rates": Table(
        name="discount_rates",
        source=pl.read_parquet("assumptions/discount_rates.parquet"),
        dimensions={"year": "year"},
        value="rate"
    ),
}

Step 2: Parse Scenario Config

from gaspatchio_core.scenarios import parse_scenario_config

config = [
    {"id": "BASE"},
    {"id": "ADVERSE", "shocks": [
        {"table": "mortality", "multiply": 1.2},
        {"table": "lapse", "multiply": 0.8},
        {"table": "discount_rates", "add": -0.01}
    ]},
]

scenarios = parse_scenario_config(config)

Step 3: Run Model for Each Scenario

from gaspatchio_core import ActuarialFrame

def run_scenario(scenario_id, shocks, base_tables, model_points):
    """Run model with shocked tables for one scenario."""

    # Create shocked tables for this scenario
    tables = base_tables.copy()
    for shock in shocks:
        if shock.table in tables:
            tables[shock.table] = tables[shock.table].with_shock(shock)

    # Run model with shocked tables
    af = ActuarialFrame(model_points)
    af.scenario_id = scenario_id

    # Lookups use shocked tables
    af.mort_rate = tables["mortality"].lookup(age=af.age, duration=af.duration)
    af.lapse_rate = tables["lapse"].lookup(duration=af.duration)
    af.disc_rate = tables["discount_rates"].lookup(year=af.year)

    # ... rest of model calculations

    return af.collect()

# Run all scenarios
model_points = pl.read_parquet("model_points.parquet")
results = []

for scenario_id, shocks in scenarios.items():
    result = run_scenario(scenario_id, shocks, base_tables, model_points)
    results.append(result)

# Combine results
all_results = pl.concat(results)

Batch Processing

For memory-efficient processing of many scenarios:

from gaspatchio_core.scenarios import batch_scenarios, with_scenarios

scenario_ids = list(range(1, 10001))  # 10K scenarios

all_results = []
for batch in batch_scenarios(scenario_ids, batch_size=1000):
    af = with_scenarios(model_points, batch)
    result = run_model(af)
    all_results.append(result.collect())
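Conceptually, batch_scenarios yields fixed-size chunks of the scenario list so only one batch is in memory at a time. The chunking itself can be sketched in plain Python (not the library implementation):

```python
def chunks(items, batch_size):
    """Yield consecutive slices of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

scenario_ids = list(range(1, 10001))  # 10K scenarios
batches = list(chunks(scenario_ids, 1000))

assert len(batches) == 10
assert batches[0][0] == 1
assert batches[-1][-1] == 10000
```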

Best Practices

1. Always Include BASE

Every analysis should include an unshocked baseline:

config = [
    {"id": "BASE"},  # Always first
    {"id": "STRESS_1", "shocks": [...]},
]
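A small pre-flight guard (a hypothetical helper, not part of the library) can fail fast if a config omits the unshocked baseline:

```python
def check_base_included(config):
    """Raise if the scenario config has no BASE entry."""
    ids = {entry["id"] for entry in config}
    if "BASE" not in ids:
        raise ValueError("scenario config must include a BASE scenario")

check_base_included([{"id": "BASE"}, {"id": "STRESS_1"}])  # passes
```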

2. Use Consistent Table Names

Standardize table names across your models:

Table Name       Description
mortality        Death rates
lapse            Withdrawal rates
discount_rates   Interest/discount rates
expense          Per-policy expenses
inflation        Expense inflation
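Standardized names also make configs easy to check mechanically. A hypothetical pre-flight check (not part of the library) that flags any shock referencing an unknown table:

```python
STANDARD_TABLES = {"mortality", "lapse", "discount_rates",
                   "expense", "inflation"}

def unknown_tables(config):
    """Return table names used in shocks that aren't standardized."""
    used = {spec["table"]
            for entry in config
            for spec in entry.get("shocks", [])}
    return used - STANDARD_TABLES

config = [
    {"id": "BASE"},
    {"id": "S1", "shocks": [{"table": "morality", "multiply": 1.2}]},
]
assert unknown_tables(config) == {"morality"}  # typo caught before a run
```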

3. Validate Before Running

Parse and describe scenarios before execution:

scenarios = parse_scenario_config(config)
print(describe_scenarios(scenarios))  # Review before running

4. Store Configs for Reproducibility

Save the config alongside results:

import json

# Save config
with open("scenario_config.json", "w") as f:
    json.dump(config, f, indent=2)

# Save audit trail
with open("scenario_description.txt", "w") as f:
    f.write(describe_scenarios(scenarios, output_format="text"))

See Also