Natural Language → Executable Shock Configs¶
Gaspatchio's shock system is intentionally LLM-friendly: an LLM can take an English question and emit a JSON/dict config that is parsed, validated, executed, and then summarized back into English.
You do NOT regenerate assumption tables for every scenario. Instead, scenarios are expressed as small overlays on top of your existing baseline assumptions:
- Your governed base tables stay untouched.
- Scenario configs are tiny, diffable artifacts.
- Shocks are applied at runtime (or used to create shocked table copies in memory), not by exporting new “Scenario_123.xlsx” assumption tables.
Why configs (not prose) are the point¶
When an actuary asks a question in English, the model run still needs deterministic inputs.
A JSON/dict config gives you:
- Reproducibility: rerun the exact scenario by reusing the same config.
- Auditability: store the config + `describe_scenarios()` output alongside results.
- Composability: combine simple "lego brick" operations (multiply/add/set/clip/max/min/pipeline) into regulatory scenarios.
- No assumption-table churn: configs describe transformations, not regenerated tables.
The contract: what the LLM is allowed to output¶
At minimum, the LLM emits a scenario config list:
```json
[
  {"id": "BASE"},
  {
    "id": "SCENARIO_NAME",
    "shocks": [
      {"table": "mortality", "multiply": 1.2}
    ]
  }
]
```
Then Gaspatchio does the deterministic part:
```python
from gaspatchio_core.scenarios import parse_scenario_config, describe_scenarios

scenarios = parse_scenario_config(config)
print(describe_scenarios(scenarios, output_format="markdown"))
```
That split matters: the LLM proposes, the engine enforces (schema + validation + execution).
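The engine-side gate can be sketched in plain Python. This is a hypothetical validator (`ALLOWED_OPS` and `validate_config` are illustrative names, not Gaspatchio API); the real `parse_scenario_config` enforces a fuller schema:

```python
# Minimal sketch of the contract an LLM-emitted config must satisfy.
# Hypothetical validator; parse_scenario_config enforces much more.
ALLOWED_OPS = {"multiply", "add", "set", "clip", "max", "min", "pipeline"}

def validate_config(config: list[dict]) -> list[str]:
    """Return a list of validation errors (empty list = valid)."""
    errors = []
    ids = [sc.get("id") for sc in config]
    if len(ids) != len(set(ids)):
        errors.append("scenario ids must be unique")
    for sc in config:
        if "id" not in sc:
            errors.append("every scenario needs an 'id'")
        for shock in sc.get("shocks", []):
            if "table" not in shock:
                errors.append(f"{sc.get('id')}: shock missing 'table'")
            if not (ALLOWED_OPS & shock.keys()):
                errors.append(f"{sc.get('id')}: shock has no operation")
    return errors

config = [
    {"id": "BASE"},
    {"id": "MORT_UP_20", "shocks": [{"table": "mortality", "multiply": 1.2}]},
]
errors = validate_config(config)  # empty list: config is valid
```

Rejecting a malformed config before execution is what keeps the "LLM proposes" half of the split safe.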
Actuarial prompts → configs that get executed¶
Below are examples of the kind of messy, dynamic English you actually get—and the configs an LLM can generate.
1) Duration-limited lapse stress (cohort + window)¶
English
For TERM only, increase lapses by 25% but only in durations 1–3. Keep everything else base.
Generated config
```json
[
  {"id": "BASE"},
  {
    "id": "LAPSE_UP_25_DUR_1_3_TERM",
    "shocks": [
      {
        "table": "lapse",
        "multiply": 1.25,
        "where": {"product": "TERM", "duration": {"between": [1, 3]}}
      }
    ]
  }
]
```
What executes
- The base `lapse` table is not replaced.
- The lookup/transform applies `× 1.25` only where the filter matches.
2) Mass lapse at time 0 (classic regulatory shape)¶
English
Add a 40% mass lapse at t=0 for UL, and cap lapse at 100%.
Generated config
```json
[
  {"id": "BASE"},
  {
    "id": "MASS_LAPSE_UL",
    "shocks": [
      {
        "table": "lapse",
        "add": 0.40,
        "when": {"t": {"eq": 0}},
        "where": {"product": "UL"},
        "clip": [null, 1.0]
      }
    ]
  }
]
```
3) “Solvency II lapse up” as a composable pipeline¶
English
Solvency II lapse up: multiply by 1.5 but cap at 100%.
Generated config
```json
[
  {"id": "BASE"},
  {
    "id": "SII_LAPSE_UP",
    "shocks": [
      {
        "table": "lapse",
        "pipeline": [
          {"multiply": 1.5},
          {"clip": {"max": 1.0}}
        ]
      }
    ]
  }
]
```
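A pipeline is just a left-to-right fold of operations over a rate. A minimal sketch covering the subset of operations used above (`apply_pipeline` is illustrative, not Gaspatchio API):

```python
def apply_pipeline(rate: float, pipeline: list[dict]) -> float:
    """Fold a list of shock operations over a single rate, in order."""
    for step in pipeline:
        if "multiply" in step:
            rate *= step["multiply"]
        elif "add" in step:
            rate += step["add"]
        elif "clip" in step:
            lo = step["clip"].get("min")
            hi = step["clip"].get("max")
            if lo is not None:
                rate = max(rate, lo)
            if hi is not None:
                rate = min(rate, hi)
    return rate

pipeline = [{"multiply": 1.5}, {"clip": {"max": 1.0}}]
shocked_rate = apply_pipeline(0.80, pipeline)  # 0.80 * 1.5 = 1.2, capped to 1.0
```

Order matters: multiply-then-cap is the Solvency II shape, and a cap-then-multiply pipeline would give a different answer.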
4) Combined stress (IFRS17 / ORSA style)¶
English
Worst-case combo: mortality +20%, expenses +10% (and cap negative expenses at 0), discount rates -100bps.
Generated config
```json
[
  {"id": "BASE"},
  {
    "id": "ADVERSE_COMBO",
    "shocks": [
      {"table": "mortality", "multiply": 1.2},
      {"table": "expense", "multiply": 1.1, "clip": [0.0, null]},
      {"table": "disc_rates", "add": -0.01}
    ]
  }
]
```
5) Table sensitivity sweep (actuarial “ladder” question)¶
English
Give me PV impact for rates: -200, -100, -50, base, +50, +100 bps.
Generated config
```json
[
  {"id": "RATES_DOWN_200BPS", "shocks": [{"table": "disc_rates", "add": -0.02}]},
  {"id": "RATES_DOWN_100BPS", "shocks": [{"table": "disc_rates", "add": -0.01}]},
  {"id": "RATES_DOWN_50BPS", "shocks": [{"table": "disc_rates", "add": -0.005}]},
  {"id": "BASE"},
  {"id": "RATES_UP_50BPS", "shocks": [{"table": "disc_rates", "add": 0.005}]},
  {"id": "RATES_UP_100BPS", "shocks": [{"table": "disc_rates", "add": 0.01}]}
]
```
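Ladders like this are mechanical, so it is often easier to have code (or the LLM) generate the config from a list of basis-point steps. A small helper sketch (`rate_ladder` is an illustrative name):

```python
def rate_ladder(bps_steps: list[int], table: str = "disc_rates") -> list[dict]:
    """Build a scenario ladder config from basis-point steps (0 = BASE)."""
    scenarios = []
    for bps in bps_steps:
        if bps == 0:
            scenarios.append({"id": "BASE"})
        else:
            direction = "UP" if bps > 0 else "DOWN"
            scenarios.append({
                "id": f"RATES_{direction}_{abs(bps)}BPS",
                # 1 bp = 0.0001 as an additive shift on the rate table
                "shocks": [{"table": table, "add": bps / 10_000}],
            })
    return scenarios

config = rate_ladder([-200, -100, -50, 0, 50, 100])
```

Generating the config programmatically keeps the ladder exhaustive and the IDs consistent, and the result is still the same small, diffable artifact.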
Running the scenarios (no new tables; just overlays)¶
There are two common execution patterns:
Pattern A: build shocked table copies in memory¶
This is the most literal “no new assumption tables” approach: you load base tables once, then create derived tables per scenario.
```python
import polars as pl
from gaspatchio_core.assumptions import Table
from gaspatchio_core.scenarios import parse_scenario_config

# 1) Load baseline tables once
mortality = Table(
    name="mortality",
    source="assumptions/mortality.parquet",
    dimensions={"age": "age", "duration": "duration"},
    value="qx",
)
lapse = Table(
    name="lapse",
    source="assumptions/lapse.parquet",
    dimensions={"duration": "duration"},
    value="rate",
)

# 2) Parse LLM-produced scenario config
scenarios = parse_scenario_config(config)  # dict[str, list[Shock]]

# 3) For each scenario, apply only relevant shocks to each base table
by_scenario = {}
for scenario_id, shocks in scenarios.items():
    mort_s = mortality
    lapse_s = lapse
    for s in shocks:
        if getattr(s, "table", None) == "mortality":
            mort_s = mort_s.with_shock(s)
        if getattr(s, "table", None) == "lapse":
            lapse_s = lapse_s.with_shock(s)
    by_scenario[scenario_id] = {"mortality": mort_s, "lapse": lapse_s}

# 4) Run your model using by_scenario[scenario_id]["mortality"].lookup(...)
```
Pattern B: scenario-aware runs (vectorized) + scenario-varying assumptions¶
If your model is already using with_scenarios(), you can expand model points across scenario IDs and keep everything grouped by scenario_id. (This works especially well for large sweeps.)
See Running Models Across Scenarios.
Turning results into English answers + charts¶
Gaspatchio gives you scenario-indexed results (usually a Polars DataFrame grouped by scenario_id).
From there, you typically:
- Compute deltas vs `BASE` (PV, BEL, CSM, SCR/RBC metrics that your model already computes).
- Render charts (bar charts for scenario ladders, waterfalls for attribution, time-series for surplus trajectories).
- Have the LLM summarize the result table into an executive narrative.
A minimal sketch:
```python
import polars as pl

summary = (
    results
    .group_by("scenario_id")
    .agg(pl.col("pv_net_cf").sum().alias("pv"))
)

base = summary.filter(pl.col("scenario_id") == "BASE").select("pv").item()
summary = summary.with_columns((pl.col("pv") - pl.lit(base)).alias("pv_delta"))

# LLM prompt input = summary.to_dicts() + the scenario config + (optional) describe_scenarios()
```
Note
Gaspatchio doesn't force a charting stack. In practice teams use Plotly/Matplotlib/Altair, then embed the resulting HTML/PNG into their report pipeline.
Best practice: store the config, not a re-exported table¶
If you're doing anything audit/regulatory-adjacent, the artifact to keep is:
- the scenario config (JSON)
- the `describe_scenarios()` audit trail
- the scenario-level output table(s)
That’s the whole point: no regenerating assumption tables for every scenario; just run baseline + overlays.
See Also¶
- What-If Analysis - the declarative config format
- Shock Operations - full shock grammar (filters, pipeline, max/min, clip)
- Table Sensitivities - apply shocks to existing tables in Python
- Running Models Across Scenarios - vectorized scenario execution