# Knowledge Store

Gaspatchio includes two searchable knowledge stores accessible via the `gspio` CLI. These enable both humans and LLMs to find framework documentation and actuarial knowledge while building models.
## Two Knowledge Stores

| Command | What It Searches | Use When |
|---|---|---|
| `gspio docs` | Gaspatchio framework documentation | Finding API methods, accessor patterns, code examples |
| `gspio knowledge` | Actuarial knowledge base | Understanding IFRS 17, Solvency II, actuarial concepts |
## `gspio docs`
Search Gaspatchio framework documentation for API methods, accessors, code patterns, and examples.
gspio docs "cumulative survival probability"
gspio docs "projection accessor methods"
gspio docs "excel pv function" -n 10
What it finds:
- ActuarialFrame methods and properties
- Accessor methods (`.projection`, `.excel`, `.finance`, `.date`)
- Code examples from working models
- Function signatures and parameters
### Example Output
```json
{
  "results": [
    {
      "text": "cumulative_survival() calculates the cumulative survival probability...",
      "source": "gaspatchio_core/accessors/projection.py",
      "content_type": "code_example",
      "score": 0.92
    },
    {
      "text": "The projection accessor provides actuarial-friendly methods...",
      "source": "docs/api/projection.md",
      "content_type": "markdown",
      "score": 0.87
    }
  ],
  "query": "cumulative survival",
  "version": "0.4.2"
}
```
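Because the response is plain JSON, it can be skimmed before reading the full excerpts. A minimal sketch, assuming `jq` is installed (it is not part of gspio) and using the field names from the example above:

```bash
# Show only the score and source of each hit, not the full excerpt text
gspio docs "cumulative survival probability" | jq -r '.results[] | "\(.score)  \(.source)"'
```

The same approach works on `gspio knowledge` output if it follows the same response shape.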
## `gspio knowledge`
Search the actuarial knowledge base for regulatory frameworks, concepts, and standards.
gspio knowledge "IFRS 17 contractual service margin"
gspio knowledge "Solvency II technical provisions"
gspio knowledge "lapse rate assumptions" -n 10
What it finds:
- Regulatory frameworks (IFRS 17, Solvency II, US GAAP)
- Actuarial concepts (CSM, risk adjustment, PAA, BBA)
- Industry standards and guidance
- Mortality, morbidity, and lapse assumption guidance
## Options

| Flag | Description |
|---|---|
| `--limit`, `-n` | Maximum number of results (default: 5) |
| `--answer`, `-a` | Return a RAG-generated answer instead of search results |
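For example, the long and short forms are interchangeable (the query strings below are purely illustrative):

```bash
# Equivalent ways to cap the number of returned excerpts
gspio knowledge "lapse rate assumptions" --limit 3
gspio knowledge "lapse rate assumptions" -n 3

# Request a synthesized answer instead of raw search results
gspio docs "what is when then otherwise?" --answer
```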
## Search Results vs Generated Answers
Prefer search results (default). They return multiple excerpts you can evaluate against your current context.
```bash
# Preferred - returns multiple relevant excerpts
gspio docs "how do I shift values by one period?"

# Use sparingly - only for quick summaries
gspio docs "what is when then otherwise?" --answer
```
The `--answer` flag asks the API to generate a synthesized answer using retrieval-augmented generation (RAG). Reserve this for quick conceptual summaries when you don't need to weigh multiple options.
## For LLMs

The `gspio` CLI is designed for LLM discoverability. Run `gspio --help` to see all available commands and guidance.
### Recommended Workflow

- Search first: Use `gspio docs` or `gspio knowledge` without `--answer` (see the sketch below)
- Evaluate results: Review the returned excerpts in context
- Use `--answer` sparingly: Only when you need a quick conceptual summary
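A sketch of that workflow end to end; the queries are taken from the examples on this page, and `jq` (an external tool, not part of gspio) is only used to skim scores and sources:

```bash
# 1. Search first - no --answer, so raw excerpts come back
gspio docs "how do I shift values by one period?"

# 2. Evaluate - skim scores and sources before reading excerpts in full
gspio docs "how do I shift values by one period?" | jq -r '.results[] | "\(.score)  \(.source)"'

# 3. Fall back to --answer only when a quick summary is genuinely enough
gspio docs "what is when then otherwise?" --answer
```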
### Example LLM Usage
When building a model and you need to:
```bash
# Find how to use a Gaspatchio feature
gspio docs "ActuarialFrame filtering"

# Understand accessor methods
gspio docs "projection.previous_period"

# Look up actuarial regulations
gspio knowledge "IFRS 17 CSM amortization"

# Get mortality assumption guidance
gspio knowledge "mortality improvement factors"
```
## Help Output

LLMs should run `gspio --help` as the first discovery action:
```text
Usage: gspio [OPTIONS] COMMAND [ARGS]...

  Gaspatchio CLI for running actuarial models and discovering knowledge.

  When building a model and you need to find:
  • How to use a Gaspatchio feature → gspio docs "your question"
  • Actuarial concepts or regulations → gspio knowledge "your question"

  IMPORTANT: Always prefer search results (default) over --answer.
  Search returns multiple excerpts you can evaluate against your
  current context. Reserve --answer for quick summaries only.

╭─ Knowledge Discovery ────────────────────────────────────────╮
│ docs       Search Gaspatchio framework documentation         │
│            (API methods, accessors, code patterns)           │
│ knowledge  Search actuarial knowledge base                   │
│            (IFRS 17, Solvency II, mortality tables)          │
╰──────────────────────────────────────────────────────────────╯
```
## Architecture
```text
┌─────────────────┐      HTTP/JSON       ┌─────────────────────────────┐
│ gspio           │ ───────────────────► │ API                         │
│ (thin client)   │                      │ - Embeddings                │
│                 │ ◄─────────────────── │ - Vector search             │
│ - Sends query   │    JSON response     │ - Optional LLM generation   │
│ - Sends version │                      │ - LanceDB backend           │
└─────────────────┘                      └─────────────────────────────┘
```
Key points:

- `gspio` is a thin client - only HTTP calls and JSON responses
- The API handles embeddings and vector search
- Version-aware - `gspio` passes its version for version-specific docs
- Fail fast - if the API is unavailable, an error is returned immediately
## Error Handling
If the API is unavailable:
```json
{
  "error": "API unavailable",
  "status": 503,
  "message": "Knowledge API is temporarily unavailable. Please retry."
}
```
The CLI exits non-zero on errors. LLMs should handle retries.
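Because the CLI fails fast and signals errors through its exit code, retry logic can live in the calling script. A minimal sketch; the retry count and delay are arbitrary choices, not gspio defaults:

```bash
# Retry the search a few times before giving up.
for attempt in 1 2 3; do
  if gspio knowledge "IFRS 17 risk adjustment" -n 3; then
    break  # success: results are already on stdout
  fi
  echo "attempt $attempt failed, retrying..." >&2
  sleep 2
done
```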
## Why CLI over MCP?
We previously offered an MCP (Model Context Protocol) server but found that LLMs work more reliably with CLIs.
Self-Documentation via `--help` — CLIs are self-documenting by convention. An LLM can run `gspio --help` to discover capabilities on-demand—no pre-configuration or schema registration required. As Warp's research notes: "As long as the tool has a --help option, you can ask Agent Mode to learn it, and then immediately start doing tasks with it."
Token Efficiency — Benchmarking research found that MCP servers often consume more tokens than equivalent CLI tools. Many MCPs function as "unnecessary wrappers around existing tools, potentially degrading agent performance by poisoning the context with excessive output or tool options." CLIs can be further optimized through piping—filtering output with `| grep` or `| head -n 10` to reduce token usage.
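For instance, the JSON output shown earlier can be trimmed before it reaches the model's context (grep and head are standard shell tools, not part of gspio):

```bash
# Keep only the source lines of the top hits instead of the full payload
gspio docs "ActuarialFrame filtering" | grep '"source"' | head -n 10
```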
Robust Error Handling — CLIs provide battle-tested error communication: exit codes (0 = success, non-zero = specific error types), stderr/stdout separation, and parseable output. MCP debugging is notoriously difficult.
Training Data Advantage — LLMs have extensive training data on CLI conventions. Standard patterns like `--help`, `--version`, `-n 10`, and JSON output are well-understood. Novel MCP schemas require the LLM to interpret unfamiliar interfaces.
Zero Configuration — CLIs work immediately with no server installation or JSON configuration. Anthropic themselves acknowledged that MCP "installation was too complex."
The New Stack summarizes it well: "The CLI is where we do defined tasks. There is one desirable outcome, and probably one sensible way to achieve it. And this is precisely why LLMs are so good at the command-line interface."