This part puts together the professional-grade risk metrics that institutional investors use daily: Value-at-Risk (both historical and parametric), Sharpe ratio, Sortino ratio, and portfolio beta. These are backward-looking statistical models that help quantify portfolio risk and risk-adjusted returns. They're powerful tools — and they have real limitations you need to understand.
## Professional Risk Metrics
| Metric | What it measures | Key limitation |
|---|---|---|
| Historical VaR 95% | Loss threshold not exceeded on 95% of days | Says nothing about the size of losses beyond the threshold |
| Historical VaR 99% | Loss threshold not exceeded on 99% of days | Understates black swans |
| Sharpe Ratio | Return per unit of total risk | Backward-looking, penalizes upside vol |
| Sortino Ratio | Return per unit of downside risk only | Needs enough downside observations to be stable |
| Portfolio Beta | Market sensitivity vs SPY | Assumes linear relationship |
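The portfolio beta row deserves a quick illustration: beta is the covariance of portfolio returns with benchmark returns, divided by the benchmark's variance. A minimal sketch, assuming you already have two aligned daily-return series (the `portfolio_beta` helper name is ours, not from the workflow above):

```python
import numpy as np
import pandas as pd

def portfolio_beta(port_returns: pd.Series, bench_returns: pd.Series) -> float:
    """Beta = Cov(portfolio, benchmark) / Var(benchmark)."""
    aligned = pd.concat([port_returns, bench_returns], axis=1).dropna()
    cov = aligned.cov()
    return float(cov.iloc[0, 1] / cov.iloc[1, 1])

# Sanity check on synthetic data: a portfolio that moves exactly 1.5x
# the market should come out with a beta of 1.5
rng = np.random.default_rng(0)
market = pd.Series(rng.normal(0.0004, 0.01, 500))
portfolio = 1.5 * market
print(round(portfolio_beta(portfolio, market), 2))  # → 1.5
```

In practice you would pass the portfolio return series built in `portfolio_var` below and SPY's daily returns as the benchmark.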
## VaR Calculations
Calculate both historical and parametric VaR at 95% and 99% confidence levels:
```python
import numpy as np
import pandas as pd
import yfinance as yf
from scipy import stats

def historical_var(returns: pd.Series, confidence: float = 0.95) -> float:
    """Empirical VaR: the return at the (1 - confidence) percentile."""
    return float(np.percentile(returns, (1 - confidence) * 100))

def parametric_var(returns: pd.Series, confidence: float = 0.95) -> float:
    """Gaussian VaR: mean plus the left-tail z-score times std."""
    z = stats.norm.ppf(1 - confidence)  # negative for confidence > 0.5
    return float(returns.mean() + z * returns.std())

def portfolio_var(positions: list, period: str = "1y") -> dict:
    tickers = [p["ticker"] for p in positions]
    prices = yf.download(tickers, period=period, auto_adjust=True, progress=False)["Close"]
    returns = prices.pct_change().dropna()
    # Weight each position by its current market value
    total_val = sum(p["shares"] * float(prices[p["ticker"]].iloc[-1]) for p in positions)
    weights = {p["ticker"]: (p["shares"] * float(prices[p["ticker"]].iloc[-1])) / total_val
               for p in positions}
    port_returns = sum(returns[t] * w for t, w in weights.items() if t in returns.columns)
    h95 = historical_var(port_returns, 0.95)
    h99 = historical_var(port_returns, 0.99)
    p95 = parametric_var(port_returns, 0.95)
    p99 = parametric_var(port_returns, 0.99)
    return {
        "historical_var_95_pct": round(h95 * 100, 3),
        "historical_var_99_pct": round(h99 * 100, 3),
        "parametric_var_95_pct": round(p95 * 100, 3),
        "parametric_var_99_pct": round(p99 * 100, 3),
        "interpretation": f"On 95% of days, the daily loss should not exceed {abs(round(h95 * 100, 2))}%",
    }
```
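To see the two estimators side by side without hitting the network, you can run them on simulated returns. On truly Gaussian data the historical and parametric estimates should land close together; the functions below mirror the ones defined above so the snippet is self-contained:

```python
import numpy as np
import pandas as pd
from scipy import stats

def historical_var(returns: pd.Series, confidence: float = 0.95) -> float:
    return float(np.percentile(returns, (1 - confidence) * 100))

def parametric_var(returns: pd.Series, confidence: float = 0.95) -> float:
    z = stats.norm.ppf(1 - confidence)
    return float(returns.mean() + z * returns.std())

rng = np.random.default_rng(7)
# Ten "years" of daily returns: ~0.05% mean, 1.5% daily vol
sim = pd.Series(rng.normal(0.0005, 0.015, 2520))

h95 = historical_var(sim)
p95 = parametric_var(sim)
print(f"historical VaR 95%: {h95:.4%}")
print(f"parametric VaR 95%: {p95:.4%}")
```

Both numbers are negative (they are loss thresholds); where they diverge on real data, the gap is itself informative about how non-normal your portfolio's returns are.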
## Sharpe & Sortino from FRED
Calculate risk-adjusted returns using the 3-month T-bill rate from FRED (no API key required):
```python
import numpy as np
import pandas as pd

def risk_adjusted_returns(returns: pd.Series) -> dict:
    import httpx
    # Get the 3-month T-bill rate from FRED (free, no key needed for the CSV endpoint)
    try:
        r = httpx.get("https://fred.stlouisfed.org/graph/fredgraph.csv?id=DGS3MO", timeout=10)
        lines = r.text.strip().split("\n")
        for line in reversed(lines[1:]):  # walk back to the latest non-missing value
            val = line.split(",")[1].strip()
            if val and val != ".":
                rf_daily = float(val) / 100 / 252
                break
        else:  # no usable value found: fall back to a 5% annual assumption
            rf_daily = 0.05 / 252
    except Exception:
        rf_daily = 0.05 / 252
    annual_return = returns.mean() * 252
    annual_vol = returns.std() * np.sqrt(252)
    rf_annual = rf_daily * 252
    excess = annual_return - rf_annual
    sharpe = excess / annual_vol if annual_vol > 0 else 0
    # Downside deviation approximated as the std of negative-return days only
    downside_vol = returns[returns < 0].std() * np.sqrt(252)
    sortino = excess / downside_vol if downside_vol > 0 else 0
    return {
        "annual_return_pct": round(annual_return * 100, 2),
        "annual_volatility_pct": round(annual_vol * 100, 2),
        "risk_free_rate_pct": round(rf_annual * 100, 2),
        "sharpe_ratio": round(sharpe, 3),
        "sortino_ratio": round(sortino, 3),
        "sharpe_interpretation": "excellent" if sharpe > 1.5 else "good" if sharpe > 1.0
                                 else "acceptable" if sharpe > 0.5 else "poor",
    }
```
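The arithmetic inside `risk_adjusted_returns` can be checked offline by swapping the FRED fetch for a fixed risk-free assumption. A minimal sketch on simulated returns (the 5% rate here is an assumption, not a FRED value):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Ten simulated years: ~0.1% mean daily return, 1% daily vol
daily = pd.Series(rng.normal(0.001, 0.01, 2520))

rf_annual = 0.05  # fixed 5% risk-free assumption instead of the FRED fetch
annual_return = daily.mean() * 252
annual_vol = daily.std() * np.sqrt(252)
excess = annual_return - rf_annual
sharpe = excess / annual_vol
downside_vol = daily[daily < 0].std() * np.sqrt(252)
sortino = excess / downside_vol

print(f"Sharpe:  {sharpe:.2f}")
print(f"Sortino: {sortino:.2f}")
```

Because the downside deviation only looks at negative days, it is smaller than total volatility, so the Sortino ratio comes out higher than the Sharpe ratio whenever excess return is positive.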
## Weekly Risk Dashboard
Automate weekly metrics reporting:
```yaml
name: risk_metrics_dashboard
schedule: "0 8 * * 1"  # every Monday at 08:00
steps:
  - load_positions:
      file: positions.yaml
  - fetch_prices:
      period: "1y"
  - portfolio_var:
      confidence_levels: [0.95, 0.99]
  - sharpe_sortino:
      risk_free_source: fred_dgs3mo
  - portfolio_beta:
      benchmark: SPY
  - notify:
      subject: "🧮 Weekly Risk | Sharpe {{ sharpe }} | VaR95 {{ var_95 }}%"
```
## Frequently Asked Questions (Part 4)
### What's a good Sharpe ratio?
Above 1.0 is good, above 1.5 is excellent. Below 0.5 suggests poor risk-adjusted returns for the volatility taken. As a worked example: a 12% annual return with a 4% risk-free rate and 16% volatility gives (12 − 4) / 16 = 0.5.
### Why does VaR underestimate tail risk?
VaR is based on historical distributions. Extreme events happen more frequently than normal distribution models predict — this is called fat-tail or leptokurtic risk.
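You can see this fat-tail effect directly by generating returns from a Student's t distribution (a standard stand-in for fat-tailed markets) and comparing the empirical 99% VaR with the Gaussian estimate. A sketch, assuming t-distributed daily returns with 5 degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Fat-tailed daily returns: Student's t with 5 degrees of freedom
fat = stats.t.rvs(df=5, scale=0.01, size=200_000, random_state=rng)

emp_99 = np.percentile(fat, 1)                           # empirical 99% VaR
norm_99 = fat.mean() + stats.norm.ppf(0.01) * fat.std()  # Gaussian 99% VaR

print(f"empirical 99% VaR: {emp_99:.4f}")
print(f"gaussian  99% VaR: {norm_99:.4f}")
# The Gaussian estimate is smaller in magnitude: it understates the fat tail
```

Deep in the tail the empirical loss threshold is noticeably larger than what a normal distribution with the same mean and variance would predict, which is exactly why parametric VaR understates black swans.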
### Where does OpenClaw get the risk-free rate?
From FRED's public CSV endpoint (DGS3MO — 3-month T-bill). No API key required.
Final Part: Continue to Part 5 — Rebalancing & Alert System — to build the rebalancing and alert engine.