Analytic Module¶
The analytic module provides metrics for evaluating trading strategies and signals, along with statistical validation tools.
Architecture¶
```mermaid
flowchart TB
    subgraph Strategy Metrics
        A[StrategyMetric] --> B[Main Metrics]
        A --> C[Extended Metrics]
    end
    subgraph Signal Metrics
        D[SignalMetric] --> E[Classification]
        D --> F[Correlation]
        D --> G[Timing]
    end
    subgraph Visualization
        H[StrategyMainResult]
        I[StrategyDistributionResult]
        J[StrategyEquityResult]
    end
    subgraph Statistical Validation
        K[MonteCarloSimulator]
        L[BootstrapValidator]
        M[StatisticalTestsValidator]
    end
    style A fill:#16a34a,stroke:#22c55e,color:#fff
    style D fill:#2563eb,stroke:#3b82f6,color:#fff
    style H fill:#7c3aed,stroke:#8b5cf6,color:#fff
    style K fill:#ea580c,stroke:#f97316,color:#fff
```
Strategy Metrics¶
Strategy metrics compute performance indicators during backtesting. All metrics inherit from `StrategyMetric` and are registered via semantic decorators (e.g., `@sf.metric("name")`).
Base Class¶
`signalflow.analytic.base.StrategyMetric` (dataclass)
Main Metrics¶
Core performance metrics computed during backtest execution.
TotalReturnMetric¶
`signalflow.analytic.strategy.main_strategy_metrics.TotalReturnMetric` (dataclass)
DrawdownMetric¶
`signalflow.analytic.strategy.main_strategy_metrics.DrawdownMetric` (dataclass)

Bases: `StrategyMetric`
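To illustrate what a drawdown metric tracks, here is a minimal stdlib sketch (not the library's implementation) of maximum peak-to-trough drawdown over an equity curve:

```python
def max_drawdown(equity: list[float]) -> float:
    """Return the maximum peak-to-trough drawdown as a fraction of the peak."""
    peak = float("-inf")
    worst = 0.0
    for value in equity:
        peak = max(peak, value)          # running high-water mark
        worst = max(worst, (peak - value) / peak)
    return worst

# 120 -> 80 is the worst drop: (120 - 80) / 120 = 1/3
print(max_drawdown([100.0, 120.0, 90.0, 110.0, 80.0]))
```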
WinRateMetric¶
`signalflow.analytic.strategy.main_strategy_metrics.WinRateMetric` (dataclass)

Bases: `StrategyMetric`
SharpeRatioMetric¶
`signalflow.analytic.strategy.main_strategy_metrics.SharpeRatioMetric` (dataclass)

`SharpeRatioMetric(initial_capital: float = 10000.0, window_size: int = 100, risk_free_rate: float = 0.0, _returns_history: list[float] = list())`

Bases: `StrategyMetric`
BalanceAllocationMetric¶
`signalflow.analytic.strategy.main_strategy_metrics.BalanceAllocationMetric` (dataclass)

Bases: `StrategyMetric`
Extended Metrics¶
Advanced performance metrics for deeper analysis.
SortinoRatioMetric¶
Risk-adjusted return using only downside volatility.
`signalflow.analytic.strategy.extended_metrics.SortinoRatioMetric` (dataclass)
Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `window_size` | `int` | 100 | Rolling window for returns history |
| `risk_free_rate` | `float` | 0.0 | Risk-free rate for ratio calculation |
| `target_return` | `float` | 0.0 | Target return threshold for downside deviation |

Output: `{"sortino_ratio": float}`
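The underlying formula can be sketched in a few lines of stdlib Python (illustrative only; the metric itself operates on a rolling window during the backtest):

```python
import statistics

def sortino_ratio(returns, risk_free_rate=0.0, target_return=0.0):
    # Excess return over the risk-free rate, divided by downside deviation:
    # only returns below the target contribute to the risk term.
    excess = statistics.mean(returns) - risk_free_rate
    downside = [min(0.0, r - target_return) ** 2 for r in returns]
    downside_dev = statistics.mean(downside) ** 0.5
    return excess / downside_dev if downside_dev > 0 else float("inf")

print(sortino_ratio([0.02, -0.01, 0.03, -0.02, 0.01]))  # 0.006 / 0.01 = 0.6
```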
CalmarRatioMetric¶
Return relative to maximum drawdown.
`signalflow.analytic.strategy.extended_metrics.CalmarRatioMetric` (dataclass)

Output: `{"calmar_ratio": float, "annualized_return": float, "max_drawdown_calmar": float}`
ProfitFactorMetric¶
Gross profit divided by gross loss.
`signalflow.analytic.strategy.extended_metrics.ProfitFactorMetric` (dataclass)

Output: `{"profit_factor": float, "gross_profit": float, "gross_loss": float}`
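The computation is simple enough to show directly; this is an illustrative sketch, not the library's code:

```python
def profit_factor(trade_pnls):
    # Gross profit divided by (absolute) gross loss.
    gross_profit = sum(p for p in trade_pnls if p > 0)
    gross_loss = abs(sum(p for p in trade_pnls if p < 0))
    return gross_profit / gross_loss if gross_loss else float("inf")

print(profit_factor([120.0, -50.0, 80.0, -30.0]))  # 200 / 80 = 2.5
```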
AverageTradeMetric¶
Average profit, loss, and trade duration statistics.
`signalflow.analytic.strategy.extended_metrics.AverageTradeMetric` (dataclass)
Output:

| Key | Description |
|---|---|
| `avg_profit` | Mean profit from winning trades |
| `avg_loss` | Mean loss from losing trades |
| `avg_trade` | Mean PnL across all trades |
| `avg_duration_minutes` | Mean trade duration |
| `avg_win_duration` | Mean duration of winning trades |
| `avg_loss_duration` | Mean duration of losing trades |
ExpectancyMetric¶
Mathematical expectancy of profit per trade.
`signalflow.analytic.strategy.extended_metrics.ExpectancyMetric` (dataclass)

Formula: `expectancy = win_rate * avg_win - loss_rate * avg_loss`

Output: `{"expectancy": float, "expectancy_ratio": float}`
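The formula can be verified with a short, self-contained sketch (illustrative; assumes no zero-PnL trades):

```python
def expectancy(trade_pnls):
    wins = [p for p in trade_pnls if p > 0]
    losses = [-p for p in trade_pnls if p < 0]
    win_rate = len(wins) / len(trade_pnls)
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = sum(losses) / len(losses) if losses else 0.0
    # loss_rate is taken as 1 - win_rate
    return win_rate * avg_win - (1 - win_rate) * avg_loss

print(expectancy([100.0, -40.0, 60.0, -20.0]))  # 0.5 * 80 - 0.5 * 30 = 25.0
```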
RiskRewardMetric¶
Risk/reward ratio (average win / average loss).
`signalflow.analytic.strategy.extended_metrics.RiskRewardMetric` (dataclass)

Output: `{"risk_reward_ratio": float, "payoff_ratio": float}`
MaxConsecutiveMetric¶
Tracks consecutive winning and losing streaks.
`signalflow.analytic.strategy.extended_metrics.MaxConsecutiveMetric` (dataclass)
Output:

| Key | Description |
|---|---|
| `max_consecutive_wins` | Maximum winning streak |
| `max_consecutive_losses` | Maximum losing streak |
| `current_win_streak` | Current winning streak |
| `current_loss_streak` | Current losing streak |
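A minimal sketch of streak tracking that mirrors the four output keys (illustrative, not the library's implementation; a zero-PnL trade counts toward the losing streak here):

```python
def streaks(trade_pnls):
    max_wins = max_losses = cur_wins = cur_losses = 0
    for pnl in trade_pnls:
        if pnl > 0:
            cur_wins += 1
            cur_losses = 0
        else:
            cur_losses += 1
            cur_wins = 0
        max_wins = max(max_wins, cur_wins)
        max_losses = max(max_losses, cur_losses)
    return {
        "max_consecutive_wins": max_wins,
        "max_consecutive_losses": max_losses,
        "current_win_streak": cur_wins,
        "current_loss_streak": cur_losses,
    }

print(streaks([1.0, 2.0, -1.0, -1.0, -2.0, 3.0]))
```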
Usage Example¶
```python
from signalflow.analytic.strategy import (
    SortinoRatioMetric,
    CalmarRatioMetric,
    ProfitFactorMetric,
    ExpectancyMetric,
)
from signalflow.strategy.runner import BacktestRunner

# Create runner with extended metrics
# (broker, entry_rule, and exit_rule are assumed to be defined elsewhere)
runner = BacktestRunner(
    strategy_id="my_strategy",
    broker=broker,
    entry_rules=[entry_rule],
    exit_rules=[exit_rule],
    metrics=[
        SortinoRatioMetric(window_size=100, risk_free_rate=0.02),
        CalmarRatioMetric(),
        ProfitFactorMetric(),
        ExpectancyMetric(),
    ],
)

state = runner.run(raw_data, signals)

# Access metric values
print(f"Sortino: {state.metrics.get('sortino_ratio', 0):.2f}")
print(f"Calmar: {state.metrics.get('calmar_ratio', 0):.2f}")
print(f"Profit Factor: {state.metrics.get('profit_factor', 0):.2f}")
print(f"Expectancy: ${state.metrics.get('expectancy', 0):.2f}")
```
Signal Metrics¶
Signal metrics analyze the quality and effectiveness of trading signals.
Base Class¶
`signalflow.analytic.base.SignalMetric` (dataclass)
Base class for signal metrics computation and visualization.
compute ¶
`compute(raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> tuple[dict[str, Any] | None, dict[str, Any]]`

Compute metrics from signals.

Returns:

| Type | Description |
|---|---|
| `tuple[dict[str, Any] \| None, dict[str, Any]]` | Tuple of computed metrics (or `None`) and a plotting context dict |
Source code in src/signalflow/analytic/base.py
plot ¶
`plot(computed_metrics: dict[str, Any] | None, plots_context: dict[str, Any], raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> list[go.Figure] | go.Figure | None`

Generate visualization from computed metrics.

Returns:

| Type | Description |
|---|---|
| `list[Figure] \| Figure \| None` | Single figure or list of figures |
Source code in src/signalflow/analytic/base.py
SignalClassificationMetric¶
`signalflow.analytic.signals.classification_metrics.SignalClassificationMetric` (dataclass)

`SignalClassificationMetric(positive_labels: list = list(), negative_labels: list = list(), chart_height: int = 900, chart_width: int = 1400, roc_n_thresholds: int = 100)`

Bases: `SignalMetric`
Analyze signal classification performance against labels.
Computes standard classification metrics including:

- Precision, Recall, F1 Score
- Confusion Matrix
- ROC Curve and AUC
- Signal strength distribution
Requires labels to be provided.
__post_init__ ¶
Set default label mappings if not provided.
Source code in src/signalflow/analytic/signals/classification_metrics.py
compute ¶
compute(raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> tuple[dict[str, Any] | None, dict[str, Any]]
Compute classification metrics.
Source code in src/signalflow/analytic/signals/classification_metrics.py
plot ¶
plot(computed_metrics: dict[str, Any] | None, plots_context: dict[str, Any], raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> go.Figure
Generate classification metrics visualization.
Source code in src/signalflow/analytic/signals/classification_metrics.py
SignalProfileMetric¶
`signalflow.analytic.signals.profile_metrics.SignalProfileMetric` (dataclass)

`SignalProfileMetric(look_ahead: int = 1440, quantiles: tuple[float, float] = (0.25, 0.75), chart_height: int = 900, chart_width: int = 1400)`

Bases: `SignalMetric`
Analyze post-signal price behavior profiles with statistical aggregations.
Computes mean, median, percentile profiles of price changes after signals, including cumulative max/min statistics for understanding typical signal outcomes.
compute ¶
compute(raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> tuple[dict[str, Any] | None, dict[str, Any]]
Calculate performance metrics for signals across all pairs.
Source code in src/signalflow/analytic/signals/profile_metrics.py
plot ¶
plot(computed_metrics: dict[str, Any] | None, plots_context: dict[str, Any], raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> go.Figure
Generate visualization from computed metrics.
Source code in src/signalflow/analytic/signals/profile_metrics.py
SignalDistributionMetric¶
`signalflow.analytic.signals.distribution_metrics.SignalDistributionMetric` (dataclass)

`SignalDistributionMetric(n_bars: int = 10, rolling_window_minutes: int = 60, ma_window_hours: int = 12, chart_height: int = 1200, chart_width: int = 1400)`

Bases: `SignalMetric`
Analyze signal distribution across pairs and time.
compute ¶
compute(raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> tuple[dict[str, Any] | None, dict[str, Any]]
Compute signal distribution metrics.
Source code in src/signalflow/analytic/signals/distribution_metrics.py
plot ¶
plot(computed_metrics: dict[str, Any] | None, plots_context: dict[str, Any], raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> go.Figure
Generate distribution visualization.
Source code in src/signalflow/analytic/signals/distribution_metrics.py
SignalCorrelationMetric¶
Analyzes correlation between signal strength and actual returns.
`signalflow.analytic.signals.correlation_metrics.SignalCorrelationMetric` (dataclass)

`SignalCorrelationMetric(look_ahead_periods: list[int] = [15, 60, 240, 1440], strength_col: str = 'strength', chart_height: int = 900, chart_width: int = 1400)`

Bases: `SignalMetric`
Analyze correlation between signal strength and actual returns.
Computes Pearson and Spearman correlations for different look-ahead periods, and analyzes returns by signal strength quintiles.
compute ¶
compute(raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> tuple[dict[str, Any] | None, dict[str, Any]]
Compute signal-return correlations.
Source code in src/signalflow/analytic/signals/correlation_metrics.py
plot ¶
plot(computed_metrics: dict[str, Any] | None, plots_context: dict[str, Any], raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> go.Figure
Generate correlation visualization.
Source code in src/signalflow/analytic/signals/correlation_metrics.py
Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `look_ahead_periods` | `list[int]` | [15, 60, 240, 1440] | Minutes to look ahead for return calculation |
| `strength_col` | `str` | 'strength' | Signal column used as the strength measure |

Output:

| Key | Description |
|---|---|
| `correlations` | Pearson/Spearman correlations by period |
| `quintile_analysis` | Performance by signal strength quintile |
| `total_signals` | Number of signals analyzed |
Visualization: Scatter plots of signal strength vs. returns, quintile performance bars.
SignalTimingMetric¶
Analyzes optimal hold time after signal entry.
`signalflow.analytic.signals.correlation_metrics.SignalTimingMetric` (dataclass)

`SignalTimingMetric(max_look_ahead: int = 1440, sample_points: int = 48, chart_height: int = 800, chart_width: int = 1200)`

Bases: `SignalMetric`
Analyze optimal holding period for signals.
Evaluates signal performance at different holding periods to find optimal exit timing based on mean return, Sharpe ratio, or win rate.
compute ¶
compute(raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> tuple[dict[str, Any] | None, dict[str, Any]]
Compute optimal timing metrics.
Source code in src/signalflow/analytic/signals/correlation_metrics.py
plot ¶
plot(computed_metrics: dict[str, Any] | None, plots_context: dict[str, Any], raw_data: RawData, signals: Signals, labels: DataFrame | None = None) -> go.Figure
Generate timing optimization visualization.
Source code in src/signalflow/analytic/signals/correlation_metrics.py
Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_look_ahead` | `int` | 1440 | Maximum minutes to analyze |
| `sample_points` | `int` | 48 | Number of time points to sample |

Output:

| Key | Description |
|---|---|
| `optimal_hold_time_mean` | Time with highest mean return |
| `optimal_hold_time_sharpe` | Time with highest Sharpe ratio |
| `peak_mean_return` | Maximum mean return achieved |
| `series` | Time series data for plotting |
Visualization: Line charts showing mean return, Sharpe ratio, and win rate over time.
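The core idea, scanning candidate holding periods for the one with the best mean return, can be sketched with hypothetical data (illustrative only, not the library's implementation):

```python
import statistics

# Hypothetical 1-minute price path and signal entry indices
prices = [100, 101, 103, 102, 104, 103, 105, 104, 103, 102]
entries = [0, 2, 4]
horizons = [1, 2, 3]  # candidate holding periods in minutes

def mean_return(prices, entries, horizon):
    # Mean forward return over `horizon` bars across all entries
    # that have enough data ahead of them.
    rets = [(prices[i + horizon] - prices[i]) / prices[i]
            for i in entries if i + horizon < len(prices)]
    return statistics.mean(rets)

best = max(horizons, key=lambda h: mean_return(prices, entries, h))
print(f"optimal hold time: {best} minutes")
```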
Signal Metrics Usage¶
```python
from signalflow.analytic.signals import (
    SignalCorrelationMetric,
    SignalTimingMetric,
    SignalClassificationMetric,
)

# Create metrics
correlation = SignalCorrelationMetric(
    look_ahead_periods=[15, 30, 60],
)
timing = SignalTimingMetric(
    max_look_ahead=120,
    sample_points=12,
)

# Compute metrics
corr_result, corr_ctx = correlation.compute(raw_data, signals)
timing_result, timing_ctx = timing.compute(raw_data, signals)

# Generate visualizations
fig_corr = correlation.plot(corr_result, corr_ctx, raw_data, signals)
fig_timing = timing.plot(timing_result, timing_ctx, raw_data, signals)

# Access quantitative results
print(f"30min correlation: {corr_result['quant']['correlations']['period_30']['pearson_corr']:.3f}")
print(f"Optimal hold time: {timing_result['quant']['optimal_hold_time_mean']} minutes")
```
Result Visualization¶
Classes for visualizing backtest results.
StrategyMainResult¶
Main dashboard with equity curve, trades, and key metrics.
`signalflow.analytic.strategy.result_metrics.StrategyMainResult` (dataclass)

Bases: `StrategyMetric`
Strategy-level visualization based on results['metrics_df'] (Polars DataFrame).
StrategyPairResult¶
Per-pair performance breakdown.
`signalflow.analytic.strategy.result_metrics.StrategyPairResult` (dataclass)

`StrategyPairResult(pairs: list[str] = list(), price_col: str = 'close', ts_col: str = 'timestamp', pair_col: str = 'pair', trade_id_col: str = 'id', entry_ts_col: str = 'entry_ts', exit_ts_col: str = 'exit_ts', size_col: str = 'size', height: int = 760, template: str = 'plotly_white', hovermode: str = 'x unified')`

Bases: `StrategyMetric`
Pair visualization with price line, entry/exit markers, and net position size.
StrategyDistributionResult¶
Returns distribution analysis with histogram and QQ plot.
`signalflow.analytic.strategy.result_metrics.StrategyDistributionResult` (dataclass)
Features:
- Returns histogram with normal distribution overlay
- QQ plot for normality assessment
- Monthly returns heatmap (Year × Month)
- Distribution statistics (skew, kurtosis)
StrategyEquityResult¶
Equity curve with optional benchmark comparison.
`signalflow.analytic.strategy.result_metrics.StrategyEquityResult` (dataclass)
Features:
- Strategy equity curve
- Optional benchmark overlay
- Drawdown highlighting
- Performance statistics panel
Visualization Usage¶
```python
from signalflow.analytic.strategy import (
    StrategyMainResult,
    StrategyDistributionResult,
    StrategyEquityResult,
)

# After running backtest
state = runner.run(raw_data, signals)

# Main dashboard
main_viz = StrategyMainResult()
fig_main = main_viz.plot(state, raw_data)
fig_main.show()

# Distribution analysis
dist_viz = StrategyDistributionResult()
fig_dist = dist_viz.plot(state, raw_data)
fig_dist.show()

# Equity with benchmark
equity_viz = StrategyEquityResult(benchmark_col="close")
fig_equity = equity_viz.plot(state, raw_data)
fig_equity.show()
```
Statistical Validation¶
Tools for validating strategy robustness.
MonteCarloSimulator¶
`signalflow.analytic.stats.MonteCarloSimulator` (dataclass)

`MonteCarloSimulator(n_simulations: int = 10000, random_seed: int | None = None, confidence_levels: tuple[float, ...] = (0.05, 0.5, 0.95), ruin_threshold: float = 0.2)`

Bases: `StatisticalValidator`
Monte Carlo simulation via trade shuffling.
Randomizes trade execution order to estimate distribution of outcomes under different trade sequences. This helps assess strategy robustness and estimate risk metrics like probability of ruin.
Attributes:

| Name | Type | Description |
|---|---|---|
| `n_simulations` | `int` | Number of simulations to run (default: 10,000) |
| `random_seed` | `int \| None` | Random seed for reproducibility (`None` for random) |
| `confidence_levels` | `tuple[float, ...]` | Percentile levels to compute (default: 5%, 50%, 95%) |
| `ruin_threshold` | `float` | Max drawdown threshold for risk of ruin (default: 20%) |
Example:

```python
from signalflow.analytic.stats import MonteCarloSimulator

mc = MonteCarloSimulator(n_simulations=10_000, ruin_threshold=0.30)
mc_result = mc.validate(backtest_result)
print(mc_result.summary())
mc_result.plot()
```
Note
This simulation shuffles trade order but keeps trade PnLs unchanged. It answers: "What if these same trades occurred in a different order?"
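That question can be sketched with the stdlib alone (illustrative data and function names, not the library's implementation): shuffle the trade order, replay the equity path, and collect the max drawdown of each path.

```python
import random

def monte_carlo_max_drawdowns(trade_pnls, initial_capital=10_000.0,
                              n_simulations=1_000, seed=42):
    """Shuffle trade order and record the max drawdown of each equity path."""
    rng = random.Random(seed)
    drawdowns = []
    for _ in range(n_simulations):
        pnls = trade_pnls[:]
        rng.shuffle(pnls)            # same trades, different order
        equity = peak = initial_capital
        worst = 0.0
        for pnl in pnls:
            equity += pnl
            peak = max(peak, equity)
            worst = max(worst, (peak - equity) / peak)
        drawdowns.append(worst)
    return drawdowns

dds = sorted(monte_carlo_max_drawdowns(
    [250.0, -120.0, 300.0, -200.0, 150.0, -80.0]))
print(f"5th/50th/95th percentile drawdown: "
      f"{dds[50]:.2%} / {dds[500]:.2%} / {dds[950]:.2%}")
```

The fraction of simulations whose drawdown exceeds `ruin_threshold` estimates the probability of ruin.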
validate ¶
Run Monte Carlo simulation on backtest trades.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `BacktestResult` | BacktestResult containing trades to simulate | *required* |

Returns:

| Type | Description |
|---|---|
| `MonteCarloResult` | MonteCarloResult with simulation distributions and risk metrics |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If no trades available for simulation |
Source code in src/signalflow/analytic/stats/monte_carlo.py
BootstrapValidator¶
`signalflow.analytic.stats.BootstrapValidator` (dataclass)

`BootstrapValidator(n_bootstrap: int = 5000, method: Literal['bca', 'percentile', 'block'] = 'bca', block_size: int | None = None, confidence_level: float = 0.95, random_seed: int | None = None, metrics: tuple[str, ...] = ('sharpe_ratio', 'sortino_ratio', 'calmar_ratio', 'profit_factor', 'win_rate'))`

Bases: `StatisticalValidator`
Bootstrap confidence interval estimation with BCa and block support.
Supports:

- BCa (bias-corrected accelerated) bootstrap for general metrics
- Percentile bootstrap for simple intervals
- Block bootstrap for time-series data with autocorrelation
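The simplest of the three, the percentile bootstrap, can be sketched with the stdlib (illustrative only; BCa and block variants add bias/acceleration corrections and block resampling on top of this idea):

```python
import random
import statistics

def percentile_bootstrap_ci(returns, stat=statistics.mean,
                            n_bootstrap=5_000, confidence=0.95, seed=0):
    """Resample with replacement and take percentiles of the statistic."""
    rng = random.Random(seed)
    estimates = sorted(
        stat([rng.choice(returns) for _ in returns])
        for _ in range(n_bootstrap)
    )
    lo = int(n_bootstrap * (1 - confidence) / 2)
    hi = int(n_bootstrap * (1 + confidence) / 2)
    return estimates[lo], estimates[hi]

low, high = percentile_bootstrap_ci([0.01, -0.02, 0.03, 0.005, -0.01, 0.02])
print(f"95% CI for mean return: [{low:.4f}, {high:.4f}]")
```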
Attributes:

| Name | Type | Description |
|---|---|---|
| `n_bootstrap` | `int` | Number of bootstrap resamples (default: 5,000) |
| `method` | `Literal['bca', 'percentile', 'block']` | Bootstrap method (`"bca"`, `"percentile"`, `"block"`) |
| `block_size` | `int \| None` | Block size for block bootstrap (auto if `None`) |
| `confidence_level` | `float` | Confidence level (default: 0.95) |
| `random_seed` | `int \| None` | Random seed for reproducibility |
| `metrics` | `tuple[str, ...]` | Metrics to compute intervals for |
Example:

```python
from signalflow.analytic.stats import BootstrapValidator

bootstrap = BootstrapValidator(
    n_bootstrap=5000,
    method="bca",
    metrics=("sharpe_ratio", "sortino_ratio", "profit_factor"),
)
result = bootstrap.validate(backtest_result)
print(result.intervals["sharpe_ratio"])
```
validate ¶
Run bootstrap analysis on backtest result.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `BacktestResult` | BacktestResult to analyze | *required* |

Returns:

| Type | Description |
|---|---|
| `BootstrapResult` | BootstrapResult with confidence intervals for each metric |
Source code in src/signalflow/analytic/stats/bootstrap.py
StatisticalTestsValidator¶
`signalflow.analytic.stats.StatisticalTestsValidator` (dataclass)

`StatisticalTestsValidator(sr_benchmark: float = 0.0, confidence_level: float = 0.95, annualization_factor: float = np.sqrt(252))`

Bases: `StatisticalValidator`
Statistical significance tests for trading performance.
Implements:

- Probabilistic Sharpe Ratio (PSR): P(SR > benchmark | observed data)
- Minimum Track Record Length (MinTRL): trades needed for significance
Based on Bailey & Lopez de Prado (2012): "The Sharpe Ratio Efficient Frontier"
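The PSR from that paper can be sketched with the stdlib (illustrative, not the library's code): the observed Sharpe ratio's standard error is adjusted for skewness and kurtosis, then compared to the benchmark via the normal CDF.

```python
import math

def probabilistic_sharpe_ratio(sr_observed, sr_benchmark, n,
                               skew=0.0, kurt=3.0):
    """P(true SR > benchmark), per Bailey & Lopez de Prado (2012)."""
    # Standard error of the SR estimate over n observations,
    # adjusted for higher moments of the return distribution.
    sr_std = math.sqrt(
        (1 - skew * sr_observed + (kurt - 1) / 4 * sr_observed**2) / (n - 1)
    )
    z = (sr_observed - sr_benchmark) / sr_std
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Per-period SR of 0.1 over 250 observations vs. a benchmark of 0
print(f"PSR: {probabilistic_sharpe_ratio(0.1, 0.0, n=250):.2%}")
```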
Attributes:

| Name | Type | Description |
|---|---|---|
| `sr_benchmark` | `float` | Benchmark Sharpe ratio to compare against (default: 0) |
| `confidence_level` | `float` | Required confidence level (default: 0.95) |
| `annualization_factor` | `float` | Factor to annualize Sharpe ratio (default: `sqrt(252)`) |
Example:

```python
from signalflow.analytic.stats import StatisticalTestsValidator

tests = StatisticalTestsValidator(
    sr_benchmark=0.5,  # compare against an SR of 0.5
    confidence_level=0.95,
)
result = tests.validate(backtest_result)
print(f"PSR: {result.psr:.2%}")
print(f"Min trades needed: {result.min_track_record_length}")
```
validate ¶
Run statistical significance tests.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `BacktestResult` | BacktestResult to analyze | *required* |

Returns:

| Type | Description |
|---|---|
| `StatisticalTestResult` | StatisticalTestResult with PSR and MinTRL values |
Source code in src/signalflow/analytic/stats/statistical_tests.py
Convenience Functions¶
```python
from signalflow.analytic import (
    monte_carlo,
    bootstrap,
    statistical_tests,
    plot_monte_carlo,
    plot_bootstrap,
    plot_validation_summary,
)

# Monte Carlo simulation
mc_result = monte_carlo(returns, n_simulations=1000)
fig_mc = plot_monte_carlo(mc_result)

# Bootstrap analysis
bs_result = bootstrap(returns, n_bootstrap=1000)
fig_bs = plot_bootstrap(bs_result)

# Statistical tests
test_result = statistical_tests(returns)
fig_summary = plot_validation_summary(test_result)
```
Metrics Summary Table¶
Strategy Metrics¶
| Metric | Output Keys | Description |
|---|---|---|
| `TotalReturnMetric` | `final_return` | Total portfolio return |
| `DrawdownMetric` | `max_drawdown`, `current_drawdown` | Drawdown tracking |
| `WinRateMetric` | `win_rate`, `total_trades` | Win percentage |
| `SharpeRatioMetric` | `sharpe_ratio` | Risk-adjusted return |
| `SortinoRatioMetric` | `sortino_ratio` | Downside-adjusted return |
| `CalmarRatioMetric` | `calmar_ratio`, `annualized_return` | Return / max drawdown |
| `ProfitFactorMetric` | `profit_factor`, `gross_profit`, `gross_loss` | Profit vs loss ratio |
| `AverageTradeMetric` | `avg_profit`, `avg_loss`, `avg_trade`, `avg_duration_minutes` | Trade statistics |
| `ExpectancyMetric` | `expectancy`, `expectancy_ratio` | Expected profit per trade |
| `RiskRewardMetric` | `risk_reward_ratio`, `payoff_ratio` | Avg win / avg loss |
| `MaxConsecutiveMetric` | `max_consecutive_wins`, `max_consecutive_losses` | Streak tracking |
Signal Metrics¶
| Metric | Key Features |
|---|---|
| `SignalClassificationMetric` | Precision, recall, F1 score, ROC/AUC |
| `SignalProfileMetric` | Post-signal price behavior profiles |
| `SignalDistributionMetric` | Signal distribution across pairs and time |
| `SignalCorrelationMetric` | Strength vs. returns correlation |
| `SignalTimingMetric` | Optimal hold time analysis |
See Also¶
- Strategy Module: Backtest execution and entry/exit rules
- Core API: `StrategyState`, `Portfolio`, `Position`
- Visualization: Pipeline visualization tools