Documentation, examples and further information of the ta4j project
This project is maintained by the ta4j Organization.
Walk-forward analysis tests whether a strategy or predictive model holds up as training windows move forward through time. Current ta4j exposes two layers:

- Convenience entry points on BarSeriesManager and BacktestExecutor.
- The generic org.ta4j.core.walkforward package.

For normal strategy validation, start with WalkForwardConfig.defaultConfig(series) and keep the resulting configuration fixed for every candidate in the same comparison cycle.
```java
WalkForwardConfig config = WalkForwardConfig.defaultConfig(series);
BarSeriesManager manager = new BarSeriesManager(series);
StrategyWalkForwardExecutionResult result = manager.runWalkForward(strategy, config);
List<Num> outOfSampleScores = result.outOfSampleCriterionValues(new GrossReturnCriterion());
```
When you also need the standard one-series backtest result, use BacktestExecutor:
```java
BacktestExecutor executor = new BacktestExecutor(series);
BacktestExecutor.BacktestAndWalkForwardResult combined =
        executor.executeWithWalkForward(strategy, config);
```
Use the maintained example ta4jexamples.walkforward.WalkForward for the current end-to-end pattern.
Use WalkForwardEngine when the candidate is not just a Strategy, for example a ranked signal generator, Elliott Wave scenario scorer, or model family.
Core roles:
| Role | Class |
|---|---|
| Fold geometry and horizons | WalkForwardConfig |
| Train/test/holdout splits | AnchoredExpandingWalkForwardSplitter, WalkForwardSplit |
| Candidate predictions | PredictionProvider, RankedPrediction, PredictionSnapshot |
| Realized outcomes | OutcomeLabeler |
| Metrics | WalkForwardMetric |
| Run outputs | WalkForwardRunResult, WalkForwardRuntimeReport, WalkForwardExperimentManifest |
| Candidate ranking | WalkForwardTuner, WalkForwardObjective, WalkForwardLeaderboard |
| Holdout checks | WalkForwardHoldoutValidator |
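The fold geometry produced by an anchored, expanding splitter can be illustrated without any ta4j types. The sketch below is a hypothetical stand-in, not the AnchoredExpandingWalkForwardSplitter API: the train window is anchored at bar 0 and grows each fold, while the test window rolls forward by its own length.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the ta4j API) of anchored, expanding
 * walk-forward fold geometry: the train window always starts at bar 0
 * and grows; each test window follows it and advances by its own length.
 */
public class AnchoredSplitSketch {

    /** A fold expressed as inclusive-start / exclusive-end bar indices. */
    public record Fold(int trainStart, int trainEnd, int testStart, int testEnd) {}

    public static List<Fold> split(int barCount, int initialTrainSize, int testSize) {
        List<Fold> folds = new ArrayList<>();
        int trainEnd = initialTrainSize;
        while (trainEnd + testSize <= barCount) {
            folds.add(new Fold(0, trainEnd, trainEnd, trainEnd + testSize));
            trainEnd += testSize; // anchored at 0, so the train window expands
        }
        return folds;
    }
}
```

For a 100-bar series with an initial 40-bar train window and 20-bar test windows, this yields three folds whose train windows grow from 40 to 80 bars.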
```java
WalkForwardConfig config = WalkForwardConfig.defaultConfig(series);
WalkForwardEngine<MyContext, MySignal, Boolean> engine = new WalkForwardEngine<>(
        new AnchoredExpandingWalkForwardSplitter(),
        provider,
        labeler,
        List.of(
                WalkForwardMetric.topKHitRate("top1Hit", 1, (prediction, outcome) -> outcome),
                WalkForwardMetric.brierScore("brier", 1,
                        outcome -> outcome ? series.numFactory().one() : series.numFactory().zero())));

WalkForwardRunResult<MySignal, Boolean> run = engine.run(
        series,
        context,
        config,
        "candidate-a",
        Map.of("family", "example"));
```
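The two metrics used above reduce to simple formulas. The following is a plain-Java sketch of the underlying math with hypothetical helpers, independent of ta4j's WalkForwardMetric wrappers: the Brier score is the mean squared error between a forecast probability and the 0/1 outcome, and the top-K hit rate is the fraction of decisions where at least one of the top-K ranked predictions came true.

```java
import java.util.List;

/**
 * Plain-Java sketch of the math behind the two example metrics,
 * independent of ta4j's WalkForwardMetric wrappers.
 */
public class MetricSketch {

    /** Brier score: mean squared error between forecast probability and the 0/1 outcome. */
    public static double brierScore(List<Double> probabilities, List<Boolean> outcomes) {
        double sum = 0;
        for (int i = 0; i < probabilities.size(); i++) {
            double target = outcomes.get(i) ? 1.0 : 0.0;
            double diff = probabilities.get(i) - target;
            sum += diff * diff;
        }
        return sum / probabilities.size();
    }

    /** Top-K hit rate: fraction of decisions where any of the top-K ranked predictions hit. */
    public static double topKHitRate(List<List<Boolean>> rankedHitsPerDecision, int k) {
        int hits = 0;
        for (List<Boolean> ranked : rankedHitsPerDecision) {
            if (ranked.subList(0, Math.min(k, ranked.size())).contains(Boolean.TRUE)) {
                hits++;
            }
        }
        return (double) hits / rankedHitsPerDecision.size();
    }
}
```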
The engine records leakage-audit rows for every decision/horizon pair and skips labels whose future window would cross the fold boundary.
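The skip rule amounts to a single inequality per decision/horizon pair. The following is a hypothetical sketch of that guard, not ta4j code: a label whose outcome window would reach past the end of the test fold cannot be computed without peeking at data outside the fold, so it is skipped.

```java
/**
 * Hypothetical sketch of the leakage guard described above (not ta4j code).
 * A label that needs `horizon` future bars after the decision bar must fit
 * entirely inside the fold, otherwise it would leak out-of-fold data.
 */
public class LeakageGuardSketch {

    /**
     * @param decisionIndex bar index at which the prediction is made
     * @param horizon       number of future bars the outcome label needs
     * @param foldEndIndex  exclusive end index of the current test fold
     * @return true when the label can be computed without crossing the fold boundary
     */
    public static boolean labelFitsInFold(int decisionIndex, int horizon, int foldEndIndex) {
        return decisionIndex + horizon < foldEndIndex;
    }
}
```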
WalkForwardTuner evaluates a list of WalkForwardCandidate<C> values, keeps the top K candidates, and ranks them with a WalkForwardObjective.
Use WalkForwardObjective.weighted(...) when you want a composite objective with metric weights, guardrails, and fold-variance penalties. If the provider emits probabilities, calibration modes on WalkForwardTuner can compare raw probabilities against Platt scaling or an isotonic challenger with a guarded selection rule.
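One common way to build such a composite is a weighted sum of each metric's mean across folds, minus a penalty proportional to the cross-fold variability of a primary metric. The sketch below is generic and hypothetical, not the actual WalkForwardObjective.weighted(...) signature:

```java
import java.util.List;
import java.util.Map;

/**
 * Generic sketch of a composite walk-forward objective (not the actual
 * WalkForwardObjective.weighted(...) API): score = weighted sum of each
 * metric's mean across folds, minus a penalty proportional to the
 * cross-fold standard deviation of the primary metric.
 */
public class CompositeObjectiveSketch {

    public static double score(Map<String, List<Double>> metricValuesPerFold,
                               Map<String, Double> weights,
                               String primaryMetric,
                               double variancePenalty) {
        double total = 0;
        for (Map.Entry<String, Double> w : weights.entrySet()) {
            total += w.getValue() * mean(metricValuesPerFold.get(w.getKey()));
        }
        // Penalize candidates whose primary metric is unstable across folds.
        return total - variancePenalty * stdDev(metricValuesPerFold.get(primaryMetric));
    }

    static double mean(List<Double> xs) {
        return xs.stream().mapToDouble(Double::doubleValue).average().orElse(0);
    }

    static double stdDev(List<Double> xs) {
        double m = mean(xs);
        double var = xs.stream().mapToDouble(x -> (x - m) * (x - m)).average().orElse(0);
        return Math.sqrt(var);
    }
}
```

A candidate that scores well on average but swings wildly between folds is thereby ranked below a slightly weaker but more stable one.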
Recent Elliott Wave examples use the generic walk-forward package for scenario research:
- ElliottWaveMultiDegreeAnalysisDemo runs automatic multi-degree analysis.
- ElliottWavePredictionProvider, ElliottWaveOutcomeLabeler, and ElliottWaveWalkForwardProfiles adapt Elliott scenarios to the generic PredictionProvider / OutcomeLabeler model.
- Keep the WalkForwardConfig fixed while ranking a candidate set.

The relevant APIs, added in commit 279d9056 with only sparse wiki coverage so far:

- The org.ta4j.core.walkforward package and the related backtest entry points: BarSeriesManager#runWalkForward(...), BacktestExecutor#executeWalkForward(...), and BacktestExecutor#executeWithWalkForward(...).
- The generic engine types: WalkForwardEngine, WalkForwardTuner, WalkForwardMetric, WalkForwardObjective, PredictionProvider, and OutcomeLabeler.
- The examples: ta4jexamples.walkforward.WalkForward and the Elliott Wave demos.