Ta4j Wiki

Documentation, examples, and further information about the ta4j project

This project is maintained by ta4j Organization

Walk-Forward Research

Walk-forward analysis tests whether a strategy or predictive model holds up as training windows move forward through time. ta4j currently exposes two layers:

Strategy Walk-Forward

For normal strategy validation, start with WalkForwardConfig.defaultConfig(series) and keep the resulting configuration fixed for every candidate in the same comparison cycle.

WalkForwardConfig config = WalkForwardConfig.defaultConfig(series);
BarSeriesManager manager = new BarSeriesManager(series);

StrategyWalkForwardExecutionResult result = manager.runWalkForward(strategy, config);
List<Num> outOfSampleScores = result.outOfSampleCriterionValues(new GrossReturnCriterion());

When you also need the standard whole-series backtest result alongside the walk-forward folds, use BacktestExecutor:

BacktestExecutor executor = new BacktestExecutor(series);
BacktestExecutor.BacktestAndWalkForwardResult combined =
        executor.executeWithWalkForward(strategy, config);

Use the maintained example ta4jexamples.walkforward.WalkForward for the current end-to-end pattern.
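Regardless of which layer you use, walk-forward evaluation reduces to a sequence of train/test windows advancing through the series. The following is a minimal, self-contained sketch of anchored (expanding-train) split geometry; the class and method names are hypothetical and are not ta4j's actual splitter API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of anchored walk-forward splits: the train
// window always starts at bar 0 and grows, while a fixed-size test window
// advances through the series one fold at a time.
class AnchoredSplitSketch {
    record Split(int trainStart, int trainEnd, int testStart, int testEnd) {}

    static List<Split> splits(int barCount, int initialTrain, int testSize) {
        List<Split> result = new ArrayList<>();
        // Each fold extends the training window by one test window, so
        // later folds train on strictly more history (anchored/expanding).
        for (int testStart = initialTrain; testStart + testSize <= barCount; testStart += testSize) {
            result.add(new Split(0, testStart, testStart, testStart + testSize));
        }
        return result;
    }

    public static void main(String[] args) {
        // For 10 bars, 4 initial training bars, and 2-bar test windows this
        // yields three folds: train[0,4) test[4,6), train[0,6) test[6,8),
        // train[0,8) test[8,10).
        for (Split s : splits(10, 4, 2)) {
            System.out.println("train[" + s.trainStart() + "," + s.trainEnd()
                    + ") test[" + s.testStart() + "," + s.testEnd() + ")");
        }
    }
}
```

The key property to preserve in any real splitter is that every test window starts strictly after its training window ends.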

Generic Prediction Research

Use WalkForwardEngine when the candidate is not just a Strategy, for example a ranked signal generator, an Elliott Wave scenario scorer, or a model family.

Core roles:

| Role | Class |
| --- | --- |
| Fold geometry and horizons | WalkForwardConfig |
| Train/test/holdout splits | AnchoredExpandingWalkForwardSplitter, WalkForwardSplit |
| Candidate predictions | PredictionProvider, RankedPrediction, PredictionSnapshot |
| Realized outcomes | OutcomeLabeler |
| Metrics | WalkForwardMetric |
| Run outputs | WalkForwardRunResult, WalkForwardRuntimeReport, WalkForwardExperimentManifest |
| Candidate ranking | WalkForwardTuner, WalkForwardObjective, WalkForwardLeaderboard |
| Holdout checks | WalkForwardHoldoutValidator |

WalkForwardConfig config = WalkForwardConfig.defaultConfig(series);

WalkForwardEngine<MyContext, MySignal, Boolean> engine = new WalkForwardEngine<>(
        new AnchoredExpandingWalkForwardSplitter(),
        provider,
        labeler,
        List.of(
                WalkForwardMetric.topKHitRate("top1Hit", 1, (prediction, outcome) -> outcome),
                WalkForwardMetric.brierScore("brier", 1,
                        outcome -> outcome ? series.numFactory().one() : series.numFactory().zero())));

WalkForwardRunResult<MySignal, Boolean> run = engine.run(
        series,
        context,
        config,
        "candidate-a",
        Map.of("family", "example"));

The engine records leakage-audit rows for every decision/horizon pair and skips labels whose future window would cross the fold boundary.
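The boundary rule described above can be stated in a few lines. This is a hypothetical sketch of the check, not the engine's actual implementation: a label that needs a window of future bars is usable only if that entire window stays inside the test fold.

```java
// Hypothetical sketch of the leakage rule: a prediction made at
// decisionIndex whose outcome is realized 'horizon' bars later is only
// labeled when the whole future window fits before the fold boundary.
class LeakageGuardSketch {
    // decisionIndex: bar where the prediction is made
    // horizon: number of future bars needed to realize the outcome
    // foldEnd: exclusive end index of the test fold
    static boolean labelIsUsable(int decisionIndex, int horizon, int foldEnd) {
        return decisionIndex + horizon < foldEnd;
    }

    // Count how many decisions in [foldStart, foldEnd) can be labeled.
    static int usableLabels(int foldStart, int foldEnd, int horizon) {
        int count = 0;
        for (int i = foldStart; i < foldEnd; i++) {
            if (labelIsUsable(i, horizon, foldEnd)) {
                count++;
            }
        }
        return count;
    }
}
```

Skipping rather than truncating these labels is what keeps out-of-sample metrics honest: a label that peeks past the fold boundary would quietly borrow information from the next training window.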

Tuning And Calibration

WalkForwardTuner evaluates a list of WalkForwardCandidate<C> values, keeps the top K candidates, and ranks them with a WalkForwardObjective.
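To make the ranking concrete, here is a hypothetical sketch of what a weighted objective with a guardrail and a fold-variance penalty can look like. The method and parameter names are illustrative only; WalkForwardObjective.weighted(...) is the actual ta4j entry point:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a composite objective: weighted sum of mean
// out-of-sample metrics, a hard guardrail floor, and a penalty for
// candidates whose per-fold scores vary widely.
class WeightedObjectiveSketch {
    static double score(Map<String, Double> meanMetrics,
                        Map<String, Double> weights,
                        double hitRateFloor,
                        List<Double> foldScores,
                        double variancePenalty) {
        // Guardrail: disqualify candidates below the hit-rate floor outright.
        if (meanMetrics.getOrDefault("top1Hit", 0.0) < hitRateFloor) {
            return Double.NEGATIVE_INFINITY;
        }
        // Weighted sum over the configured metrics.
        double weighted = 0.0;
        for (Map.Entry<String, Double> w : weights.entrySet()) {
            weighted += w.getValue() * meanMetrics.getOrDefault(w.getKey(), 0.0);
        }
        // Fold-variance penalty: prefer stable candidates over lucky ones.
        double mean = foldScores.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double var = foldScores.stream()
                .mapToDouble(s -> (s - mean) * (s - mean))
                .average().orElse(0.0);
        return weighted - variancePenalty * var;
    }
}
```

The variance penalty is the part that distinguishes walk-forward ranking from a plain backtest leaderboard: two candidates with the same mean score are not equivalent if one earned it evenly across folds and the other from a single outlier fold.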

Use WalkForwardObjective.weighted(...) when you want a composite objective with metric weights, guardrails, and fold-variance penalties. If the provider emits probabilities, calibration modes on WalkForwardTuner can compare raw probabilities against Platt scaling or an isotonic challenger with a guarded selection rule.
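For readers unfamiliar with the calibration step, the following self-contained sketch shows the two ingredients it compares: Platt scaling (fitting a sigmoid over raw scores by minimizing log loss) and the Brier score used to judge raw versus calibrated probabilities. This is a generic illustration under simplified assumptions, not ta4j's calibration code:

```java
// Hypothetical sketch of Platt scaling and Brier scoring. A guarded
// selection rule would keep the calibrated probabilities only when their
// Brier score beats the raw probabilities by a margin.
class PlattSketch {
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Brier score: mean squared error between probabilities and 0/1 labels.
    static double brier(double[] p, int[] y) {
        double sum = 0.0;
        for (int i = 0; i < p.length; i++) {
            double e = p[i] - y[i];
            sum += e * e;
        }
        return sum / p.length;
    }

    // Fit sigmoid(a * score + b) to binary labels by gradient descent on
    // log loss; returns {a, b}.
    static double[] fitPlatt(double[] scores, int[] y) {
        double a = 1.0, b = 0.0, lr = 0.1;
        for (int iter = 0; iter < 2000; iter++) {
            double ga = 0.0, gb = 0.0;
            for (int i = 0; i < scores.length; i++) {
                double err = sigmoid(a * scores[i] + b) - y[i];
                ga += err * scores[i];
                gb += err;
            }
            a -= lr * ga / scores.length;
            b -= lr * gb / scores.length;
        }
        return new double[] { a, b };
    }
}
```

An isotonic challenger replaces the sigmoid with a monotone step function; the guarded rule matters because calibrators fit on small folds can easily overfit, so the raw probabilities should win ties.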

Elliott Wave Research

Recent Elliott Wave examples use the generic walk-forward package for scenario research:

Practical Guardrails

Rationale Notes (2026-04-27)