Dual Momentum Long-Short Crypto Portfolio With an Aggressive Kelly Sizer

From “Conservative Kelly” to “Full Kelly + Higher Leverage”

In my previous version of this system, I combined Dual Momentum selection (pick one long + one short from a crypto universe) with a Dynamic Kelly sizer. The results were already solid, but position sizes were conservative because I used half Kelly and a relatively low leverage cap.

This article is a complete, step-by-step walkthrough of the aggressive Kelly variant: the same Dual Momentum selection, but sized with full Kelly (kelly_fraction = 1.0), a higher leverage cap (max_leverage = 4.0), and wider drawdown limits.

This pushes exposure higher, which increases both return potential and drawdown.

Results (Aggressive Kelly Run)

Backtest window: 2025-01-01 → 2025-12-20
Universe: BTC/USDC, ETH/USDC, SOL/USDC, BNB/USDC, XRP/USDC, ADA/USDC, DOGE/USDC, AVAX/USDC, LINK/USDC, LTC/USDC
Rebalance: weekly
Costs: commission + slippage included

[Report figures: Portfolio, Benchmark (BTC-USD), and Per-asset contribution (high level); the headline numbers are reproduced in the report output in section 7.]

The key change vs the conservative run is simple: the model is taking more risk (higher gross exposure), and the equity curve reflects that.

Architecture Overview (Strategy vs Sizer)

This system is intentionally modular: the strategy (here EnhancedDualMomentumStrategy) decides what to trade and in which direction, while the sizer (DynamicKellySizer in this run) decides how much capital each position receives.

That separation makes it easy to run the same strategy with different allocation engines.
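To make the separation concrete, here is a minimal, hypothetical sizer skeleton (not one of the sizers used in this article) showing the interface any allocation engine has to implement in backtrader:

import backtrader as bt

class FixedFractionSizer(bt.Sizer):
    """Hypothetical example: the strategy decides direction, the sizer decides size."""
    params = (("fraction", 0.10),)  # fraction of available cash per position

    def _getsizing(self, comminfo, cash, data, isbuy):
        # convert a fixed cash fraction into a unit count at the latest close
        target_value = cash * self.p.fraction
        return int(target_value / data.close[0])

Every sizer used here (RiskParityVolTargetSizer, DynamicKellySizer, and the rest) is attached through the same cerebro.addsizer hook, which is what makes them interchangeable.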

Complete Code Walkthrough

Below is the full code in small chunks, with explanations after each chunk.

1) Imports and wiring

import logging
import backtrader as bt
from pydantic import BaseModel, Field
from typing import Type, Optional, Dict, Any

from data_loader import *
from allocators import *
from strategies import *
from reporting import PortfolioTracker, generate_report_and_plots

What this does: backtrader drives the event loop, pydantic gives us typed configuration, and the wildcard imports pull in the project's own data loader, allocation sizers, and strategies, alongside the reporting utilities (PortfolioTracker and generate_report_and_plots).

2) Logging (clean backtest output)

logger = logging.getLogger("BacktestEngine")
if not logger.handlers:
    handler = logging.StreamHandler()
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

Why this matters: the handler guard configures the "BacktestEngine" logger exactly once, so repeated imports or notebook reruns don't duplicate log lines, and every message the engine emits carries a timestamp and severity level.
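If you later run many backtests in a loop and only want warnings and errors, one simple option (a workflow suggestion, not part of the original script) is to raise the level on the same named logger:

import logging

# suppress INFO-level engine chatter during parameter sweeps
logging.getLogger("BacktestEngine").setLevel(logging.WARNING)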

3) Execution configuration (broker realism)

class ExecutionConfig(BaseModel):
    """
    Configuration for trade execution and market physics.
    """
    initial_cash: float = Field(100_000.0, description="Starting capital")
    commission: float = Field(0.001, description="Broker commission (e.g., 0.1%)")
    slippage: float = Field(0.0005, description="Estimated slippage (e.g., 5 basis points)")
    check_submit: bool = Field(False, description="If True, broker rejects orders if cash is insufficient before margin calculation")

Key idea: all of the market-physics assumptions (starting cash, commission, slippage, and whether the broker pre-checks cash before accepting an order) live in a single validated pydantic model, so the execution assumptions of a run are explicit and easy to change.
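A quick sketch of how overrides behave, using the field names from the class above (the printed values assume its declared defaults):

# override a single field; the others keep their defaults, and pydantic validates the types
cfg = ExecutionConfig(commission=0.00075)
print(cfg.initial_cash, cfg.commission, cfg.slippage)  # 100000.0 0.00075 0.0005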

4) Sizer defaults mapping (the “aggressive Kelly” configuration)

This is the most important change in this article.

SIZER_DEFAULTS = {
    RiskParityVolTargetSizer: {
        "lookback": 60,
        "ema_alpha": 0.2,
        "target_portfolio_vol": 0.15,
        "max_weight": 0.30,
        "leverage_cap": 1.5,
        "vol_floor": 1e-4,
        "allow_short": True,
        "dd_soft": 0.10,
        "dd_hard": 0.20,
        "min_notional": 0.0,
        "debug": False,
    },

    DynamicKellySizer: {
        "default_win_rate": 0.58,
        "default_win_loss_ratio": 2.0,
        "kelly_fraction": 1.0,
        "max_leverage": 4.0,
        "dd_soft": 0.20,
        "dd_hard": 0.40,
        "min_notional": 0.0,
        "debug": False,
    },

    EqualWeightSizer: {
        "weight": 0.10,
        "allow_pyramiding": True,
        "allow_short": True,
        "min_notional": 0.0,
        "dd_soft": 0.10,
        "dd_hard": 0.20,
        "debug": False,
    },

    MaxPositionsEqualWeightSizer: {
        "weight": 0.20,
        "max_positions": 5,
        "allow_pyramiding": True,
        "allow_short": True,
        "min_notional": 0.0,
        "dd_soft": 0.10,
        "dd_hard": 0.20,
        "debug": False,
    },

    VolatilityTargetSizer: {
        "lookback": 7,
        "target_risk": 0.1,
        "annualization": 252,
        "fallback_vol": 0.5,
        "allow_pyramiding": True,
        "allow_short": True,
        "min_notional": 50.0,
        "dd_soft": 0.10,
        "dd_hard": 0.20,
        "eps": 1e-8,
        "debug": False,
    }
}

What makes it “aggressive”? For DynamicKellySizer, kelly_fraction is 1.0 (full Kelly, instead of the half Kelly used in the previous, conservative version), max_leverage is 4.0, and the drawdown limits are wide (dd_soft = 0.20, dd_hard = 0.40).

A reality check

The aggressive defaults also raise the assumed edge parameters: default_win_rate (0.58) and default_win_loss_ratio (2.0).

These strongly affect the Kelly fraction:
\[
f^* = p - \frac{1 - p}{b}
\]
where p is the assumed win rate (default_win_rate) and b the assumed win/loss ratio (default_win_loss_ratio).
So don’t treat them as “truth”: treat them as dials that should ultimately be estimated from data, or at least stress-tested.
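As a quick sensitivity check, here is the formula above evaluated at the dictionary's defaults and at a slightly less optimistic win rate (plain arithmetic, nothing model-specific):

def kelly_fraction(p: float, b: float) -> float:
    # classic Kelly: f* = p - (1 - p) / b
    return p - (1.0 - p) / b

print(kelly_fraction(0.58, 2.0))  # 0.37 with the aggressive defaults
print(kelly_fraction(0.52, 2.0))  # 0.28 if the true win rate is a few points lower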

5) Merge user overrides with defaults

def get_sizer_params(sizer_class: Type[bt.Sizer], user_params: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    """Merges user parameters with defaults."""
    defaults = SIZER_DEFAULTS.get(sizer_class, {})
    if user_params:
        return {**defaults, **user_params}
    return defaults

Why this is useful: a sensible aggressive profile is pinned down once in SIZER_DEFAULTS, yet any individual parameter can still be overridden per run, because user-supplied values take precedence in the merged dict.
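For example, the following keeps the aggressive profile but halves the Kelly fraction for a single experiment (values follow directly from SIZER_DEFAULTS above):

params = get_sizer_params(DynamicKellySizer, {"kelly_fraction": 0.5})
print(params["kelly_fraction"], params["max_leverage"])  # 0.5 4.0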

6) Backtest engine (data → Cerebro → run → report)

Function signature

def run_backtest(
    StrategyClass: Type[bt.Strategy],
    tickers: list[str],
    start: str,
    end: str,
    exec_config: ExecutionConfig = ExecutionConfig(),
    benchmark_ticker: str = "SPY",
    sizer_class: Optional[Type[bt.Sizer]] = None,
    sizer_params: Optional[Dict[str, Any]] = None,
    do_report: bool = True,
):

What’s happening: the engine takes a strategy class, a ticker universe, a date window, an ExecutionConfig, a benchmark ticker, an optional sizer class with parameter overrides, and a flag that controls whether the report and plots are generated at the end.

Load data

    logger.info(f"Initializing Backtest for {len(tickers)} tickers from {start} to {end}")

    data_loader = BinanceCCXTDataLoader()
    data_dict = data_loader.fetch(tickers, start, end, "1d")

    if not data_dict:
        logger.error("No data downloaded. Aborting backtest.")
        return None, None, None

Why dictionary format is nice: the loader returns a {symbol: DataFrame} mapping, so attaching feeds to Cerebro is a simple loop and each feed carries its symbol as its name for later lookup.
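A minimal sketch of the assumed shape; the loader itself is project code, and the only hard requirement is that each value is a pandas OHLCV DataFrame that bt.feeds.PandasData can consume:

# data_dict is assumed to look like {"BTC/USDC": <OHLCV DataFrame>, "ETH/USDC": ...}
for symbol, df in data_dict.items():
    print(symbol, df.index.min(), df.index.max(), len(df))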

Configure broker

    cerebro = bt.Cerebro()

    cerebro.broker.setcash(exec_config.initial_cash)
    cerebro.broker.setcommission(commission=exec_config.commission)
    cerebro.broker.set_slippage_perc(perc=exec_config.slippage)
    cerebro.broker.set_checksubmit(exec_config.check_submit)

    logger.info(f"Execution Config: Cash={exec_config.initial_cash}, Comm={exec_config.commission}, Slippage={exec_config.slippage}")

Key point: costs are modeled at the broker level, so every fill already pays commission and slippage. With commission = 0.001 and slippage = 0.0005, a round trip costs roughly 0.3% of notional, which is not negligible for a weekly-rebalanced portfolio.

Add all data feeds (multi-asset portfolio input)

    for symbol, df in data_dict.items():
        data_feed = bt.feeds.PandasData(dataname=df, name=symbol)
        cerebro.adddata(data_feed)

Now the strategy receives all feeds in self.datas.
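Inside the strategy, feeds are then addressable either positionally or by the name given to PandasData. A minimal, hypothetical sketch using standard backtrader accessors:

import backtrader as bt

class FeedPeek(bt.Strategy):
    def next(self):
        btc = self.getdatabyname("BTC/USDC")  # lookup by the name set on the feed
        first = self.datas[0]                 # or positional access
        # selection logic would go here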

Add strategy

    cerebro.addstrategy(StrategyClass)

Add sizer (this is where aggressive Kelly activates)

    if sizer_class is None:
        sizer_class = RiskParityVolTargetSizer
        logger.warning("No sizer provided. Defaulting to RiskParityVolTargetSizer.")

    final_sizer_params = get_sizer_params(sizer_class, sizer_params)

    logger.info(f"Sizer: {sizer_class.__name__} | Params: {final_sizer_params}")
    cerebro.addsizer(sizer_class, **final_sizer_params)

Important: the sizer is attached at the Cerebro level rather than inside the strategy, so the same EnhancedDualMomentumStrategy can be re-run under any allocation engine simply by passing a different sizer_class; if none is supplied, the engine falls back to RiskParityVolTargetSizer and logs a warning.
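Because sizer_class and sizer_params are plain arguments, swapping the allocation engine is a one-line change. For example, the same strategy could be re-run under a capped equal-weight allocation (parameter names come from SIZER_DEFAULTS; tickers and dates here are illustrative):

run_backtest(
    StrategyClass=EnhancedDualMomentumStrategy,
    tickers=["BTC/USDC", "ETH/USDC", "SOL/USDC"],
    start="2025-01-01",
    end="2025-06-30",
    sizer_class=MaxPositionsEqualWeightSizer,
    sizer_params={"max_positions": 3, "weight": 0.25},
)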

Add analyzer and run

    cerebro.addanalyzer(PortfolioTracker, _name="portfolio")

    start_val = cerebro.broker.getvalue()
    logger.info(f"  STARTING RUN   | Value: ${start_val:,.2f}")

    results = cerebro.run()

    end_val = cerebro.broker.getvalue()
    pnl = end_val - start_val
    pnl_pct = (pnl / start_val) * 100

    logger.info(f"  RUN COMPLETE   | Value: ${end_val:,.2f} | PnL: {pnl_pct:.2f}%")

    strat = results[0]

Reporting

    if do_report:
        logger.info("Generating report and plots...")
        try:
            generate_report_and_plots(
                strategy=strat,
                data_dict=data_dict,
                initial_cash=exec_config.initial_cash,
                benchmark_ticker=benchmark_ticker or tickers[0],
            )
            logger.info("Reporting complete.")
        except Exception as e:
            logger.error(f"Reporting failed: {e}", exc_info=True)

    return cerebro, strat, data_dict
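Because the function returns the engine, the strategy instance, and the raw data, a run can also be inspected programmatically instead of (or in addition to) the generated report. A minimal sketch with illustrative tickers and dates:

cerebro, strat, data_dict = run_backtest(
    StrategyClass=EnhancedDualMomentumStrategy,
    tickers=["BTC/USDC", "ETH/USDC"],
    start="2025-01-01",
    end="2025-06-30",
    do_report=False,
)
if cerebro is not None:  # run_backtest returns (None, None, None) if no data was loaded
    print(f"Final equity: {cerebro.broker.getvalue():,.2f}")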

7) Example run using aggressive Kelly

if __name__ == "__main__":
    my_config = ExecutionConfig(
        initial_cash=250_000,
        commission=0.001,
        slippage=0.0005
    )

    run_backtest(
        StrategyClass=EnhancedDualMomentumStrategy,
        tickers=[
            "BTC/USDC", "ETH/USDC", "SOL/USDC", "BNB/USDC", "XRP/USDC",
            "ADA/USDC", "DOGE/USDC", "AVAX/USDC", "LINK/USDC", "LTC/USDC"
        ],
        start="2025-01-01",
        end="2025-12-20",
        exec_config=my_config,
        benchmark_ticker="btc-usd",
        sizer_class=DynamicKellySizer,
        do_report=True,
    )

This run uses DynamicKellySizer with the aggressive defaults from the mapping.

======================================================================
PORTFOLIO PERFORMANCE
======================================================================
Initial capital:       250,000.00
Final equity:          362,596.09
Total return:           45.04%
Annualized return:      46.92%
Annualized volatility:  22.76%
Sharpe ratio:           2.06
Max drawdown:          -8.61%

BENCHMARK PERFORMANCE (btc-usd)
----------------------------------------------------------------------
Total return:          -6.69%
Annualized return:     -6.91%
Annualized volatility:  35.30%
Sharpe ratio:          -0.20
Tracking error:         49.04%
Information ratio:      1.10

PER-ASSET SUMMARY
----------------------------------------------------------------------
           avg_weight  total_contribution  ann_contribution
LINK/USDC     -0.0280              0.2310            0.1644
SOL/USDC       0.0187              0.1993            0.1418
AVAX/USDC     -0.0171              0.0949            0.0676
ETH/USDC       0.0160              0.0866            0.0616
XRP/USDC      -0.0045              0.0392            0.0279
BTC/USDC       0.0371              0.0345            0.0246
BNB/USDC       0.0145             -0.0169           -0.0120
LTC/USDC      -0.0086             -0.0237           -0.0169
ADA/USDC      -0.0606             -0.0486           -0.0346
DOGE/USDC     -0.0088             -0.0710           -0.0505

What Changed vs the Conservative Kelly Version?

The strategy is the same (same selection logic). Only the sizing parameters changed.

Conservative Kelly idea: half Kelly with a relatively low leverage cap, deliberately giving up some expected growth in exchange for shallower drawdowns.

Aggressive Kelly idea: full Kelly (kelly_fraction = 1.0) with max_leverage = 4.0 and wider drawdown limits, letting the sizer act on the full estimated edge.

In this run, aggressive sizing increased gross exposure and realized return, and it also raises the size of the drawdowns you should expect to sit through relative to the conservative profile; the sketch below shows one way to quantify that trade-off.
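A natural follow-up is to sweep the Kelly fraction and watch how return, volatility, and drawdown move together. A sketch that reuses the pieces defined above (my_config, the universe, and the date window) and keeps the sweep quiet with do_report=False:

for kf in (0.25, 0.5, 1.0):
    run_backtest(
        StrategyClass=EnhancedDualMomentumStrategy,
        tickers=[
            "BTC/USDC", "ETH/USDC", "SOL/USDC", "BNB/USDC", "XRP/USDC",
            "ADA/USDC", "DOGE/USDC", "AVAX/USDC", "LINK/USDC", "LTC/USDC"
        ],
        start="2025-01-01",
        end="2025-12-20",
        exec_config=my_config,
        benchmark_ticker="btc-usd",
        sizer_class=DynamicKellySizer,
        sizer_params={"kelly_fraction": kf},  # only the sizing dial changes
        do_report=False,
    )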

Final Takeaway

This aggressive Kelly variant shows what happens when you let the sizing engine take the gloves off: the same selection logic can produce a dramatically different equity curve depending on how you size risk. That’s why portfolio research should always treat sizing as a first-class component—not an afterthought.