Meteora LP Edge

Methodology

We believe published numbers should be reproducible. Here is exactly how the table on the front page is built.

1. Universe

We pull all active DLMM pools from the Meteora public API (dlmm.datapi.meteora.ag/pools). We then filter to pools that meet all of:

Survivors are sorted by volume/TVL ratio, and the top 30 are kept (this also bounds compute).
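The selection step above can be sketched as follows. The `Pool` shape and its field names are assumptions for illustration, not the actual API schema:

```typescript
// Sketch of the universe step: sort surviving pools by volume/TVL
// and keep the top 30. Field names are hypothetical.
interface Pool {
  address: string;
  tvlUsd: number;
  volume24hUsd: number;
}

const TOP_N = 30;

function rankUniverse(pools: Pool[]): Pool[] {
  return pools
    .filter((p) => p.tvlUsd > 0) // guard against divide-by-zero
    .sort((a, b) => b.volume24hUsd / b.tvlUsd - a.volume24hUsd / a.tvlUsd)
    .slice(0, TOP_N);
}
```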

2. Historical data

For each pool we fetch 180 days of 1h OHLCV from Meteora's OHLCV endpoint (paginated in 72-candle chunks because the API caps the time range per request). Candles with close ≤ 0 or non-finite values are discarded.
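The pagination arithmetic works out to 180 days × 24 candles/day = 4,320 candles, or 60 requests of 72 candles each. A minimal sketch of the chunking and the candle cleanup (the actual fetch call is omitted; endpoint parameters are an assumption):

```typescript
// Split a [start, end) time range into 72-candle request windows.
const HOUR_MS = 3_600_000;
const CHUNK_CANDLES = 72; // API caps the range per request

function chunkRanges(startMs: number, endMs: number): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  const step = CHUNK_CANDLES * HOUR_MS;
  for (let from = startMs; from < endMs; from += step) {
    ranges.push([from, Math.min(from + step, endMs)]);
  }
  return ranges;
}

// Drop candles with non-positive or non-finite closes.
function cleanCloses(closes: number[]): number[] {
  return closes.filter((c) => Number.isFinite(c) && c > 0);
}
```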

3. Strategies tested

We benchmark 5 LP strategies against each pool:

4. Rolling backtest

We slice the 180-day history into 30-day windows stepped every 30 days, giving up to 6 non-overlapping samples per pool. Each window simulates entering with $10,000 and computes: fees earned (a CPMM approximation — fees are proportional to the overlap of each candle's price range with the position range, times your share of pool TVL), impermanent loss versus the initial 50/50 entry, and rebalancing costs.
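Two pieces of the window step can be shown compactly: the slicing itself, and the impermanent-loss term, for which we use the standard full-range CPMM formula against a 50/50 HODL entry. This is a sketch under those assumptions; the fee-overlap model is not reproduced here:

```typescript
// Slice a candle series into non-overlapping 30-day windows.
const WINDOW_DAYS = 30;
const CANDLES_PER_DAY = 24; // 1h candles

function windows<T>(candles: T[]): T[][] {
  const size = WINDOW_DAYS * CANDLES_PER_DAY;
  const out: T[][] = [];
  for (let i = 0; i + size <= candles.length; i += size) {
    out.push(candles.slice(i, i + size));
  }
  return out;
}

// Standard CPMM impermanent loss vs holding 50/50, where
// r = exitPrice / entryPrice. Returns a fraction (e.g. -0.2 = -20%).
function impermanentLoss(r: number): number {
  return (2 * Math.sqrt(r)) / (1 + r) - 1;
}
```

For example, a 4x price move gives `impermanentLoss(4)` = 2·2/5 − 1 = −20% relative to holding.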

5. Costs we model

6. Ranking score

For each pool we compute the median APR of the best-performing strategy across windows, and rank pools by:

score = median_edge_vs_hodl_pct × sqrt(TVL) × honesty_score

The square root of TVL rewards larger pools (more capital can be deployed) but with diminishing returns. The honesty score is a heuristic in [0, 1] that drops when the sample count is small, APR variance is high, or time-in-range is low.
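The score reduces to a one-liner once the inputs exist; a sketch, assuming the median edge and honesty score are computed upstream (the honesty heuristic itself is not reproduced here):

```typescript
// Median of a list — used for median APR / median edge across windows.
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const m = Math.floor(s.length / 2);
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

// score = median_edge_vs_hodl_pct × sqrt(TVL) × honesty_score
function rankingScore(
  medianEdgeVsHodlPct: number,
  tvlUsd: number,
  honestyScore: number, // heuristic in [0, 1]
): number {
  return medianEdgeVsHodlPct * Math.sqrt(tvlUsd) * honestyScore;
}
```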

7. Known limitations

8. Source code

The ranking engine is open source. Strategies, costs, and ranking logic live in lib/engine/. A refresh runs hourly via cron and writes a snapshot file; the dashboard reads the latest snapshot.