colosseum

Subnet #38 · Training

Decentralized LLM training using Butterfly all-reduce for efficient gradient communication across distributed miners.

  • Price: $0.010054 (+0.38% 24h)
  • 7d Change: -10.23%
  • Market Cap: $1.53M
  • Emission: 0.52%
  • Miners: 1
  • Validators: 10

Competitor Mapping

colosseum (Decentralized)

  • Market Cap: $1.53M
  • TAM: $30.00B
  • Model: Token-incentivized
  • Team: Decentralized miners

OpenAI + Anthropic (training infrastructure) (Private)

  • Market Cap: Private
  • TAM: $30.00B
  • Model: Revenue-driven
  • Team: Enterprise

SN38 aims to reduce training costs by 40% relative to centralized GPU clusters by using Butterfly all-reduce to cut gradient-communication overhead, competing with the training infrastructure of OpenAI, Anthropic, and Google.

Implied Valuation

If colosseum captures a share of the $30.00B TAM of OpenAI + Anthropic (training infrastructure):

  • Bear (0.1% of TAM): $30.00M implied valuation (+1,857.96% vs $1.53M current)
  • Base (1% of TAM): $300.00M implied valuation (+19,479.59% vs $1.53M current)
  • Bull (5% of TAM): $1.50B implied valuation (+97,797.97% vs $1.53M current)

Scenarios illustrate potential scale relative to the traditional market. Actual outcomes depend on adoption, tokenomics, and competitive dynamics.
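The scenario figures above are straightforward TAM-share arithmetic. A minimal sketch of the calculation (the function name is mine, and the current market cap is rounded to the page's $1.53M, so the upside percentages come out slightly different from the page's exact figures):

```python
def implied_valuation(tam: float, share: float, current_mcap: float):
    """Implied market cap from capturing `share` of a TAM, plus upside vs current."""
    implied = tam * share
    upside = implied / current_mcap - 1  # fractional upside, e.g. 18.6 = +1860%
    return implied, upside

TAM = 30e9          # $30.00B training-infrastructure TAM (from the page)
CURRENT = 1.53e6    # colosseum market cap, rounded (from the page)

for label, share in [("Bear", 0.001), ("Base", 0.01), ("Bull", 0.05)]:
    implied, upside = implied_valuation(TAM, share, CURRENT)
    print(f"{label}: ${implied / 1e6:,.0f}M implied ({upside:+.0%} vs current)")
```

Running this reproduces the $30.00M / $300.00M / $1.50B implied valuations; the percentage upsides land within rounding error of the page's values.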

Thesis

Butterfly all-reduce reduces communication overhead from O(d*n) to O(d*log(n)), making decentralized training viable on commodity internet. Complements SN3 (Templar) with a different approach to distributed gradient aggregation. Team has strong ML credentials.
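To illustrate where the O(d·log n) figure comes from: in a butterfly (recursive-doubling) all-reduce over n nodes, each of the log2(n) rounds has node i exchange its d-dimensional partial sum with partner i XOR 2^k, so every node sends d values per round. The toy simulation below is my own sketch of that pattern, not the subnet's actual implementation:

```python
import math

def butterfly_allreduce(values):
    """Simulate a butterfly (recursive-doubling) all-reduce.

    `values` is a list of n equal-length gradient vectors, one per node;
    n must be a power of two. In round k, node i pairs with node i ^ 2**k
    and both keep the elementwise sum of their partial results. After
    log2(n) rounds every node holds the full sum, having sent only
    O(d * log n) values instead of the O(d * n) of naive all-to-all.
    """
    n = len(values)
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    state = [list(v) for v in values]  # each node's current partial sum
    for k in range(int(math.log2(n))):
        state = [
            [a + b for a, b in zip(state[i], state[i ^ (1 << k)])]
            for i in range(n)
        ]
    return state

# 4 nodes, 2-dim gradients: 2 rounds, then every node holds the full sum.
grads = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(butterfly_allreduce(grads))  # every node ends with [16, 20]
```

With n = 4 the sum is reached in 2 rounds rather than each node receiving 3 full vectors, and the gap widens logarithmically as n grows, which is what makes the scheme attractive over commodity internet links.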

Team

Built by the DistributedTraining team.

  • Karim Foda - Co-founder - 10 years of ML experience, multiple open-source contributions, active in the Bittensor community since Nov 2022
  • Mikkel Loose - Co-founder - 6+ years as an AI researcher/developer, focused on LLMs and computer vision

Funding

No disclosed VC funding. Funded through TAO emissions.

Traction

Completed 1.1B parameter model training run. Uses Butterfly all-reduce technique for gradient communication. ~0.5% of total Bittensor emissions. Registered Sep 2024.

Recent News

  • 2026 - Active on Bittensor mainnet with distributed training runs
  • Sep 2024 - Subnet registered on Bittensor mainnet

Risk

Low emissions share (~0.5% of network). Competes directly with SN3 Templar which has far more momentum and visibility after Covenant-72B. May struggle for relevance.