Gradients
#56 · Training
No-code AI model finetuning platform that has trained 118T+ parameters, making training accessible to non-technical users.
Competitor Mapping
| | Gradients (Decentralized) | Google Vertex AI + Hugging Face (GOOGL) |
|---|---|---|
| Market Cap | $34.86M | $3.50T |
| TAM | $15.00B | $15.00B |
| Model | Token-incentivized | Revenue-driven |
| Team | Decentralized miners | Enterprise |
Democratizes finetuning the way Hugging Face democratized model hosting. Google Vertex charges $3-5/hour for training; Gradients uses idle decentralized GPUs at a fraction of that cost.
Implied Valuation
If Gradients captures a share of Google Vertex AI + Hugging Face's $15.00B TAM
- Bear (0.1% of TAM): $15.00M implied valuation vs $34.86M current
- Base (1% of TAM): $150.00M implied valuation vs $34.86M current
- Bull (5% of TAM): $750.00M implied valuation vs $34.86M current
Scenarios illustrate potential scale relative to the traditional market. Actual outcomes depend on adoption, tokenomics, and competitive dynamics.
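The scenario math above is simply TAM multiplied by the captured share. A minimal sketch, using the figures from this page ($15.00B TAM, $34.86M current market cap; scenario names and shares as listed):

```python
# Implied valuation = TAM * captured market share (all figures USD).
TAM = 15.00e9           # $15.00B traditional-market TAM
CURRENT_MCAP = 34.86e6  # $34.86M current market cap

scenarios = {"Bear": 0.001, "Base": 0.01, "Bull": 0.05}

for name, share in scenarios.items():
    implied = TAM * share
    multiple = implied / CURRENT_MCAP  # upside vs current cap
    print(f"{name}: {share:.1%} of TAM -> ${implied/1e6:,.0f}M implied "
          f"({multiple:.1f}x current)")
```

Note the Bear case ($15.00M) actually implies a valuation below the current $34.86M market cap, i.e. a multiple under 1x.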
Thesis
Third pillar of the Rayon Trio. A no-code finetuning UX is a major unlock for enterprises: most companies want to customize models but lack the ML engineering talent to do so.
Team
Built by Rayon Labs (same team as Chutes SN64 and Nineteen SN19). Led by namoray (pseudonymous). No-code approach to AI model training.
- namoray - Founder/Lead (Rayon Labs) - Pseudonymous
Funding
No separate VC funding. Funded through TAO emissions as part of Rayon Labs' three-subnet operation.
Traction
6.66% of total network emissions; part of the Rayon Trio (23.7% combined). Qwen 3 fine-tuning added recently. Market cap ~$23.3M. Clean UI for non-technical users. Integrated with Chutes for a 'Run with Chutes, Train with Gradients' workflow.
Recent News
- Mar 2026 - Qwen 3 fine-tuning support added ('3 clicks or less')
- Mar 2026 - Market cap ~$23.3M
- 2026 - Integrated workflow with Chutes: 'Run with Chutes, Train with Gradients'
Risk
Hugging Face AutoTrain and OpenAI's finetuning API are commoditizing this space with simpler solutions.