
blockmachine

Subnet #19 · Inference

Real-time low-latency AI inference optimized for assistants, autonomous agents, and industrial AI.

  • Price: $0.014774 (-2.35% 24h)
  • 7d Change: -11.35%
  • Market Cap: $20.32M
  • Emission: -
  • Miners: 1
  • Validators: 14

Competitor Mapping

blockmachine (Decentralized)

  • Market Cap: $20.32M
  • TAM: $118.00B
  • Model: Token-incentivized
  • Team: Decentralized miners

Mistral AI + Groq (Private)

  • Market Cap: Private
  • TAM: $118.00B
  • Model: Revenue-driven
  • Team: Enterprise

blockmachine targets latency-sensitive inference, where Groq's LPU chips and Mistral's optimized models are direct competitors. Its decentralized architecture lets supply scale without the capital expenditure of owning hardware.

Implied Valuation

If blockmachine captures a share of Mistral AI + Groq's $118.00B TAM:

  • Bear (0.1% TAM): $118.00M implied valuation (+480.63% vs $20.32M current)
  • Base (1% TAM, base case): $1.18B implied valuation (+5706.29% vs $20.32M current)
  • Bull (5% TAM): $5.90B implied valuation (+28931.43% vs $20.32M current)

Scenarios illustrate potential scale relative to the traditional market. Actual outcomes depend on adoption, tokenomics, and competitive dynamics.
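The scenario arithmetic above is straightforward to reproduce. The sketch below recomputes each implied valuation (TAM × captured share) and its upside versus the current market cap; the function names are illustrative, and small differences from the page's quoted percentages are expected since the page presumably computes from an unrounded market cap.

```python
# Figures taken from this page; helper names are illustrative only.
TAM = 118.00e9          # Mistral AI + Groq addressable market, USD
CURRENT_MCAP = 20.32e6  # blockmachine market cap, USD


def implied_valuation(tam_share: float) -> float:
    """Implied valuation if the subnet captures `tam_share` of the TAM."""
    return TAM * tam_share


def upside_vs_current(valuation: float) -> float:
    """Percent upside of `valuation` over the current market cap."""
    return (valuation / CURRENT_MCAP - 1.0) * 100.0


for label, share in [("Bear", 0.001), ("Base", 0.01), ("Bull", 0.05)]:
    v = implied_valuation(share)
    print(f"{label} ({share:.1%} TAM): ${v:,.0f} implied, "
          f"{upside_vs_current(v):+.2f}% vs current")
```

Running this reproduces the three scenarios to within rounding of the displayed figures.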

Thesis

Part of the Rayon Trio controlling 23.7% of TAO emissions alongside Chutes and Gradients. Low-latency inference for real-time applications is a defensible niche.

Team

Built by Rayon Labs (same team as Chutes SN64 and Gradients SN56). Led by namoray (pseudonymous). Part of the 'Rayon Trio' controlling ~23.7% of daily TAO emissions.

  • namoray - Founder/Lead (Rayon Labs) - Pseudonymous

Funding

No separate VC funding. Funded through TAO emissions as part of Rayon Labs' three-subnet operation.

Traction

Set world record for fastest LLM inference (outperformed Perplexity). Public API at sn19.ai with operational payments. Part of Rayon Trio (23.7% of TAO emissions). Supports text and image generation at scale.

Recent News

  • 2026 - API payments now operational for NineteenAI
  • 2026 - Set world record for fastest LLM inference
  • 2026 - Continued growth as part of Rayon Labs trio

Risk

Groq and Cerebras building dedicated inference ASICs could make GPU-based inference networks obsolete for latency-critical workloads.