Coldint
Subnet #29 (safety) - AI Agent Safety & Security evaluation subnet, benchmarking and scoring AI agents for safety compliance on Bittensor.
Competitor Mapping
Coldint
- Type: Decentralized
- Market Cap: $19.79M
- TAM: $8.00B
- Model: Token-incentivized
- Team: Decentralized miners

Anthropic (Constitutional AI) + NIST AI Safety
- Type: Private
- Market Cap: Not disclosed
- TAM: $8.00B
- Model: Revenue-driven
- Team: Enterprise
Coldint aims to be the decentralized counterpart to AI safety evaluation frameworks from Anthropic, NIST, and centralized red-teaming firms, focusing specifically on AI agent safety rather than model alignment.
Implied Valuation
If Coldint captures a share of Anthropic (Constitutional AI) + NIST AI Safety's $8.00B TAM:
- Bear (0.1% of TAM): $8.00M implied valuation vs $19.79M current
- Base (1% of TAM): $80.00M implied valuation vs $19.79M current
- Bull (5% of TAM): $400.00M implied valuation vs $19.79M current
Scenarios illustrate potential scale relative to the traditional market. Actual outcomes depend on adoption, tokenomics, and competitive dynamics.
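The scenario math above is simple: implied valuation = TAM share × TAM, and the upside multiple is that figure divided by the current market cap. A minimal sketch of the calculation, using the $8.00B TAM and $19.79M market cap from the tables (scenario names and share percentages are taken from this note; the script itself is illustrative, not part of any published model):

```python
# Implied-valuation arithmetic for the Bear/Base/Bull scenarios.
TAM = 8.00e9            # $8.00B traditional AI safety market (from the table)
CURRENT_MCAP = 19.79e6  # Coldint's current market cap

scenarios = {"Bear": 0.001, "Base": 0.01, "Bull": 0.05}

for name, share in scenarios.items():
    implied = share * TAM                 # captured share of the TAM
    multiple = implied / CURRENT_MCAP     # upside vs current valuation
    print(f"{name}: {share:.1%} of TAM -> "
          f"${implied / 1e6:.2f}M implied ({multiple:.1f}x current)")
```

Under these inputs the Bear case ($8.00M) actually sits below the current market cap, i.e. a sub-1x multiple, which is worth keeping in mind when reading the scenarios as "upside."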
Thesis
AI agent safety is an emerging regulatory requirement as autonomous agents proliferate. Being first to build decentralized safety benchmarks on Bittensor could create a valuable niche as governments mandate AI safety testing. However, the pivot from distributed training to safety is still early and unproven.
Team
Led by RWH (PhD in experimental quantum physics) and u (pseudonymous). Both have been active in the Bittensor ecosystem since early 2024, with mining experience. Based in the Netherlands. Community-driven, with a Hall of Fame reward system for contributors.
- RWH - Lead - PhD in experimental quantum physics, Bittensor miner since early 2024
- u - Co-lead - Pseudonymous, Bittensor ecosystem veteran since early 2024
Funding
No disclosed VC funding. Funded through TAO emissions.
Traction
SN29 token trading at ~$3.12. Validator software is live on GitHub, with model evaluation against HuggingFace datasets. The previous distributed training model struggled after the dTAO launch; the team is now pivoting to the AI-ASSeSS (AI Agent Safety & Security) framework.
Recent News
- 2026 - Relaunching as AI-ASSeSS focused on AI Agent Safety & Security
- 2026 - Previous distributed training model acknowledged as not working well in dTAO
- 2025 - Published validator software for model evaluation on HuggingFace datasets
Risk
Team admitted their previous model 'did not work well in a dTAO world.' Pivoting to AI agent safety is speculative with no proven traction yet. Pseudonymous leadership adds uncertainty.