Upstream Supply Network
Internal sourcing and routing view that powers our unified AI API service
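The sourcing view above can be sketched as a small routing table: each upstream carries its procurement state and hosted models, and a lookup returns candidate providers for a model, preferring procurement-ready ones. Provider names and fields mirror the cards below; the routing logic itself is illustrative, not the production service.

```python
# Minimal in-memory sketch of the upstream routing view. Entries mirror
# the provider cards in this document; fields and selection rules are
# assumptions for illustration, not the real service implementation.
PROVIDERS = {
    "together-ai":    {"procurement": "ready",   "models": ["llama-2", "mistral", "mixtral"]},
    "groq":           {"procurement": "guarded", "models": ["llama-3.3-70b", "mixtral-8x7b", "gemma-7b"]},
    "amazon-bedrock": {"procurement": "ready",   "models": ["claude-3", "llama-2", "titan"]},
    "siliconflow":    {"procurement": "guarded", "models": ["qwen", "chatglm", "baichuan"]},
}

def upstreams_for(model: str) -> list[str]:
    """Providers that host `model`, procurement-ready ones listed first."""
    hits = [name for name, info in PROVIDERS.items() if model in info["models"]]
    # Stable sort: "ready" providers sort before "guarded" ones.
    return sorted(hits, key=lambda name: PROVIDERS[name]["procurement"] != "ready")

print(upstreams_for("llama-2"))  # → ['together-ai', 'amazon-bedrock']
```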
Together AI
🇺🇸 · Public Welfare · Free
Together AI is a cloud platform designed for running and fine-tuning open-source AI models with fast inference and competitive pricing. The platform provides access to over 50 popular open-source models including Llama 2, Mistral, Mixtral, CodeLlama, and Yi, with support for custom model fine-tuning and deployment. Together AI offers a $5 free credit for new users to explore the platform, along with developer-friendly APIs, comprehensive documentation, and enterprise-grade infrastructure suitable for both research and production environments requiring flexible open-source AI deployment.
Cloud platform access · Procurement ready · Recommended · Not live verified
Operating model
Cloud platform access
Procurement
Procurement ready
Recommendation
Best when your team already runs on the matching cloud and wants unified billing, IAM, and support.
unknown · 14d cadence
Models:
llama-2, mistral, mixtral, +2
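As a concrete example of Together AI's developer-friendly API, here is a hedged sketch of building (not sending) a chat-completions request. The endpoint and JSON shape follow Together's OpenAI-compatible API as publicly documented; the model id is one example, and the key is read from the environment.

```python
# Sketch of a Together AI chat-completions request, built with the
# standard library only. Endpoint, payload shape, and model id are
# taken from Together's public docs and should be re-verified; sending
# the request is left to the caller.
import json
import os
import urllib.request

TOGETHER_URL = "https://api.together.xyz/v1/chat/completions"

def build_chat_request(prompt: str,
                       model: str = "mistralai/Mixtral-8x7B-Instruct-v0.1") -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        TOGETHER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('TOGETHER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Summarize our routing policy in one line.")
print(req.full_url)  # → https://api.together.xyz/v1/chat/completions
```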
Groq
Groq is an ultra-fast AI inference platform that leverages custom-designed LPU (Language Processing Unit) hardware to deliver unprecedented inference speeds for open-source LLMs. The platform provides free access to popular models like Llama 2, Mixtral, and Gemma through an OpenAI-compatible API, making it easy for developers to integrate blazing-fast AI capabilities into their applications. Groq's custom hardware enables token generation speeds up to 10x faster than traditional GPUs, with a generous free tier and competitive pay-per-use pricing for production workloads requiring maximum performance.
Cloud platform access · Procurement guarded · Recommended · Partial
Operating model
Cloud platform access
Procurement
Procurement guarded
Recommendation
Good production candidate when low-latency managed inference on GroqCloud matters more than direct control over every open model host. Live verification is currently partial because some required official source types are blocked from this environment: documentation, pricing.
overdue · 21d cadence
Live Verification
Some required official source types are live-verified, while others are blocked or broken and need follow-up.
2 verified · 2 blocked · 0 broken
official baseline
Models:
llama-3.3-70b, mixtral-8x7b, gemma-7b
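Because Groq (like several upstreams here) exposes an OpenAI-compatible API, an existing OpenAI-style integration mostly needs a different base URL and key. The base URLs below are taken from each provider's public documentation and should be re-verified before rollout; this resolver is an illustrative sketch.

```python
# Tiny endpoint resolver for the OpenAI-compatible upstreams in this
# view. Base URLs are assumptions from each provider's public docs.
OPENAI_COMPATIBLE_BASES = {
    "groq": "https://api.groq.com/openai/v1",
    "together-ai": "https://api.together.xyz/v1",
    "siliconflow": "https://api.siliconflow.cn/v1",
}

def chat_completions_url(provider: str) -> str:
    """Full chat-completions endpoint for a configured upstream."""
    base = OPENAI_COMPATIBLE_BASES.get(provider)
    if base is None:
        raise KeyError(f"no OpenAI-compatible base configured for {provider!r}")
    return f"{base}/chat/completions"

print(chat_completions_url("groq"))  # → https://api.groq.com/openai/v1/chat/completions
```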
Amazon Bedrock
🇺🇸 · Public Welfare · Free
Amazon's fully managed AWS service providing access to foundation models from Anthropic, Meta, Mistral, Cohere, and Amazon's own Titan family through a single API with enterprise security and customization options
Cloud platform access · Procurement ready · Recommended · Not live verified
Operating model
Cloud platform access
Procurement
Procurement ready
Recommendation
Best when your team already runs on the matching cloud and wants unified billing, IAM, and support.
unknown · 14d cadence
Models:
claude-3, llama-2, titan, +1
Quick Start
$npx ccjk -p amazon-bedrock
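Beyond the quick start, Bedrock requests go through AWS's own API rather than an OpenAI-compatible one. Below is a hedged sketch of building the keyword arguments for Bedrock's Converse operation; the shape follows AWS's bedrock-runtime documentation, the model id is one example, and only request construction is shown (in production the dict would be unpacked into `boto3.client("bedrock-runtime").converse(**kwargs)`).

```python
# Sketch of a Bedrock Converse-API request body. Field names follow
# AWS's bedrock-runtime `converse` operation; the model id is an
# example and should be swapped for one enabled in your account.
def converse_kwargs(prompt: str,
                    model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> dict:
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 256, "temperature": 0.2},
    }

kwargs = converse_kwargs("Which Titan model should we route to?")
print(kwargs["modelId"])  # → anthropic.claude-3-haiku-20240307-v1:0
```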
Tencent Hunyuan (混元)
🇨🇳 · Public Welfare · Free
Tencent's Hunyuan models integrated with WeChat and Tencent Cloud
Cloud platform access · Procurement ready · Recommended · Not live verified
Operating model
Cloud platform access
Procurement
Procurement ready
Recommendation
Best when your team already runs on the matching cloud and wants unified billing, IAM, and support.
unknown · 14d cadence
Models:
hunyuan-lite, hunyuan-standard, hunyuan-pro
Quick Start
$npx ccjk -p tencent-hunyuan
SiliconFlow (Silicon Cloud)
🇨🇳 · Public Welfare · Free
SiliconFlow (Silicon Cloud) is a Chinese AI infrastructure platform specializing in fast inference for open-source large language models. The platform provides optimized access to popular Chinese and international models including Qwen, ChatGLM, Baichuan, Yi, and DeepSeek with latency-optimized inference endpoints. SiliconFlow offers competitive pricing with a free tier for testing, making advanced AI models accessible to Chinese developers and businesses. The service features high-performance inference infrastructure, Chinese language optimization, and seamless integration capabilities for enterprises requiring reliable AI model deployment.
Cloud platform access · Procurement guarded · Use with guardrails · Live Verified
Operating model
Cloud platform access
Procurement
Procurement guarded
Recommendation
Useful when you need China-friendly access to many mainstream models from one managed platform, but enterprise rollout should still verify support and billing maturity.
overdue · 14d cadence
Live Verification
All required official source types are currently reachable from this verification environment.
4 verified · 0 blocked · 0 broken
official baseline
Models:
qwen, chatglm, baichuan, +3
Quick Start
$npx ccjk -p siliconflow-api
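The Live Verification rollups on the cards above ("2 verified · 2 blocked · 0 broken" shown as Partial, "4 verified · 0 blocked · 0 broken" shown as Live Verified) can be reproduced with a small status reducer. The exact thresholds here are an assumption inferred from the cards, not the verification pipeline's actual rules.

```python
# Assumed reducer from per-source verification counts to the card
# labels used in this view: all sources verified -> "Live Verified",
# some verified -> "Partial", none verified -> "Not live verified".
def verification_label(verified: int, blocked: int, broken: int) -> str:
    total = verified + blocked + broken
    if total == 0 or verified == 0:
        return "Not live verified"
    if verified == total:
        return "Live Verified"
    return "Partial"

print(verification_label(2, 2, 0))  # → Partial (matches the Groq card)
```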