CCJK

The official toolkit for supercharging Claude Code with zero-config setup, specialist agents, hot-reloadable skills, and multi-provider access.

© 2026 CCJK Maintainers. All rights reserved.

Upstream Supply Network

An internal sourcing and routing view of the upstream providers that power our unified AI API service.

Filters

  • Type: All Types / Public / Hybrid / Commercial
  • Pricing: All Pricing / Free / Freemium / Paid / Subscription
  • Sort By: Most Popular / Highest Rated / Name (A-Z) / Newest First
  • Price Sort: All Pricing / Price: Low to High / Price: High to Low
  • Operating model: All Modes / Direct / Relay / Cloud
  • Baseline: All Baselines / Complete / Partial / Minimal
  • Recommendation: All Verdicts / Recommended / Guardrails / Evaluate Only / Needs Verification / First-Party Preferred
  • Live Verification: All Statuses / Live Verified / Partial / Blocked / Broken
Found 2 providers
Groq

🇺🇸 · Public Welfare · Free

Groq is an ultra-fast AI inference platform built on custom LPU (Language Processing Unit) hardware. It provides free access to popular open-source models such as Llama 3.3, Mixtral, and Gemma through an OpenAI-compatible API, so developers can integrate it with minimal code changes. Groq reports token generation speeds up to 10x faster than traditional GPUs, with a generous free tier and pay-per-use pricing for production workloads that need maximum throughput.
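Because the API is OpenAI-compatible, any OpenAI-style client can target Groq by swapping the base URL and key. A minimal sketch, assuming the publicly documented base URL `https://api.groq.com/openai/v1` and the model ID listed on this card (both should be checked against Groq's current docs):

```python
import json
import os
import urllib.request

# Assumption: Groq's OpenAI-compatible base URL; verify against official docs.
GROQ_BASE_URL = "https://api.groq.com/openai/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request("llama-3.3-70b", "Say hello in one word.")

# Only hit the network when a key is configured; otherwise just show the payload.
api_key = os.environ.get("GROQ_API_KEY")
if api_key:
    req = urllib.request.Request(
        f"{GROQ_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    print(json.dumps(payload, indent=2))
```

The only Groq-specific parts are the base URL, the key, and the model ID; the payload shape is the standard OpenAI chat-completions format.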

Cloud platform access · Procurement guarded · Recommended · Partial
Reviewed: Mar 13
Sources: 5
Confidence: 54%
Next Review: Apr 3 (overdue · 21d cadence)
Operating model: Cloud platform access
Procurement: Procurement guarded
Recommendation: Good production candidate when low-latency managed inference on GroqCloud matters more than direct control over every open-model host. Live verification is currently partial because some required official source types are blocked from this environment: documentation, pricing.
Live Verification: Some required official source types are live-verified, while others are blocked or broken and need follow-up. (2 verified · 2 blocked · 0 broken)
Baseline: complete (official baseline)
Models: llama-3.3-70b, mixtral-8x7b, gemma-7b
Quick Start

$ npx ccjk -p groq
SiliconFlow (Silicon Cloud)

🇨🇳 · Public Welfare · Free

SiliconFlow (Silicon Cloud) is a Chinese AI infrastructure platform specializing in fast inference for open-source large language models. It provides latency-optimized endpoints for popular Chinese and international models, including Qwen, ChatGLM, Baichuan, Yi, and DeepSeek, with competitive pricing and a free tier for testing. The service emphasizes high-performance inference infrastructure, Chinese-language optimization, and straightforward integration for enterprises that need reliable model deployment.
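Since both providers on this page expose OpenAI-style endpoints, the "unified AI API" routing described above reduces to a per-provider lookup of base URL and credential. A minimal sketch; the base URLs and environment-variable names here are assumptions for illustration, not confirmed by this page:

```python
# Hypothetical provider registry: each entry maps a provider slug to an
# assumed OpenAI-compatible base URL and the env var holding its API key.
PROVIDERS = {
    "groq": {
        "base_url": "https://api.groq.com/openai/v1",
        "key_env": "GROQ_API_KEY",
    },
    "siliconflow": {
        "base_url": "https://api.siliconflow.cn/v1",
        "key_env": "SILICONFLOW_API_KEY",
    },
}


def endpoint_for(provider: str) -> str:
    """Return the chat-completions URL for a registered provider."""
    cfg = PROVIDERS[provider]
    return f"{cfg['base_url']}/chat/completions"


def key_env_for(provider: str) -> str:
    """Return the environment variable name that should hold the key."""
    return PROVIDERS[provider]["key_env"]
```

With a table like this, switching upstream providers is a one-line change in the registry rather than a change to request-building code.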

Cloud platform access · Procurement guarded · Use with guardrails · Live Verified
Reviewed: Mar 13
Sources: 4
Confidence: 54%
Next Review: Mar 27 (overdue · 14d cadence)
Operating model: Cloud platform access
Procurement: Procurement guarded
Recommendation: Useful when you need China-friendly access to many mainstream models from one managed platform, but enterprise rollout should still verify support and billing maturity.
Live Verification: All required official source types are currently reachable from this verification environment. (4 verified · 0 blocked · 0 broken)
Baseline: complete (official baseline)
Models: qwen, chatglm, baichuan, +3 more
Quick Start

$ npx ccjk -p siliconflow