CCJK

The official toolkit for supercharging Claude Code with zero-config setup, specialist agents, hot-reloadable skills, and multi-provider access.

© 2026 CCJK Maintainers. All rights reserved.

Upstream Supply Network

Internal sourcing and routing view that powers our unified AI API service

Filters:

  • Type: All Types · Public · Hybrid · Commercial
  • Pricing: All Pricing · Free · Freemium · Paid · Subscription
  • Sort By: Most Popular · Highest Rated · Name (A-Z) · Newest First
  • Price Sort: All Pricing · Price: Low to High · Price: High to Low
  • Operating model: All Modes · Direct · Relay · Cloud
  • Baseline: All Baselines · Complete · Partial · Minimal
  • Recommendation: All Verdicts · Recommended · Guardrails · Evaluate Only · Needs Verification · First-Party Preferred
  • Live Verification: All Statuses · Live Verified · Partial · Blocked · Broken
Found 1 provider
Groq 🇺🇸

Public Welfare · Free

Groq is an ultra-fast AI inference platform built on custom-designed LPU (Language Processing Unit) hardware. It provides free access to popular open-source models such as Llama 3.3, Mixtral, and Gemma through an OpenAI-compatible API, making it straightforward for developers to integrate low-latency inference into their applications. Groq reports token-generation speeds up to 10x faster than traditional GPUs, with a generous free tier and pay-per-use pricing for production workloads that need maximum performance.
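Because the endpoint is OpenAI-compatible, any standard chat-completions client can talk to it. A minimal stdlib-only sketch follows; the base URL matches Groq's published compatibility endpoint, while the `GROQ_API_KEY` environment variable and the exact model id (taken from the card's model list) are assumptions of this example, not guarantees:

```python
import json
import os
import urllib.request

# OpenAI-compatible base URL per Groq's public docs (verify against current docs).
GROQ_BASE_URL = "https://api.groq.com/openai/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str) -> str:
    """POST the payload to Groq and return the first choice's text."""
    req = urllib.request.Request(
        f"{GROQ_BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode("utf-8"),
        headers={
            # GROQ_API_KEY is an assumed environment variable for this sketch.
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Model id taken from the card below; exact ids may differ on GroqCloud.
    print(chat("llama-3.3-70b", "Say hello in five words."))
```

The same payload shape works with any OpenAI-style SDK by pointing its base URL at Groq, which is the usual way providers behind this kind of unified API surface are swapped in.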

Cloud platform access · Procurement guarded · Recommended · Partial
Reviewed: Mar 13
Sources: 5
Confidence: 54%
Next Review: Apr 3 (overdue · 21d cadence)
Operating model: Cloud platform access
Procurement: Procurement guarded
Recommendation: Good production candidate when low-latency managed inference on GroqCloud matters more than direct control over every open-model host. Live verification is currently partial because some required official source types (documentation, pricing) are blocked from this environment.
Live Verification: Some required official source types are live-verified, while others are blocked or broken and need follow-up. 2 verified · 2 blocked · 0 broken
Baseline: Complete (official baseline)
Models: llama-3.3-70b · mixtral-8x7b · gemma-7b
Quick Start

$ npx ccjk -p groq