Multi-Provider Support: Choose Your AI Backend
CCJK supports multiple AI providers. Learn how to configure and switch between Claude, GPT-4, and other models.
CCJK is designed to be provider-agnostic. While optimized for Claude, it supports multiple AI providers, giving you flexibility in choosing the best model for your needs.
Supported Providers
| Provider | Models | Best For |
|---|---|---|
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus | Complex reasoning, code generation |
| OpenAI | GPT-4, GPT-4 Turbo, GPT-3.5 | General tasks, wide compatibility |
| Google | Gemini Pro, Gemini Ultra | Multi-modal tasks |
| Local | Ollama, LM Studio | Privacy, offline use |
| Azure | Azure OpenAI | Enterprise compliance |
Configuration
Basic Provider Setup
```yaml
# .claude/config.yaml
providers:
  default: anthropic
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-sonnet-4-20250514
    max_tokens: 8192
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4-turbo-preview
    max_tokens: 4096
  google:
    api_key: ${GOOGLE_API_KEY}
    model: gemini-pro
  local:
    endpoint: http://localhost:11434
    model: codellama:13b
```
Environment Variables
```bash
# .env or shell profile
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="..."
```
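Before launching, it can help to verify that the key for your chosen provider is actually exported. A minimal sketch (the `check_key` helper is illustrative, not part of CCJK; the key values shown are placeholders):

```bash
# Report whether a named environment variable is set and non-empty.
check_key() {
  name="$1"
  eval "value=\${$name}"
  if [ -z "$value" ]; then
    echo "missing: $name"
  else
    echo "ok: $name"
  fi
}

export ANTHROPIC_API_KEY="sk-ant-example"   # placeholder value
unset OPENAI_API_KEY
check_key ANTHROPIC_API_KEY   # → ok: ANTHROPIC_API_KEY
check_key OPENAI_API_KEY      # → missing: OPENAI_API_KEY
```

Running this before a session surfaces a missing key immediately instead of mid-request.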
Switching Providers
Command Line
```bash
# Use the default provider
ccjk

# Specify a provider
ccjk --provider openai

# Specify a model
ccjk --provider anthropic --model claude-3-opus-20240229
```
In-Session Switching
```
You: /provider openai
Switched to OpenAI (gpt-4-turbo-preview)

You: /provider anthropic
Switched to Anthropic (claude-sonnet-4-20250514)

You: /model claude-3-opus-20240229
Switched to claude-3-opus-20240229
```
Per-Task Provider
```yaml
# .claude/skills/complex-analysis.yaml
name: complex-analysis
provider: anthropic
model: claude-3-opus-20240229  # Use Opus for complex tasks
prompt: |
  Perform deep analysis of...
```
Provider-Specific Features
Anthropic (Claude)
Best for:
- Complex code generation
- Multi-file refactoring
- Nuanced code review
Configuration:
```yaml
anthropic:
  model: claude-sonnet-4-20250514
  max_tokens: 8192
  features:
    extended_thinking: true  # For complex problems
    artifacts: true          # For structured output
```
OpenAI (GPT-4)
Best for:
- Quick tasks
- Wide language support
- Function calling
Configuration:
```yaml
openai:
  model: gpt-4-turbo-preview
  max_tokens: 4096
  features:
    json_mode: true
    function_calling: true
    vision: true  # For GPT-4V
```
Google (Gemini)
Best for:
- Multi-modal tasks
- Long context windows
- Google Cloud integration
Configuration:
```yaml
google:
  model: gemini-pro
  max_tokens: 8192
  features:
    multi_modal: true
    long_context: true
```
Local Models (Ollama)
Best for:
- Privacy-sensitive code
- Offline development
- Cost savings
Configuration:
```yaml
local:
  endpoint: http://localhost:11434
  model: codellama:13b
  options:
    num_ctx: 4096
    temperature: 0.7
```
Fallback Configuration
Automatic Fallback
```yaml
providers:
  default: anthropic
  fallback:
    - provider: openai
      condition: rate_limit
    - provider: local
      condition: api_error
  anthropic:
    model: claude-sonnet-4-20250514
    rate_limit_fallback: openai
  openai:
    model: gpt-4-turbo-preview
```
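Conceptually, this fallback chain is an ordered probe: try each provider in turn and use the first healthy one. A minimal sketch, where `provider_ok` is a hypothetical stand-in for a real rate-limit or health check:

```bash
# Stub health check; a real implementation would query the provider's API.
# Here we simulate anthropic being rate-limited.
provider_ok() {
  [ "$1" != "anthropic" ]
}

# Return the first provider in the configured order that passes the check.
pick_provider() {
  for p in anthropic openai local; do
    if provider_ok "$p"; then
      echo "$p"
      return 0
    fi
  done
  return 1
}

pick_provider   # → openai
```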
Cost-Based Routing
```yaml
routing:
  # Use cheaper models for simple tasks
  simple_tasks:
    provider: openai
    model: gpt-3.5-turbo
  # Use powerful models for complex tasks
  complex_tasks:
    provider: anthropic
    model: claude-3-opus-20240229
  # Use local for sensitive code
  sensitive:
    provider: local
    model: codellama:13b
```
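The routing table reduces to a lookup from task class to provider/model. A sketch of that mapping as a shell function (the `route` helper and its fallthrough default are illustrative assumptions, not CCJK internals):

```bash
# Map a task class to "provider/model", mirroring the routing config above.
route() {
  case "$1" in
    simple)    echo "openai/gpt-3.5-turbo" ;;
    complex)   echo "anthropic/claude-3-opus-20240229" ;;
    sensitive) echo "local/codellama:13b" ;;
    *)         echo "anthropic/claude-sonnet-4-20250514" ;;  # assumed default
  esac
}

route sensitive   # → local/codellama:13b
```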
Setting Up Local Models
Ollama Setup
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a coding model
ollama pull codellama:13b

# Or a general model
ollama pull llama2:13b

# Start the server
ollama serve
```
LM Studio Setup
1. Download LM Studio from lmstudio.ai
2. Download a model (e.g., CodeLlama, Mistral)
3. Start the local server
4. Configure CCJK:

```yaml
local:
  endpoint: http://localhost:1234/v1
  model: local-model
  api_type: openai_compatible
```
Enterprise Configuration
Azure OpenAI
```yaml
azure:
  endpoint: https://your-resource.openai.azure.com
  api_key: ${AZURE_OPENAI_KEY}
  api_version: "2024-02-15-preview"
  deployment: your-gpt4-deployment
```
AWS Bedrock
```yaml
bedrock:
  region: us-east-1
  model: anthropic.claude-3-sonnet-20240229-v1:0
  credentials:
    access_key: ${AWS_ACCESS_KEY}
    secret_key: ${AWS_SECRET_KEY}
```
Private Deployment
```yaml
private:
  endpoint: https://ai.internal.company.com
  api_key: ${INTERNAL_API_KEY}
  model: company-model-v2
  tls:
    ca_cert: /path/to/ca.crt
    client_cert: /path/to/client.crt
```
Comparing Providers
Performance Comparison
Run benchmarks:
```bash
ccjk benchmark --providers anthropic,openai,local --task code-review
```
Output:
```
Provider Benchmark Results
==========================
Task: Code Review (500 lines)

| Provider  | Model             | Time  | Quality | Cost   |
|-----------|-------------------|-------|---------|--------|
| Anthropic | claude-3.5-sonnet | 12.3s | 9.2/10  | $0.045 |
| OpenAI    | gpt-4-turbo       | 15.1s | 8.8/10  | $0.062 |
| Local     | codellama:13b     | 28.4s | 7.1/10  | $0.00  |
```
Cost Analysis
```bash
ccjk cost --period month --by-provider
```

```
Monthly Cost Analysis
=====================
Anthropic: $45.20 (1,200 requests)
OpenAI:    $12.50 (450 requests)
Local:     $0.00 (800 requests)

Total: $57.70
Estimated savings from local: $28.00
```
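The report's total is just the sum of the per-provider figures, which is easy to sanity-check yourself (a trivial sketch using `awk` for the floating-point arithmetic):

```bash
# Reproduce the monthly total from the per-provider figures above.
total=$(awk 'BEGIN { printf "%.2f", 45.20 + 12.50 + 0.00 }')
echo "Total: \$$total"   # → Total: $57.70
```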
Best Practices
1. Match Model to Task
```yaml
task_routing:
  # Quick questions → fast, cheap model
  quick:
    provider: openai
    model: gpt-3.5-turbo
  # Code generation → balanced model
  code:
    provider: anthropic
    model: claude-sonnet-4-20250514
  # Architecture decisions → powerful model
  architecture:
    provider: anthropic
    model: claude-3-opus-20240229
  # Sensitive code → local model
  sensitive:
    provider: local
    model: codellama:13b
```
2. Set Spending Limits
```yaml
limits:
  daily_spend: 10.00
  monthly_spend: 200.00
  alert_threshold: 0.8  # Alert at 80%
  per_request:
    max_tokens: 4096
    max_cost: 0.50
```
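With these values, alerts fire at 80% of each cap. Assuming the threshold applies to both the daily and monthly limits, the implied alert points work out to:

```bash
# Alert points implied by alert_threshold: 0.8 applied to each spend cap.
daily_alert=$(awk 'BEGIN { printf "%.2f", 10.00 * 0.8 }')
monthly_alert=$(awk 'BEGIN { printf "%.2f", 200.00 * 0.8 }')
echo "daily alert at \$$daily_alert"      # → daily alert at $8.00
echo "monthly alert at \$$monthly_alert"  # → monthly alert at $160.00
```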
3. Monitor Usage
```bash
# View usage statistics
ccjk stats --period week

# Export for analysis
ccjk stats --export csv --output usage.csv
```
4. Test Before Switching
```bash
# Test a provider before making it default
ccjk test-provider openai --task "Review this code..."

# Compare outputs
ccjk compare --providers anthropic,openai --task "Implement..."
```
Troubleshooting
Provider Connection Issues
```bash
# Test connectivity
ccjk diagnose --provider anthropic

# Check API key
ccjk verify-key --provider openai
```
Model Not Available
```yaml
# Configure fallback for unavailable models
anthropic:
  model: claude-3-opus-20240229
  fallback_model: claude-sonnet-4-20250514
```
Rate Limiting
```yaml
rate_limiting:
  retry_attempts: 3
  retry_delay: 1000  # ms
  exponential_backoff: true
  fallback_on_limit: true
```
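With `retry_delay: 1000` and exponential backoff, the wait time doubles on each attempt. A quick sketch of the schedule these settings imply (assuming a simple doubling policy with no jitter):

```bash
# Retry schedule for retry_attempts: 3, retry_delay: 1000ms,
# exponential_backoff: true — the delay doubles after each attempt.
delay=1000
for attempt in 1 2 3; do
  echo "attempt $attempt: wait ${delay}ms"
  delay=$((delay * 2))
done
```

So the three retries wait 1000ms, 2000ms, and 4000ms before CCJK falls back (per `fallback_on_limit`).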
Conclusion
Multi-provider support gives you flexibility to:
- Choose the best model for each task
- Manage costs effectively
- Maintain privacy with local models
- Ensure availability with fallbacks
Start with a single provider, then expand as you understand your needs.
Next: Return to Getting Started to review the basics, or explore our provider directory for the full list of supported upstream options.
Related Articles
Team Collaboration with CCJK: Shared AI Workflows
Learn how to set up CCJK for team environments. Share configurations, skills, and best practices across your development team.
Advanced Prompt Engineering for AI-Assisted Development
Master the art of crafting effective prompts to maximize your productivity with AI coding assistants. Learn proven techniques used by expert developers.
Understanding AI Agents: Autonomous Coding Assistants
Explore how AI agents in CCJK can autonomously handle complex, multi-step development tasks while you focus on high-level decisions.