Providers
Squad.NET providers all implement IChatModel, so the same squad can run against fake, local, hosted, or AWS-backed models.
```csharp
// Each provider wraps a different backend behind the same IChatModel interface.
var openAi = new OpenAIChatModel("gpt-4.1-mini", apiKey);                                   // hosted OpenAI
var openRouter = new OpenRouterChatModel("openai/gpt-oss-120b:free", apiKey);              // hosted, multi-vendor routing
var bedrock = new BedrockChatModel("anthropic.claude-3-haiku-20240307-v1:0", "us-east-1"); // AWS Bedrock
var ollama = new OllamaChatModel("gpt-oss:120b-cloud");                                     // local Ollama server
var lmStudio = new LMStudioChatModel("qwen2.5-coder-7b");                                   // local LM Studio server
```
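Because every provider implements IChatModel, the model is the only line that changes when a squad moves between backends. A minimal sketch of that swap, assuming a `Squad` type built from an `IChatModel` with a `RunAsync` entry point (these construction names are illustrative, not the confirmed Squad.NET API):

```csharp
// Hypothetical sketch: Squad(...) and RunAsync(...) are assumed names for squad
// construction and invocation; substitute your actual squad setup.
bool useLocal = Environment.GetEnvironmentVariable("USE_LOCAL_MODEL") == "1";
string apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? "";

IChatModel model = useLocal
    ? new OllamaChatModel("gpt-oss:120b-cloud")    // local, no API key required
    : new OpenAIChatModel("gpt-4.1-mini", apiKey); // hosted OpenAI endpoint

var squad = new Squad(model);                      // identical squad code either way
var reply = await squad.RunAsync("Summarize today's standup notes.");
Console.WriteLine(reply);
```

Nothing downstream of the IChatModel variable needs to know which backend answered.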
Comparison
| Provider | Package | API key | Default base URL | Tools | Best use case |
|---|---|---|---|---|---|
| OpenAI-compatible | Squad.OpenAI | Yes | https://api.openai.com/v1 | Yes | OpenAI or compatible hosted endpoints. |
| OpenRouter | Squad.OpenRouter | Yes | https://openrouter.ai/api/v1 | Yes | Trying many hosted model families quickly. |
| Bedrock | Squad.Bedrock | AWS credentials | Region-derived | Yes | AWS-centered production apps. |
| Ollama | Squad.Ollama | No by default | http://localhost:11434/v1 | Yes | Local prototyping and LAN-hosted models. |
| LM Studio | Squad.LMStudio | No by default | http://localhost:1234/v1 | Yes | Desktop/local model experiments. |
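The default base URLs above assume the model runs on the same machine; for a LAN-hosted Ollama server or a non-standard LM Studio port you would point the provider elsewhere. A minimal sketch, assuming the local providers accept an optional `baseUrl` argument (the parameter name is an illustration; check the provider package for the real overload):

```csharp
// Assumed overloads: the baseUrl parameter shown here is illustrative, not a
// confirmed constructor signature from Squad.Ollama or Squad.LMStudio.
var lanOllama = new OllamaChatModel(
    "gpt-oss:120b-cloud",
    baseUrl: "http://192.168.1.50:11434/v1"); // Ollama running on another LAN machine

var lmStudioLocal = new LMStudioChatModel(
    "qwen2.5-coder-7b",
    baseUrl: "http://localhost:1234/v1");     // the documented default, stated explicitly
```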
Choosing A Provider
- Start with Ollama or LM Studio when you want local iteration without an API key.
- Use FakeChatModel for deterministic tests, scripted tool-call examples, and failure simulations (see the sketch after this list).
- Use OpenRouter when you want to compare hosted models quickly.
- Use Squad.OpenAI when you have OpenAI or an OpenAI-compatible endpoint.
- Use Bedrock when your deployment and credentials already live in AWS.
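The FakeChatModel bullet above is easiest to see in a test. A minimal sketch, assuming FakeChatModel can be seeded with scripted replies and dropped into the same squad construction used earlier (both are assumptions about the API, not confirmed signatures):

```csharp
using System.Diagnostics;

// Hypothetical sketch: the FakeChatModel seeding constructor and the
// Squad/RunAsync calls are assumed API shapes, shown only to illustrate the idea.
var fake = new FakeChatModel(new[]
{
    "scripted reply #1",
    "scripted reply #2",
});

var squad = new Squad(fake);
var reply = await squad.RunAsync("any prompt");

Debug.Assert(reply == "scripted reply #1"); // deterministic: no network, no API key
```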
Next
Pick a provider page: OpenAI-compatible, OpenRouter, Bedrock, Ollama, or LM Studio.