Providers

Squad.NET providers all implement IChatModel, so the same squad can run against fake, local, hosted, or AWS-backed models.

var openAi = new OpenAIChatModel("gpt-4.1-mini", apiKey);                          // hosted OpenAI (API key required)
var openRouter = new OpenRouterChatModel("openai/gpt-oss-120b:free", apiKey);      // hosted, many model families (API key required)
var bedrock = new BedrockChatModel("anthropic.claude-3-haiku-20240307-v1:0", "us-east-1"); // AWS credentials + region
var ollama = new OllamaChatModel("gpt-oss:120b-cloud");                            // local by default, no API key
var lmStudio = new LMStudioChatModel("qwen2.5-coder-7b");                          // local desktop server, no API key

Comparison

| Provider          | Package          | API key         | Default base URL              | Tools | Best use case                             |
|-------------------|------------------|-----------------|-------------------------------|-------|-------------------------------------------|
| OpenAI-compatible | Squad.OpenAI     | Yes             | https://api.openai.com/v1     | Yes   | OpenAI or compatible hosted endpoints.    |
| OpenRouter        | Squad.OpenRouter | Yes             | https://openrouter.ai/api/v1  | Yes   | Trying many hosted model families quickly.|
| Bedrock           | Squad.Bedrock    | AWS credentials | Region-derived                | Yes   | AWS-centered production apps.             |
| Ollama            | Squad.Ollama     | No by default   | http://localhost:11434/v1     | Yes   | Local prototyping and LAN-hosted models.  |
| LM Studio         | Squad.LMStudio   | No by default   | http://localhost:1234/v1      | Yes   | Desktop/local model experiments.          |

Choosing a Provider

  • Start with Ollama or LM Studio when you want local iteration without an API key.
  • Use FakeChatModel for deterministic tests, scripted tool-call examples, and failure simulations.
  • Use OpenRouter when you want to compare hosted models quickly.
  • Use Squad.OpenAI when you have OpenAI or an OpenAI-compatible endpoint.
  • Use Bedrock when your deployment and credentials already live in AWS.
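Because every provider implements IChatModel, the choice can be isolated to one line at startup while the rest of the app stays provider-agnostic. A minimal sketch using only the constructors shown above; the `useLocal` flag and `apiKey` variable are illustrative placeholders, not library symbols:

```csharp
// Pick a provider once at startup; downstream code depends only on IChatModel.
// useLocal and apiKey are illustrative, not part of the Squad.NET API.
IChatModel model = useLocal
    ? new OllamaChatModel("gpt-oss:120b-cloud")     // local iteration, no API key by default
    : new OpenAIChatModel("gpt-4.1-mini", apiKey);  // hosted endpoint, API key required
```

Swapping in OpenRouter, Bedrock, or LM Studio is the same one-line change, since each model type satisfies the same interface.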

Next

Pick a provider page: OpenAI-compatible, OpenRouter, Bedrock, Ollama, or LM Studio.