.NET-native agent workflows
Build multi-agent workflows in C# without ceremony.
Squad.NET gives you agents, tasks, tools, structured output, provider routing, and ASP.NET hosting with local-first defaults.
```shell
dotnet add package Squad.NET --version 1.0.2
```

```csharp
using Squad.Abstractions;
using Squad.Core;
using Squad.Ollama;

var squad = SquadBuilder.Create("LocalSquad")
    .AddAgent(AgentDef.Create("Assistant", "Local assistant", "Answer clearly"))
    .AddTask(TaskDef.Create("Answer", "Answer the user's question."))
    .WithModel(new OllamaChatModel("gpt-oss:120b-cloud"))
    .Build();

var result = await squad.RunAsync(
    SquadInput.FromText("Give me one practical reason to prototype locally."));

Console.WriteLine(result.OutputText);
```
Choose the path you need.
Prototype with local providers or FakeChatModel, move to hosted endpoints, add tools, or expose the squad over HTTP.
- **Prototype fast:** Use Ollama or LM Studio for real local runs, or FakeChatModel for deterministic tests.
- **Use a hosted provider:** Switch the same squad to OpenAI-compatible endpoints, OpenRouter, or Bedrock.
- **Build with tools:** Let models call typed C# tools and inspect the tool traces afterward.
- **Expose an API:** Map execute and streaming endpoints from ASP.NET Core with Squad.Hosting.
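For the deterministic-test path, the fake model slots into the same builder as any other provider. A minimal sketch, assuming `FakeChatModel` takes a canned reply string in its constructor (an assumption; check the package docs for the exact signature):

```csharp
using Squad.Abstractions;
using Squad.Core;

// Sketch only: the FakeChatModel constructor argument is assumed, and the
// builder calls mirror the quickstart above.
var squad = SquadBuilder.Create("TestSquad")
    .AddAgent(AgentDef.Create("Assistant", "Test assistant", "Answer clearly"))
    .AddTask(TaskDef.Create("Answer", "Answer the user's question."))
    .WithModel(new FakeChatModel("canned reply")) // hypothetical constructor
    .Build();

var result = await squad.RunAsync(SquadInput.FromText("ping"));

// A fake model makes assertions stable across runs: no network, no sampling.
Console.WriteLine(result.OutputText);
```

Because the fake never touches the network, test runs are fast and reproducible regardless of which real provider you ship with.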
Switch providers with a single model object.
The happy-path examples below use convenience constructors; advanced `HttpClient` overloads remain available for tests, proxies, and app-managed client lifetimes.
```csharp
var openAi = new OpenAIChatModel("gpt-4.1-mini", apiKey);
var openRouter = new OpenRouterChatModel("openai/gpt-oss-120b:free", apiKey);
var bedrock = new BedrockChatModel("anthropic.claude-3-haiku-20240307-v1:0", "us-east-1");
var ollama = new OllamaChatModel("gpt-oss:120b-cloud");
var lmStudio = new LMStudioChatModel("qwen2.5-coder-7b");
```
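Since each provider is just a model object, moving a prototype to a hosted endpoint is a one-line change. A sketch, assuming the builder API from the quickstart and that a builder can be configured with either model before `Build()`:

```csharp
// Sketch only: builder calls mirror the quickstart; whether one builder
// instance may be built twice is an assumption about the API.
var builder = SquadBuilder.Create("PortableSquad")
    .AddAgent(AgentDef.Create("Assistant", "General assistant", "Answer clearly"))
    .AddTask(TaskDef.Create("Answer", "Answer the user's question."));

// Local prototype:
var local = builder.WithModel(new OllamaChatModel("gpt-oss:120b-cloud")).Build();

// Hosted run: only the model object changes; agents and tasks are untouched.
var hosted = builder.WithModel(new OpenAIChatModel("gpt-4.1-mini", apiKey)).Build();
```

Keeping agent and task definitions provider-agnostic is what makes the local-first default cheap: nothing in the squad depends on which endpoint ultimately serves the model.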