Microsoft Copilot
Copilot's LLM models

Microsoft Copilot (as of January 2026) primarily uses large language models (LLMs) from OpenAI as its core foundation, with expanding support for other providers in certain scenarios. Here's a breakdown of the main models powering different versions of Copilot:

Consumer / General Copilot (copilot.microsoft.com, Windows, mobile apps, etc.)
• Primary / default model: OpenAI's GPT-5 (or the latest iteration, such as the GPT-5.x series) for most experiences, including chat, reasoning, and web-grounded responses.
• It uses Microsoft's Prometheus orchestration layer, which combines LLM reasoning with Bing search grounding for real-time, cited answers.
• Earlier models like GPT-4o are still referenced in some contexts, but GPT-5 has become the main driver for smart/default modes.

Microsoft 365 Copilot (enterprise/work version integrated into Word, Excel, Teams, etc.)
• Core foundation: OpenAI's latest models (heavily the GPT-5 family for reasoning, writing, summarization, and structured tasks).
• Multi-model support: as of late 2025, Microsoft expanded the choices:
  o OpenAI models remain central.
  o Anthropic's Claude models (e.g., Claude Sonnet 4, Claude Opus 4.1, and likely newer versions such as 4.5) are now available, especially for the Researcher agent (deep multistep research) and when building agents in Copilot Studio.
• Routing is dynamic: the system selects, or lets users/admins choose, the best model for the task (e.g., deep reasoning vs. quick responses).
• Responses are grounded in your Microsoft Graph data (emails, files, meetings, etc.) for enterprise privacy and relevance.

GitHub Copilot (code-focused)
• Supports a wide range of models from multiple providers.
• The OpenAI family dominates: GPT-5, GPT-5.1, GPT-5.2, GPT-5-Codex variants, GPT-5 mini, etc. (some older GPT-5 versions are being deprecated in early 2026 in favor of newer ones).
• Also includes options from Anthropic (Claude variants), Google (Gemini series), and others, depending on your plan/subscription.
Other notes
• Microsoft has introduced some in-house models (e.g., the MAI series for voice/speech features in Copilot Daily/Podcasts, or lightweight Phi models in certain local/Edge scenarios), but these are supplementary rather than replacements for the main LLMs.
• In Copilot Studio (for building custom agents), you can explicitly select from OpenAI, Anthropic, or other Azure-hosted models.
• The exact model can vary by feature, prompt complexity, subscription tier (free vs. Pro/enterprise), and region; Microsoft often uses dynamic routing for cost, speed, and quality.

In short, Copilot is no longer a single-model product: it's increasingly multi-LLM, but OpenAI's GPT series (especially the GPT-5 family in 2026) remains the backbone for most everyday use. If you're asking about a specific Copilot variant (e.g., M365, GitHub, or the free web version), let me know for more tailored details!
Note: There isn't really a dedicated "Microsoft Copilot API"; the term is ambiguous across several Microsoft products. The closest option is the GitHub Models API (released May 2025), which provides access to GPT-4.1, Claude, Gemini, and other models via a GitHub subscription. The good news is that this API is OpenAI-compatible, so it already works with our existing ProviderType = "Generic" option. If a customer wants to use their GitHub Copilot subscription for AI features, they can:
1. Create a ModelProvider with ProviderType = "Generic"
2. Set the endpoint to https://models.github.ai/inference/chat/completions
3. Use a GitHub PAT as the API key
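Because the endpoint is OpenAI-compatible, the request our "Generic" provider would send is just a standard chat-completions POST. A minimal sketch of what that request looks like, assuming the endpoint and PAT-as-API-key setup described above (the `build_request` helper and the `openai/gpt-4.1` model id are illustrative, not part of any SDK):

```python
# Sketch: an OpenAI-style chat-completions request against the
# GitHub Models endpoint. `build_request` is a hypothetical helper
# for illustration; only the URL, headers, and payload shape matter.
import json
import os
import urllib.request

GITHUB_MODELS_ENDPOINT = "https://models.github.ai/inference/chat/completions"

def build_request(token: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-compatible chat-completions request.

    The payload shape (model / messages / role / content) follows the
    standard OpenAI Chat Completions format.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GITHUB_MODELS_ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # GitHub PAT used as the API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    token = os.environ.get("GITHUB_TOKEN", "")
    req = build_request(token, "openai/gpt-4.1", "Say hello")
    # Actually sending it requires a valid PAT with Models access:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```

Any OpenAI-compatible client (e.g., the OpenAI SDK with a custom base URL) would produce an equivalent request, which is why the existing "Generic" provider works unchanged.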