GPT-5.4 mini Now Generally Available in GitHub Copilot


GitHub has made GPT-5.4 mini, OpenAI's latest fast variant of its agentic coding model, generally available across GitHub Copilot. The model is positioned as OpenAI's highest-performing mini model to date, delivering the fastest time to first token of any model in Copilot while excelling at codebase exploration and grep-style tool use. GPT-5.4 mini is accessible to Copilot Pro, Pro+, Business, and Enterprise subscribers via the model picker across all major IDEs and platforms, launching with a 0.33x premium request multiplier (noted as tentative and subject to change). Enterprise and Business administrators must enable the GPT-5.4 mini policy in Copilot settings before their organizations can access it.


What Is GPT-5.4 mini?

GPT-5.4 mini is OpenAI's latest fast variant of its agentic coding model family, and it is now generally available to GitHub Copilot subscribers. Positioned as OpenAI's highest-performing mini model to date, GPT-5.4 mini prioritizes speed and efficiency without sacrificing coding capability, making it especially well-suited to the rapid, high-frequency operations that agentic coding workflows demand.

The model achieves the fastest time to first token among the models available in GitHub Copilot, a metric that matters significantly in interactive coding sessions, where latency directly affects the developer experience. Beyond raw speed, GPT-5.4 mini demonstrates particular strength in codebase exploration and grep-style tool use, capabilities that are central to how the Copilot coding agent navigates repositories and executes multi-step tasks.

Availability and Pricing

GPT-5.4 mini is available immediately to subscribers on Copilot Pro, Pro+, Business, and Enterprise plans. Developers can select the model via the model picker across all major supported surfaces: VS Code, Visual Studio, JetBrains IDEs, Xcode, Eclipse, github.com, GitHub Mobile, and the GitHub CLI.

The model launches with a 0.33x premium request multiplier, meaning it consumes significantly fewer premium requests per interaction than the full GPT-5.4 model. GitHub notes this pricing is tentative and subject to change as the model matures in production.
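To make the multiplier concrete, here is a minimal sketch of the arithmetic, assuming each interaction deducts (1 request × multiplier) from a monthly premium request allowance. The `requests_affordable` helper, the 300-request allowance, and the 1.0x baseline for the full model are illustrative assumptions, not GitHub's billing implementation.

```python
# Illustrative sketch: how a premium request multiplier stretches a
# monthly allowance. Assumes each interaction costs (1 x multiplier)
# premium requests; allowance and baseline values are hypothetical.

def requests_affordable(allowance: int, multiplier: float) -> int:
    """Return how many model interactions fit within the allowance."""
    return int(allowance // multiplier)

MINI_MULTIPLIER = 0.33   # GPT-5.4 mini launch multiplier (tentative)
FULL_MULTIPLIER = 1.0    # assumed baseline for a full-size model

allowance = 300  # hypothetical monthly premium request allowance

print(requests_affordable(allowance, MINI_MULTIPLIER))  # 909
print(requests_affordable(allowance, FULL_MULTIPLIER))  # 300
```

Under these assumptions, the same allowance covers roughly three times as many GPT-5.4 mini interactions as full-model interactions.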

Admin Configuration for Business and Enterprise

Organizations on Copilot Business or Enterprise plans cannot access GPT-5.4 mini by default. Administrators must first enable the model through the Copilot settings policy panel before it becomes available to users in their organization. This follows GitHub's standard governance model for new model rollouts, which gives organizations control over which AI models their developers can access.

Why GPT-5.4 mini Matters for Agentic Workflows

GPT-5.4 mini is designed with the subagent era in mind. As GitHub Copilot's coding agent grows more sophisticated (running parallel subtasks, exploring repositories autonomously, and executing long-horizon coding plans), there is a growing need for fast, efficient models that can handle the high volume of search, navigation, and retrieval operations that agentic tasks require.

Its strength in codebase exploration and grep-style search makes it a natural candidate for these subagent roles, running alongside or under a coordinating model like the full GPT-5.4 to execute specific subtasks at speed. The combination of low latency, strong tool use, and an efficient cost multiplier positions GPT-5.4 mini as a practical building block for agentic coding pipelines within Copilot.
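The coordinator/subagent division of labor described above can be sketched as a simple routing rule: high-volume exploration subtasks go to the fast mini model, while planning stays with the full model. The `Subtask` type, the `route` function, and the task kinds below are hypothetical stand-ins for illustration, not a real Copilot API.

```python
# Hedged sketch of the subagent pattern: a coordinating model delegates
# high-volume search/navigation work to a faster, cheaper model.
# All names here are illustrative, not part of any Copilot interface.

from dataclasses import dataclass

@dataclass
class Subtask:
    kind: str     # e.g. "grep", "read_file", "plan"
    payload: str

def route(task: Subtask) -> str:
    """Route each subtask to the model tier suited to it."""
    # The mini model handles repository exploration and grep-style search.
    if task.kind in {"grep", "read_file"}:
        return "gpt-5.4-mini"
    # The full model handles planning and long-horizon reasoning.
    return "gpt-5.4"

tasks = [
    Subtask("plan", "outline fix for flaky test"),
    Subtask("grep", "find callers of parse_config()"),
    Subtask("read_file", "src/config.py"),
]
assignments = [route(t) for t in tasks]
print(assignments)  # ['gpt-5.4', 'gpt-5.4-mini', 'gpt-5.4-mini']
```

The design choice mirrors the article's point: latency-sensitive, frequent operations dominate agentic workloads, so routing them to the cheapest fast tier keeps both cost and wall-clock time down.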