GPT-5.5 Instant: OpenAI's New Default Model Cuts Hallucinations by 52%


OpenAI launched GPT-5.5 Instant on May 5, 2026, replacing GPT-4o as the default model across ChatGPT and the Codex ecosystem. The model delivers a 52% reduction in hallucinations compared to GPT-4o as measured on the SimpleQA benchmark, alongside meaningful improvements in instruction-following accuracy. GPT-5.5 Instant is positioned as a faster, more reliable successor with sharper reasoning and fewer factual errors, making it the new baseline for developers building on the OpenAI platform.


A New Default Model for ChatGPT and the OpenAI Platform

OpenAI replaced GPT-4o as the default model in ChatGPT and across its developer platform on May 5, 2026, introducing GPT-5.5 Instant as the new standard. The transition marks one of the most significant model upgrades in recent memory, driven by measurable improvements in factual accuracy, instruction-following, and response quality rather than raw benchmark performance alone.

GPT-5.5 Instant is designed to feel noticeably smarter in everyday use: more reliable, less prone to confabulation, and better at parsing complex multi-step instructions. For developers using the OpenAI API and Codex-powered tools, the shift means existing integrations automatically benefit from a stronger baseline without requiring prompt rewrites or code changes.

Hallucinations Slashed by More Than Half

The headline figure from OpenAI is a 52% reduction in hallucinations versus GPT-4o, as measured on the SimpleQA benchmark. SimpleQA tests models on factual recall across a wide range of topics, penalizing confident but incorrect answers. A 52% reduction signals a substantive architectural or training improvement, not just incremental tuning.

For developers building knowledge-intensive applications such as legal research tools, medical information systems, customer support bots, and educational platforms, this is a meaningful change. Fewer hallucinations mean fewer guardrail systems needed downstream, lower error-correction overhead, and more trustworthy outputs in production.
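Those downstream guardrails typically look something like the sketch below: a post-hoc check that cross-references facts in a model's answer against a trusted source and flags anything unverifiable. The function and data here are purely illustrative, not part of any OpenAI API; a lower hallucination rate reduces how often such checks fire, but does not remove the need for them.

```python
# Hypothetical downstream guardrail for a knowledge-intensive app:
# cross-check each claimed fact against a trusted reference and
# return the ones that disagree, so they can be escalated for review.

def verify_claims(claims: dict[str, str], trusted: dict[str, str]) -> list[str]:
    """Return keys whose claimed values are absent from, or disagree
    with, the trusted reference data."""
    return [key for key, value in claims.items() if trusted.get(key) != value]


# Example: one hallucinated value gets flagged, the correct one passes.
flagged = verify_claims(
    {"capital_of_france": "Paris", "capital_of_australia": "Sydney"},
    {"capital_of_france": "Paris", "capital_of_australia": "Canberra"},
)
```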

Instruction-Following Improvements

Beyond factual accuracy, GPT-5.5 Instant demonstrates stronger instruction-following compared to its predecessor. The model is more precise at adhering to system prompts, respecting output format constraints, and handling nuanced or conflicting instructions. This improvement is particularly relevant for Codex users who rely on structured prompts to guide code generation, file editing, and multi-step agentic tasks.

In practice, developers should expect fewer instances of the model ignoring specific instructions or drifting from the intended output format, a recurring source of friction with earlier GPT-4o deployments.
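Even with stronger instruction-following, production code should still validate that a structured-output constraint was actually respected. The helper below is a minimal, hypothetical sketch of such a check for a prompt that asks for JSON with specific keys; it is application code, not an OpenAI API feature.

```python
import json


def validate_json_output(raw: str, required_keys: set[str]) -> dict:
    """Parse a model response that was instructed to return JSON and
    verify it contains the required keys; raise ValueError otherwise.

    Hypothetical downstream check: fewer format drifts mean this fires
    less often, but the validation layer itself should stay in place."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"model output missing keys: {sorted(missing)}")
    return data
```

A caller would wrap this in a retry loop, re-prompting the model when validation fails.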

Rollout and Availability

GPT-5.5 Instant became the default model for all ChatGPT tiers on May 5, 2026. It is also available via the OpenAI API under the model identifier gpt-5.5-instant. Developers who explicitly pin to gpt-4o in their API calls will continue using GPT-4o until they opt in; those using the gpt-4o-latest alias or the ChatGPT interface will be automatically migrated to GPT-5.5 Instant.

The model sits below the full GPT-5.5 in the capability hierarchy, trading some peak reasoning depth for higher throughput and lower latency. For most everyday tasks, the difference is imperceptible. For the most computationally demanding reasoning chains, developers can continue to use the full GPT-5.5 model.
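One common pattern for such a two-tier hierarchy is a simple router that escalates only demanding tasks to the larger model. The heuristic below (a length threshold plus an explicit flag) is purely illustrative; real routing criteria would be tuned per application.

```python
# Hypothetical router between the fast default and the full model.
# Identifiers follow the article; the escalation heuristic is an
# assumption for illustration, not OpenAI guidance.

INSTANT = "gpt-5.5-instant"  # faster, lower latency
FULL = "gpt-5.5"             # deeper reasoning, more compute


def choose_model(prompt: str, needs_deep_reasoning: bool = False) -> str:
    """Send explicitly flagged or very long prompts to the full model;
    route everything else to the cheaper, faster default."""
    if needs_deep_reasoning or len(prompt) > 8000:
        return FULL
    return INSTANT
```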

What This Means for Codex Users

For Codex CLI and Codex App users, GPT-5.5 Instant as the platform default means the underlying model powering code generation, file editing, shell command execution, and agent workflows has quietly become more capable. The reduction in hallucinations is particularly relevant for Codex's code explanation and documentation features, where factual accuracy about API signatures, library behavior, and language semantics directly impacts developer productivity.

OpenAI has not announced a separate Codex-specific variant of GPT-5.5 Instant (analogous to GPT-5.3-Codex-Spark), but the general model improvements propagate through all Codex-powered features that rely on the standard OpenAI API stack.