Gemini Deep Research API: Collaborative Planning, MCP Integration, and Max Mode
Google released updated Deep Research and Deep Research Max agents via the Gemini API on April 21, 2026, marking a significant upgrade to its autonomous research infrastructure for developers. The new deep-research-preview-04-2026 and deep-research-max-preview-04-2026 models add collaborative planning (letting developers review and refine the research plan before execution), native visualization (auto-generated charts and infographics inline with reports), MCP server integration (connecting to custom or third-party data sources like FactSet and S&P Global), and File Search (grounding research in proprietary documents). Deep Research Max benchmarks at 93.3% on DeepSearchQA and 54.6% on Humanity's Last Exam, up from 66.1% and 46.4% respectively.
Overview
On April 21, 2026, Google released updated versions of its Deep Research and Deep Research Max agents through the Gemini API, delivering the most significant capability upgrade to the autonomous research system since its initial API availability. The release introduces two new model identifiers, deep-research-preview-04-2026 and deep-research-max-preview-04-2026, and substantially expands what developers can build with the Interactions API.
Two Tiers for Different Use Cases
The April 2026 update formalizes a clear two-tier model structure built for distinct workflows.
Deep Research (deep-research-preview-04-2026) is optimized for speed and interactivity. It is designed for streaming to client-facing UIs where users expect near-real-time progress feedback. Estimated at $1–3 per research task, it runs approximately 80 web searches and processes around 250k input tokens per task.
Deep Research Max (deep-research-max-preview-04-2026) is built for thoroughness. It is intended for asynchronous, background workflows where comprehensiveness takes priority over latency. Tasks cost roughly $3–7, run approximately 160 searches, and process up to 900k input tokens. On standardized benchmarks, Deep Research Max now scores 93.3% on DeepSearchQA (up from 66.1%) and 54.6% on Humanity's Last Exam (up from 46.4%), a substantial quality leap from the December 2025 baseline.
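The tier trade-off above can be captured in a small helper. The model identifiers come from the release notes; the selection logic itself is just an illustrative sketch.

```python
# Illustrative helper: choose a Deep Research tier by workflow shape.
# Model IDs are from the April 2026 release; the per-tier figures in the
# comments are the published estimates.

def pick_model(interactive: bool) -> str:
    """Return the fast tier for client-facing streaming UIs,
    and the Max tier for asynchronous background research."""
    if interactive:
        # ~80 searches, ~250k input tokens, ~$1-3 per task
        return "deep-research-preview-04-2026"
    # ~160 searches, up to 900k input tokens, ~$3-7 per task
    return "deep-research-max-preview-04-2026"
```

A chat product streaming progress to end users would call `pick_model(True)`; a nightly due-diligence pipeline would call `pick_model(False)`.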
Collaborative Planning
One of the headline additions is a collaborative planning mode that allows developers to create multi-turn research workflows where users can guide the agent before it begins execution. Instead of submitting a research prompt and waiting for a finished report, developers can now enable a planning phase: the agent generates a proposed research plan, the user reviews it, suggests refinements across multiple turns, and then approves execution. This gives products built on Deep Research a fundamentally more interactive and controllable research experience, particularly valuable for enterprise workflows where research scope matters.
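The plan–refine–approve loop might be wired up roughly as follows. Only background, store, and previous_interaction_id are named in this release; the planning and approve_plan fields here are hypothetical placeholders, not a documented schema.

```python
# Sketch of the collaborative-planning flow: request a plan, refine it
# over turns, then approve execution. The "planning" and "approve_plan"
# fields are assumptions for illustration only.

def start_with_planning(prompt: str) -> dict:
    """Ask for a research plan to review instead of immediate execution."""
    return {
        "model": "deep-research-preview-04-2026",
        "input": prompt,
        "planning": True,   # assumed flag: pause after plan generation
        "background": True,
        "store": True,
    }

def refine_plan(interaction_id: str, feedback: str) -> dict:
    """One refinement turn against the proposed plan."""
    return {
        "previous_interaction_id": interaction_id,
        "input": feedback,  # e.g. "Limit the scope to EU markets"
    }

def approve_plan(interaction_id: str) -> dict:
    """Accept the plan and let the agent begin executing research."""
    return {
        "previous_interaction_id": interaction_id,
        "approve_plan": True,  # assumed field
    }
```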
MCP Server Integration
The April update adds support for connecting Deep Research to remote Model Context Protocol (MCP) servers. This allows the agent to combine open-web searches with access to specialized third-party data sources in a single research workflow. Developers can configure custom MCP endpoints with authentication headers and restrict the tools each server exposes.
Google has announced partnerships with FactSet, S&P Global, and PitchBook to build MCP integrations for financial data, enabling Deep Research workflows grounded in professional-grade market and company data. The agent can run simultaneously with Google Search, remote MCP servers, URL Context, Code Execution, and File Search, or operate in a fully private mode with web access disabled, searching exclusively over custom data.
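A request combining open-web search with a remote MCP server might look like the sketch below. The release notes confirm custom endpoints, authentication headers, and tool restriction; the exact field names ("tools", "server_url", "headers", "allowed_tools") and the endpoint are assumptions.

```python
# Sketch: attach a remote MCP server alongside web search.
# Dropping the google_search entry would yield the fully private mode
# described above. All field names are illustrative assumptions.
request = {
    "model": "deep-research-max-preview-04-2026",
    "input": "Summarize Q1 earnings trends for the semiconductor sector.",
    "tools": [
        {"type": "google_search"},  # open-web grounding (omit for private mode)
        {
            "type": "mcp",
            "server_url": "https://mcp.example.com/finance",  # placeholder endpoint
            "headers": {"Authorization": "Bearer <token>"},   # custom auth header
            # Restrict which tools this server exposes to the agent:
            "allowed_tools": ["get_fundamentals", "screen_companies"],
        },
    ],
}
```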
Native Visualization
Deep Research can now generate charts and infographics directly within research reports. When the visualization parameter is set to "auto", the agent automatically produces visual representations of data trends, comparisons, and distributions inline with the report narrative. Developers can also request visualizations explicitly in their prompts (e.g., "Include charts showing year-over-year trends"). This eliminates a common post-processing step where developers had to programmatically render charts from raw data extracted from research output.
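Enabling native visualization is then a one-field change on the request. The visualization: "auto" parameter is named in the release; placing it at the top level of the request body is an assumption.

```python
# Sketch: opt in to auto-generated inline charts. Parameter placement
# is assumed; only the visualization: "auto" setting is documented.
request = {
    "model": "deep-research-preview-04-2026",
    "input": (
        "Compare cloud provider revenue growth since 2023; "
        "include charts showing year-over-year trends."  # explicit ask also works
    ),
    "visualization": "auto",  # auto-generate charts/infographics inline
}
```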
File Search for Private Data
The File Search tool allows developers to give Deep Research access to proprietary document sets (PDFs, CSVs, presentations, and other formats) so the agent can incorporate private data alongside public web research. This is particularly useful for due diligence, internal knowledge synthesis, and competitive analysis that combines an organization's internal materials with fresh market data.
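Grounding a task in private documents might be configured as below. The "file_search" tool type and store-name field are assumptions modeled on the Gemini API's existing File Search tool; only File Search support itself is stated in this release.

```python
# Sketch: combine open-web research with a private document store.
# Tool type and store-name field are illustrative assumptions.
request = {
    "model": "deep-research-max-preview-04-2026",
    "input": "Assess the acquisition target against our internal diligence notes.",
    "tools": [
        {"type": "google_search"},  # fresh public market data...
        {
            "type": "file_search",  # ...plus proprietary PDFs, CSVs, decks
            "file_search_store_names": ["fileSearchStores/diligence-docs"],  # placeholder
        },
    ],
}
```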
Developer Workflow
Developers access both models through the Interactions API using the background=True and store=True parameters. The typical workflow is:
- Initialize a research task with optional planning enabled
- Stream intermediate thinking summaries for real-time progress display
- Retrieve the final report, including inline visualizations
- Continue with follow-up questions using previous_interaction_id
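The workflow above can be sketched as two request payloads. background, store, and previous_interaction_id are documented in this release; the stream flag and the interaction ID value are illustrative assumptions.

```python
# Sketch of the Interactions API workflow: start a stored background
# task, then ask a follow-up that references it.
start = {
    "model": "deep-research-max-preview-04-2026",
    "input": "Map the competitive landscape for solid-state batteries.",
    "background": True,  # run asynchronously (tasks may take up to 60 minutes)
    "store": True,       # persist so follow-ups can reference this task
    "stream": True,      # assumed flag: stream intermediate thinking summaries
}

follow_up = {
    "model": "deep-research-max-preview-04-2026",
    "previous_interaction_id": "interaction-123",  # placeholder ID from the first task
    "input": "Which of these companies have announced 2026 production targets?",
}
```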
Research tasks can run for up to 60 minutes. Both models are available in public preview via paid Gemini API tiers.
Availability
Deep Research and Deep Research Max are accessible through the Interactions API in both Google AI Studio and the Gemini API. Access requires a paid API tier. Google has indicated plans to expand availability to Google Cloud customers in upcoming releases.