Base44: InvokeLLM Per-Call Model Override
Base44 added an optional model parameter to the integrations.Core.InvokeLLM method, allowing developers to override the app-level LLM setting on a per-call basis. Six models are supported: gpt_5, gpt_5_mini, gemini_3_pro, gemini_3_flash, claude_sonnet_4_6, and claude_opus_4_6. Previously, all InvokeLLM calls used the model configured at the application level; this change enables routing specific AI tasks to the most appropriate model regardless of the app-wide default, responding directly to developer requests for dynamic LLM selection.
Per-Call Model Selection in InvokeLLM
Base44's integrations.Core.InvokeLLM method is the primary SDK entry point for generating AI responses from within backend functions. Before March 12, 2026, every InvokeLLM call used whichever model had been configured at the application level: a single setting that applied uniformly across all AI calls in the app.
Base44 changed this by adding an optional model parameter that overrides the app-level setting for a specific call. The parameter accepts a string identifier from the following list of supported models:
- gpt_5: OpenAI's flagship model
- gpt_5_mini: OpenAI's lighter, faster variant
- gemini_3_pro: Google's high-capability Gemini model
- gemini_3_flash: Google's speed-optimized Gemini model
- claude_sonnet_4_6: Anthropic's balanced Claude model
- claude_opus_4_6: Anthropic's highest-capability Claude model
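A per-call override looks roughly like the following sketch. The call shape (a single options object with a prompt field) follows InvokeLLM's usual usage; the client object here is a stub standing in for the real Base44 SDK, and the classifyIntent helper is illustrative, not part of the SDK.

```javascript
// Stub standing in for the real Base44 client; it only records which
// model the call would be routed to.
const base44 = {
  integrations: {
    Core: {
      // Returns the model the call resolves to: the explicit override
      // when present, otherwise the app-level default.
      InvokeLLM: async (args) => ({ modelUsed: args.model ?? 'app_default' }),
    },
  },
};

// Hypothetical helper: routes a lightweight classification task to a
// faster model via the new optional parameter.
async function classifyIntent(text) {
  return base44.integrations.Core.InvokeLLM({
    prompt: `Classify the intent of: "${text}"`,
    model: 'gpt_5_mini', // per-call override of the app-level setting
  });
}
```

Omitting the model field from the options object leaves the call on the app-level default, exactly as before.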
Why Per-Call Model Selection Matters
Different AI tasks inside an application often have very different requirements. A function that classifies a user intent from a short string has different latency and cost needs than one generating a detailed structured report. With a single app-level model setting, developers previously had to either accept a suboptimal model for some tasks or build custom abstraction layers outside of the SDK.
The model parameter resolves this mismatch directly. Developers can now route lightweight classification or extraction tasks to gpt_5_mini or gemini_3_flash for speed and cost efficiency, while directing complex reasoning or long-form generation tasks to claude_opus_4_6 or gemini_3_pro, all within the same application, without changing the default model for the rest of the app.
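One simple way to organize this kind of routing is a task-to-model lookup table. The table below is a minimal sketch; the task names and pickModel helper are illustrative conventions, not part of the Base44 SDK, though the model identifiers are the documented ones.

```javascript
// Illustrative routing table: cheap/fast models for lightweight tasks,
// high-capability models for heavier reasoning.
const MODEL_BY_TASK = {
  classify: 'gpt_5_mini',     // short-string intent classification
  extract: 'gemini_3_flash',  // speed-optimized field extraction
  report: 'claude_opus_4_6',  // long-form structured reports
  analyze: 'gemini_3_pro',    // complex multi-step reasoning
};

// Returns the model identifier for a task, or undefined so that the
// app-level default applies when the task kind is unrecognized.
function pickModel(task) {
  return MODEL_BY_TASK[task];
}
```

The returned string can then be passed as the model field of an InvokeLLM call; returning undefined for unknown tasks keeps those calls on the app-wide default.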
Backward-Compatible and Incremental
The parameter is optional and additive. Existing InvokeLLM calls with no model argument continue to behave exactly as before, falling back to the app-level model setting. Only calls that explicitly pass a model string are routed differently. This makes the change fully backward-compatible and safe to adopt incrementally: developers can introduce per-call overrides one function at a time without any migration risk.
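The fallback rule can be stated in one line of resolution logic. This is a sketch of the described behavior, not Base44's internal implementation; resolveModel is a hypothetical name.

```javascript
// The optional per-call value wins only when explicitly provided;
// otherwise the app-level default applies, preserving prior behavior.
function resolveModel(callModel, appDefault) {
  return callModel ?? appDefault;
}
```

Because existing calls pass no model value, they all resolve to the app default, which is what makes incremental adoption risk-free.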
The addition responds directly to developer feedback on the Base44 feedback board, where requests for dynamic LLM routing and per-call model control had accumulated significant support. The implementation covers models from all three major frontier AI providers, ensuring Base44 apps are not locked into any single vendor's capabilities.