Gemini CLI: Background Memory Service Learns Your Workflows Automatically


Gemini CLI v0.38.0 ships a Background Memory Service that runs on startup and automatically extracts reusable skills from past conversation sessions. Behind the experimentalMemoryManager flag, a specialized sub-agent scans session history, identifies repeated project-specific workflows and conventions, and writes them as SKILL.md files to ~/.gemini/memory/<project>/skills/. The result is a CLI that progressively gets smarter about each project it works on, without any manual effort from the user.


What Is the Background Memory Service?

Gemini CLI has long supported persistent memory via the /memory add command and GEMINI.md files, but both require manual intervention from the user. The Background Memory Service, introduced in v0.38.0 behind the experimentalMemoryManager flag, takes a fundamentally different approach: it runs automatically on startup and progressively builds a library of reusable project knowledge without requiring any user input.

The service operates as a specialized sub-agent that analyzes past conversation sessions stored in the CLI's local history. It scans those sessions for repeated patterns (commands the user runs frequently, project-specific conventions, recurring workflows, common tool sequences) and extracts them as discrete, reusable skills.
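Gemini CLI's actual extraction logic isn't published, but the core idea can be sketched as frequency analysis over session history. Everything below (the session format, the MIN_OCCURRENCES threshold) is an illustrative assumption, not the CLI's implementation:

```python
from collections import Counter

# Assumed threshold: a pattern must recur this often before it counts
# as a candidate skill. The real CLI's criteria are not documented here.
MIN_OCCURRENCES = 3

def find_repeated_workflows(sessions: list[list[str]]) -> list[str]:
    """Given past sessions as lists of command strings, return the
    commands that recur often enough to look like a workflow."""
    counts = Counter(cmd for session in sessions for cmd in session)
    return [cmd for cmd, n in counts.items() if n >= MIN_OCCURRENCES]
```

A real extractor would also weigh ordered command sequences and surrounding context, but filtering observations by frequency is the essence of "identify repeated patterns."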

Where Skills Are Stored

Extracted skills are written as SKILL.md files to a structured directory hierarchy:

~/.gemini/memory/<project>/skills/

Each SKILL.md captures a specific workflow or convention in a format that Gemini CLI can reference in future sessions, providing relevant context automatically without the user needing to re-explain project-specific patterns from scratch.

This directory structure is project-scoped, meaning the memory service builds a separate skill library for each project directory β€” so skills learned in one codebase don't pollute the context of another.
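The article doesn't show the SKILL.md format itself. As a purely hypothetical illustration, an extracted skill for a Node.js project might read something like:

```markdown
# Skill: Run checks before committing

## When to apply
The user is preparing to commit changes in this repository.

## Steps
1. Run `npm test` and confirm the suite passes.
2. Run `npm run lint` and fix any reported issues.
```

Every detail above is invented for illustration; the actual schema the memory service writes may differ.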

Smart Architecture: Tiered Confidence and Safety Mechanisms

The Background Memory Service is designed to be conservative about what it commits to long-term storage. A few key design decisions prevent it from becoming a source of noise or incorrect assumptions:

Tiered confidence scoring: the sub-agent evaluates extracted patterns against a confidence threshold before writing a SKILL.md. Low-confidence observations are discarded rather than written as uncertain facts.

Lock file safety: when multiple Gemini CLI instances are running (for example, in parallel worktrees), the service uses a lock file mechanism to prevent concurrent writes to the same skills directory, avoiding corruption.

Bounded execution: the background scan is time-bounded and resource-limited. It runs as a low-priority background task at startup so it doesn't impact the latency of the user's primary session.

How to Enable It

The Background Memory Service is opt-in in v0.38.0, gated behind the experimentalMemoryManager setting. To enable it, add the following to the CLI's settings file (~/.gemini/settings.json):

{
  "experimental": {
    "memoryManager": true
  }
}

Once enabled, the service runs silently on each startup. There is no explicit UI feedback during the scan; the only evidence of its operation is the growing contents of the ~/.gemini/memory/ directory.

Why This Matters

The Background Memory Service represents a meaningful shift in how Gemini CLI accumulates project knowledge. Previous memory mechanisms placed the full burden on the user, who had to recognize which facts were worth remembering, articulate them explicitly, and invoke /memory add at the right moment. That workflow is useful, but it relies on user diligence.

The new service inverts the model. Rather than asking the user to push knowledge into the CLI's memory, the service pulls knowledge from the session record, identifying what the user actually does repeatedly rather than what they think to tell the CLI about.

Over time, this creates a CLI that becomes progressively more fluent with each project it works on, building up a skill library that reflects the real workflows of that codebase without any manual curation effort.