Roo Code 3.36 Release Notes (Combined)

Roo Code 3.36 introduces non-destructive context management, new debugging and UI controls, and a steady stream of reliability fixes and provider improvements.

Non-Destructive Context Management

Context condensing and sliding window truncation now preserve your original messages internally rather than deleting them (#9665). When you rewind to an earlier checkpoint, the full conversation history is restored automatically.
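The idea can be sketched roughly as follows (a hypothetical TypeScript illustration; the actual implementation in #9665 differs in detail): condensing flags older messages as hidden instead of deleting them, so a rewind can simply un-hide them.

```typescript
interface Message {
  id: number;
  text: string;
  hidden: boolean;
}

// Condensing marks everything except the most recent messages as hidden,
// rather than deleting them from history.
function condense(history: Message[], keepLast: number): Message[] {
  return history.map((m, i) =>
    i < history.length - keepLast ? { ...m, hidden: true } : m
  );
}

// Rewinding to a checkpoint keeps messages up to that point and un-hides
// them, restoring the full conversation history.
function rewind(history: Message[], checkpointId: number): Message[] {
  return history
    .filter((m) => m.id <= checkpointId)
    .map((m) => ({ ...m, hidden: false }));
}
```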

GPT-5.1 Codex Max Support

Roo Code now supports GPT-5.1 Codex Max, OpenAI’s long-horizon coding model, with model defaults for the gpt-5.1 / gpt-5 / gpt-5-mini variants (#9848).

Browser Screenshot Saving

The browser tool can now save screenshots to a specified file path with a new screenshot action, so you can capture visual state during browser automation tasks (#9963).

Extra-High Reasoning Effort

If you use gpt-5.1-codex-max with the OpenAI provider, you can now select an “Extra High” reasoning effort level for maximum reasoning depth on complex tasks (#9900).

OpenRouter Native Tools Default

OpenRouter models that support native tools now use native tool calling by default, improving tool calling reliability without manual configuration (#9878).

Error Details Modal

Hover over error rows to reveal an info icon that opens a modal with full error details and a copy button (#9985).

GPT-5.2 Model Support

GPT-5.2 is available in the OpenAI provider and set as the default model (#10024).

Enter Key Behavior Toggle

You can now configure how Enter behaves in the chat input so it better fits multiline prompts and different input methods (#10002).

QOL Improvements

  • Symlink support for slash commands: Share and organize commands across projects using symlinks for individual files or directories, with command names derived from symlink names (#9838)
  • Smoother chat scroll: Chat view maintains scroll position more reliably during streaming (#8999)
  • Clearer error messages: More actionable errors with direct links to documentation (#9777)
  • Unified context-management UX: Real-time feedback for truncation notifications and condensation summaries (#9795)
  • Better OpenAI error messages: Extracts more detail from API errors for easier troubleshooting (#9639)
  • Token counting optimization: Removes separate API calls for token counting to improve performance (#9884)
  • Tool instructions decoupled from system prompts: Tool-specific guidance is self-contained in tool descriptions (#9784)
  • Clearer auto-approve timing in follow-up suggestions: Makes the auto-approve countdown harder to miss (#10048)

Bug Fixes

  • Write tool validation: Avoids false positives where write_to_file rejected complete markdown files containing inline code comments like # NEW: or // Step 1: (#9787)
  • Download count display: Fixes homepage download count precision for million-scale numbers (#9807)
  • Extension freeze prevention: Avoids freezes when a model attempts to call a non-existent tool (#9834)
  • Checkpoint restore reliability: Message history handling is consistent across rewind operations (#9842)
  • Context truncation fix: Prevents cascading truncation loops by truncating only visible messages (#9844)
  • Reasoning models: Models that require reasoning always receive valid reasoning-effort values (#9836)
  • Terminal input handling: Inline terminal no longer hangs when commands require user input (#9827)
  • Large file safety: Large file reads handle token budgets more safely (#9843)
  • Follow-up button styling: Fixes overly rounded corners on follow-up suggestions (#9829)
  • Chutes provider fix: Resolves model fetching errors by making schema validation more robust for optional fields (#9854)
  • Tool protocol selector: Always shows the tool protocol selector for OpenAI-compatible providers (#9966)
  • apply_diff filtering: Properly excludes apply_diff from native tools when diff is disabled (#9920)
  • API timeout handling: Fixes a disabled timeout (set to 0) causing immediate request failures (#9960)
  • Reasoning effort dropdown: Respects explicit supportsReasoningEffort values and fixes disable handling (#9970, #9930)
  • Actual error messages on retry: Displays the provider’s error details instead of generic text (#9954)
  • Stream hanging fix: Ensures finish_reason triggers tool_call_end events for multiple providers (#9927, #9929)
  • tool_result ID validation: Validates and fixes tool_result IDs before requests to prevent provider rejections (#9952)
  • Suppressed internal error: Fixes an internal “ask promise was ignored” error leaking to conversations (#9914)
  • Provider sanitization: Fixes an infinite loop when using removed/invalid API providers (#9869)
  • Context icons theme: Context-management icons now use foreground color to match VS Code themes (#9912)
  • Eval runs deletion: Fixes a foreign key constraint preventing eval run deletions (#9909)
  • OpenAI-compatible timeout reliability: Adds timeout handling to prevent indefinite hangs (#9898)
  • MCP tool streaming: Fixes MCP tools failing with “unknown tool” errors due to premature clearing of internal streaming data (#9993)
  • TODO list display order: TODO items display in execution order instead of being grouped by status (#9991)
  • Telemetry improvements: Filters out 429 rate limit errors from API error telemetry for cleaner metrics (#9987)
  • Gemini stability: Fixes reasoning loops and empty response errors (#10007)
  • Parallel tool execution: Fixes “Expected toolResult blocks at messages” errors during parallel tool use (#10015)
  • tool_result ID mismatch: Fixes ToolResultIdMismatchError when history has orphaned tool_result blocks (#10027)
  • Parallel tool calls fix: Preserves tool_use blocks in summaries during context condensation to avoid API errors with parallel tool calling (#9714)
  • Navigation button wrapping: Prevents navigation buttons from wrapping on smaller screens (#9721)
  • Task delegation tool flush: Ensures pending tool results are flushed before delegating tasks to avoid provider 400 errors (#9726)
  • Malformed tool call handling: Prevents the extension from hanging indefinitely on malformed tool calls by validating and reporting missing parameters (#9758)
  • Auto-approval stops when you start typing: Fixes an issue where an auto-approve timer could still fire after you began writing a response (#9937)
  • More actionable OpenRouter error messages: Surfaces upstream error details when available (#10039)
  • LiteLLM tool protocol dropdown always appears: Restores the tool protocol dropdown in Advanced settings even when model metadata isn’t available yet (#10053)
  • MCP tool calls work with stricter providers: Avoids failures caused by special characters in MCP server/tool names by sanitizing names and using an unambiguous mcp--server--tool ID format (#10054)
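As a rough illustration of that last fix (the exact sanitization rules in #10054 aren’t spelled out here, so treat the regex below as an assumption), an MCP server/tool pair can be mapped to a provider-safe identifier like this:

```typescript
// Hypothetical sketch: replace anything outside [A-Za-z0-9_] with "_",
// since stricter providers often only accept simple identifier-style
// tool names.
function sanitizeSegment(name: string): string {
  return name.replace(/[^A-Za-z0-9_]/g, "_");
}

// Because "-" is replaced inside each segment, the "--" separator can
// never appear within a sanitized name, so the ID splits back into
// (server, tool) without ambiguity.
function mcpToolId(server: string, tool: string): string {
  return `mcp--${sanitizeSegment(server)}--${sanitizeSegment(tool)}`;
}
```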

Misc Improvements

  • Evals UI enhancements: Adds better filtering, bulk delete actions, tool column consolidation, and run notes (#9837)
  • Multi-model evals launch: Launches identical test runs across multiple models with automatic staggering (#9845)
  • New pricing page: Updates the website pricing page with clearer feature explanations (#9821)
  • Announcement UI updates: Improves announcement visuals with updated social icons and GitHub stars CTA (#9945)
  • Improved error logging: Adds better context for parseToolCall exceptions and cloud job errors (#9857, #9924)
  • search_replace native tool: Adds a tool for single-replacement file operations with precise targeting via unique text matching (#9918)
  • Versioned settings support: Adds internal infrastructure for API-side versioning of model settings with minimum plugin version gating (#9934)
  • OpenRouter telemetry: Adds API error telemetry for better diagnostics (#9953)
  • Evals streaming stats: Tool usage stats stream in real time with token usage throttling (#9926)
  • Tool consolidation: Removes the deprecated insert_content tool (use apply_diff or write_to_file) (#9751)
  • Experimental settings: Temporarily disables the parallel tool calls experiment while improvements are in progress (#9798)
  • Infrastructure: Updates Next.js dependencies for web applications (#9799)
  • Removed deprecated tool: Removes the deprecated list_code_definition_names tool (#10005)
  • Tool aliases for model-specific tool naming: Adds support for alternative tool names so different models can call the same tool using the naming they expect (#9989)
  • Workspace task visibility controls for organizations: Adds an org-level setting for how visible Roo Code Cloud “extension tasks” are across the workspace (#10020)
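The tool-alias mechanism (#9989) can be pictured with a small lookup table; the alias names below are illustrative examples, not Roo Code’s actual table:

```typescript
// Hypothetical alias map: different models may emit different names for
// the same underlying tool, and a lookup resolves them to one canonical
// implementation.
const TOOL_ALIASES: Record<string, string> = {
  edit_file: "apply_diff",
  str_replace_editor: "apply_diff",
};

// Unknown names pass through unchanged.
function canonicalToolName(name: string): string {
  return TOOL_ALIASES[name] ?? name;
}
```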

Provider Updates

  • Reasoning details support (Roo provider): Displays reasoning details from supported models (#9796)
  • Native tools default (Roo provider): Roo provider models default to native tool protocol (#9811)
  • MiniMax search_and_replace: MiniMax M2 uses search_and_replace for more reliable file edits (#9780)
  • Cerebras token optimization: Prevents premature rate limiting and cleans up deprecated models (#9804)
  • Vercel AI Gateway: More reliable model fetching when pricing data is incomplete (#9791)
  • Dynamic model settings (Roo provider): Roo models receive configuration dynamically from the API (#9852)
  • Optimized GPT-5 tool configuration: GPT-5.x, GPT-5.1.x, and GPT-4.1 use only apply_patch for file edits (#9853)
  • DeepSeek V3.2: Updates to V3.2 with a price reduction, native tools by default, and 8K max output (#9962)
  • xAI models catalog: Corrects context windows, adds image support for grok-3/grok-3-mini, and removes deprecated models (#9872)
  • xAI tool preferences: Configures xAI models to use search_replace for better file editing compatibility (#9923)
  • DeepSeek V3.2 for Baseten: Adds DeepSeek V3.2 model support (#9861)
  • Baseten model tweaks: Improves maxTokens limits and native tools support for stability (#9866)
  • Bedrock models: Adds Kimi, MiniMax, and Qwen model configurations (#9905)
  • Z.ai endpoint options: Adds endpoint options for users on API billing instead of the Coding plan (#9894)