Roo Code 3.46.2 Release Notes (2026-02-03)

This release improves message queuing and task cancellation, fixes MCP image tool outputs, and updates several providers.

Bug Fixes

  • MCP tool results can include images: Fixes an issue where MCP tools that return images (for example, Figma screenshots) could show up as “(No response)”. See Using MCP in Roo for details. (thanks Sniper199999!) (#10874)
  • More reliable condensing with Bedrock via LiteLLM: Fixes an issue where conversation condensing could fail when the history contained tool-use/tool-result blocks. (#10975)
  • Messages aren’t dropped during command execution: Fixes an issue where messages sent while a command was still running could be lost; they’re now queued and delivered when the command finishes. (#11140)
  • OpenRouter model list refresh respects your Base URL: Fixes an issue where refreshing the OpenRouter model list ignored a configured Base URL and always called openrouter.ai. See OpenRouter for details. (thanks sebastianlang84!) (#11154)
  • More reliable task cancellation and queued-message handling: Fixes issues where canceling/closing tasks or updating queued messages could behave inconsistently between the VS Code extension and the CLI. (#11162)
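The queued-message fix above (#11140) follows a common pattern: while a command is running, incoming messages are buffered instead of dropped, then flushed in order when the command completes. A minimal sketch of that pattern (illustrative only, not Roo Code's actual implementation; all names here are hypothetical):

```python
from collections import deque


class MessageQueue:
    """Sketch: buffer messages while a command runs, flush them afterward."""

    def __init__(self):
        self._pending = deque()
        self._command_running = False
        self.delivered = []  # messages actually handed to the task

    def send(self, message):
        if self._command_running:
            self._pending.append(message)   # queue instead of dropping
        else:
            self.delivered.append(message)  # deliver immediately

    def begin_command(self):
        self._command_running = True

    def end_command(self):
        self._command_running = False
        while self._pending:                # flush queued messages in order
            self.delivered.append(self._pending.popleft())
```

For example, two messages sent mid-command arrive in order once `end_command()` runs, rather than being lost.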

Misc Improvements

  • Cleaner GitHub issue templates: Removes the “Feature Request” option from the issue template chooser so feature requests are directed to Discussions. (#11141)

Provider Updates

  • Code indexing embedding model migration (Gemini): Keeps code indexing working by migrating requests from the deprecated text-embedding-004 to gemini-embedding-001. See Gemini and Codebase Indexing. (#11038)
  • Mistral provider migration to AI SDK: Improves consistency for streaming and tool handling while preserving Codestral support and custom base URLs. See Mistral. (#11089)
  • SambaNova provider migration to AI SDK: Improves streaming, tool-call handling, and usage reporting. See SambaNova. (#11153)
  • xAI provider migration to the dedicated AI SDK package: Improves consistency for streaming, tool calls, and usage reporting when using Grok models. See xAI. (#11158)
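The Gemini embedding change above (#11038) amounts to remapping a deprecated model ID to its replacement before issuing requests. A minimal sketch of that kind of migration table (an assumption about the approach, not Roo Code's actual code; only the two Gemini model IDs come from the release notes):

```python
# Map deprecated embedding model IDs to their supported replacements.
DEPRECATED_EMBEDDING_MODELS = {
    "text-embedding-004": "gemini-embedding-001",
}


def resolve_embedding_model(model_id: str) -> str:
    """Return the replacement for a deprecated embedding model,
    or the original ID if it is still supported."""
    return DEPRECATED_EMBEDDING_MODELS.get(model_id, model_id)
```

With this in place, existing configurations that still name `text-embedding-004` keep working: requests are transparently routed to `gemini-embedding-001`, while other model IDs pass through unchanged.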