Roo Code 3.34 Release Notes (2025-11-21)
Roo Code 3.34 brings Browser Use 2.0, a new Baseten provider, OpenAI-compatible improvements, and refined onboarding and native tool descriptions. Follow-up patches add Roo Code Cloud eval support, clearer image generation prompts, todo list cleanup, cloud sync fixes, Claude Opus 4.5 across Roo Code Cloud, OpenRouter, Anthropic, and Vertex, and further provider reliability and model updates.
Browser Use 2.0
Browser Use now supports a more capable "2.0" experience (#8941):
- Richer browser interaction: Enables more advanced navigation and interaction patterns so Roo can better follow multi-step web workflows.
- More reliable automation: Improves stability for sequences of clicks, typing, and scrolling, reducing the chance of flaky browser runs.
- Better fit for complex sites: Makes it easier to work with modern web apps that require multiple steps or stateful interactions.
Documentation: See Browser Use for details on how to enable and use browser workflows. Note: We have not yet updated these docs with images and a video of the new experience.
QOL Improvements
- Provider-oriented welcome screen: Added a provider-focused welcome screen so new users can more quickly choose and configure a working model setup (#9484).
- Pinned Roo provider: Pinned the Roo provider to the top of the provider list so it is easier to discover and select (#9485).
- Clearer native tool descriptions: Enhanced built-in tool descriptions with examples and clarifications so Roo can choose the right tools and use them more accurately (#9486).
- Clearer image generation prompts: The full prompt and path for image generation now appear directly in the chat UI with clearer spacing and typography, making it easier to inspect, debug, and reuse prompts (#9505, #9522).
- Eval jobs on Roo Code Cloud: You can now run evaluation jobs directly on Roo Code Cloud models, reusing the same managed models and job tokens as regular cloud runs (#9492, #9522).
- XML tool protocol stays in sync with configuration: Tool runs that use the XML protocol now correctly track the configured tool protocol after configuration updates, preventing rare parser-state errors when switching between XML and native tools (#9535).
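The protocol-sync fix above boils down to a classic staleness pattern. A hypothetical sketch (the `ToolRunner` class, `getConfig` accessor, and parser strings are illustrative, not Roo Code's actual API): resolve the configured protocol at the start of each tool run instead of caching it once at construction, so a settings change cannot leave the parser in the wrong mode.

```typescript
// Illustrative sketch only; names are hypothetical, not Roo Code internals.
type ToolProtocol = "xml" | "native";

interface Config {
  toolProtocol: ToolProtocol;
}

class ToolRunner {
  // Keep a reference to the live config source, not a snapshot of it.
  constructor(private getConfig: () => Config) {}

  runTool(payload: string): string {
    // Re-read the configured protocol on every run.
    const protocol = this.getConfig().toolProtocol;
    return protocol === "xml"
      ? `parsed ${payload} with XML parser`
      : `parsed ${payload} with native parser`;
  }
}

// Simulate a configuration update between two runs.
let config: Config = { toolProtocol: "xml" };
const runner = new ToolRunner(() => config);
const first = runner.runTool("read_file");
config = { toolProtocol: "native" };
const second = runner.runTool("read_file");
console.log(first);  // parsed read_file with XML parser
console.log(second); // parsed read_file with native parser
```

The key design choice is injecting a getter rather than a config value, so the runner always observes the latest setting.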
Bug Fixes
- Streaming cancel responsiveness: Fixed the cancel button so it responds immediately during streaming, making it easier to stop long or unwanted runs (#9448).
- apply_diff performance regression: Resolved a recent performance regression in apply_diff, restoring fast patch application on larger edits (#9474).
- Model cache refresh: Implemented cache refreshing to avoid using stale disk-cached models, ensuring configuration updates are picked up correctly (#9478).
- Tool call fallbacks: Added a fallback to always yield tool calls regardless of finish_reason, preventing cases where valid tool calls were dropped (#9476).
- Single todo list in updates: Removed a redundant todo list block from chat updates so you only see one clean, focused list when the updateTodoList tool runs (#9517, #9522).
- Cloud message deduplication: Fixed cloud message syncing so duplicate copies of the same reasoning or assistant message are no longer re-synced, keeping task histories cleaner and less confusing (#9518, #9522).
- Gemini 3 reasoning_details support: Fixed 400 INVALID_ARGUMENT errors when using Gemini 3 models via OpenRouter by fully supporting the newer reasoning_details format, so multi-turn and tool-calling conversations now work reliably without dropping reasoning context (#9506).
- Skip unsupported Gemini content blocks safely: Gemini conversations on Vertex AI now skip unsupported metadata blocks (such as certain reasoning or document types) with a warning instead of failing the entire thread, keeping long-running chats stable (#9537).
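The tool-call fallback fix (#9476) addresses a common streaming pitfall: providers sometimes end a response with a finish_reason like "stop" or "length" even though tool-call fragments were streamed. A hypothetical sketch of the idea (the chunk shape and function are illustrative, not Roo Code's actual streaming types): accumulate fragments and always emit them at end of stream, rather than only when the finish reason is "tool_calls".

```typescript
// Illustrative sketch only; not Roo Code's actual streaming implementation.
interface StreamChunk {
  toolCallDelta?: string;
  finishReason?: string;
}

function collectToolCalls(chunks: StreamChunk[]): string[] {
  const calls: string[] = [];
  let current = "";
  for (const chunk of chunks) {
    if (chunk.toolCallDelta) current += chunk.toolCallDelta;
  }
  // Fallback: yield whatever was accumulated regardless of finish_reason,
  // instead of gating on finishReason === "tool_calls".
  if (current.length > 0) calls.push(current);
  return calls;
}

// A stream that ends with finish_reason "stop" but still carried a tool call.
const calls = collectToolCalls([
  { toolCallDelta: '{"name":"read_file"' },
  { toolCallDelta: ',"args":{}}' },
  { finishReason: "stop" },
]);
console.log(calls); // the accumulated tool call is not dropped
```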
Provider Updates
- Baseten provider: Added Baseten as a new AI provider, giving you another option for hosted models and deployments (#9461).
- OpenAI-compatible improvements: Improved the base OpenAI-compatible provider configuration and error handling so more OpenAI-style endpoints work smoothly without special tweaks (#9462).
- OpenRouter capabilities: Improved copying of model-level capabilities onto OpenRouter endpoint models so routing respects each model's real abilities (#9483).
- Roo Code Cloud image generation provider: Roo Code Cloud is now available as an image generation provider, so you can generate images directly through Roo Code Cloud instead of relying only on third-party image APIs (#9528).
- Cerebras model list clean-up: The Cerebras provider model list now only shows currently supported models, reducing errors from deprecated Cerebras/Qwen variants and keeping the model picker aligned with what the API actually serves (#9527).
- Reliable LiteLLM model refresh after credential changes: Clicking Refresh Models after changing your LiteLLM API key or base URL now immediately reloads the model list using the new credentials, so you do not need to clear caches or restart VS Code, while background refreshes still benefit from caching for speed (#9536).
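The LiteLLM refresh behavior above can be achieved with credential-keyed caching. A hypothetical sketch (the `ModelListCache` class and its fields are illustrative, not Roo Code's actual code): including the API key and base URL in the cache key makes a credential change miss the cache and force a fresh fetch, while repeated refreshes with unchanged credentials stay fast.

```typescript
// Illustrative sketch only; not Roo Code's actual caching implementation.
interface Credentials {
  apiKey: string;
  baseUrl: string;
}

class ModelListCache {
  private cache = new Map<string, string[]>();
  fetchCount = 0;

  constructor(private fetchModels: (creds: Credentials) => string[]) {}

  getModels(creds: Credentials): string[] {
    // The key encodes the credentials, so new credentials bypass stale entries.
    const key = `${creds.baseUrl}::${creds.apiKey}`;
    const cached = this.cache.get(key);
    if (cached) return cached; // repeat refreshes still benefit from caching
    const models = this.fetchModels(creds);
    this.fetchCount += 1;
    this.cache.set(key, models);
    return models;
  }
}

// Hypothetical fetcher: the new key unlocks an extra model.
const cache = new ModelListCache((creds) =>
  creds.apiKey === "new-key" ? ["model-a", "model-b"] : ["model-a"]
);
cache.getModels({ apiKey: "old-key", baseUrl: "https://litellm.local" });
cache.getModels({ apiKey: "old-key", baseUrl: "https://litellm.local" }); // cache hit
const refreshed = cache.getModels({ apiKey: "new-key", baseUrl: "https://litellm.local" });
console.log(cache.fetchCount); // 2 fetches: one per distinct credential set
console.log(refreshed);
```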