Roo Code 3.44 Release Notes (2026-01-27)

This release improves worktree workflows, adds MCP configuration flexibility, and fixes context condensing and provider issues.

Worktrees

Worktrees are easier to work with in chat. The Worktree selector is more prominent, creating a worktree takes fewer steps, and the Create Worktree flow is clearer (including a native folder picker), so it’s faster to spin up an isolated branch/workspace and switch between worktrees while you work. (#10940)

📚 Documentation: See Worktrees for detailed usage.

Parallel tool calls (Experimental)

Re-enables parallel tool calling (with new_task isolation safeguards) so you can use the experimental “Parallel tool calls” setting again without breaking task delegation workflows. (#11006)

QOL Improvements

  • Makes subtasks easier to find and navigate by improving parent/child visibility across History and Chat (including clearer “back to parent” navigation), so you can move between related tasks faster. (#10864)
  • Lets you auto-approve all tools from a trusted MCP server by setting alwaysAllow: ["*"] in that server’s configuration, so you don’t have to list each tool name individually. (#10948)
  • Reduces token overhead in prompts by removing a duplicate MCP server/tools section from internal instructions, leaving more room for your conversation context. (#10895)
  • Improves Traditional Chinese (zh-TW) UI text for better clarity and consistency. (thanks PeterDaveHello!) (#10953)
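As a rough sketch of the wildcard auto-approve setting mentioned above, a trusted server entry in an MCP settings file might look like the following (the server name and command here are placeholders for illustration, not taken from the release notes):

```json
{
  "mcpServers": {
    "my-trusted-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"],
      "alwaysAllow": ["*"]
    }
  }
}
```

With alwaysAllow: ["*"], every tool the server exposes is auto-approved, whereas previously each tool name had to be listed in the array individually.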

Bug Fixes

  • Fixes an issue where context condensing could accidentally pull in content that was already condensed earlier, which could reduce the effectiveness of long-conversation summaries. (#10985)
  • Fixes an issue where automatic context condensing could silently fail for VS Code LM API users when token counting returned 0 outside active requests, which could lead to unexpected context-limit errors. (thanks srulyt!) (#10983)
  • Fixes an issue where Roo didn’t record a successful truncation fallback when condensation failed, which could make Rewind restores unreliable after a condensing error. (#10984)
  • Fixes an issue where MCP tools with hyphens in their names could fail to resolve in native tool calling (for example when a provider/model rewrites - as _). (thanks hori-so!) (#10775)
  • Fixes an issue where tool calls could fail validation through AWS Bedrock when toolUseId exceeded Bedrock’s 64-character limit, improving reliability for longer tool-heavy sessions. (#10902)
  • Fixes an issue where Settings section headers could look transparent while scrolling, restoring an opaque background so the UI stays legible. (#10951)
  • Fixes a Fireworks provider type mismatch by removing unsupported model tool fields, keeping provider model metadata consistent and preventing breakage from schema changes. (#10937)
  • Fixes an issue where the new_task tool could miss creating a checkpoint before handing off, making task state more consistent and recoverable. (#10982)
  • Fixes an issue where leftover Power Steering experiment references could display raw translation keys in the UI. (#10980)
  • Fixes an issue where Roo could fail to index code in worktrees stored inside hidden directories (for example ~/.roo/worktrees/), which could break search and other codebase features in those worktrees. (#11009)

Provider Updates

  • Adds the latest Fireworks models to the Fireworks provider’s model list. (thanks ThanhNguyxn!) (#10679)
  • Fixes an issue where tool-enabled chats could fail when using LiteLLM as a proxy to Amazon Bedrock due to Bedrock tool call ID validation limits. (#10990)
  • Sets default sampling parameters for the Cerebras zai-glm-4.7 model (temperature and top_p) so outputs are more consistent without extra configuration. (thanks sebastiand-cerebras!) (#10945)
  • Fixes an issue where local Ollama models could be incorrectly flagged as unavailable due to validation running against an empty router model list. (#10893)
  • Fixes an issue where tools could appear twice when using OpenAI Responses API-based providers, reducing duplicate tool output and making results easier to follow. (#11008)
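Both Bedrock-related fixes (#10902 and #10990) stem from the same constraint: Bedrock rejects toolUseId values longer than 64 characters. As a hedged illustration (not the project’s actual code), one way to satisfy the limit deterministically while keeping a recognizable prefix:

```python
import hashlib

BEDROCK_TOOL_USE_ID_MAX = 64  # Bedrock validates toolUseId length

def clamp_tool_use_id(tool_use_id: str) -> str:
    """Shorten an over-long tool call ID deterministically.

    Keeps a readable prefix and appends a short hash so the clamped
    ID is stable across retries and still effectively unique.
    """
    if len(tool_use_id) <= BEDROCK_TOOL_USE_ID_MAX:
        return tool_use_id
    digest = hashlib.sha256(tool_use_id.encode()).hexdigest()[:12]
    keep = BEDROCK_TOOL_USE_ID_MAX - len(digest) - 1  # room for "-" + digest
    return f"{tool_use_id[:keep]}-{digest}"
```

Hashing (rather than plain truncation) matters because two long IDs sharing a 51-character prefix would otherwise collapse to the same clamped value.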