LangChain Deep Agents v0.5 Introduces Async Subagents for Non-Blocking Workflows
What's New with Deep Agents v0.5?
LangChain has rolled out new minor versions of its deepagents and deepagentsjs libraries, bringing a significant upgrade to how your AI agents can operate. The headline feature is the introduction of async (non-blocking) subagents, alongside expanded multi-modal filesystem support. This update gives your agents greater efficiency and flexibility, especially when tackling complex, time-consuming tasks.
Traditionally, subagents would block the main agent's execution until their tasks were complete. With Deep Agents v0.5, async subagents change this paradigm entirely. They delegate work to remote agents in the background, returning a task ID immediately and executing independently. This means your main agent can initiate long-running processes without being tied up, freeing it to handle other interactions or launch additional subagents in parallel.
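The difference between the two models can be sketched in plain Python. This is a generic concurrency illustration of blocking vs. non-blocking delegation, not the deepagents API; `slow_subagent_task` is a hypothetical stand-in:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_subagent_task(query: str) -> str:
    """Stand-in for a long-running subagent (e.g. deep research)."""
    time.sleep(0.1)  # simulate minutes of work
    return f"report for {query!r}"

# Blocking (pre-v0.5 style): the supervisor waits for the result
# before it can do anything else.
result = slow_subagent_task("market trends")

# Non-blocking: the supervisor gets a handle back immediately,
# stays free to talk to the user, and collects the result later.
executor = ThreadPoolExecutor()
future = executor.submit(slow_subagent_task, "market trends")
# ...the supervisor can respond to the user or launch more tasks here...
result = future.result()  # collect whenever it is ready
executor.shutdown()
```

In Deep Agents v0.5 the "handle" is a task ID rather than a future, and the work runs on a remote agent rather than a local thread, but the control-flow shape is the same.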
This non-blocking approach is a game-changer for agent-based workflows, allowing for more dynamic and responsive applications. For a comprehensive dive into the new features, check out the official Deep Agents v0.5 Release Blog.
Why Async Subagents Are a Workflow Game-Changer
The ability to run subagents asynchronously addresses a critical bottleneck in many advanced AI applications. Imagine an agent tasked with deep research, extensive code analysis, or complex multi-step data pipelines – tasks that can take minutes rather than seconds. Previously, the main supervisor agent would be stalled, unable to respond to users or progress other work. Now, with async subagents, the supervisor can launch multiple tasks, continue interacting with the user, and collect results as they become available.
Moreover, these async subagents are stateful. Unlike their inline counterparts, they maintain their own thread across interactions. This allows the supervisor agent to send follow-up instructions or course-correct a subagent mid-task, giving it greater control and adaptability over ongoing processes. It also opens up heterogeneous deployments, where a lightweight orchestrator delegates specialized tasks to remote agents optimized for different hardware or models.
Getting Started and Technical Requirements
Adding async subagents to your Deep Agents workflow is straightforward. The create_deep_agent function now accepts an AsyncSubAgent specification, which points to a remote agent. Once configured, your main agent gains a suite of five new tools to manage background work:

- start_async_task — launch a task and get back a task ID
- check_async_task — poll a task's status and retrieve its results
- update_async_task — send follow-up instructions to a running task
- cancel_async_task — stop a running task
- list_async_tasks — see all currently tracked tasks
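The lifecycle these tools expose can be illustrated with a minimal in-memory task registry. This is a sketch of the pattern only, not the deepagents implementation; the function names mirror three of the five tools, and the bodies are purely illustrative:

```python
import threading
import time
import uuid

_tasks: dict[str, dict] = {}

def start_async_task(description: str) -> str:
    """Launch work in the background and return a task ID immediately."""
    task_id = uuid.uuid4().hex
    _tasks[task_id] = {"status": "running", "result": None}

    def run() -> None:
        time.sleep(0.05)  # stand-in for a remote agent doing real work
        _tasks[task_id].update(status="done", result=f"finished: {description}")

    threading.Thread(target=run).start()
    return task_id

def check_async_task(task_id: str) -> dict:
    """Poll a task's status and retrieve its result once done."""
    return _tasks[task_id]

def list_async_tasks() -> list[str]:
    """Return the IDs of all tracked tasks."""
    return list(_tasks)

task_id = start_async_task("summarize the codebase")
while check_async_task(task_id)["status"] != "done":
    time.sleep(0.01)  # a real supervisor would do other work here instead
print(check_async_task(task_id)["result"])
```

The key property is that start_async_task returns before the work finishes, so the caller can interleave other activity between polls.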
Crucially, any remote agent implementing the open-source Agent Protocol is a valid target for these async subagents. This means you can integrate with agents deployed using LangSmith, custom FastAPI services, or other compliant implementations. LangChain has also added example server implementations for both Python and JS Deep Agents to help you get started. If the url field for an AsyncSubAgent is omitted, Deep Agents falls back to ASGI transport, enabling co-deployment of your supervisor and subagents within the same process.
Beyond subagents, Deep Agents v0.5 also expands its multi-modal filesystem support. While it previously handled images, this release extends support to include PDFs, audio, video, and various other file types. The agent's read_file tool automatically detects the file type and passes content to the model as a native content block, though actual modality support depends on the underlying model used.
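The detection step can be approximated with the standard library. This mirrors the idea behind read_file rather than its actual code, and the mapping from MIME type to content-block kind is an assumption for illustration:

```python
import mimetypes

def content_block_type(path: str) -> str:
    """Map a file path to a coarse content-block kind by MIME type.

    Illustrative only: a real implementation would inspect file
    contents, and the model must support the resulting modality.
    """
    mime, _ = mimetypes.guess_type(path)
    if mime is None:
        return "text"  # unknown types fall back to plain text
    major = mime.split("/")[0]
    if major in ("image", "audio", "video"):
        return major
    if mime == "application/pdf":
        return "pdf"
    return "text"

print(content_block_type("report.pdf"))   # pdf
print(content_block_type("chart.png"))    # image
print(content_block_type("notes.txt"))    # text
```

Whatever the detection yields, the modality only works end to end if the underlying model accepts that content-block type.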
Read the LangChain Deep Agents v0.5 Release Blog to explore these new capabilities and supercharge your agent workflows.