LangChain Launches Managed Deep Agents for Faster Production Agent Deployment
Written by Coquette
Drafted with AI; edited and reviewed by a human.
TL;DR
- LangChain has launched Managed Deep Agents, a new API-first hosted runtime service.
- This service simplifies the creation, execution, and operation of long-running AI agents.
- Key features include durable threads, checkpointing, Context Hub for persistent memory, and sandbox-backed execution for code.
- Managed Deep Agents is now available in private beta, accessible via a waitlist.
LangChain has officially unveiled Managed Deep Agents, a new offering designed to streamline the deployment and management of sophisticated AI agents. This API-first hosted runtime service, now in private beta, tackles the complexities of operating agents that perform tasks over extended periods. With the runtime infrastructure provided within LangSmith, developers can focus more on an agent's core logic and less on the intricacies of its operational environment.
Building and deploying agents that require durable execution, access to tools, sandboxed code execution, persistent memory, and comprehensive tracing has historically been challenging. Managed Deep Agents addresses these pain points directly by offering a managed solution that handles these operational necessities, enabling faster iteration and more reliable agent performance in production and accelerating the journey from development to real-world application.
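Durable execution and checkpointing boil down to one pattern: persist progress so that a restarted run resumes where it left off instead of starting over. The sketch below illustrates that pattern generically with a local JSON checkpoint; it uses no LangChain APIs, since the managed runtime's own interface is behind the private beta.

```python
import json
from pathlib import Path

CHECKPOINT = Path("checkpoint.json")
STEPS = ["fetch_data", "analyze", "write_report"]

CHECKPOINT.unlink(missing_ok=True)  # start fresh for this demo

def load_checkpoint() -> int:
    # Index of the next step to run; 0 if no checkpoint exists yet.
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["next_step"]
    return 0

def run_pipeline() -> list[str]:
    # Resume from the last checkpoint and record progress after each step,
    # so a crash or restart never repeats completed work.
    completed = []
    for i in range(load_checkpoint(), len(STEPS)):
        completed.append(STEPS[i])  # the actual long-running work goes here
        CHECKPOINT.write_text(json.dumps({"next_step": i + 1}))
    return completed

first_run = run_pipeline()   # executes all remaining steps
second_run = run_pipeline()  # nothing left to do after checkpointing
```

A hosted runtime takes the same idea further by persisting conversation threads and agent state server-side, but the resume-from-checkpoint logic is the essence of durable execution.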
A core component of this new service is the integration with LangSmith, which provides a centralized hub for agent definitions, versioning, and observability. Agent configurations, including skill definitions and tool setups, are stored and versioned within LangSmith. This ensures that agents are not only defined but also managed and updated efficiently over time, allowing them to evolve with new capabilities and configurations.
The introduction of Context Hub is a notable advancement, providing agents with a persistent space to store and update context across multiple runs. This is crucial for agents that need to retain information about user preferences, project details, or operational procedures, enabling them to learn and improve from real-world usage. Furthermore, the LangSmith Engine can optionally review agent traces to identify bugs and suggest areas for enhancement, fostering continuous improvement.
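The Context Hub API itself is not public yet, so the sketch below only illustrates the underlying pattern: an agent persists key-value context between runs and updates it as it learns. The `LocalContextStore` class is a hypothetical stand-in, not LangChain code.

```python
import json
from pathlib import Path

class LocalContextStore:
    """Hypothetical stand-in for a persistent context hub: stores agent
    context as JSON on disk so that information learned in one run is
    available to the next."""

    def __init__(self, path: str):
        self.path = Path(path)

    def load(self) -> dict:
        # Return stored context, or an empty dict on the first run.
        if self.path.exists():
            return json.loads(self.path.read_text())
        return {}

    def update(self, **fields) -> dict:
        # Merge new fields into the stored context and persist it.
        ctx = self.load()
        ctx.update(fields)
        self.path.write_text(json.dumps(ctx, indent=2))
        return ctx

# Run 1: the agent records details it discovered while working.
store = LocalContextStore("agent_context.json")
store.update(preferred_language="Python", project="billing-service")

# Run 2 (conceptually a later, separate invocation): the context survives.
ctx = LocalContextStore("agent_context.json").load()
```

In the managed service, this persistence is handled server-side, so the same context follows the agent across threads and deployments rather than living on one machine's disk.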
For agents requiring the execution of code, commands, or file manipulation, Managed Deep Agents offers sandbox-backed execution. This ensures that agent-generated code runs safely and securely, isolated from the main system. Coupled with the ability to configure tools via tools.json and enable human-in-the-loop workflows for any tool, the platform provides a flexible and secure environment for complex agent tasks. The entire operational layer is packaged, freeing developers from rebuilding runtime infrastructure for each agent.
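The tools.json schema has not been published, so the sketch below invents plausible fields purely for illustration; the `sandbox` and `human_in_the_loop` keys are assumptions, not the real Managed Deep Agents format. It shows the shape such a versioned tool configuration might take and how per-tool approval gating could be read from it.

```python
import json

# Hypothetical tools.json content. Field names ("sandbox",
# "human_in_the_loop") are illustrative assumptions only.
tools_config = {
    "tools": [
        {
            "name": "run_shell_command",
            "description": "Run a command inside the managed sandbox",
            "sandbox": True,            # isolate execution from the host
            "human_in_the_loop": True,  # require approval before each call
        },
        {
            "name": "search_docs",
            "description": "Search internal documentation",
            "sandbox": False,
            "human_in_the_loop": False,
        },
    ]
}

# Write the configuration out as a team might version it.
with open("tools.json", "w") as f:
    json.dump(tools_config, f, indent=2)

# Tools that need a human sign-off before each invocation:
gated = [t["name"] for t in tools_config["tools"] if t["human_in_the_loop"]]
```

Keeping tool definitions in a declarative file like this is what lets the platform version them in LangSmith and apply approval policies uniformly, without changes to agent code.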
Managed Deep Agents is particularly well-suited for a range of applications, including support and triage agents that manage long-running conversations, research agents that compile information over time, coding agents needing file system access, data analysis agents performing iterative workflows, and internal operations agents that refine their context with repeated use. This new service aims to unlock the potential of agents that require more than just a simple prompt and tool call.
Summary
- LangChain's Managed Deep Agents offers a hosted runtime for long-running AI agents within LangSmith.
- Key features include durable threads, checkpointing, and Context Hub for persistent memory across runs.
- The service simplifies deployment by managing runtime infrastructure, tool access, and sandbox-backed code execution.
- Managed Deep Agents is currently in private beta and accessible via a waitlist.
Source: Managed Deep Agents: the fastest way to ship a production deep agent