Cyata Uncovers LangChain Flaw That Puts AI Agents at Risk

28 December, 2025

The vulnerability enables secret leakage and unintended object instantiation through a core serialization mechanism embedded in millions of agent-based systems

Israeli cybersecurity company Cyata has disclosed a critical security vulnerability in langchain-core, one of the most fundamental and widely used components in the AI agent ecosystem. The flaw, tracked as CVE-2025-68664 and assigned a critical CVSS score of 9.3, can under certain conditions lead to the leakage of sensitive secrets—including API keys, tokens, and login credentials—and even trigger unintended code execution. The issue stems from a mechanism long considered relatively safe: serialization.

LangChain is one of the core software libraries used to build AI agents—systems designed to perform complex tasks, connect to external services, retain memory, and operate as digital workers. The langchain-core package serves as the library’s foundational layer, responsible for object representation, data structure management, and the way agents, tools, and memories move through the system and are reloaded. According to publicly available telemetry, langchain-core alone has amassed hundreds of millions of downloads, while the broader LangChain package sees tens of millions of downloads each month—underscoring the potential scale of exposure.

The issue identified by Cyata’s researchers originates from flawed handling of the serialization process—the stage in which in-memory objects such as agents, tools, or data structures are converted into a simple data format for storage or transfer between system components. Deserialization is the reverse process, in which that data is loaded back and reconstructed as an active object in code. While most known vulnerabilities of this kind emerge during deserialization, this case is unusual in that the weakness is introduced earlier, at the point where data is first created and written out. By manipulating an internal key known as lc, an attacker can cause data that was written out as ordinary, inert content to be interpreted as a legitimate LangChain object when it is later reloaded. As a result, when the data reenters the system, it is treated not as passive input but as an active entity with access to the agent’s execution context.
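To illustrate the kind of confusion at play, the following minimal sketch uses langchain-core's public dumps/loads serialization helpers and the documented "lc" envelope format. The "tainted" payload is hypothetical and purely illustrative, and exact field contents can vary between langchain-core versions:

```python
import json

from langchain_core.load import dumps, loads  # LangChain's serialization helpers
from langchain_core.messages import HumanMessage

# Legitimate round trip: serializing a real object produces an "lc" envelope,
# and loads() turns that envelope back into a live object.
message = HumanMessage(content="hello")
envelope = json.loads(dumps(message))
print(envelope["lc"], envelope["type"], envelope["id"])
# e.g. 1 constructor ['langchain', 'schema', 'messages', 'HumanMessage']

# The hazard described above: untrusted data that merely *contains* an "lc" key
# looks identical to a genuine envelope once it has been written out and read
# back, so a later loads() call revives it as an object rather than treating it
# as inert input. (Hypothetical attacker-shaped payload.)
tainted = json.dumps({
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "schema", "messages", "HumanMessage"],
    "kwargs": {"content": "attacker-controlled text"},
})
revived = loads(tainted)
print(type(revived).__name__)  # HumanMessage - a live object, not a plain dict
```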

In common AI agent workflows—where data is stored in memory, passed between agents, or reloaded as part of automated pipelines—the flaw may allow attackers to extract sensitive environment variables from the running process. These variables typically store API keys, tokens, and credentials required to access external services. Because AI agents often operate with broad permissions to cloud platforms, databases, and enterprise systems, and are sometimes connected to secret vaults that dynamically supply credentials, such leakage can significantly amplify the impact of an attack. In certain cases, the report notes, attackers may also trigger the instantiation of classes from within the project’s namespace—software components already available in LangChain’s runtime environment—leading to unintended side effects and altered agent behavior during data loading.
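The mechanics of a secret reference can be sketched with LangChain's own loader. In the serialized format, a value of type "secret" carries only a key name; the loading side substitutes the real value, from a caller-supplied secrets map or, depending on version and configuration, from the process environment. The payload and key name below are hypothetical:

```python
import json

from langchain_core.load import loads

# A serialized "secret" reference carries no value of its own, only a key name.
secret_reference = json.dumps({"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]})

# On load, the reference is replaced by the actual secret. Here the value comes
# from an explicit secrets_map; in some versions and configurations the loader
# can also fall back to an environment variable with the same name, which is
# why a crafted reference that re-enters an agent's load path can surface
# credentials held by the running process.
value = loads(secret_reference, secrets_map={"OPENAI_API_KEY": "sk-demo-not-a-real-key"})
print(value)  # sk-demo-not-a-real-key
```

Constructor-type references behave analogously for classes in allowed namespaces, which is the path to the unintended object instantiation described above.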

According to Yarden Porat, the Cyata security researcher who discovered the vulnerability, what makes the finding particularly notable is that it challenges a long-standing assumption among developers. “What makes this issue unusual is that it starts in the serialization path, not the deserialization path,” Porat said. While deserialization has long been viewed as a known risk area, serialization is often assumed to be safe because the data is leaving the system. In the context of AI agents and continuous event-driven workflows, Porat emphasized, “that assumption no longer holds—what goes out can come back in under a different identity.”

The severity of the vulnerability was also reflected in the response from the LangChain development community, which awarded the researchers a $4,000 bug bounty—the highest ever granted by the project. Cyata recommends that organizations and developers using LangChain promptly upgrade to the patched versions and apply strict least-privilege principles and hardened default configurations for AI agents.

Shahar Tal, co-founder and CEO of Cyata, said organizations must begin treating AI agents as full-fledged operational entities rather than just code. “When it comes to AI workers, the key question isn’t only what code is running, but what permissions it runs with and what can go wrong,” Tal said. In an environment where AI agents often have wide access to systems and data, he added, effective security design must focus not only on rapid detection and remediation, but also on smart defaults that limit potential damage from the outset.

[Image above: Cyata’s founders. Credit: Eric Sultan]
