# GHSA-r399-636x-v7f6

Vulnerability from GitHub
## Context

A serialization injection vulnerability exists in LangChain JS's `toJSON()` method (and, consequently, when stringifying objects with `JSON.stringify()`). The method did not escape objects with `'lc'` keys when serializing free-form data in kwargs. The `'lc'` key is used internally by LangChain to mark serialized objects, so when user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than as plain user data.
### Attack surface

The core vulnerability was in `Serializable.toJSON()`: this method failed to escape user-controlled objects containing `'lc'` keys within kwargs (e.g., `additional_kwargs`, `metadata`, `response_metadata`). When this unescaped data was later deserialized via `load()`, the injected structures were treated as legitimate LangChain objects rather than plain user data.
This escaping bug enabled several attack vectors:

1. **Injection via user data**: Malicious LangChain object structures could be injected through user-controlled fields like `metadata`, `additional_kwargs`, or `response_metadata`
2. **Secret extraction**: Injected secret structures could extract environment variables when `secretsFromEnv` was enabled (which had no explicit default, effectively defaulting to `true` behavior)
3. **Class instantiation via import maps**: Injected constructor structures could instantiate any class available in the provided import maps with attacker-controlled parameters
**Note on import maps:** Classes must be explicitly included in import maps to be instantiatable. The core import map includes standard types (messages, prompts, documents), and users can extend this via the `importMap` and `optionalImportsMap` options. This architecture naturally limits the attack surface: an `allowedObjects` parameter is not necessary because users control which classes are available through the import maps they provide.

**Security hardening:** This patch fixes the escaping bug in `toJSON()` and introduces new restrictive defaults in `load()`: `secretsFromEnv` now explicitly defaults to `false`, and a `maxDepth` parameter protects against DoS via deeply nested structures. JSDoc security warnings have been added to all import map options.
## Who is affected?

Applications are vulnerable if they:

1. **Serialize untrusted data via `JSON.stringify()` on Serializable objects, then deserialize with `load()`**: trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains `'lc'` key structures.
2. **Deserialize untrusted data with `load()`**: directly deserializing untrusted data that may contain injected `'lc'` structures.
3. **Use LangGraph checkpoints**: checkpoint serialization/deserialization paths may be affected.

The most common attack vector is through **LLM response fields** like `additional_kwargs` or `response_metadata`, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.
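On unpatched versions, one stopgap is to scan untrusted payloads for such structures before deserializing them. A minimal sketch (the `findLcStructures` helper is illustrative, not a LangChain API):

```typescript
// Illustrative helper (not part of LangChain): recursively scan a parsed
// JSON value for objects carrying an "lc" key, i.e. anything that would be
// interpreted as a serialized LangChain structure, and report where it sits.
function findLcStructures(value: unknown, path: string[] = []): string[] {
  const hits: string[] = [];
  if (Array.isArray(value)) {
    value.forEach((item, i) =>
      hits.push(...findLcStructures(item, [...path, String(i)]))
    );
  } else if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    if ("lc" in obj) hits.push(path.join(".") || "(root)");
    for (const [key, child] of Object.entries(obj)) {
      hits.push(...findLcStructures(child, [...path, key]));
    }
  }
  return hits;
}

// Example: an attacker-controlled metadata field hiding a secret reference.
const untrusted = JSON.parse(
  '{"metadata": {"note": "hi", "payload": {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}}}'
);
console.log(findLcStructures(untrusted)); // ["metadata.payload"]
```

A hit does not prove malice (legitimate serialized objects also carry `'lc'` keys), but a hit inside a free-form user field is a strong signal that the payload should not be passed to `load()`.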
## Impact

Attackers who control serialized data can extract environment variable secrets by injecting `{"lc": 1, "type": "secret", "id": ["ENV_VAR"]}`, which loads environment variables during deserialization (when `secretsFromEnv: true`). They can also inject constructor structures to instantiate any class within the provided import maps with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.

Key severity factors:

- Affects the serialization path: applications trusting their own serialization output are vulnerable
- Enables secret extraction when combined with `secretsFromEnv: true`
- LLM responses in `additional_kwargs` can be controlled via prompt injection
## Exploit example

```typescript
import { load } from "@langchain/core/load";

// Attacker injects secret structure into user-controlled data
const attackerPayload = JSON.stringify({
  user_data: {
    lc: 1,
    type: "secret",
    id: ["OPENAI_API_KEY"],
  },
});

process.env.OPENAI_API_KEY = "sk-secret-key-12345";

// With secretsFromEnv: true, the secret is extracted
const deserialized = await load(attackerPayload, { secretsFromEnv: true });

console.log(deserialized.user_data); // "sk-secret-key-12345" - SECRET LEAKED!
```
## Security hardening changes

This patch introduces the following changes to `load()`:

1. **`secretsFromEnv` default changed to `false`**: Disables automatic secret loading from environment variables. Secrets not found in `secretsMap` now throw an error instead of being loaded from `process.env`. This fail-safe behavior ensures missing secrets are caught immediately rather than silently continuing with `null`.
2. **New `maxDepth` parameter** (defaults to `50`): Protects against denial-of-service attacks via deeply nested JSON structures that could cause stack overflow.
3. **Escape mechanism in `toJSON()`**: User-controlled objects containing `'lc'` keys are now wrapped in `{"__lc_escaped__": {...}}` during serialization and unwrapped as plain data during deserialization.
4. **JSDoc security warnings**: All import map options (`importMap`, `optionalImportsMap`, `optionalImportEntrypoints`) now include security warnings about never populating them from user input.
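The escape mechanism can be pictured with a simplified round-trip sketch. The `escapeLc`/`unescapeLc` helpers below are illustrative stand-ins for the library's internals, not its actual implementation:

```typescript
// Sketch of the escaping idea: on serialization, any plain object carrying
// an "lc" key is wrapped under "__lc_escaped__"; on deserialization the
// wrapper is removed, so the data round-trips as inert user data instead of
// being interpreted as a live LangChain structure.
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

function escapeLc(value: Json): Json {
  if (Array.isArray(value)) return value.map(escapeLc);
  if (value !== null && typeof value === "object") {
    const mapped: { [k: string]: Json } = {};
    for (const [k, v] of Object.entries(value)) mapped[k] = escapeLc(v);
    // Wrap only objects that would otherwise look like LangChain structures.
    return "lc" in value ? { __lc_escaped__: mapped } : mapped;
  }
  return value;
}

function unescapeLc(value: Json): Json {
  if (Array.isArray(value)) return value.map(unescapeLc);
  if (value !== null && typeof value === "object") {
    const obj = value as { [k: string]: Json };
    if ("__lc_escaped__" in obj && Object.keys(obj).length === 1) {
      return unescapeLc(obj.__lc_escaped__); // unwrap back to plain data
    }
    const mapped: { [k: string]: Json } = {};
    for (const [k, v] of Object.entries(obj)) mapped[k] = unescapeLc(v);
    return mapped;
  }
  return value;
}

const userData: Json = { lc: 1, type: "secret", id: ["OPENAI_API_KEY"] };
const escaped = escapeLc(userData);
console.log(JSON.stringify(escaped));
// {"__lc_escaped__":{"lc":1,"type":"secret","id":["OPENAI_API_KEY"]}}
console.log(JSON.stringify(unescapeLc(escaped)) === JSON.stringify(userData)); // true
```

Because the wrapper is added on the way out and stripped on the way in, an injected secret or constructor structure never reaches the deserializer's object-resolution path.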
## Migration guide

### No changes needed for most users

If you're deserializing standard LangChain types (messages, documents, prompts) using the core import map, your code will work without changes:

```typescript
import { load } from "@langchain/core/load";

// Works with default settings
const obj = await load(serializedData);
```
### For secrets from environment

`secretsFromEnv` now defaults to `false`, and missing secrets throw an error. If you need to load secrets:

```typescript
import { load } from "@langchain/core/load";

// Provide secrets explicitly (recommended)
const obj = await load(serializedData, {
  secretsMap: { OPENAI_API_KEY: process.env.OPENAI_API_KEY },
});

// Or explicitly opt in to loading from env (only use with trusted data)
const obj = await load(serializedData, { secretsFromEnv: true });
```

> **Warning:** Only enable `secretsFromEnv` if you trust the serialized data. Untrusted data could extract any environment variable.

> **Note:** If a secret reference is encountered but not found in `secretsMap` (and `secretsFromEnv` is `false` or the secret is not in the environment), an error is thrown. This fail-safe behavior ensures you're aware of missing secrets rather than silently receiving `null` values.
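If several secrets are needed, one pattern is to build `secretsMap` from an explicit allowlist of environment variable names rather than enabling `secretsFromEnv`. The `buildSecretsMap` helper below is a sketch, not a LangChain API:

```typescript
// Illustrative helper: populate a secretsMap for load() from an allowlist of
// environment variable names. Only the listed variables can ever be resolved,
// no matter which names an injected payload references.
function buildSecretsMap(
  allowed: string[],
  env: Record<string, string | undefined> = process.env
): Record<string, string> {
  const map: Record<string, string> = {};
  for (const name of allowed) {
    const value = env[name];
    if (value !== undefined) map[name] = value;
  }
  return map;
}

// Usage (assuming the variables are set in the environment):
//   const obj = await load(serializedData, {
//     secretsMap: buildSecretsMap(["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]),
//   });
console.log(buildSecretsMap(["A", "B"], { A: "x", C: "y" })); // { A: 'x' }
```

This keeps the fail-safe behavior intact: a payload referencing any variable outside the allowlist still throws instead of silently reading `process.env`.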
### For deeply nested structures

If you have legitimate deeply nested data that exceeds the default depth limit of 50:

```typescript
import { load } from "@langchain/core/load";

const obj = await load(serializedData, { maxDepth: 100 });
```
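Before raising the limit, it can help to measure how deep your data actually nests. A minimal sketch (the `jsonDepth` helper is illustrative and itself recursive, so it is only suitable for data you already trust not to be adversarially deep):

```typescript
// Illustrative helper: compute the nesting depth of a parsed JSON value.
// Scalars count as depth 0; each enclosing object or array adds 1.
function jsonDepth(value: unknown): number {
  if (Array.isArray(value)) {
    return 1 + Math.max(0, ...value.map(jsonDepth));
  }
  if (value !== null && typeof value === "object") {
    return 1 + Math.max(0, ...Object.values(value).map(jsonDepth));
  }
  return 0;
}

console.log(jsonDepth({ a: { b: [1, 2, { c: 3 }] } })); // 4
```

If the measured depth sits comfortably under 50, there is no need to change `maxDepth` at all.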
### For custom import maps

If you provide custom import maps, ensure they only contain trusted modules:

```typescript
import { load } from "@langchain/core/load";
import * as myModule from "./my-trusted-module";

// GOOD - explicitly include only trusted modules
const obj = await load(serializedData, {
  importMap: { my_module: myModule },
});

// BAD - never populate from user input
const obj = await load(serializedData, {
  importMap: userProvidedImports, // DANGEROUS!
});
```
{
"affected": [
{
"package": {
"ecosystem": "npm",
"name": "@langchain/core"
},
"ranges": [
{
"events": [
{
"introduced": "1.0.0"
},
{
"fixed": "1.1.8"
}
],
"type": "ECOSYSTEM"
}
]
},
{
"package": {
"ecosystem": "npm",
"name": "@langchain/core"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"fixed": "0.3.80"
}
],
"type": "ECOSYSTEM"
}
]
},
{
"package": {
"ecosystem": "npm",
"name": "langchain"
},
"ranges": [
{
"events": [
{
"introduced": "1.0.0"
},
{
"fixed": "1.2.3"
}
],
"type": "ECOSYSTEM"
}
]
},
{
"package": {
"ecosystem": "npm",
"name": "langchain"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"fixed": "0.3.37"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2025-68665"
],
"database_specific": {
"cwe_ids": [
"CWE-502"
],
"github_reviewed": true,
"github_reviewed_at": "2025-12-23T20:08:48Z",
"nvd_published_at": "2025-12-23T23:15:45Z",
"severity": "HIGH"
},
"details": "## Context\n\nA serialization injection vulnerability exists in LangChain JS\u0027s `toJSON()` method (and subsequently when string-ifying objects using `JSON.stringify()`. The method did not escape objects with `\u0027lc\u0027` keys when serializing free-form data in kwargs. The `\u0027lc\u0027` key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.\n\n### Attack surface\n\nThe core vulnerability was in `Serializable.toJSON()`: this method failed to escape user-controlled objects containing `\u0027lc\u0027` keys within kwargs (e.g., `additional_kwargs`, `metadata`, `response_metadata`). When this unescaped data was later deserialized via `load()`, the injected structures were treated as legitimate LangChain objects rather than plain user data.\n\nThis escaping bug enabled several attack vectors:\n\n1. **Injection via user data**: Malicious LangChain object structures could be injected through user-controlled fields like `metadata`, `additional_kwargs`, or `response_metadata`\n2. **Secret extraction**: Injected secret structures could extract environment variables when `secretsFromEnv` was enabled (which had no explicit default, effectively defaulting to `true` behavior)\n3. **Class instantiation via import maps**: Injected constructor structures could instantiate any class available in the provided import maps with attacker-controlled parameters\n\n**Note on import maps:** Classes must be explicitly included in import maps to be instantiatable. The core import map includes standard types (messages, prompts, documents), and users can extend this via `importMap` and `optionalImportsMap` options. 
This architecture naturally limits the attack surface\u2014an `allowedObjects` parameter is not necessary because users control which classes are available through the import maps they provide.\n\n**Security hardening:** This patch fixes the escaping bug in `toJSON()` and introduces new restrictive defaults in `load()`: `secretsFromEnv` now explicitly defaults to `false`, and a `maxDepth` parameter protects against DoS via deeply nested structures. JSDoc security warnings have been added to all import map options.\n\n## Who is affected?\n\nApplications are vulnerable if they:\n\n1. **Serialize untrusted data via `JSON.stringify()` on Serializable objects, then deserialize with `load()`** \u2014 Trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains `\u0027lc\u0027` key structures.\n2. **Deserialize untrusted data with `load()`** \u2014 Directly deserializing untrusted data that may contain injected `\u0027lc\u0027` structures.\n3. **Use LangGraph checkpoints** \u2014 Checkpoint serialization/deserialization paths may be affected.\n\nThe most common attack vector is through **LLM response fields** like `additional_kwargs` or `response_metadata`, which can be controlled via prompt injection and then serialized/deserialized in streaming operations.\n\n## Impact\n\nAttackers who control serialized data can extract environment variable secrets by injecting `{\"lc\": 1, \"type\": \"secret\", \"id\": [\"ENV_VAR\"]}` to load environment variables during deserialization (when `secretsFromEnv: true`). 
They can also instantiate classes with controlled parameters by injecting constructor structures to instantiate any class within the provided import maps with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.\n\nKey severity factors:\n\n- Affects the serialization path\u2014applications trusting their own serialization output are vulnerable\n- Enables secret extraction when combined with `secretsFromEnv: true`\n- LLM responses in `additional_kwargs` can be controlled via prompt injection\n\n## Exploit example\n\n```typescript\nimport { load } from \"@langchain/core/load\";\n\n// Attacker injects secret structure into user-controlled data\nconst attackerPayload = JSON.stringify({\n user_data: {\n lc: 1,\n type: \"secret\",\n id: [\"OPENAI_API_KEY\"],\n },\n});\n\nprocess.env.OPENAI_API_KEY = \"sk-secret-key-12345\";\n\n// With secretsFromEnv: true, the secret is extracted\nconst deserialized = await load(attackerPayload, { secretsFromEnv: true });\n\nconsole.log(deserialized.user_data); // \"sk-secret-key-12345\" - SECRET LEAKED!\n```\n\n## Security hardening changes\n\nThis patch introduces the following changes to `load()`:\n\n1. **`secretsFromEnv` default changed to `false`**: Disables automatic secret loading from environment variables. Secrets not found in `secretsMap` now throw an error instead of being loaded from `process.env`. This fail-safe behavior ensures missing secrets are caught immediately rather than silently continuing with `null`.\n2. **New `maxDepth` parameter** (defaults to `50`): Protects against denial-of-service attacks via deeply nested JSON structures that could cause stack overflow.\n3. **Escape mechanism in `toJSON()`**: User-controlled objects containing `\u0027lc\u0027` keys are now wrapped in `{\"__lc_escaped__\": {...}}` during serialization and unwrapped as plain data during deserialization.\n4. 
**JSDoc security warnings**: All import map options (`importMap`, `optionalImportsMap`, `optionalImportEntrypoints`) now include security warnings about never populating them from user input.\n\n## Migration guide\n\n### No changes needed for most users\n\nIf you\u0027re deserializing standard LangChain types (messages, documents, prompts) using the core import map, your code will work without changes:\n\n```typescript\nimport { load } from \"@langchain/core/load\";\n\n// Works with default settings\nconst obj = await load(serializedData);\n```\n\n### For secrets from environment\n\n`secretsFromEnv` now defaults to `false`, and missing secrets throw an error. If you need to load secrets:\n\n```typescript\nimport { load } from \"@langchain/core/load\";\n\n// Provide secrets explicitly (recommended)\nconst obj = await load(serializedData, {\n secretsMap: { OPENAI_API_KEY: process.env.OPENAI_API_KEY },\n});\n\n// Or explicitly opt-in to load from env (only use with trusted data)\nconst obj = await load(serializedData, { secretsFromEnv: true });\n```\n\n\u003e **Warning:** Only enable `secretsFromEnv` if you trust the serialized data. Untrusted data could extract any environment variable.\n\n\u003e **Note:** If a secret reference is encountered but not found in `secretsMap` (and `secretsFromEnv` is `false` or the secret is not in the environment), an error is thrown. 
This fail-safe behavior ensures you\u0027re aware of missing secrets rather than silently receiving `null` values.\n\n### For deeply nested structures\n\nIf you have legitimate deeply nested data that exceeds the default depth limit of 50:\n\n```typescript\nimport { load } from \"@langchain/core/load\";\n\nconst obj = await load(serializedData, { maxDepth: 100 });\n```\n\n### For custom import maps\n\nIf you provide custom import maps, ensure they only contain trusted modules:\n\n```typescript\nimport { load } from \"@langchain/core/load\";\nimport * as myModule from \"./my-trusted-module\";\n\n// GOOD - explicitly include only trusted modules\nconst obj = await load(serializedData, {\n importMap: { my_module: myModule },\n});\n\n// BAD - never populate from user input\nconst obj = await load(serializedData, {\n importMap: userProvidedImports, // DANGEROUS!\n});\n```",
"id": "GHSA-r399-636x-v7f6",
"modified": "2025-12-24T01:08:11Z",
"published": "2025-12-23T20:08:48Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/security/advisories/GHSA-r399-636x-v7f6"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2025-68665"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/commit/e5063f9c6e9989ea067dfdff39262b9e7b6aba62"
},
{
"type": "PACKAGE",
"url": "https://github.com/langchain-ai/langchainjs"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/releases/tag/%40langchain%2Fcore%401.1.8"
},
{
"type": "WEB",
"url": "https://github.com/langchain-ai/langchainjs/releases/tag/langchain%401.2.3"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:N/A:N",
"type": "CVSS_V3"
}
],
"summary": "LangChain serialization injection vulnerability enables secret extraction"
}