GHSA-QH6H-P6C9-FF54

Vulnerability from github – Published: 2026-03-27 19:45 – Updated: 2026-03-31 18:41
Summary
LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions
Details

Summary

Multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples).

Note: The affected functions (load_prompt, load_prompt_from_config, and the .save() method on prompt classes) are undocumented legacy APIs. They are superseded by the dumpd/dumps/load/loads serialization APIs in langchain_core.load, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.

Affected component

Package: `langchain-core`
File: `langchain_core/prompts/loading.py`
Affected functions: `_load_template()`, `_load_examples()`, `_load_few_shot_prompt()`

Severity

High

The score reflects the file-extension constraints that limit which files can be read.

Vulnerable code paths

| Config key | Loaded by | Readable extensions |
|---|---|---|
| `template_path`, `suffix_path`, `prefix_path` | `_load_template()` | `.txt` |
| `examples` (when string) | `_load_examples()` | `.json`, `.yaml`, `.yml` |
| `example_prompt_path` | `_load_few_shot_prompt()` | `.json`, `.yaml`, `.yml` |

None of these code paths validated the supplied path against absolute path injection or .. traversal sequences before reading from disk.
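To make the gap concrete, here is a minimal stdlib-only sketch of the unsafe pattern described above (illustrative only, not the library's actual code): the loader checks the file extension and nothing else, so both absolute paths and `..` segments reach `open()` unmodified.

```python
import os
import tempfile

def unsafe_load_template(base_dir: str, template_path: str) -> str:
    """Sketch of an extension-check-only loader (the vulnerable pattern)."""
    if not template_path.endswith(".txt"):
        raise ValueError("expected a .txt file")
    # os.path.join discards base_dir entirely when template_path is
    # absolute, and open() happily follows ".." segments otherwise.
    full_path = os.path.join(base_dir, template_path)
    with open(full_path) as f:
        return f.read()

# Demonstration: a "secret" outside the intended base directory is readable.
outside = tempfile.mkdtemp()
base = os.path.join(outside, "prompts")
os.mkdir(base)
with open(os.path.join(outside, "secret.txt"), "w") as f:
    f.write("leaked")

print(unsafe_load_template(base, "../secret.txt"))                      # traversal
print(unsafe_load_template(base, os.path.join(outside, "secret.txt")))  # absolute path
```

Both calls escape `base` and print the secret, which is exactly the behavior the extension check fails to prevent.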

Impact

An attacker who controls or influences the prompt configuration dict can read files outside the intended directory:

  • .txt files: cloud-mounted secrets (/mnt/secrets/api_key.txt), requirements.txt, internal system prompts
  • .json/.yaml files: cloud credentials (~/.docker/config.json, ~/.azure/accessTokens.json), Kubernetes manifests, CI/CD configs, application settings

This is exploitable in applications that accept prompt configs from untrusted sources, including low-code AI builders and API wrappers that expose load_prompt_from_config().

Proof of concept

```python
from langchain_core.prompts.loading import load_prompt_from_config

# Reads /tmp/secret.txt via absolute path injection
config = {
    "_type": "prompt",
    "template_path": "/tmp/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)
print(prompt.template)  # file contents disclosed

# Reads ../../etc/secret.txt via directory traversal
config = {
    "_type": "prompt",
    "template_path": "../../etc/secret.txt",
    "input_variables": [],
}
prompt = load_prompt_from_config(config)

# Reads arbitrary .json via few-shot examples
config = {
    "_type": "few_shot",
    "examples": "../../../../.docker/config.json",
    "example_prompt": {
        "_type": "prompt",
        "input_variables": ["input", "output"],
        "template": "{input}: {output}",
    },
    "prefix": "",
    "suffix": "{query}",
    "input_variables": ["query"],
}
prompt = load_prompt_from_config(config)
```

Mitigation

Update langchain-core to >= 1.2.22.

The fix adds path validation that rejects absolute paths and .. traversal sequences by default. An allow_dangerous_paths=True keyword argument is available on load_prompt() and load_prompt_from_config() for trusted inputs.
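The kind of check the fix performs can be sketched as follows. This is a hypothetical helper, not the actual patched code: it rejects absolute paths and any `..` segment before the filesystem is touched, checking both POSIX and Windows conventions.

```python
from pathlib import PurePosixPath, PureWindowsPath

def is_dangerous_path(path_str: str) -> bool:
    """Illustrative validation: True for absolute paths or ".." traversal."""
    for cls in (PurePosixPath, PureWindowsPath):
        p = cls(path_str)
        # Reject anything anchored at a filesystem root or containing a
        # parent-directory segment under either path convention.
        if p.is_absolute() or ".." in p.parts:
            return True
    return False

assert is_dangerous_path("/tmp/secret.txt")             # absolute path rejected
assert is_dangerous_path("C:/Windows/secret.txt")       # Windows absolute path rejected
assert is_dangerous_path("../../etc/secret.txt")        # traversal rejected
assert not is_dangerous_path("templates/greeting.txt")  # plain relative path allowed
```

Checking both `PurePosixPath` and `PureWindowsPath` matters because a path like `C:/Windows/secret.txt` parses as relative under POSIX rules; validating under both conventions closes that gap regardless of the host platform.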

As described above, these legacy APIs have been formally deprecated. Users should migrate to dumpd/dumps/load/loads from langchain_core.load.

Credit

  • jiayuqi7813 (reporter)
  • VladimirEliTokarev (reporter)
  • Rickidevs (reporter)
  • Kenneth Cox, cczine@gmail.com (reporter)

Source record (OSV JSON):

{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "langchain-core"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "1.2.22"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2026-34070"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-22"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2026-03-27T19:45:00Z",
    "nvd_published_at": "2026-03-31T03:15:58Z",
    "severity": "HIGH"
  },
  "details": "## Summary\n\nMultiple functions in `langchain_core.prompts.loading` read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to `load_prompt()` or `load_prompt_from_config()`, an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (`.txt` for templates, `.json`/`.yaml` for examples).\n\n**Note:** The affected functions (`load_prompt`, `load_prompt_from_config`, and the `.save()` method on prompt classes) are undocumented legacy APIs. They are superseded by the `dumpd`/`dumps`/`load`/`loads` serialization APIs in `langchain_core.load`, which do not perform filesystem reads and use an allowlist-based security model. As part of this fix, the legacy APIs have been formally deprecated and will be removed in 2.0.0.\n\n## Affected component\n\n**Package:** `langchain-core`\n**File:** `langchain_core/prompts/loading.py`\n**Affected functions:** `_load_template()`, `_load_examples()`, `_load_few_shot_prompt()`\n\n## Severity\n\n**High** \n\nThe score reflects the file-extension constraints that limit which files can be read.\n\n## Vulnerable code paths\n\n| Config key | Loaded by | Readable extensions |\n|---|---|---|\n| `template_path`, `suffix_path`, `prefix_path` | `_load_template()` | `.txt` |\n| `examples` (when string) | `_load_examples()` | `.json`, `.yaml`, `.yml` |\n| `example_prompt_path` | `_load_few_shot_prompt()` | `.json`, `.yaml`, `.yml` |\n\nNone of these code paths validated the supplied path against absolute path injection or `..` traversal sequences before reading from disk.\n\n## Impact\n\nAn attacker who controls or influences the prompt configuration dict can read files outside the intended directory:\n\n- **`.txt` files:** cloud-mounted secrets (`/mnt/secrets/api_key.txt`), `requirements.txt`, internal system prompts\n- **`.json`/`.yaml` files:** cloud credentials (`~/.docker/config.json`, `~/.azure/accessTokens.json`), Kubernetes manifests, CI/CD configs, application settings\n\nThis is exploitable in applications that accept prompt configs from untrusted sources, including low-code AI builders and API wrappers that expose `load_prompt_from_config()`.\n\n## Proof of concept\n\n```python\nfrom langchain_core.prompts.loading import load_prompt_from_config\n\n# Reads /tmp/secret.txt via absolute path injection\nconfig = {\n    \"_type\": \"prompt\",\n    \"template_path\": \"/tmp/secret.txt\",\n    \"input_variables\": [],\n}\nprompt = load_prompt_from_config(config)\nprint(prompt.template)  # file contents disclosed\n\n# Reads ../../etc/secret.txt via directory traversal\nconfig = {\n    \"_type\": \"prompt\",\n    \"template_path\": \"../../etc/secret.txt\",\n    \"input_variables\": [],\n}\nprompt = load_prompt_from_config(config)\n\n# Reads arbitrary .json via few-shot examples\nconfig = {\n    \"_type\": \"few_shot\",\n    \"examples\": \"../../../../.docker/config.json\",\n    \"example_prompt\": {\n        \"_type\": \"prompt\",\n        \"input_variables\": [\"input\", \"output\"],\n        \"template\": \"{input}: {output}\",\n    },\n    \"prefix\": \"\",\n    \"suffix\": \"{query}\",\n    \"input_variables\": [\"query\"],\n}\nprompt = load_prompt_from_config(config)\n```\n\n## Mitigation\n\n**Update `langchain-core` to \u003e= 1.2.22.**\n\nThe fix adds path validation that rejects absolute paths and `..` traversal sequences by default. An `allow_dangerous_paths=True` keyword argument is available on `load_prompt()` and `load_prompt_from_config()` for trusted inputs.\n\nAs described above, these legacy APIs have been formally deprecated. Users should migrate to `dumpd`/`dumps`/`load`/`loads` from `langchain_core.load`.\n\n## Credit\n\n- [jiayuqi7813](https://github.com/jiayuqi7813) reporter\n- [VladimirEliTokarev](https://github.com/VladimirEliTokarev) reporter\n- [Rickidevs](https://github.com/Rickidevs) reporter\n- Kenneth Cox (cczine@gmail.com) reporter",
  "id": "GHSA-qh6h-p6c9-ff54",
  "modified": "2026-03-31T18:41:12Z",
  "published": "2026-03-27T19:45:00Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2026-34070"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151130ba0d47c"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/langchain-ai/langchain"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langchain-ai/langchain/releases/tag/langchain-core==1.2.22"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
      "type": "CVSS_V3"
    }
  ],
  "summary": "LangChain Core has Path Traversal vulnerabilities in legacy `load_prompt` functions"
}

