GHSA-V8HW-MH8C-JXFC

Vulnerability from github – Published: 2026-03-26 18:31 – Updated: 2026-03-27 21:49
Summary
Langflow has Authenticated Code Execution in Agentic Assistant Validation
Details

Description

1. Summary

The Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side.

In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution.

2. Description

2.1 Intended Functionality

The Agentic Assistant endpoints are designed to help users generate and validate components for a flow. Users can submit requests to the assistant, which returns candidate component code for further processing.

A reasonable security expectation is that validation should treat model output as untrusted text and perform only static or side-effect-free checks.

The externally reachable endpoints are:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/router.py#L252-L297

The request model accepts attacker-influenceable fields such as input_value, flow_id, provider, model_name, session_id, and max_retries:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/schemas.py#L20-L31
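As a hedged sketch of the request model's shape: the field names below come from the advisory, but the types and defaults are assumptions, not Langflow's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the assistant request payload. Field names are taken from
# the advisory; types and defaults here are assumptions for illustration.
@dataclass
class AssistRequestSketch:
    input_value: str                      # attacker-influenced prompt text
    flow_id: Optional[str] = None
    provider: Optional[str] = None
    model_name: Optional[str] = None
    session_id: Optional[str] = None
    max_retries: int = 1                  # assumed default
```

Every field is client-supplied, so input_value in particular gives the attacker direct influence over what the model is asked to generate.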

2.2 Root Cause

In the affected code path, Langflow processes model output through the following chain:

/assist → execute_flow_with_validation() → execute_flow_file() → LLM returns component code → extract_component_code() → validate_component_code() → create_class() → generated class is instantiated

The assistant service reaches the validation path here:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L58-L79

The code extraction step occurs here:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/code_extraction.py#L11-L53

The validation entry point is here:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/validation.py#L27-L47

The issue is that this validation path is not purely static. It ultimately invokes create_class() in lfx.custom.validate, where Python code is dynamically executed via exec(...), including both global-scope preparation and class construction.

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L241-L272

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L394-L399

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L441-L443

As a result, LLM-generated code is treated as executable Python rather than inert data. This means the “validation” step crosses a trust boundary and becomes an execution sink.
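The core problem can be reproduced in a few lines. This is not Langflow's actual create_class() implementation, only a minimal sketch of the pattern the advisory describes: passing untrusted source to exec() runs all of its top-level statements immediately, before any "validation" verdict is reached.

```python
# Hypothetical reproduction of the exec-based sink pattern.
# Merely constructing the class object executes attacker-controlled
# top-level code as a side effect.
untrusted_source = '''
SIDE_EFFECT = "top-level code ran during validation"

class GeneratedComponent:
    display_name = "Looks harmless"
'''

def create_class_sketch(source: str, class_name: str):
    namespace = {}
    exec(source, namespace)   # executes ALL top-level statements here
    return namespace[class_name], namespace

cls, ns = create_class_sketch(untrusted_source, "GeneratedComponent")
print(ns["SIDE_EFFECT"])  # the "validation" step already ran attacker code
```

A top-level os.system(...) or open(...) call in the generated source would run at the same point, which is why the validation step is an execution sink rather than a check.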

The streaming path can also reach this sink when the request is classified into the component-generation branch:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L142-L156

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L259-L300

3. Proof of Concept (PoC)

  1. Send a request to the Agentic Assistant endpoint.
  2. Provide input that causes the model to return malicious component code.
  3. The returned code reaches the validation path.
  4. During validation, the server dynamically executes the generated Python.
  5. Arbitrary server-side code execution occurs.
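The request in step 1 might look like the sketch below. The /assist path comes from the advisory's call chain and the field names from its schema reference; the host, port, prompt text, and provider/model values are assumptions for illustration, and no request is actually sent here.

```python
import json

# Illustrative shape of a PoC request (values are hypothetical).
base_url = "http://localhost:7860"           # assumed default Langflow port
endpoint = base_url + "/api/v1/assist"       # assumed full path for /assist
payload = {
    "input_value": "Generate a component that ...",  # prompt steering the LLM
    "flow_id": None,
    "provider": "openai",                    # assumed provider value
    "model_name": "gpt-4o",                  # assumed model value
    "session_id": "poc-session",
    "max_retries": 1,
}
headers = {
    "Authorization": "Bearer <token>",       # bearer token, cookie, or API key
    "Content-Type": "application/json",
}
body = json.dumps(payload)
```

Any authentication source accepted by the deployment (see section 5) would satisfy the Authorization requirement.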

4. Impact

  • Attackers who can access the Agentic Assistant feature and influence model output may execute arbitrary Python code on the server.
  • This can lead to:
      • OS command execution
      • file read/write
      • credential or secret disclosure
      • full compromise of the Langflow process

5. Exploitability Notes

This issue is most accurately described as an authenticated or feature-reachable code execution vulnerability, rather than an unconditional unauthenticated remote attack.

Severity depends on deployment model:

  • In local-only, single-user development setups, the issue may be limited to self-exposure by the operator.
  • In shared, team, or internet-exposed deployments, it may be exploitable by other users or attackers who can reach the assistant feature.

The assistant feature depends on an active user context:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/utils/core.py#L38

Authentication sources include bearer token, cookie, or API key:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L39-L53

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L156-L163

Default deployment settings may widen exposure, including AUTO_LOGIN=true and the /api/v1/auto_login endpoint:

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/services/settings/auth.py#L71-L87

https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/v1/login.py#L96-L135

6. Patch Recommendation

  • Remove all dynamic execution from the validation path.
  • Ensure validation is strictly static and side-effect-free.
  • Treat all LLM output as untrusted input.
  • If code generation must be supported, require explicit approval and run it in a hardened sandbox isolated from the main server process.
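A strictly static check along the lines recommended above can be built on the ast module, which parses the source without executing it. This is a minimal sketch, not Langflow's implementation; the allow-list of top-level node types is an assumption about what a generated component should contain.

```python
import ast

# Sketch of a static, side-effect-free validation pass: parse the
# untrusted source and inspect the AST. Nothing is ever executed.
ALLOWED_TOP_LEVEL = (
    ast.Import, ast.ImportFrom, ast.ClassDef,
    ast.FunctionDef, ast.Assign, ast.AnnAssign,
)

def validate_component_statically(source: str) -> list:
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    if not any(isinstance(n, ast.ClassDef) for n in tree.body):
        problems.append("no top-level class definition found")
    for node in tree.body:
        if not isinstance(node, ALLOWED_TOP_LEVEL):
            # e.g. a bare expression/call at module scope is rejected,
            # never run
            problems.append(
                f"disallowed top-level statement: {type(node).__name__}"
            )
    return problems
```

Because the check never calls exec() or instantiates anything, malicious top-level statements are merely reported, not executed.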

Discovered by: @kexinoh (https://github.com/kexinoh, works at Tencent Zhuque Lab)


{
  "affected": [
    {
      "database_specific": {
        "last_known_affected_version_range": "\u003c= 1.8.1"
      },
      "package": {
        "ecosystem": "PyPI",
        "name": "langflow"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "1.9.0"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2026-33873"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-94"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2026-03-26T18:31:36Z",
    "nvd_published_at": "2026-03-27T21:17:23Z",
    "severity": "CRITICAL"
  },
  "details": "## Description\n\n### 1. Summary\n\nThe Agentic Assistant feature in Langflow executes LLM-generated Python code during its **validation** phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side.\n\nIn deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution.\n\n### 2. Description\n\n#### 2.1 Intended Functionality\n\nThe Agentic Assistant endpoints are designed to help users generate and validate components for a flow. Users can submit requests to the assistant, which returns candidate component code for further processing.\n\nA reasonable security expectation is that validation should treat model output as **untrusted text** and perform only static or side-effect-free checks.\n\nThe externally reachable endpoints are:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/router.py#L252-L297](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/router.py#L252-L297)\n\nThe request model accepts attacker-influenceable fields such as `input_value`, `flow_id`, `provider`, `model_name`, `session_id`, and `max_retries`:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/schemas.py#L20-L31](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/schemas.py#L20-L31)\n\n#### 2.2 Root Cause\n\nIn the affected code path, Langflow processes model output through the following chain:\n\n`/assist`\n\u2192 `execute_flow_with_validation()`\n\u2192 `execute_flow_file()`\n\u2192 LLM returns component code\n\u2192 `extract_component_code()`\n\u2192 
`validate_component_code()`\n\u2192 `create_class()`\n\u2192 generated class is instantiated\n\nThe assistant service reaches the validation path here:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L58-L79](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L58-L79)\n\nThe code extraction step occurs here:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/code_extraction.py#L11-L53](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/code_extraction.py#L11-L53)\n\nThe validation entry point is here:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/validation.py#L27-L47](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/validation.py#L27-L47)\n\nThe issue is that this validation path is not purely static. 
It ultimately invokes `create_class()` in `lfx.custom.validate`, where Python code is dynamically executed via `exec(...)`, including both global-scope preparation and class construction.\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L241-L272](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L241-L272)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L394-L399](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L394-L399)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L441-L443](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L441-L443)\n\nAs a result, LLM-generated code is treated as executable Python rather than inert data. This means the \u201cvalidation\u201d step crosses a trust boundary and becomes an execution sink.\n\nThe streaming path can also reach this sink when the request is classified into the component-generation branch:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L142-L156](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L142-L156)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L259-L300](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L259-L300)\n\n### 3. Proof of Concept (PoC)\n\n1. 
Send a request to the Agentic Assistant endpoint.\n2. Provide input that causes the model to return malicious component code.\n3. The returned code reaches the validation path.\n4. During validation, the server dynamically executes the generated Python.\n5. Arbitrary server-side code execution occurs.\n\n### 4. Impact\n\n* Attackers who can access the Agentic Assistant feature and influence model output may execute arbitrary Python code on the server.\n* This can lead to:\n\n  * OS command execution\n  * file read/write\n  * credential or secret disclosure\n  * full compromise of the Langflow process\n\n### 5. Exploitability Notes\n\nThis issue is most accurately described as an **authenticated or feature-reachable code execution vulnerability**, rather than an unconditional unauthenticated remote attack.\n\nSeverity depends on deployment model:\n\n* In **local-only, single-user development setups**, the issue may be limited to self-exposure by the operator.\n* In **shared, team, or internet-exposed deployments**, it may be exploitable by other users or attackers who can reach the assistant feature.\n\nThe assistant feature depends on an active user context:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/utils/core.py#L38](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/utils/core.py#L38)\n\nAuthentication sources include bearer token, cookie, or API 
key:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L39-L53](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L39-L53)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L156-L163](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L156-L163)\n\nDefault deployment settings may widen exposure, including `AUTO_LOGIN=true` and the `/api/v1/auto_login` endpoint:\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/services/settings/auth.py#L71-L87](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/services/settings/auth.py#L71-L87)\n\n[https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/v1/login.py#L96-L135](https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/v1/login.py#L96-L135)\n\n### 6. Patch Recommendation\n\n* Remove all dynamic execution from the validation path.\n* Ensure validation is strictly static and side-effect-free.\n* Treat all LLM output as untrusted input.\n* If code generation must be supported, require explicit approval and run it in a hardened sandbox isolated from the main server process.\n\nDiscovered by: @kexinoh ([https://github.com/kexinoh](https://github.com/kexinoh), works at Tencent Zhuque Lab)",
  "id": "GHSA-v8hw-mh8c-jxfc",
  "modified": "2026-03-27T21:49:27Z",
  "published": "2026-03-26T18:31:36Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/security/advisories/GHSA-v8hw-mh8c-jxfc"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2026-33873"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/services/settings/auth.py#L71-L87"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L441-L443"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L394-L399"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py#L241-L272"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L39-L53"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py#L156-L163"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/v1/login.py#L96-L135"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/utils/core.py#L38"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L58-L79"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L259-L300"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py#L142-L156"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/validation.py#L27-L47"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/code_extraction.py#L11-L53"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/schemas.py#L20-L31"
    },
    {
      "type": "WEB",
      "url": "https://github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/router.py#L252-L297"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/langflow-ai/langflow"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:N/SC:H/SI:H/SA:N",
      "type": "CVSS_V4"
    }
  ],
  "summary": "Langflow has Authenticated Code Execution in Agentic Assistant Validation"
}

