GHSA-XQMJ-J6MV-4862

Vulnerability from github – Published: 2026-04-24 16:02 – Updated: 2026-05-04 20:43
Summary
LiteLLM: Server-Side Template Injection in /prompts/test endpoint
Details

Impact

The POST /prompts/test endpoint accepted user-supplied prompt templates and rendered them without sandboxing. A crafted template could run arbitrary code inside the LiteLLM Proxy process.

The endpoint checks only that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow command execution on the host.

Proxy deployments running an affected version are in scope.
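The attack class here (CWE-1336, template injection) can be illustrated with a pure-Python stand-in. The names below (`Prompt`, `render`, `SECRET`) are hypothetical and not taken from LiteLLM's code; the point is the attribute traversal, which is the same trick real SSTI payloads use against an unsandboxed template renderer:

```python
# Hypothetical stand-in for an unsandboxed template renderer.
# str.format, like an unsandboxed template engine, lets the template
# walk attributes of any object it is rendered against.

SECRET = "sk-provider-api-key"  # imagine a secret in process state


class Prompt:
    """Illustrative object handed to the renderer."""

    def __init__(self):
        self.text = ""


def render(template: str, **ctx) -> str:
    # Unsafe: the template controls which attributes get dereferenced.
    return template.format(**ctx)


# Benign use works as expected:
assert render("Hello {name}", name="world") == "Hello world"

# A malicious "prompt template" climbs from the object, through its
# __init__ method, to the module globals, and reads the secret:
payload = "{p.__init__.__globals__[SECRET]}"
assert render(payload, p=Prompt()) == SECRET
```

Real template engines expose much richer traversal paths (e.g. `__class__`/`__mro__` chains reaching `os` or `subprocess`), which is how template injection escalates from secret disclosure to code execution.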

Patches

The issue is fixed in 1.83.7-stable. The fix switches the prompt template renderer to a sandboxed environment that blocks the attributes this attack relies on.

LiteLLM recommends upgrading to 1.83.7-stable or later.
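As a sketch of the kind of fix described above, assuming a Jinja2-style renderer (the actual LiteLLM implementation may differ): rendering user templates with `SandboxedEnvironment` rejects access to dunder attributes such as `__class__` that SSTI payloads chain through, while leaving ordinary variable substitution intact.

```python
# Sketch: sandboxed vs. unsandboxed template rendering in Jinja2.
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

payload = "{{ ''.__class__.__mro__ }}"  # classic SSTI probe

# Unsandboxed: the probe resolves and leaks type internals.
assert "class" in Environment().from_string(payload).render()

# Sandboxed: the same template raises SecurityError instead of rendering.
blocked = False
try:
    SandboxedEnvironment().from_string(payload).render()
except SecurityError:
    blocked = True
assert blocked

# Benign templates still render normally under the sandbox.
out = SandboxedEnvironment().from_string("Hi {{ name }}").render(name="a")
assert out == "Hi a"
```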

Workarounds

If upgrading is not immediately possible:

  1. Block POST /prompts/test at your reverse proxy or API gateway.
  2. Review and rotate API keys that should not have access to prompt management routes.
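As a sketch of workaround 1, assuming an nginx reverse proxy in front of the LiteLLM Proxy (the path is from the advisory; everything else is illustrative and will vary by deployment):

```nginx
# Deny the vulnerable endpoint at the edge; all other proxy routes
# continue to pass through. The endpoint only serves POST, so denying
# every method on this exact path is sufficient.
location = /prompts/test {
    return 403;
}
```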
OSV record

{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "litellm"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "1.80.5"
            },
            {
              "fixed": "1.83.7"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2026-42203"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-1336"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2026-04-24T16:02:42Z",
    "nvd_published_at": null,
    "severity": "HIGH"
  },
  "details": "### Impact\nThe `POST /prompts/test` endpoint accepted user-supplied prompt templates and rendered them without sandboxing. A crafted template could run arbitrary code inside the LiteLLM Proxy process.\n\nThe endpoint only checks that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host.\n\nProxy deployments running an affected version are in scope.\n\n### Patches\nThe issue is fixed in **`1.83.7-stable`**. The fix switches the prompt template renderer to a sandboxed environment that blocks the attributes this attack relies on.\n\nLiteLLM recommends upgrading to `1.83.7-stable` or later.\n\n### Workarounds\nIf upgrading is not immediately possible:\n\n1. Block `POST /prompts/test` at your reverse proxy or API gateway.\n2. Review and rotate API keys that should not have access to prompt management routes.",
  "id": "GHSA-xqmj-j6mv-4862",
  "modified": "2026-05-04T20:43:03Z",
  "published": "2026-04-24T16:02:42Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/BerriAI/litellm/security/advisories/GHSA-xqmj-j6mv-4862"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/BerriAI/litellm"
    },
    {
      "type": "WEB",
      "url": "https://github.com/BerriAI/litellm/releases/tag/v1.83.7-stable"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:N/SC:N/SI:N/SA:N",
      "type": "CVSS_V4"
    }
  ],
  "summary": "LiteLLM: Server-Side Template Injection in /prompts/test endpoint"
}
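The record's affected range (introduced 1.80.5, fixed 1.83.7) can be checked against an installed version with a minimal stdlib sketch. This is an illustration using plain tuple comparison, not a full PEP 440 comparator; suffixes such as `-stable` are simply stripped.

```python
# Sketch: is a given litellm version inside the advisory's affected range?
INTRODUCED = (1, 80, 5)  # first affected version
FIXED = (1, 83, 7)       # first fixed version


def parse(version: str) -> tuple:
    # Drop suffixes like "-stable", then split into integer components.
    return tuple(int(part) for part in version.split("-")[0].split("."))


def is_affected(version: str) -> bool:
    return INTRODUCED <= parse(version) < FIXED


assert is_affected("1.80.5")
assert is_affected("1.83.6")
assert not is_affected("1.83.7-stable")
assert not is_affected("1.79.0")
```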

