GHSA-HP5M-24VP-VQ2Q

Vulnerability from github – Published: 2026-05-08 19:45 – Updated: 2026-05-08 19:45
Summary
Open WebUI's responses passthrough endpoint lacks access control authorization
Details

Summary

The /responses endpoint in the OpenAI router accepts any authenticated user and forwards requests directly to upstream LLM providers without enforcing per-model access control. While the primary chat completion endpoint (generate_chat_completion) checks model ownership, group membership, and AccessGrants before allowing a request, the /responses proxy only validates that the user has a valid session via get_verified_user.

This allows any authenticated user — regardless of role or group assignment — to interact with any model configured on the instance by sending a POST request to /api/openai/responses with an arbitrary model ID.
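The bypass described above can be sketched as a request construction. This is an illustrative proof-of-concept shape, not confirmed exploit code: the endpoint path comes from the advisory, but the token value, model ID, and the exact Responses API payload fields (`input`) are assumptions.

```python
# Hypothetical sketch of the reported bypass: /api/openai/responses accepts
# any valid session token and performs no per-model authorization, so the
# "model" field can name a model the user was never granted access to.
# Payload field names beyond "model" are illustrative assumptions.

def build_bypass_request(base_url: str, session_token: str, model_id: str) -> dict:
    """Construct the POST request that would reach a restricted model."""
    return {
        "method": "POST",
        "url": f"{base_url}/api/openai/responses",  # passthrough endpoint from the advisory
        "headers": {
            # Any authenticated user's token passes get_verified_user
            "Authorization": f"Bearer {session_token}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model_id,  # arbitrary model ID, e.g. one restricted by the admin
            "input": "Hello",   # minimal Responses-style payload (assumed shape)
        },
    }

req = build_bypass_request("https://webui.example.com", "low-priv-user-token", "o1-pro")
```

Because the proxy forwards the body as-is, no ownership, group, or AccessGrant check stands between this request and the upstream provider.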

Impact

Mapped to the OWASP Top 10 for LLM Applications:

  • Model Denial of Service (OWASP LLM04): An unauthorized user can submit resource-intensive requests to expensive models (e.g., o1-pro, GPT-4o) that were explicitly restricted by the administrator. In shared deployments, this can exhaust API budgets or rate limits, causing total service disruption for all legitimate users.

  • Model Theft (OWASP LLM10): If the instance proxies access to fine-tuned or self-hosted models, unauthorized users can freely interact with them, enabling capability extraction or model distillation without authorization.

  • Access Policy Bypass: Administrators lose the ability to enforce cost-tier restrictions, team-based model assignments, or compliance boundaries through the existing access control system.

The endpoint is a raw passthrough proxy and does not resolve workspace model configurations (system prompts, knowledge bases, RAG pipelines). Therefore, workspace-specific confidential data is not directly exposed through this vector.
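A minimal sketch of the missing authorization step follows. This is not the actual patch (see the PR below) and the `User`/`Model` shapes are simplified stand-ins for open-webui's real data model; it only illustrates the kind of ownership/group check that `generate_chat_completion` already performs and that the `/responses` proxy skipped.

```python
# Illustrative guard (assumed, simplified): before proxying to the upstream
# provider, resolve the requested model and verify the user may access it,
# rather than only verifying the session. "read_groups" stands in for the
# AccessGrants mechanism mentioned in the advisory.

from dataclasses import dataclass, field


@dataclass
class User:
    id: str
    role: str = "user"
    groups: set = field(default_factory=set)


@dataclass
class Model:
    id: str
    owner_id: str
    read_groups: set = field(default_factory=set)  # stand-in for AccessGrants


class AccessDenied(Exception):
    pass


def check_model_access(user: User, model: Model) -> None:
    """Raise AccessDenied unless the user is an admin, owns the model,
    or belongs to a group granted read access to it."""
    if user.role == "admin" or user.id == model.owner_id:
        return
    if user.groups & model.read_groups:
        return
    raise AccessDenied(f"user {user.id} may not use model {model.id}")


# A patched /responses handler would call check_model_access(user, model)
# before forwarding the request, instead of relying on get_verified_user alone.
```

The design point is that session validity and model authorization are separate checks; a passthrough proxy that performs only the first silently widens every model's audience to all authenticated users.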

PR: https://github.com/open-webui/open-webui/pull/23481


{
  "affected": [
    {
      "database_specific": {
        "last_known_affected_version_range": "\u003c= 0.8.12"
      },
      "package": {
        "ecosystem": "PyPI",
        "name": "open-webui"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "0.9.0"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2026-44556"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-284",
      "CWE-862"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2026-05-08T19:45:53Z",
    "nvd_published_at": null,
    "severity": "HIGH"
  },
  "details": "## Summary\n\nThe /responses endpoint in the OpenAI router accepts any authenticated user and forwards requests directly to upstream LLM providers without enforcing per-model access control. While the primary chat completion endpoint (generate_chat_completion) checks model ownership, group membership, and AccessGrants before allowing a request, the /responses proxy only validates that the user has a valid session via get_verified_user.\n\nThis allows any authenticated user \u2014 regardless of role or group assignment \u2014 to interact with any model configured on the instance by sending a POST request to /api/openai/responses with an arbitrary model ID.\n\n## Impact\n\nAs per OWASP TOP 10 LLM:\n\n- **Model Denial of Service (OWASP LLM04):** An unauthorized user can submit resource-intensive requests to expensive models (e.g., o1-pro, GPT-4o) that were explicitly restricted by the administrator. In shared deployments, this can exhaust API budgets or rate limits, causing total service disruption for all legitimate users.\n\n- **Model Theft (OWASP LLM10):** If the instance proxies access to fine-tuned or self-hosted models, unauthorized users can freely interact with them, enabling capability extraction or model distillation without authorization.\n\n- **Access Policy Bypass:** Administrators lose the ability to enforce cost-tier restrictions, team-based model assignments, or compliance boundaries through the existing access control system.\n\nThe endpoint is a raw passthrough proxy and does not resolve workspace model configurations (system prompts, knowledge bases, RAG pipelines). Therefore, workspace-specific confidential data is not directly exposed through this vector.\n\nPR: https://github.com/open-webui/open-webui/pull/23481",
  "id": "GHSA-hp5m-24vp-vq2q",
  "modified": "2026-05-08T19:45:53Z",
  "published": "2026-05-08T19:45:53Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/open-webui/open-webui/security/advisories/GHSA-hp5m-24vp-vq2q"
    },
    {
      "type": "WEB",
      "url": "https://github.com/open-webui/open-webui/pull/23481"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/open-webui/open-webui"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "Open WebUI\u0027s responses passthrough endpoint lacks access control authorization"
}