GHSA-MCMC-2M55-J8JJ
Vulnerability from github – Published: 2026-01-08 21:47 – Updated: 2026-01-08 21:47
Summary
The fix here (https://github.com/vllm-project/vllm/pull/27204) for CVE-2025-62164 is not sufficient: it only disables prompt embeds by default rather than addressing the root cause, so the DoS vulnerability remains when the feature is enabled.
Details
vLLM's pending change attempts to fix the root cause: missing sparse tensor validation. PyTorch (since ~v2.0) disables sparse tensor invariant checks by default for performance reasons. vLLM is adding sparse tensor validation to ensure indices are valid, non-negative, and within bounds; these checks catch malformed tensors before they are used.
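To illustrate what "indices are valid, non-negative, and within bounds" means in practice, here is a minimal pure-Python sketch of the kind of invariant check PyTorch performs on sparse COO tensors. The function name and structure are illustrative assumptions, not vLLM's or PyTorch's actual implementation:

```python
def validate_coo_indices(indices, shape):
    """Return True if a sparse COO index array is well-formed for `shape`.

    `indices` is one row per sparse dimension; row i holds the coordinate
    of each non-zero element along dimension i. (Hypothetical helper,
    mirroring the invariants PyTorch's disabled-by-default checks enforce.)
    """
    # The number of index rows must match the number of dimensions.
    if len(indices) != len(shape):
        return False
    for dim_size, row in zip(shape, indices):
        for idx in row:
            # Reject negative or out-of-bounds coordinates: a malformed
            # index can cause out-of-bounds access when the tensor is
            # later densified, which is the DoS vector described here.
            if idx < 0 or idx >= dim_size:
                return False
    return True

# A well-formed 3x4 sparse layout passes; out-of-range or negative
# indices are rejected.
print(validate_coo_indices([[0, 2], [1, 3]], (3, 4)))   # True
print(validate_coo_indices([[0, 5], [1, 3]], (3, 4)))   # False (5 >= 3)
print(validate_coo_indices([[-1], [0]], (3, 4)))        # False (negative)
```

In PyTorch itself, the equivalent checks can be requested at construction time (e.g. the `check_invariants` argument of `torch.sparse_coo_tensor`), which is the mechanism the advisory says is off by default.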
PoC
NA
Impact
The current fix only added a flag to enable/disable prompt embeds. By default the prompt-embeds feature is now disabled in vLLM, which stops DoS attacks through embeddings, but it does not address the underlying problem: when the flag is enabled, the potential for DoS attacks remains.
Changes
- https://github.com/vllm-project/vllm/pull/30649
{
"affected": [
{
"database_specific": {
"last_known_affected_version_range": "\u003c 0.11.1"
},
"package": {
"ecosystem": "PyPI",
"name": "vllm"
},
"ranges": [
{
"events": [
{
"introduced": "0.10.2"
},
{
"fixed": "0.13.0"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [],
"database_specific": {
"cwe_ids": [
"CWE-123",
"CWE-20",
"CWE-502",
"CWE-787"
],
"github_reviewed": true,
"github_reviewed_at": "2026-01-08T21:47:43Z",
"nvd_published_at": null,
"severity": "HIGH"
},
"details": "### Summary\nThe fix [here](https://github.com/vllm-project/vllm/pull/27204) for CVE-2025-62164 is not sufficient. The fix only disables prompt embeds by default rather than addressing the root cause, so the DoS vulnerability remains when the feature is enabled.\n\n### Details\nvLLM\u0027s pending change attempts to fix the root cause, which is the missing sparse tensor validation. PyTorch (~v2.0) disables sparse tensor validation (specifically, sparse tensor invariants checks) by default for performance reasons. vLLM is adding the sparse tensor validation to ensure indices are valid, non-negative, and within bounds. These checks help catch malformed tensors.\n\n### PoC\nNA\n\n### Impact\nCurrent fix only added a flag to disable/enable prompt embeds, so by default, prompt embeds feature is disabled in vLLM, which stops DoS attacks through the embeddings. However, It doesn\u2019t address the problem when the flag is enabled and there is still potential for DoS attacks.\n\n### Changes\n\n* https://github.com/vllm-project/vllm/pull/30649",
"id": "GHSA-mcmc-2m55-j8jj",
"modified": "2026-01-08T21:47:43Z",
"published": "2026-01-08T21:47:43Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-mcmc-2m55-j8jj"
},
{
"type": "WEB",
"url": "https://github.com/vllm-project/vllm/pull/30649"
},
{
"type": "PACKAGE",
"url": "https://github.com/vllm-project/vllm"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"type": "CVSS_V3"
}
],
"summary": "vLLM introduced enhanced protection for CVE-2025-62164"
}