{"vulnerability": "CVE-2025-46560", "sightings": [{"uuid": "a646f7bf-c8bf-45e1-add3-ddc64b585d27", "vulnerability_lookup_origin": "1a89b78e-f703-45f3-bb86-59eb712668bd", "author": "2a075640-a300-48a4-bb44-bc6130783b9b", "vulnerability": "CVE-2025-46560", "type": "seen", "source": "https://t.me/cvedetector/24063", "content": "{\n  \"Source\": \"CVE FEED\",\n  \"Title\": \"CVE-2025-46560 - vLLM Multimodal Tokenizer Resource Exhaustion\",\n  \"Content\": \"CVE ID : CVE-2025-46560 \nPublished : April 30, 2025, 1:15 a.m. | 2 hours ago \nDescription : vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|, <|image_|) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n\u00b2)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5. \nSeverity: 6.5 | MEDIUM \nVisit the link for more details, such as CVSS details, affected products, timeline, and more...\",\n  \"Detection Date\": \"30 Apr 2025\",\n  \"Type\": \"Vulnerability\"\n}", "creation_timestamp": "2025-04-30T05:22:33.000000Z"}, {"uuid": "337d6194-9a04-4cdb-9592-836c39ca67bb", "vulnerability_lookup_origin": "1a89b78e-f703-45f3-bb86-59eb712668bd", "author": "2a075640-a300-48a4-bb44-bc6130783b9b", "vulnerability": "CVE-2025-46560", "type": "seen", "source": "https://bsky.app/profile/cve.skyfleet.blue/post/3lnyvdljsau24", "content": "", "creation_timestamp": "2025-04-30T03:50:58.343346Z"}]}