{"uuid": "47f05669-921c-4936-a312-8366931e5b16", "vulnerability_lookup_origin": "1a89b78e-f703-45f3-bb86-59eb712668bd", "author": "2a075640-a300-48a4-bb44-bc6130783b9b", "vulnerability": "CVE-2025-52566", "type": "published-proof-of-concept", "source": "https://t.me/DarkWebInformer_CVEAlerts/19297", "content": "\ud83d\udd17 DarkWebInformer.com - Cyber Threat Intelligence\n\ud83d\udccc CVE ID: CVE-2025-52566\n\ud83d\udd25 CVSS Score: 8.6 (cvssV3_1, Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H)\n\ud83d\udd39 Description: llama.cpp is an inference of several LLM models in C/C++. Prior to version b5721, there is a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize) (src/llama-vocab.cpp:3036) resulting in unintended behavior in tokens copying size comparison. Allowing heap-overflowing llama.cpp inferencing engine with carefully manipulated text input during tokenization process. This issue has been patched in version b5721.\n\ud83d\udccf Published: 2025-06-24T03:21:19.009Z\n\ud83d\udccf Modified: 2025-06-24T03:21:19.009Z\n\ud83d\udd17 References:\n1. https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-7rxv-5jhh-j6xx\n2. https://github.com/ggml-org/llama.cpp/commit/dd6e6d0b6a4bbe3ebfc931d1eb14db2f2b1d70af", "creation_timestamp": "2025-06-24T03:48:12.000000Z"}