{"uuid": "90f0b654-4685-43b5-82e1-63e4c9b3e63a", "vulnerability_lookup_origin": "1a89b78e-f703-45f3-bb86-59eb712668bd", "author": "2a075640-a300-48a4-bb44-bc6130783b9b", "vulnerability": "CVE-2025-24357", "type": "seen", "source": "https://t.me/cvedetector/16490", "content": "{\n  \"Source\": \"CVE FEED\",\n  \"Title\": \"CVE-2025-24357 - vLLM Deserialization Code Execution Vulnerability\",\n  \"Content\": \"CVE ID : CVE-2025-24357 \nPublished : Jan. 27, 2025, 6:15 p.m. \nDescription : vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator to load model checkpoints downloaded from Hugging Face. It uses torch.load with the weights_only parameter left at its default of False, so loading malicious pickle data executes arbitrary code during unpickling. This vulnerability is fixed in v0.7.0. \nSeverity: 7.5 | HIGH \nVisit the link for more details, such as CVSS details, affected products, timeline, and more...\",\n  \"Detection Date\": \"27 Jan 2025\",\n  \"Type\": \"Vulnerability\"\n}", "creation_timestamp": "2025-01-27T20:11:23.000000Z"}
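The deserialization risk the entry describes can be sketched with a self-contained pickle payload. This is a hypothetical illustration of the underlying mechanism, not the actual CVE-2025-24357 exploit: `torch.load` with `weights_only=False` falls back to Python's pickle, and unpickling invokes whatever callable a crafted object's `__reduce__` returns.

```python
import os
import pickle

# Sketch of why loading untrusted checkpoints via plain pickle is
# dangerous: unpickling calls the callable returned by __reduce__.
# (Hypothetical payload for illustration; not the real vLLM exploit.)
class MaliciousCheckpoint:
    def __reduce__(self):
        # A real attacker could return (os.system, ("<any shell command>",)).
        # Here the command is a harmless echo.
        return (os.system, ("echo code-executed-during-unpickling",))

blob = pickle.dumps(MaliciousCheckpoint())

# Merely *loading* the blob runs the shell command; no attribute
# access or method call on the result is needed.
exit_code = pickle.loads(blob)
```

The general mitigation, which the v0.7.0 fix reflects, is to stop trusting pickle for weights: pass `weights_only=True` to `torch.load` (restricting unpickling to a safe allowlist of tensor-related types) or distribute weights in a non-executable format such as safetensors.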