CWE-1427
Improper Neutralization of Input Used for LLM Prompting
The product uses externally-provided data to build prompts provided to large language models (LLMs), but the way these prompts are constructed causes the LLM to fail to distinguish between user-supplied inputs and developer provided system directives.
CVE-2025-36730 (GCVE-0-2025-36730)
Vulnerability from cvelistv5
Published
2025-10-14 16:24
Modified
2025-10-14 19:11
Severity ?
VLAI Severity ?
EPSS score ?
CWE
- CWE-1427 - Improper Neutralization of Input Used for LLM Prompting
Summary
A prompt injection vulnerability exists in Windsurft version 1.10.7 in Write mode using SWE-1 model.
It is possible to create a file name that will be appended to the user prompt causing Windsurf to follow its instructions.
References
{ "containers": { "adp": [ { "metrics": [ { "other": { "content": { "id": "CVE-2025-36730", "options": [ { "Exploitation": "poc" }, { "Automatable": "no" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2025-10-14T19:10:59.458985Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2025-10-14T19:11:07.834Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "unaffected", "product": "Windsurf", "vendor": "Windsurf", "versions": [ { "status": "affected", "version": "1.10.7" } ] } ], "descriptions": [ { "lang": "en", "supportingMedia": [ { "base64": false, "type": "text/html", "value": "A prompt injection vulnerability exists in Windsurft version 1.10.7 in Write mode using SWE-1 model.\u003cbr\u003e\u003cbr\u003eIt is possible to create a file name that will be appended to the user prompt causing Windsurf to follow its instructions.\u003cbr\u003e" } ], "value": "A prompt injection vulnerability exists in Windsurft version 1.10.7 in Write mode using SWE-1 model.\n\nIt is possible to create a file name that will be appended to the user prompt causing Windsurf to follow its instructions." } ], "metrics": [ { "cvssV4_0": { "Automatable": "NOT_DEFINED", "Recovery": "NOT_DEFINED", "Safety": "NOT_DEFINED", "attackComplexity": "LOW", "attackRequirements": "NONE", "attackVector": "LOCAL", "baseScore": 4.6, "baseSeverity": "MEDIUM", "privilegesRequired": "NONE", "providerUrgency": "NOT_DEFINED", "subAvailabilityImpact": "NONE", "subConfidentialityImpact": "NONE", "subIntegrityImpact": "NONE", "userInteraction": "ACTIVE", "valueDensity": "NOT_DEFINED", "vectorString": "CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:A/VC:L/VI:L/VA:L/SC:N/SI:N/SA:N", "version": "4.0", "vulnAvailabilityImpact": "LOW", "vulnConfidentialityImpact": "LOW", "vulnIntegrityImpact": "LOW", "vulnerabilityResponseEffort": "NOT_DEFINED" }, "format": "CVSS", "scenarios": [ { "lang": "en", "value": "GENERAL" } ] } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-1427", "description": "CWE-1427: Improper Neutralization of Input Used for LLM Prompting", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2025-10-14T16:24:58.356Z", "orgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "shortName": "tenable" }, "references": [ { "url": "https://www.tenable.com/security/research/tra-2025-47" } ], "source": { "discovery": "UNKNOWN" }, "title": "Windsurf Prompt Injection via Filename", "x_generator": { "engine": "Vulnogram 0.2.0" } } }, "cveMetadata": { "assignerOrgId": "5ac1ecc2-367a-4d16-a0b2-35d495ddd0be", "assignerShortName": "tenable", "cveId": "CVE-2025-36730", "datePublished": "2025-10-14T16:24:58.356Z", "dateReserved": "2025-04-15T21:53:52.386Z", "dateUpdated": "2025-10-14T19:11:07.834Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
Mitigation
Phase: Architecture and Design
Description:
- LLM-enabled applications should be designed to ensure proper sanitization of user-controllable input, ensuring that no intentionally misleading or dangerous characters can be included. Additionally, they should be designed in a way that ensures that user-controllable input is identified as untrusted and potentially dangerous.
Mitigation
Phase: Implementation
Description:
- LLM prompts should be constructed in a way that effectively differentiates between user-supplied input and developer-constructed system prompting to reduce the chance of model confusion at inference-time.
Mitigation
Phase: Architecture and Design
Description:
- LLM-enabled applications should be designed to ensure proper sanitization of user-controllable input, ensuring that no intentionally misleading or dangerous characters can be included. Additionally, they should be designed in a way that ensures that user-controllable input is identified as untrusted and potentially dangerous.
Mitigation
Phase: Implementation
Description:
- Ensure that model training includes training examples that avoid leaking secrets and disregard malicious inputs. Train the model to recognize secrets, and label training data appropriately. Note that due to the non-deterministic nature of prompting LLMs, it is necessary to perform testing of the same test case several times in order to ensure that troublesome behavior is not possible. Additionally, testing should be performed each time a new model is used or a model's weights are updated.
Mitigation
Phases: Installation, Operation
Description:
- During deployment/operation, use components that operate externally to the system to monitor the output and act as a moderator. These components are called different terms, such as supervisors or guardrails.
Mitigation
Phase: System Configuration
Description:
- During system configuration, the model could be fine-tuned to better control and neutralize potentially dangerous inputs.
No CAPEC attack patterns related to this CWE.