All the vulnerabilities related to Siemens SINEC INS
var-202206-1428
Vulnerability from variot

In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When CVE-2022-1292 was fixed, it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0-3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068)

Bugs fixed (https://bugzilla.redhat.com/):
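The injection class described above is easiest to see with a small sketch. Python is used here purely for illustration (the actual c_rehash flaw lives in a shell/Perl script), and the filename `cert.pem; echo INJECTED` is a hypothetical attacker-chosen name:

```python
import subprocess

# Hypothetical attacker-chosen certificate filename embedding a shell
# command. The c_rehash flaw is this same pattern inside a shell/Perl
# script; Python is used here only to illustrate the class.
evil_name = "cert.pem; echo INJECTED"

# UNSAFE: interpolating the filename into a shell command string lets
# the ';' metacharacter terminate the intended command and start a new,
# attacker-chosen one.
unsafe = subprocess.run("printf '%s' " + evil_name,
                        shell=True, capture_output=True, text=True)
print(repr(unsafe.stdout))   # the injected 'echo INJECTED' actually ran

# SAFE: passing the filename as a discrete argv element means no shell
# ever parses it, so the ';' is just a character in a filename. This is
# why the OpenSSL rehash tool, which avoids the shell, is the
# recommended replacement.
safe = subprocess.run(["printf", "%s", evil_name],
                      capture_output=True, text=True)
print(repr(safe.stdout))     # the whole string is one literal argument
```

The unsafe call's output contains the injected command's output; the safe call simply prints the filename verbatim.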

2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update
Advisory ID: RHSA-2022:6156-01
Product: RHODF
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6156
Issue date: 2022-08-24
CVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528 CVE-2022-0235 CVE-2022-0536 CVE-2022-0670 CVE-2022-1292 CVE-2022-1586 CVE-2022-1650 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24675 CVE-2022-24771 CVE-2022-24772 CVE-2022-24773 CVE-2022-24785 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-29526 CVE-2022-29810 CVE-2022-29824 CVE-2022-31129
====================================================================

  1. Summary:

Updated images that include numerous enhancements, security, and bug fixes are now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.

Security Fix(es):

  • eventsource: Exposure of Sensitive Information (CVE-2022-1650)

  • moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)

  • nodejs-set-value: type confusion allows bypass of CVE-2019-10747 (CVE-2021-23440)

  • nanoid: Information disclosure via valueOf() function (CVE-2021-23566)

  • node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)

  • follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)

  • prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)

  • golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)

  • golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)

  • golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)

  • golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)

  • node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery (CVE-2022-24771)

  • node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)

  • node-forge: Signature verification leniency in checking DigestInfo structure (CVE-2022-24773)

  • Moment.js: Path traversal in moment.locale (CVE-2022-24785)

  • golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)

  • golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)

  • golang: syscall: faccessat checks wrong group (CVE-2022-29526)

  • go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
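Several of the Go fixes listed above (e.g. CVE-2022-24921 in regexp, CVE-2022-24675 in encoding/pem) patch the same pattern: a recursive parser whose stack depth is controlled by input nesting. A minimal Python sketch of the class, using a hypothetical balanced-parentheses parser rather than the actual Go code:

```python
# Hypothetical recursive-descent parser: one stack frame per nesting
# level, so attacker-controlled nesting depth translates directly into
# stack usage. Illustrates the vulnerability class, not the Go fix.
def parse_nested(s, i=0):
    if i < len(s) and s[i] == "(":
        i = parse_nested(s, i + 1)          # recurse into the nested group
        assert i < len(s) and s[i] == ")", "unbalanced input"
        return i + 1
    return i

print(parse_nested("((()))"))               # shallow input parses fine

try:
    parse_nested("(" * 100_000)             # attacker-chosen depth
except RecursionError:
    # Python raises a catchable error; the unpatched Go routines
    # exhausted the goroutine stack and crashed the process.
    print("stack exhausted")
```

The patched Go releases add explicit depth limits so crafted input fails cleanly instead of exhausting the stack.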

Bug Fix(es):

These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:

https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index

All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images, which provide numerous bug fixes and enhancements.

  3. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258

  4. Bugs fixed (https://bugzilla.redhat.com/):

1937117 - Deletion of StorageCluster doesn't remove ceph toolbox pod
1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified
1973317 - libceph: read_partial_message and bad crc/signature errors
1996829 - Permissions assigned to ceph auth principals when using external storage are too broad
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
2027724 - Warning log for rook-ceph-toolbox in ocs-operator log
2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056697 - odf-csi-addons-operator subscription failed while using custom catalog source
2058211 - Add validation for CIDR field in DRPolicy
2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced
2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10
2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE
2063691 - [GSS] [RFE] Add termination policy to s3 route
2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2066514 - OCS operator to install Ceph prometheus alerts instead of Rook
2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2069314 - OCS external mode should allow specifying names for all Ceph auth principals
2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
2069812 - must-gather: rbd_vol_and_snap_info collection is broken
2069815 - must-gather: essential rbd mirror command outputs aren't collected
2070542 - After creating a new storage system it redirects to 404 error page instead of the "StorageSystems" page for OCP 4.11
2071494 - [DR] Applications are not getting deployed
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty
2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled
2075426 - 4.10 must gather is not available after GA of 4.10
2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in "Progressing" state although all the openshift-storage pods are up and Running
2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost
2077242 - vg-manager missing permissions
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2079866 - [DR] odf-multicluster-console is in CLBO state
2079873 - csi-nfsplugin pods are not coming up after successful patch request to update "ROOK_CSI_ENABLE_NFS": "true"'
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2081680 - Add the LVM Operator into the Storage category in OperatorHub
2082028 - UI does not have the option to configure capacity, security and networks,etc. during storagesystem creation
2082078 - OBC's not getting created on primary cluster when manageds3 set as "true" for mirrorPeer
2082497 - Do not filter out removable devices
2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)
2083441 - LVM operator should deploy the volumesnapshotclass resource
2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status
2083993 - Add missing pieces for storageclassclaim
2084041 - [Console Migration] Link-able storage system name directs to blank page
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided"
2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates
2084546 - [Console Migration] Provider details absent under backing store in UI
2084565 - [Console Migration] The creation of new backing store , directs to a blank page
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred
2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace
2086557 - Thin pool in lvm operator doesn't use all disks
2086675 - [UI]No option to "add capacity" via the Installed Operators tab
2086982 - ODF 4.11 deployment is failing
2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm
2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2087107 - Set default storage class if none is set
2087237 - [UI] After clicking on Create StorageSystem, it navigates to Storage Systems tab but shows an error message
2087675 - ocs-metrics-exporter pod crashes on odf v4.11
2087732 - [Console Migration] Events page missing under new namespace store
2087755 - [Console Migration] Bucket Class details page doesn't have the complete details in UI
2088359 - Send VG Metrics even if storage is being consumed from thinPool alone
2088380 - KMS using vault on standalone MCG cluster is not enabled
2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint
2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook
2089296 - [MS v2] Storage cluster in error phase and 'ocs-provider-qe' addon installation failed with ODF 4.10.2
2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts
2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9.
2089552 - [MS v2] Cannot create StorageClassClaim
2089567 - [Console Migration] Improve the styling of Various Components
2089786 - [Console Migration] "Attach to deployment" option is missing in kebab menu for Object Bucket Claims .
2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket.
2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed
2090278 - [LVMO] Some containers are missing resource requirements and limits
2090314 - [LVMO] CSV is missing some useful annotations
2090953 - [MCO] DRCluster created under default namespace
2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics
2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool.
2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference
2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external
2091894 - All backingstores in cluster spontaneously change their own secret
2091951 - [GSS] OCS pods are restarting due to liveness probe failure
2091998 - Volume Snapshots not work with external restricted mode
2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool
2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks
2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)
2092349 - Enable zeroing on the thin-pool during creation
2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase
2092400 - [MS v2] StorageClassClaim creation is failing with error "no StorageCluster found"
2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically
2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2094179 - MCO fails to create DRClusters when replication mode is synchronous
2094853 - [Console Migration] Description under storage class drop down in add capacity is missing .
2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2095155 - Use tool black to format the python external script
2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster
2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time
2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page
2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened
2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False
2096937 - Storage - Data Foundation: i18n misses
2097216 - Collect StorageClassClaim details in must-gather
2097287 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2097305 - Add translations for ODF 4.11
2098121 - Managed ODF not getting detected
2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled
2099581 - StorageClassClaim with encryption gets into Failed state
2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project
2099646 - Block pool list page kebab action menu is showing empty options
2099660 - OCS dashbaords not appearing unless user clicks on "Overview" Tab
2099724 - S3 secret namespace on the managed cluster doesn't match with the namespace in the s3profile
2099965 - rbd: provide option to disable setting metadata on RBD images
2100326 - [ODF to ODF] Volume snapshot creation failed
2100352 - Make lvmo pod labels more uniform
2100946 - Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install
2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection
2103818 - Restored snapshot don't have any content
2104833 - Need to update configmap for IBM storage odf operator GA
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS

  5. References:

https://access.redhat.com/security/cve/CVE-2021-23440
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0536
https://access.redhat.com/security/cve/CVE-2022-0670
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1650
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24771
https://access.redhat.com/security/cve/CVE-2022-24772
https://access.redhat.com/security/cve/CVE-2022-24773
https://access.redhat.com/security/cve/CVE-2022-24785
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-29526
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index

  6. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy
Xo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N
b3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY
sJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj
gz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am
2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf
mX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl
CSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE
hTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY
3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E
2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V
ERCapzwcRuQ=
=gV+z
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:

Release osp-director-operator images

Security Fix(es):

  • CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read [important]
  • CVE-2021-41103 golang: containerd: insufficiently restricted permissions on container root and plugin directories [medium]

Solution:

OSP 16.2.z Release - OSP Director Operator Containers

  1. Summary:

This is an updated release of the Self Node Remediation Operator. The Self Node Remediation Operator replaces the Poison Pill Operator and is delivered by Red Hat Workload Availability.

Description:

The Self Node Remediation Operator works in conjunction with the Machine Health Check or Node Health Check Operators to provide automatic remediation of unhealthy nodes by rebooting them. This minimizes downtime for stateful applications and RWO volumes, and restores compute capacity in the event of transient failures.

Security Fix(es):

  • golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, see the CVE page(s) listed in the References section.

Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. Description:

Multicluster engine for Kubernetes 2.1 images

Multicluster engine for Kubernetes provides the foundational components that are necessary for the centralized management of multiple Kubernetes-based clusters across data centers, public clouds, and private clouds.

You can use the engine to create new Red Hat OpenShift Container Platform clusters or to bring existing Kubernetes-based clusters under management by importing them. After the clusters are managed, you can use the APIs that are provided by the engine to distribute configuration based on placement policy.

Security fixes:

  • CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS

  • CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header

  • CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions

  • CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip

  • CVE-2022-30630 golang: io/fs: stack exhaustion in Glob

  • CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  • CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob

  • CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal

  • CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode

  • CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working

  • CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
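The moment fix (CVE-2022-31129) addresses an inefficient parsing algorithm: crafted input makes parsing cost grow super-linearly, so a short string can pin the CPU. A hedged Python sketch of this class using an ambiguous regex (the pattern and payload are illustrative, not moment's actual code):

```python
import re
import time

# An ambiguous nested quantifier such as (a+)+ backtracks exponentially
# on input that almost matches but ultimately fails - each extra 'a'
# roughly doubles the work the matcher does before giving up.
pattern = re.compile(r"^(a+)+$")

def match_time(n):
    payload = "a" * n + "!"                 # almost matches, never does
    start = time.perf_counter()
    assert pattern.match(payload) is None   # every attempt fails...
    return time.perf_counter() - start      # ...at exponentially rising cost

fast = match_time(8)
slow = match_time(18)
print(f"n=8: {fast:.6f}s  n=18: {slow:.6f}s")
```

The fix in moment bounds the work done per input character; the general mitigations are non-backtracking matching or input-length limits before parsing.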

Bug fixes:

  • MCE 2.1.0 Images (BZ# 2090907)

  • cluster-proxy-agent not able to startup (BZ# 2109394)

  • Create cluster button skips Infrastructure page, shows blank page (BZ# 2110713)

  • AWS Icon sometimes doesn't show up in create cluster wizard (BZ# 2110734)

  • Infrastructure descriptions in create cluster catalog should be consistent and clear (BZ# 2110811)

  • The user with clusterset view permission should not able to update the namespace binding with the pencil icon on clusterset details page (BZ# 2111483)

  • hypershift cluster creation -> not all agent labels are shown in the node pools screen (BZ# 2112326)

  • CIM - SNO expansion, worker node status incorrect (BZ# 2114735)

  • Wizard fields are not pre-filled after picking credentials (BZ# 2117163)

  • ManagedClusterImageRegistry CR is wrong in pure MCE env

Solution:

For multicluster engine for Kubernetes, see the following documentation for details on how to install the images:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/install_upgrade/installing-while-connected-online-mce

  1. Bugs fixed (https://bugzilla.redhat.com/):

2090907 - MCE 2.1.0 Images
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2109394 - cluster-proxy-agent not able to startup
2111483 - The user with clusterset view permission should not able to update the namespace binding with the pencil icon on clusterset details page
2112326 - [UI] hypershift cluster creation -> not all agent labels are shown in the node pools screen
2114735 - [UI] CIM - SNO expansion, worker node status incorrect
2117163 - [UI] Wizard fields are not pre-filled after picking credentials
2117447 - [ACM 2.6] ManagedClusterImageRegistry CR is wrong in pure MCE env

This software, such as Apache HTTP Server, is common to multiple JBoss middleware products, and is packaged under Red Hat JBoss Core Services to allow for faster distribution of updates, and for a more consistent update experience.

Bugs fixed (https://bugzilla.redhat.com/):

2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds
2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody
2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection
2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling
2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read
2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()
2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()
2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability
2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism
2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection
2099300 - CVE-2022-32206 curl: HTTP compression denial of service
2099305 - CVE-2022-32207 curl: Unpreserved file permissions
2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification
2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field
2120718 - CVE-2022-35252 curl: control code in cookie denial of service
2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c
2135411 - CVE-2022-32221 curl: POST following PUT confusion
2135413 - CVE-2022-42915 curl: HTTP proxy double-free
2135416 - CVE-2022-42916 curl: HSTS bypass via IDN
2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE
2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles

OpenSSL 1.0.2 users should upgrade to 1.0.2zf (premium support customers only)
OpenSSL 1.1.1 users should upgrade to 1.1.1p
OpenSSL 3.0 users should upgrade to 3.0.4

This issue was reported to OpenSSL on the 20th May 2022. It was found by Chancen of Qingteng 73lab. A further instance of the issue was found by Daniel Fiala of OpenSSL during a code review of the script. The fix for these issues was developed by Daniel Fiala and Tomas Mraz from OpenSSL.

Note

OpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended support is available for premium support customers: https://www.openssl.org/support/contracts.html

OpenSSL 1.1.0 is out of support and no longer receiving updates of any kind.

Users of these versions should upgrade to OpenSSL 3.0 or 1.1.1.

References

URL for this Security Advisory: https://www.openssl.org/news/secadv/20220621.txt

Note: the online version of the advisory may be updated with additional details over time.

For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html

Summary:

The Migration Toolkit for Containers (MTC) 1.7.4 is now available.

Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):

1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key
2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key
2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key
2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. Solution:

For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly, for detailed release notes:

https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html

For Red Hat OpenShift Logging 5.3, see the following instructions to apply this update:

https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS
2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays

  1. JIRA issues fixed (https://issues.jboss.org/):

LOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node

  1. Bugs fixed (https://bugzilla.redhat.com/):

1937609 - VM cannot be restarted
1945593 - Live migration should be blocked for VMs with host devices
1968514 - [RFE] Add cancel migration action to virtctl
1993109 - CNV MacOS Client not signed
1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side
2001385 - no "name" label in virt-operator pod
2009793 - KBase to clarify nested support status is missing
2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate
2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)
2025401 - [TEST ONLY] [CNV+OCS/ODF] Virtualization poison pill implemenation
2026357 - Migration in sequence can be reported as failed even when it succeeded
2029349 - cluster-network-addons-operator does not serve metrics through HTTPS
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2031857 - Add annotation for URL to download the image
2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2035344 - kubemacpool-mac-controller-manager not ready
2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered
2039976 - Pod stuck in "Terminating" state when removing VM with kernel boot and container disks
2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI
2041467 - [SSP] Support custom DataImportCron creating in custom namespaces
2042402 - LiveMigration with postcopy misbehave when failure occurs
2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists
2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047186 - When entering to a RH supported template, it changes the project (namespace) to "OpenShift"
2051899 - 4.11.0 containers
2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn't configure ip nat rules
2052466 - Event does not include reason for inability to live migrate
2052689 - Overhead Memory consumption calculations are incorrect
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056467 - virt-template-validator pods getting scheduled on the same node
2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long
2057310 - qemu-guest-agent does not report information due to selinux denials
2058149 - cluster-network-addons-operator deployment's MULTUS_IMAGE is pointing to brew image
2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs
2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state
2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool
2060585 - [SNO] Failed to find the virt-controller leader pod
2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled.
2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource
2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace
2063792 - No DataImportCron for CentOS 7
2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2064936 - Migration of vm from VMware reports pvc not large enough
2065014 - Feature Highlights in CNV 4.10 contains links to 4.7
2065019 - "Running VMs per template" in the new overview tab counts VMs that are not running
2066768 - [CNV-4.11-HCO] User Cannot List Resource "namespaces" in API group
2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom
2069287 - Two annotations for VM Template provider name
2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error
2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass
2070864 - non-privileged user cannot see catalog tiles
2071488 - "Migrate Node to Node" is confusing.
2071549 - [rhel-9] unable to create a non-root virt-launcher based VM
2071611 - Metrics documentation generators are missing metrics/recording rules
2071921 - Kubevirt RPM is not being built
2073669 - [rhel-9] VM fails to start
2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream
2073982 - [CNV-4.11-RHEL9] 'virtctl' binary fails with 'rc1' with 'virtctl version' command
2074337 - VM created from registry cannot be started
2075200 - VLAN filtering cannot be configured with Intel X710
2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff
2076292 - Upgrade from 4.10.1->4.11 using nightly channel, is not completing with error "could not complete the upgrade process. KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR"
2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file
2076790 - Alert SSPDown is constantly in Firing state
2076908 - clicking on a template in the Running VMs per Template card leads to 404
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2078700 - Windows template boot source should be blank
2078703 - [RFE] Please hide the user defined password when customizing cloud-init
2078709 - VM conditions column have wrong key/values
2078728 - Common template rootDisk is not named correctly
2079366 - rootdisk is not able to edit
2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM
2079783 - Actions are broken in topology view
2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck
2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod
2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop
2080833 - Missing cloud init script editor in the scripts tab
2080835 - SSH key is set using cloud init script instead of new api
2081182 - VM SSH command generated by UI points at api VIP
2081202 - cloud-init for Windows VM generated with corrupted "undefined" section
2081409 - when viewing a common template details page, user need to see the message "can't edit common template" on all tabs
2081671 - SSH service created outside the UI is not discoverable
2081831 - [RFE] Improve disk hotplug UX
2082008 - LiveMigration fails due to loss of connection to destination host
2082164 - Migration progress timeout expects absolute progress
2082912 - [CNV-4.11] HCO Being Unable to Reconcile State
2083093 - VM overview tab is crashed
2083097 - "Mount Windows drivers disk" should not show when the template is not "windows"
2083100 - Something keeps loading in the "node selector" modal
2083101 - "Restore default settings" never become available while editing CPU/Memory
2083135 - VM fails to schedule with vTPM in spec
2083256 - SSP Reconcile logging improvement when CR resources are changed
2083595 - [RFE] Disable VM descheduler if the VM is not live migratable
2084102 - [e2e] Many elements are lacking proper selector like 'data-test-id' or 'data-test'
2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails
2084418 - "Invalid SSH public key format" appears when drag ssh key file to "Authorized SSH Key" field
2084431 - User credentials for ssh is not in correct format
2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab.
2084532 - Console is crashed while detaching disk
2084610 - Newly added Kubevirt-plugin pod is missing resources.requests values (cpu/memory)
2085320 - Tolerations rules is not adding correctly
2085322 - Not able to stop/restart VM if the VM is staying in "Starting"
2086272 - [dark mode] Titles in Overview tab not visible enough in dark mode
2086278 - Cloud init script edit add " hostname='' " when is should not be added
2086281 - [dark mode] Helper text in Scripts tab not visible enough on dark mode
2086286 - [dark mode] The contrast of the Labels and edit labels not look good in the dark mode
2086293 - [dark mode] Titles in Parameters tab not visible enough in dark mode
2086294 - [dark mode] Can't see the number inside the donut chart in VMs per template card
2086303 - non-priv user can't create VM when namespace is not selected
2086479 - some modals use "Save" and some modals use "Submit"
2086486 - cluster overview getting started card include old information
2086488 - Cannot cancel vm migration if the migration pod is not schedulable in the backend
2086769 - Missing vm.kubevirt.io/template.namespace label when creating VM with the wizard
2086803 - When clonnig a template we need to update vm labels and annotaions to match new template
2086825 - VM restore PVC uses exact source PVC request size
2086849 - Create from YAML example is not runnable
2087188 - When VM is stopped - adding disk failed to show
2087189 - When VM is stopped - adding disk failed to show
2087232 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed
2087546 - "Quick Starts" is missing in Getting started card
2087547 - Activity and Status card are missing in Virtualization Overview
2087559 - template in "VMs per template" should take user to vm list page
2087566 - Remove the "auto upload" label from template in the catalog if the auto-upload boot source not exists
2087570 - Page title should be "VirtualMachines" and not "Virtual Machines"
2087577 - "VMs per template" load time is a bit long
2087578 - Terminology "VM" should be "Virtual Machine" in all places
2087582 - Remove VMI and MTV from the navigation
2087583 - [RFE] Show more info about boot source in template list
2087584 - Template provider should not be mandatory
2087587 - Improve the descriptive text in the kebab menu of template
2087589 - Red icons shows in storage disk source selection without a good reason
2087590 - [REF] "Upload a new file to a PVC" should not open the form in a new tab
2087593 - "Boot method" is not a good name in overview tab
2087603 - Align details card for single VM overview with the design doc
2087616 - align the utilization card of single VM overview with the design
2087701 - [RFE] Missing a link to VMI from running VM details page
2087717 - Message when editing template boot source is wrong
2088034 - Virtualization Overview crashes when a VirtualMachine has no labels
2088355 - disk modal shows all storage classes as default
2088361 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2088379 - Create VM from catalog does not respect the storageclass of the template's boot source
2088407 - Missing create button in the template list
2088471 - [HPP] hostpath-provisioner-csi does not comply with restricted security context
2088472 - Golden Images import cron jobs are not getting updated on upgrade to 4.11
2088477 - [4.11.z] VMSnapshot restore fails to provision volume with size mismatch error
2088849 - "dataimportcrontemplate.kubevirt.io/enable" field does not do any validation
2089078 - ConsolePlugin kubevirt-plugin is not getting reconciled by hco
2089271 - Virtualization appears twice in sidebar
2089327 - add network modal crash when no networks available
2089376 - Virtual Machine Template without dataVolumeTemplates gets blank page
2089477 - [RFE] Allow upload source when adding VM disk
2089700 - Drive column in Disks card of Overview page has duplicated values
2089745 - When removing all disks from customize wizard app crashes
2089789 - Add windows drivers disk is missing when template is not windows
2089825 - Top consumers card on Virtualization Overview page should keep display parameters as set by user
2089836 - Card titles on single VM Overview page does not have hyperlinks to relevant pages
2089840 - Cant create snapshot if VM is without disks
2089877 - Utilization card on single VM overview - timespan menu lacks 5min option
2089932 - Top consumers card on single VM overview - View by resource dropdown menu needs an update
2089942 - Utilization card on single VM overview - trend charts at the bottom should be linked to proper metrics
2089954 - Details card on single VM overview - VNC console has grey padding
2089963 - Details card on single VM overview - Operating system info is not available
2089967 - Network Interfaces card on single VM overview - name tooltip lacks info
2089970 - Network Interfaces card on single VM overview - IP tooltip
2089972 - Disks card on single VM overview -typo
2089979 - Single VM Details - CPU|Memory edit icon misplaced
2089982 - Single VM Details - SSH modal has redundant VM name
2090035 - Alert card is missing in single VM overview
2090036 - OS should be "Operating system" and host should be "hostname" in single vm overview
2090037 - Add template link in single vm overview details card
2090038 - The update field under the version in overview should be consistent with the operator page
2090042 - Move the edit button close to the text for "boot order" and "ssh access"
2090043 - "No resource selected" in vm boot order
2090046 - Hardware devices section In the VM details and Template details should be aligned with catalog page
2090048 - "Boot mode" should be editable while VM is running
2090054 - Services "kubernetes" and "openshift" should not be listing in vm details
2090055 - Add link to vm template in vm details page
2090056 - "Something went wrong" shows on VM "Environment" tab
2090057 - "?" icon is too big in environment and disk tab
2090059 - Failed to add configmap in environment tab due to validate error
2090064 - Miss "remote desktop" in console dropdown list for windows VM
2090066 - [RFE] Improve guest login credentials
2090068 - Make the "name" and "Source" column wider in vm disk tab
2090131 - Key's value in "add affinity rule" modal is too small
2090350 - memory leak in virt-launcher process
2091003 - SSH service is not deleted along the VM
2091058 - After VM gets deleted, the user is redirected to a page with a different namespace
2091309 - While disabling a golden image via HCO, user should not be required to enter the whole spec.
2091406 - wrong template namespace label when creating a vm with wizard
2091754 - Scheduling and scripts tab should be editable while the VM is running
2091755 - Change bottom "Save" to "Apply" on cloud-init script form
2091756 - The root disk of cloned template should be editable
2091758 - "OS" should be "Operating system" in template filter
2091760 - The provider should be empty if it's not set during cloning
2091761 - Miss "Edit labels" and "Edit annotations" in template kebab button
2091762 - Move notification above the tabs in template details page
2091764 - Clone a template should lead to the template details
2091765 - "Edit bootsource" is keeping in load in template actions dropdown
2091766 - "Are you sure you want to leave this page?" pops up when click the "Templates" link
2091853 - On Snapshot tab of single VM "Restore" button should move to the kebab actions together with the Delete
2091863 - BootSource edit modal should list affected templates
2091868 - Catalog list view has two columns named "BootSource"
2091889 - Devices should be editable for customize template
2091897 - username is missing in the generated ssh command
2091904 - VM is not started if adding "Authorized SSH Key" during vm creation
2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root
2091940 - SSH is not enabled in vm details after restart the VM
2091945 - delete a template should lead to templates list
2091946 - Add disk modal shows wrong units
2091982 - Got a lot of "Reconciler error" in cdi-deployment log after adding custom DataImportCron to hco
2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank
2092052 - Virtualization should be omitted in Calatog breadcrumbs
2092071 - Getting started card in Virtualization overview can not be hidden.
2092079 - Error message stays even when problematic field is dismissed
2092158 - PrometheusRule kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO
2092228 - Ensure Machine Type for new VMs is 8.6
2092230 - [RFE] Add indication/mark to deprecated template
2092306 - VM is stucking with WaitingForVolumeBinding if creating via "Boot from CD"
2092337 - os is empty in VM details page
2092359 - [e2e] data-test-id includes all pvc name
2092654 - [RFE] No obvious way to delete the ssh key from the VM
2092662 - No url example for rhel and windows template
2092663 - no hyperlink for URL example in disk source "url"
2092664 - no hyperlink to the cdi uploadproxy URL
2092781 - Details card should be removed for non admins.
2092783 - Top consumers' card should be removed for non admins.
2092787 - Operators links should be removed from Getting started card
2092789 - "Learn more about Operators" link should lead to the Red Hat documentation
2092951 - "Edit BootSource" action should have more explicit information when disabled
2093282 - Remove links to 'all-namespaces/' for non-privileged user
2093691 - Creation flow drawer left padding is broken
2093713 - Required fields in creation flow should be highlighted if empty
2093715 - Optional parameters section in creation flow is missing bottom padding
2093716 - CPU|Memory modal button should say "Restore template settings"
2093772 - Add a service in environment it reminds a pending change in boot order
2093773 - Console crashed if adding a service without serial number
2093866 - Cannot create vm from the template vm-template-example
2093867 - OS for template 'vm-template-example' should matching the version of the image
2094202 - Cloud-init username field should have hint
2094207 - Cloud-init password field should have auto-generate option
2094208 - SSH key input is missing validation
2094217 - YAML view should reflect shanges in SSH form
2094222 - "?" icon should be placed after red asterisk in required fields
2094323 - Workload profile should be editable in template details page
2094405 - adding resource on enviornment isnt showing on disks list when vm is running
2094440 - Utilization pie charts figures are not based on current data
2094451 - PVC selection in VM creation flow does not work for non-priv user
2094453 - CD Source selection in VM creation flow is missing Upload option
2094465 - Typo in Source tooltip
2094471 - Node selector modal for non-privileged user
2094481 - Tolerations modal for non-privileged user
2094486 - Add affinity rule modal
2094491 - Affinity rules modal button
2094495 - Descheduler modal has same text in two lines
2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id
2094665 - Dedicated Resources modal for non-privileged user
2094678 - Secrets and ConfigMaps can't be added to Windows VM
2094727 - Creation flow should have VM info in header row
2094807 - hardware devices dropdown has group title even with no devices in cluster
2094813 - Cloudinit password is seen in wizard
2094848 - Details card on Overview page - 'View details' link is missing
2095125 - OS is empty in the clone modal
2095129 - "undefined" appears in rootdisk line in clone modal
2095224 - affinity modal for non-privileged users
2095529 - VM migration cancelation in kebab action should have shorter name
2095530 - Column sizes in VM list view
2095532 - Node column in VM list view is visible to non-privileged user
2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime
2095570 - Details tab of VM should not have Node info for non-privileged user
2095573 - Disks created as environment or scripts should have proper label
2095953 - VNC console controls layout
2095955 - VNC console tabs
2096166 - Template "vm-template-example" is binding with namespace "default"
2096206 - Inconsistent capitalization in Template Actions
2096208 - Templates in the catalog list is not sorted
2096263 - Incorrectly displaying units for Disks size or Memory field in various places
2096333 - virtualization overview, related operators title is not aligned
2096492 - Cannot create vm from a cloned template if its boot source is edited
2096502 - "Restore template settings" should be removed from template CPU editor
2096510 - VM can be created without any disk
2096511 - Template shows "no Boot Source" and label "Source available" at the same time
2096620 - in templates list, edit boot reference kebab action opens a modal with different title
2096781 - Remove boot source provider while edit boot source reference
2096801 - vnc thumbnail in virtual machine overview should be active on page load
2096845 - Windows template's scripts tab is crashed
2097328 - virtctl guestfs shouldn't required uid = 0
2097370 - missing titles for optional parameters in wizard customization page
2097465 - Count is not updating for 'prometheusrule' component when metrics kubevirt_hco_out_of_band_modifications_count executed
2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2098134 - "Workload profile" column is not showing completely in template list
2098135 - Workload is not showing correct in catalog after change the template's workload
2098282 - Javascript error when changing boot source of custom template to be an uploaded file
2099443 - No "Quick create virtualmachine" button for template 'vm-template-example'
2099533 - ConsoleQuickStart for HCO CR's VM is missing
2099535 - The cdi-uploadproxy certificate url should be opened in a new tab
2099539 - No storage option for upload while editing a disk
2099566 - Cloudinit should be replaced by cloud-init in all places
2099608 - "DynamicB" shows in vm-example disk size
2099633 - Doc links needs to be updated
2099639 - Remove user line from the ssh command section
2099802 - Details card link shouldn't be hard-coded
2100054 - Windows VM with WSL2 guest fails to migrate
2100284 - Virtualization overview is crashed
2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment
2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS
2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP
2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2101454 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2101485 - Cloudinit should be replaced by cloud-init in all places
2101628 - non-priv user cannot load dataSource while edit template's rootdisk
2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer
2102076 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page
2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id
2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id
2102122 - non-priv user cannot load dataSource while edit template's rootdisk
2102124 - Cannot add PVC boot source to template in 'Edit Boot Source Reference' view as a non-priv user
2102125 - vm clone modal is displaying DV size instead of PVC size
2102127 - Cannot add NIC to VM template as non-priv user
2102129 - All templates are labeling "source available" in template list page
2102131 - The number of hardware devices is not correct in vm overview tab
2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode
2102143 - vm clone modal is displaying DV size instead of PVC size
2102256 - Add button moved to right
2102448 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102543 - Add button moved to right
2102544 - VM disk is deleted by uncheck "Delete disks (1x)" on delete modal
2102545 - VM filter has two "Other" checkboxes which are triggered together
2104617 - Storage status report "OpenShift Data Foundation is not available" even the operator is installed
2106175 - All pages are crashed after visit Virtualization -> Overview
2106258 - All pages are crashed after visit Virtualization -> Overview
2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions
2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics
2111562 - kubevirt plugin console crashed after visit vmi page
2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs
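The vulnerability this entry tracks, CVE-2022-2068, is a shell command injection: the c_rehash script interpolates certificate file names into shell commands without sanitising shell metacharacters, so a crafted file name can execute arbitrary commands. The snippet below is a minimal sketch of that injection class and its standard fix, not the actual c_rehash code; the helper names and the use of `echo` as a stand-in command are illustrative assumptions (POSIX shell assumed).

```python
import subprocess

def hash_cert_unsafe(filename: str) -> str:
    # Vulnerable pattern: the file name is pasted into a string that a shell
    # parses, so `;`, backticks, or `$(...)` inside the name get executed.
    cmd = "echo hashing %s" % filename
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def hash_cert_safe(filename: str) -> str:
    # Fixed pattern: arguments are passed as a list, so no shell is involved
    # and metacharacters in the name remain literal text.
    return subprocess.run(["echo", "hashing", filename],
                          capture_output=True, text=True).stdout

malicious = "cert.pem; echo INJECTED"
print(hash_cert_unsafe(malicious))  # the injected `echo INJECTED` also runs
print(hash_cert_safe(malicious))    # the whole name stays one literal argument
```

This is why the advisory text recommends replacing c_rehash with the `openssl rehash` subcommand, which processes file names without building shell command lines.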




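The machine-readable record below carries both a CVSS v2 assessment (AV:N/AC:L/Au:N/C:C/I:C/A:C, score 10.0) and a CVSS v3.1 assessment (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H, score 9.8). As a sanity check, the v3.1 base score can be recomputed from the vector with the standard CVSS v3.1 base-score formula; the constants below are the metric weights defined in the specification.

```python
import math

# Recompute the CVSS v3.1 base score for
# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
# using the metric weights from the CVSS v3.1 specification.
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
C = I = A = 0.56                         # C, I, A impact: High

iss = 1 - (1 - C) * (1 - I) * (1 - A)    # Impact Sub-Score
impact = 6.42 * iss                      # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    # CVSS "Roundup": smallest value with one decimal place >= x
    return math.ceil(x * 10) / 10

base = roundup(min(impact + exploitability, 10))
print(round(impact, 1), round(exploitability, 1), base)  # 5.9 3.9 9.8
```

The result matches the `impactScore` (5.9), `exploitabilityScore` (3.9), and `baseScore` (9.8) stored in the record.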
{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202206-1428",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "ontap select deploy administration utility",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "ontap antivirus connector",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fas a400",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1"
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "3.0.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2zf"
      },
      {
        "model": "bootstrap os",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "h610c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "solidfire",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "santricity smi-s provider",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fas 8700",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "aff a400",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "sannav",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "broadcom",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "aff 8300",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2"
      },
      {
        "model": "hci management node",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "smi-s provider",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h610s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fas 8300",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "element software",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "snapmanager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "h615c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1p"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "3.0.4"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "aff 8700",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "169435"
      },
      {
        "db": "PACKETSTORM",
        "id": "168150"
      },
      {
        "db": "PACKETSTORM",
        "id": "168387"
      },
      {
        "db": "PACKETSTORM",
        "id": "168182"
      },
      {
        "db": "PACKETSTORM",
        "id": "168282"
      },
      {
        "db": "PACKETSTORM",
        "id": "170165"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      }
    ],
    "trust": 0.9
  },
  "cve": "CVE-2022-2068",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "COMPLETE",
            "baseScore": 10.0,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 10.0,
            "id": "CVE-2022-2068",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 1.1,
            "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.8,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-2068",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-2068",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-2068",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2068"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When the CVE-2022-1292 was fixed it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Bugs fixed (https://bugzilla.redhat.com/):\n\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2072009 - CVE-2022-24785 Moment.js: Path traversal  in moment.locale\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update\nAdvisory ID:       RHSA-2022:6156-01\nProduct:           RHODF\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:6156\nIssue date:        2022-08-24\nCVE Names:         CVE-2021-23440 CVE-2021-23566 CVE-2021-40528\n                   CVE-2022-0235 CVE-2022-0536 CVE-2022-0670\n                   CVE-2022-1292 CVE-2022-1586 CVE-2022-1650\n                   CVE-2022-1785 CVE-2022-1897 CVE-2022-1927\n                   CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n                   CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n                   CVE-2022-23806 CVE-2022-24675 CVE-2022-24771\n                   CVE-2022-24772 CVE-2022-24773 CVE-2022-24785\n                   CVE-2022-24921 CVE-2022-25313 CVE-2022-25314\n                   CVE-2022-27774 CVE-2022-27776 CVE-2022-27782\n                   CVE-2022-28327 CVE-2022-29526 CVE-2022-29810\n                   CVE-2022-29824 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated images that include numerous enhancements, security, and bug fixes\nare now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat\nEnterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Data Foundation is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. 
Red Hat\nOpenShift Data Foundation is a highly scalable, production-grade persistent\nstorage for stateful applications running in the Red Hat OpenShift\nContainer Platform. In addition to persistent storage, Red Hat OpenShift\nData Foundation provisions a multicloud data management service with an S3\ncompatible API. \n\nSecurity Fix(es):\n\n* eventsource: Exposure of Sensitive Information (CVE-2022-1650)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n(CVE-2021-23440)\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* node-forge: Signature verification leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Moment.js: Path traversal  in moment.locale (CVE-2022-24785)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\n* golang: syscall: faccessat checks wrong group 
(CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\nThese updated images include numerous enhancements and bug fixes. Space\nprecludes documenting all of these changes in this advisory. Users are\ndirected to the Red Hat OpenShift Data Foundation Release Notes for\ninformation on the most significant of these changes:\n\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\nAll Red Hat OpenShift Data Foundation users are advised to upgrade to these\nupdated images, which provide numerous bug fixes and enhancements. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. For details on how to apply this\nupdate, refer to: https://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937117 - Deletion of StorageCluster doesn\u0027t remove ceph toolbox pod\n1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified\n1973317 - libceph: read_partial_message and bad crc/signature errors\n1996829 - Permissions assigned to ceph auth principals  when using external storage are too broad\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2027724 - Warning log for rook-ceph-toolbox in ocs-operator log\n2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056697 - odf-csi-addons-operator subscription failed while using custom catalog source\n2058211 - Add validation for CIDR field in DRPolicy\n2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced\n2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10\n2061713 - [KMS] The error message during creation of encrypted PVC 
mentions the parameter in UPPER_CASE\n2063691 - [GSS] [RFE] Add termination policy to s3 route\n2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2066514 - OCS operator to install Ceph prometheus alerts instead of Rook\n2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking `DigestInfo` structure\n2069314 - OCS external mode should allow specifying names for all Ceph auth principals\n2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster. 
\n2069812 - must-gather: rbd_vol_and_snap_info collection is broken\n2069815 - must-gather: essential rbd mirror command outputs aren\u0027t collected\n2070542 - After creating a new storage system it redirects to 404 error page instead of the \"StorageSystems\" page for OCP 4.11\n2071494 - [DR] Applications are not getting deployed\n2072009 - CVE-2022-24785 Moment.js: Path traversal  in moment.locale\n2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty\n2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled\n2075426 - 4.10 must gather is not available after GA of 4.10\n2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in \"Progressing\" state although all the openshift-storage pods are up and Running\n2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost\n2077242 - vg-manager missing permissions\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2079866 - [DR] odf-multicluster-console is in CLBO state\n2079873 - csi-nfsplugin pods are not coming up after successful patch request to update \"ROOK_CSI_ENABLE_NFS\":  \"true\"\u0027\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2081680 - Add the LVM Operator into the Storage category in OperatorHub\n2082028 - UI does not have the option to configure capacity, security and networks,etc. 
during storagesystem creation\n2082078 - OBC\u0027s not getting created on primary cluster when manageds3 set as \"true\" for mirrorPeer\n2082497 - Do not filter out removable devices\n2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)\n2083441 - LVM operator should deploy the volumesnapshotclass resource\n2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status\n2083993 - Add missing pieces for storageclassclaim\n2084041 - [Console Migration] Link-able storage system name directs to blank page\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided\"\n2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates\n2084546 - [Console Migration] Provider details absent under backing store in UI\n2084565 - [Console Migration] The creation of new backing store , directs to a blank page\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred\n2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace\n2086557 - Thin pool in lvm operator doesn\u0027t use all disks\n2086675 - [UI]No option to \"add capacity\" via the Installed Operators tab\n2086982 - ODF 4.11 deployment is failing\n2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm\n2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2087107 - Set default storage class if none is set\n2087237 - [UI] After clicking on Create StorageSystem, 
it navigates to Storage Systems tab but shows an error message\n2087675 - ocs-metrics-exporter pod crashes on odf v4.11\n2087732 - [Console Migration] Events page missing under new namespace store\n2087755 - [Console Migration] Bucket Class details page doesn\u0027t have the complete details in UI\n2088359 - Send VG Metrics even if storage is being consumed from thinPool alone\n2088380 - KMS using vault on standalone MCG cluster is not enabled\n2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint\n2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook\n2089296 - [MS v2] Storage cluster in error phase and \u0027ocs-provider-qe\u0027 addon installation failed with ODF 4.10.2\n2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts\n2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9. \n2089552 - [MS v2] Cannot create StorageClassClaim\n2089567 - [Console Migration] Improve the styling of Various Components\n2089786 - [Console Migration] \"Attach to deployment\" option is missing in kebab menu for Object Bucket Claims . \n2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket. \n2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed\n2090278 - [LVMO] Some containers are missing resource requirements and limits\n2090314 - [LVMO] CSV is missing some useful annotations\n2090953 - [MCO] DRCluster created under default namespace\n2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics\n2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool. 
\n2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference\n2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external\n2091894 - All backingstores in cluster spontaneously change their own secret\n2091951 - [GSS] OCS pods are restarting due to liveness probe failure\n2091998 - Volume Snapshots not work with external restricted mode\n2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool\n2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks\n2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)\n2092349 - Enable zeroing on the thin-pool during creation\n2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase\n2092400 - [MS v2] StorageClassClaim creation is failing with error \"no StorageCluster found\"\n2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically\n2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2094179 - MCO fails to create DRClusters when replication mode is synchronous\n2094853 - [Console Migration] Description under storage class drop down in add capacity is missing . 
\n2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2095155 - Use tool `black` to format the python external script\n2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster\n2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time\n2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page\n2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened\n2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False\n2096937 - Storage - Data Foundation: i18n misses\n2097216 - Collect StorageClassClaim details in must-gather\n2097287 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2097305 - Add translations for ODF 4.11\n2098121 - Managed ODF not getting detected\n2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled\n2099581 - StorageClassClaim with encryption gets into Failed state\n2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project\n2099646 - Block pool list page kebab action menu is showing empty options\n2099660 - OCS dashbaords not appearing unless user clicks on \"Overview\" Tab\n2099724 - S3 secret namespace on the managed cluster doesn\u0027t match with the namespace in the s3profile\n2099965 - rbd: provide option to disable setting metadata on RBD images\n2100326 - [ODF to ODF] Volume snapshot creation failed\n2100352 - Make lvmo pod labels more uniform\n2100946 - Avoid temporary ceph health alert for new 
clusters where the insecure global id is allowed longer than necessary\n2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install\n2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection\n2103818 - Restored snapshot don\u0027t have any content\n2104833 - Need to update configmap for IBM storage odf operator GA\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23440\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0536\nhttps://access.redhat.com/security/cve/CVE-2022-0670\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1650\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24771\nhttps://access.redhat.com/security/cve/CVE-2022-24772\nhttps://access.redhat.com/security/cve/CVE-2022-24773\nhttps://access.redhat.com/security/cve/CVE-2022-24785\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve
/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-29526\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy\nXo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N\nb3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY\nsJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj\ngz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am\n2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf\nmX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl\nCSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE\nhTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY\n3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E\n2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V\nERCapzwcRuQ=gV+z\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRelease osp-director-operator images\n\nSecurity Fix(es):\n\n* CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n[important]\n* CVE-2021-41103 golang: containerd: insufficiently restricted permissions\non container root and plugin directories [medium]\n\n3. Solution:\n\nOSP 16.2.z Release - OSP Director Operator Containers\n\n4. Summary:\n\nThis is an updated release of the Self Node Remediation Operator. The Self\nNode Remediation Operator replaces the Poison Pill Operator, and is\ndelivered by Red Hat Workload Availability. Description:\n\nThe Self Node Remediation Operator works in conjunction with the Machine\nHealth Check or the Node Health Check Operators to provide automatic\nremediation of unhealthy nodes by rebooting them. This minimizes downtime\nfor stateful applications and RWO volumes, as well as restoring compute\ncapacity in the event of transient failures. \n\nSecurity Fix(es):\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, see the CVE page(s)\nlisted in the References section. Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. Description:\n\nMulticluster engine for Kubernetes 2.1 images\n\nMulticluster engine for Kubernetes provides the foundational components\nthat are necessary for the centralized management of multiple\nKubernetes-based clusters across data centers, public clouds, and private\nclouds. \n\nYou can use the engine to create new Red Hat OpenShift Container Platform\nclusters or to bring existing Kubernetes-based clusters under management by\nimporting them. After the clusters are managed, you can use the APIs that\nare provided by the engine to distribute configuration based on placement\npolicy. 
\n\nSecurity fixes:\n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n* CVE-2022-1705 golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n\n* CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\n* CVE-2022-30629 golang: crypto/tls: session tickets lack random\nticket_age_add\n\nBug fixes:\n\n* MCE 2.1.0 Images (BZ# 2090907)\n\n* cluster-proxy-agent not able to startup (BZ# 2109394)\n\n* Create cluster button skips Infrastructure page, shows blank page (BZ#\n2110713)\n\n* AWS Icon sometimes doesn\u0027t show up in create cluster wizard (BZ# 2110734)\n\n* Infrastructure descriptions in create cluster catalog should be\nconsistent and clear (BZ# 2110811)\n\n* The user with clusterset view permission should not able to update the\nnamespace binding with the pencil icon on clusterset details page (BZ#\n2111483)\n\n* hypershift cluster creation -\u003e not all agent labels are shown in the node\npools screen (BZ# 2112326)\n\n* CIM - SNO expansion, worker node status incorrect (BZ# 2114735)\n\n* Wizard fields are not pre-filled after picking credentials (BZ# 2117163)\n\n* ManagedClusterImageRegistry CR is wrong in pure MCE env\n\n3. 
Solution:\n\nFor multicluster engine for Kubernetes, see the following documentation for\ndetails on how to install the images:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/install_upgrade/installing-while-connected-online-mce\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2090907 - MCE 2.1.0 Images\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2109394 - cluster-proxy-agent not able to startup\n2111483 - The user with clusterset view permission should not able to update the namespace binding with the pencil icon on clusterset details page\n2112326 - [UI] hypershift cluster creation -\u003e not all agent labels are shown in the node pools screen\n2114735 - [UI] CIM - SNO expansion, worker node status incorrect\n2117163 - [UI] Wizard fields are not pre-filled after picking credentials\n2117447 - [ACM 2.6] ManagedClusterImageRegistry CR is wrong in pure MCE env\n\n5. 
This software, such as Apache HTTP Server, is\ncommon to multiple JBoss middleware products, and is packaged under Red Hat\nJBoss Core Services to allow for faster distribution of updates, and for a\nmore consistent update experience. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064319 - CVE-2022-23943 httpd: mod_sed: Read/write beyond bounds\n2064320 - CVE-2022-22721 httpd: core: Possible buffer overflow with very large or unlimited LimitXMLRequestBody\n2081494 - CVE-2022-1292 openssl: c_rehash script allows command injection\n2094997 - CVE-2022-26377 httpd: mod_proxy_ajp: Possible request smuggling\n2095000 - CVE-2022-28330 httpd: mod_isapi: out-of-bounds read\n2095002 - CVE-2022-28614 httpd: Out-of-bounds read via ap_rwrite()\n2095006 - CVE-2022-28615 httpd: Out-of-bounds read in ap_strcmp_match()\n2095015 - CVE-2022-30522 httpd: mod_sed: DoS vulnerability\n2095020 - CVE-2022-31813 httpd: mod_proxy: X-Forwarded-For dropped by hop-by-hop mechanism\n2097310 - CVE-2022-2068 openssl: the c_rehash script allows command injection\n2099300 - CVE-2022-32206 curl: HTTP compression denial of service\n2099305 - CVE-2022-32207 curl: Unpreserved file permissions\n2099306 - CVE-2022-32208 curl: FTP-KRB bad message verification\n2116639 - CVE-2022-37434 zlib: heap-based buffer over-read and overflow in inflate() in inflate.c via a large gzip header extra field\n2120718 - CVE-2022-35252 curl: control code in cookie denial of service\n2130769 - CVE-2022-40674 expat: a use-after-free in the doContent function in xmlparse.c\n2135411 - CVE-2022-32221 curl: POST following PUT confusion\n2135413 - CVE-2022-42915 curl: HTTP proxy double-free\n2135416 - CVE-2022-42916 curl: HSTS bypass via IDN\n2136266 - CVE-2022-40303 libxml2: integer overflows with XML_PARSE_HUGE\n2136288 - CVE-2022-40304 libxml2: dict corruption caused by entity reference cycles\n\n5. 
\n\nOpenSSL 1.0.2 users should upgrade to 1.0.2zf (premium support customers only)\nOpenSSL 1.1.1 users should upgrade to 1.1.1p\nOpenSSL 3.0 users should upgrade to 3.0.4\n\nThis issue was reported to OpenSSL on the 20th May 2022.  It was found by\nChancen of Qingteng 73lab.  A further instance of the issue was found by\nDaniel Fiala of OpenSSL during a code review of the script.  The fix for\nthese issues was developed by Daniel Fiala and Tomas Mraz from OpenSSL. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. Extended\nsupport is available for premium support customers:\nhttps://www.openssl.org/support/contracts.html\n\nOpenSSL 1.1.0 is out of support and no longer receiving updates of any kind. \n\nUsers of these versions should upgrade to OpenSSL 3.0 or 1.1.1. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20220621.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. \n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n2054663 - CVE-2022-0512 nodejs-url-parse: authorization bypass through user-controlled key\n2057442 - CVE-2022-0639 npm-url-parse: Authorization Bypass Through User-Controlled Key\n2060018 - CVE-2022-0686 npm-url-parse: Authorization bypass through user-controlled key\n2060020 - CVE-2022-0691 npm-url-parse: authorization bypass through user-controlled key\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly, for detailed release notes:\n\nhttps://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html\n\nFor Red Hat OpenShift Logging 5.3, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS\n2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node\n\n6. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937609 - VM cannot be restarted\n1945593 - Live migration should be blocked for VMs with host devices\n1968514 - [RFE] Add cancel migration action to virtctl\n1993109 - CNV MacOS Client not signed\n1994604 - [RFE] - Add a feature to virtctl to print out a message if virtctl is a different version than the server side\n2001385 - no \"name\" label in virt-operator pod\n2009793 - KBase to clarify nested support status is missing\n2010318 - with sysprep config data as cfgmap volume and as cdrom disk a windows10 VMI fails to LiveMigrate\n2025276 - No permissions when trying to clone to a different namespace (as Kubeadmin)\n2025401 - [TEST ONLY]  [CNV+OCS/ODF]  Virtualization poison pill implemenation\n2026357 - Migration in sequence can be reported as failed even when it succeeded\n2029349 - cluster-network-addons-operator does not serve metrics through HTTPS\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2030806 - CVE-2021-44717 golang: syscall: don\u0027t close fd 0 on ForkExec error\n2031857 - Add annotation for URL to download the image\n2033077 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2035344 - kubemacpool-mac-controller-manager not ready\n2036676 - NoReadyVirtController and NoReadyVirtOperator are never triggered\n2039976 - Pod stuck in \"Terminating\" state when removing VM with kernel boot and container disks\n2040766 - A crashed Windows VM cannot be restarted with virtctl or the UI\n2041467 - [SSP] Support custom DataImportCron creating in custom namespaces\n2042402 - LiveMigration with postcopy misbehave when failure occurs\n2042809 - sysprep disk requires autounattend.xml if an unattend.xml exists\n2045086 - KubeVirtComponentExceedsRequestedMemory Prometheus Rule is Failing to Evaluate\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047186 - When entering to a RH supported 
template, it changes the project (namespace) to \u201cOpenShift\u201d\n2051899 - 4.11.0 containers\n2052094 - [rhel9-cnv] VM fails to start, virt-handler error msg: Couldn\u0027t configure ip nat rules\n2052466 - Event does not include reason for inability to live migrate\n2052689 - Overhead Memory consumption calculations are incorrect\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056467 - virt-template-validator pods getting scheduled on the same node\n2057157 - [4.10.0] HPP-CSI-PVC fails to bind PVC when node fqdn is long\n2057310 - qemu-guest-agent does not report information due to selinux denials\n2058149 - cluster-network-addons-operator deployment\u0027s MULTUS_IMAGE is pointing to brew image\n2058925 - Must-gather: for vms with longer name, gather_vms_details fails to collect qemu, dump xml logs\n2059121 - [CNV-4.11-rhel9] virt-handler pod CrashLoopBackOff state\n2060485 - virtualMachine with duplicate interfaces name causes MACs to be rejected by Kubemacpool\n2060585 - [SNO] Failed to find the virt-controller leader pod\n2061208 - Cannot delete network Interface if VM has multiqueue for networking enabled. 
\n2061723 - Prevent new DataImportCron to manage DataSource if multiple DataImportCron pointing to same DataSource\n2063540 - [CNV-4.11] Authorization Failed When Cloning Source Namespace\n2063792 - No DataImportCron for CentOS 7\n2064034 - On an upgraded cluster NetworkAddonsConfig seems to be reconciling in a loop\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2064936 - Migration of vm from VMware reports pvc not large enough\n2065014 - Feature Highlights in CNV 4.10 contains links to 4.7\n2065019 - \"Running VMs per template\" in the new overview tab counts VMs that are not running\n2066768 - [CNV-4.11-HCO] User Cannot List Resource \"namespaces\" in API group\n2067246 - [CNV]: Unable to ssh to Virtual Machine post changing Flavor tiny to custom\n2069287 - Two annotations for VM Template provider name\n2069388 - [CNV-4.11] kubemacpool-mac-controller - TLS handshake error\n2070366 - VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2070864 - non-privileged user cannot see catalog tiles\n2071488 - \"Migrate Node to Node\" is confusing. \n2071549 - [rhel-9] unable to create a non-root virt-launcher based VM\n2071611 - Metrics documentation generators are missing metrics/recording rules\n2071921 - Kubevirt RPM is not being built\n2073669 - [rhel-9] VM fails to start\n2073679 - [rhel-8] VM fails to start: missing virt-launcher-monitor downstream\n2073982 - [CNV-4.11-RHEL9] \u0027virtctl\u0027 binary fails with \u0027rc1\u0027 with \u0027virtctl version\u0027 command\n2074337 - VM created from registry cannot be started\n2075200 - VLAN filtering cannot be configured with Intel X710\n2075409 - [CNV-4.11-rhel9] hco-operator and hco-webhook pods CrashLoopBackOff\n2076292 - Upgrade from 4.10.1-\u003e4.11 using nightly channel, is not completing with error \"could not complete the upgrade process. 
KubeVirt is not with the expected version. Check KubeVirt observed version in the status field of its CR\"\n2076379 - must-gather: ruletables and qemu logs collected as a part of gather_vm_details scripts are zero bytes file\n2076790 - Alert SSPDown is constantly in Firing state\n2076908 - clicking on a template in the Running VMs per Template card leads to 404\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2078700 - Windows template boot source should be blank\n2078703 - [RFE] Please hide the user defined password when customizing cloud-init\n2078709 - VM conditions column have wrong key/values\n2078728 - Common template rootDisk is not named correctly\n2079366 - rootdisk is not able to edit\n2079674 - Configuring preferred node affinity in the console results in wrong yaml and unschedulable VM\n2079783 - Actions are broken in topology view\n2080132 - virt-launcher logs live migration in nanoseconds if the migration is stuck\n2080155 - [RFE] Provide the progress of VM migration in the source virt launcher pod\n2080547 - Metrics kubevirt_hco_out_of_band_modifications_count, does not reflect correct modification count when label is added to priorityclass/kubevirt-cluster-critical in a loop\n2080833 - Missing cloud init script editor in the scripts tab\n2080835 - SSH key is set using cloud init script instead of new api\n2081182 - VM SSH command generated by UI points at api VIP\n2081202 - cloud-init for Windows VM generated with corrupted \"undefined\" section\n2081409 - when viewing a common template details page, user need to see the message \"can\u0027t edit common template\" on all tabs\n2081671 - SSH service created outside the UI is not discoverable\n2081831 - [RFE] Improve disk hotplug UX\n2082008 - LiveMigration fails due to loss of connection to destination host\n2082164 - Migration progress timeout expects absolute progress\n2082912 - 
[CNV-4.11] HCO Being Unable to Reconcile State\n2083093 - VM overview tab is crashed\n2083097 - \u201cMount Windows drivers disk\u201d should not show when the template is not \u201cwindows\u201d\n2083100 - Something keeps loading in the \u201cnode selector\u201d modal\n2083101 - \u201cRestore default settings\u201d never become available while editing CPU/Memory\n2083135 - VM fails to schedule with vTPM in spec\n2083256 - SSP Reconcile logging improvement when CR resources are changed\n2083595 - [RFE] Disable VM descheduler if the VM is not live migratable\n2084102 - [e2e] Many elements are lacking proper selector like \u0027data-test-id\u0027 or \u0027data-test\u0027\n2084122 - [4.11]Clone from filesystem to block on storage api with the same size fails\n2084418 - \u201cInvalid SSH public key format\u201d appears when drag ssh key file to \u201cAuthorized SSH Key\u201d field\n2084431 - User credentials for ssh is not in correct format\n2084476 - The Virtual Machine Authorized SSH Key is not shown in the scripts tab. \n2084532 - Console is crashed while detaching disk\n2084610 - Newly added Kubevirt-plugin pod is missing resources.requests values (cpu/memory)\n2085320 - Tolerations rules is not adding correctly\n2085322 - Not able to stop/restart VM if the VM is staying in \"Starting\"\n2086272 - [dark mode] Titles in Overview tab not visible enough in dark mode\n2086278 - Cloud init script edit add \" hostname=\u0027\u0027 \" when is should not be added\n2086281 - [dark mode] Helper text in Scripts tab not visible enough on dark mode\n2086286 - [dark mode] The contrast of the Labels and edit labels not look good in the dark mode\n2086293 - [dark mode] Titles in Parameters tab not visible enough in dark mode\n2086294 - [dark mode] Can\u0027t see the number inside the donut chart in VMs per template card\n2086303 - non-priv user can\u0027t create VM when namespace is not selected\n2086479 - some modals use \u201cSave\u201d 
and some modals use \u201cSubmit\u201d\n2086486 - cluster overview getting started card include old information\n2086488 - Cannot cancel vm migration if the migration pod is not schedulable in the backend\n2086769 - Missing vm.kubevirt.io/template.namespace label when creating VM with the wizard\n2086803 - When clonnig a template we need to update vm labels and annotaions to match new template\n2086825 - VM restore PVC uses exact source PVC request size\n2086849 - Create from YAML example is not runnable\n2087188 - When VM is stopped - adding disk failed to show\n2087189 - When VM is stopped - adding disk failed to show\n2087232 - When chosing a vm or template while in all-namespace, and returning to list, namespace is changed\n2087546 - \"Quick Starts\" is missing in Getting started card\n2087547 - Activity and Status card are missing in Virtualization Overview\n2087559 - template in \"VMs per template\" should take user to vm list page\n2087566 - Remove the \u201cauto upload\u201d label from template in the catalog if the auto-upload boot source not exists\n2087570 - Page title should be \u201cVirtualMachines\u201d 
and not \u201cVirtual Machines\u201d\n2087577 - \"VMs per template\" load time is a bit long\n2087578 - Terminology \"VM\" should be \"Virtual Machine\" in all places\n2087582 - Remove VMI and MTV from the navigation\n2087583 - [RFE] Show more info about boot source in template list\n2087584 - Template provider should not be mandatory\n2087587 - Improve the descriptive text in the kebab menu of template\n2087589 - Red icons shows in storage disk source selection without a good reason\n2087590 - [REF] \"Upload a new file to a PVC\" should not open the form in a new tab\n2087593 - \"Boot method\" is not a good name in overview tab\n2087603 - Align details card for single VM overview with the design doc\n2087616 - align the utilization card of single VM overview with the design\n2087701 - [RFE] Missing a link to VMI from running VM details page\n2087717 - Message when editing template boot source is wrong\n2088034 - Virtualization Overview crashes when a VirtualMachine has no labels\n2088355 - disk modal shows all storage classes as default\n2088361 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user\n2088379 - Create VM from catalog does not respect the storageclass of the template\u0027s boot source\n2088407 - Missing create button in the template list\n2088471 - [HPP] hostpath-provisioner-csi does not comply with restricted security context\n2088472 - Golden Images import cron jobs are not getting updated on upgrade to 4.11\n2088477 - [4.11.z] VMSnapshot restore fails to provision volume with size mismatch error\n2088849 - \"dataimportcrontemplate.kubevirt.io/enable\" field does not do any validation\n2089078 - ConsolePlugin kubevirt-plugin is not getting reconciled by hco\n2089271 - Virtualization appears twice in sidebar\n2089327 - add network modal crash when no networks available\n2089376 - Virtual Machine Template without dataVolumeTemplates gets blank page\n2089477 - [RFE] Allow upload source when adding VM disk\n2089700 - 
Drive column in Disks card of Overview page has duplicated values\n2089745 - When removing all disks from customize wizard app crashes\n2089789 - Add windows drivers disk is missing when template is not windows\n2089825 - Top consumers card on Virtualization Overview page should keep display parameters as set by user\n2089836 - Card titles on single VM Overview page does not have hyperlinks to relevant pages\n2089840 - Cant create snapshot if VM is without disks\n2089877 - Utilization card on single VM overview - timespan menu lacks 5min option\n2089932 - Top consumers card on single VM overview - View by resource dropdown menu needs an update\n2089942 - Utilization card on single VM overview - trend charts at the bottom should be linked to proper metrics\n2089954 - Details card on single VM overview - VNC console has grey padding\n2089963 - Details card on single VM overview - Operating system info is not available\n2089967 - Network Interfaces card on single VM overview - name tooltip lacks info\n2089970 - Network Interfaces card on single VM overview - IP tooltip\n2089972 - Disks card on single VM overview -typo\n2089979 - Single VM Details - CPU|Memory edit icon misplaced\n2089982 - Single VM Details - SSH modal has redundant VM name\n2090035 - Alert card is missing in single VM overview\n2090036 - OS should be \"Operating system\" and host should be \"hostname\" in single vm overview\n2090037 - Add template link in single vm overview details card\n2090038 - The update field under the version in overview should be consistent with the operator page\n2090042 - Move the edit button close to the text for \"boot order\" and \"ssh access\"\n2090043 - \"No resource selected\" in vm boot order\n2090046 - Hardware devices section In the VM details and Template details should be aligned with catalog page\n2090048 - \"Boot mode\" should be editable while VM is running\n2090054 - Services \"kubernetes\" and \"openshift\" should not be listing in vm details\n2090055 - Add 
link to vm template in vm details page\n2090056 - \"Something went wrong\" shows on VM \"Environment\" tab\n2090057 - \"?\" icon is too big in environment and disk tab\n2090059 - Failed to add configmap in environment tab due to validate error\n2090064 - Miss \"remote desktop\" in console dropdown list for windows VM\n2090066 - [RFE] Improve guest login credentials\n2090068 - Make the \"name\" and \"Source\" column wider in vm disk tab\n2090131 - Key\u0027s value in \"add affinity rule\" modal is too small\n2090350 - memory leak in virt-launcher process\n2091003 - SSH service is not deleted along the VM\n2091058 - After VM gets deleted, the user is redirected to a page with a different namespace\n2091309 - While disabling a golden image via HCO, user should not be required to enter the whole spec. \n2091406 - wrong template namespace label when creating a vm with wizard\n2091754 - Scheduling and scripts tab should be editable while the VM is running\n2091755 - Change bottom \"Save\" to \"Apply\" on cloud-init script form\n2091756 - The root disk of cloned template should be editable\n2091758 - \"OS\" should be \"Operating system\" in template filter\n2091760 - The provider should be empty if it\u0027s not set during cloning\n2091761 - Miss \"Edit labels\" and \"Edit annotations\" in template kebab button\n2091762 - Move notification above the tabs in template details page\n2091764 - Clone a template should lead to the template details\n2091765 - \"Edit bootsource\" is keeping in load in template actions dropdown\n2091766 - \"Are you sure you want to leave this page?\" pops up when click the \"Templates\" link\n2091853 - On Snapshot tab of single VM \"Restore\" button should move to the kebab actions together with the Delete\n2091863 - BootSource edit modal should list affected templates\n2091868 - Catalog list view has two columns named \"BootSource\"\n2091889 - Devices should be editable for customize template\n2091897 - username is missing in the generated ssh 
command\n2091904 - VM is not started if adding \"Authorized SSH Key\" during vm creation\n2091911 - virt-launcher pod remains as NonRoot after LiveMigrating VM from NonRoot to Root\n2091940 - SSH is not enabled in vm details after restart the VM\n2091945 - delete a template should lead to templates list\n2091946 - Add disk modal shows wrong units\n2091982 - Got a lot of \"Reconciler error\" in cdi-deployment log after adding custom DataImportCron to hco\n2092048 - When Boot from CD is checked in customized VM creation - Disk source should be Blank\n2092052 - Virtualization should be omitted in Calatog breadcrumbs\n2092071 - Getting started card in Virtualization overview can not be hidden. \n2092079 - Error message stays even when problematic field is dismissed\n2092158 - PrometheusRule  kubevirt-hyperconverged-prometheus-rule is not getting reconciled by HCO\n2092228 - Ensure Machine Type for new VMs is 8.6\n2092230 - [RFE] Add indication/mark to deprecated template\n2092306 - VM is stucking with WaitingForVolumeBinding if creating via \"Boot from CD\"\n2092337 - os is empty in VM details page\n2092359 - [e2e] data-test-id includes all pvc name\n2092654 - [RFE] No obvious way to delete the ssh key from the VM\n2092662 - No url example for rhel and windows template\n2092663 - no hyperlink for URL example in disk source \"url\"\n2092664 - no hyperlink to the cdi uploadproxy URL\n2092781 - Details card should be removed for non admins. \n2092783 - Top consumers\u0027 card should be removed for non admins. \n2092787 - Operators links should be removed from Getting started card\n2092789 - \"Learn more about Operators\" link should lead to the Red Hat documentation\n2092951 - \u201cEdit BootSource\u201d 
action should have more explicit information when disabled\n2093282 - Remove links to \u0027all-namespaces/\u0027 for non-privileged user\n2093691 - Creation flow drawer left padding is broken\n2093713 - Required fields in creation flow should be highlighted if empty\n2093715 - Optional parameters section in creation flow is missing bottom padding\n2093716 - CPU|Memory modal button should say \"Restore template settings\"\n2093772 - Add a service in environment it reminds a pending change in boot order\n2093773 - Console crashed if adding a service without serial number\n2093866 - Cannot create vm from the template `vm-template-example`\n2093867 - OS for template \u0027vm-template-example\u0027 should matching the version of the image\n2094202 - Cloud-init username field should have hint\n2094207 - Cloud-init password field should have auto-generate option\n2094208 - SSH key input is missing validation\n2094217 - YAML view should reflect shanges in SSH form\n2094222 - \"?\" icon should be placed after red asterisk in required fields\n2094323 - Workload profile should be editable in template details page\n2094405 - adding resource on enviornment isnt showing on disks list when vm is running\n2094440 - Utilization pie charts figures are not based on current data\n2094451 - PVC selection in VM creation flow does not work for non-priv user\n2094453 - CD Source selection in VM creation flow is missing Upload option\n2094465 - Typo in Source tooltip\n2094471 - Node selector modal for non-privileged user\n2094481 - Tolerations modal for non-privileged user\n2094486 - Add affinity rule modal\n2094491 - Affinity rules modal button\n2094495 - Descheduler modal has same text in two lines\n2094646 - [e2e] Elements on scheduling tab are missing proper data-test-id\n2094665 - Dedicated Resources modal for non-privileged user\n2094678 - Secrets and ConfigMaps can\u0027t be added to Windows VM\n2094727 - Creation flow should have VM info in header row\n2094807 - hardware devices 
dropdown has group title even with no devices in cluster\n2094813 - Cloudinit password is seen in wizard\n2094848 - Details card on Overview page - \u0027View details\u0027 link is missing\n2095125 - OS is empty in the clone modal\n2095129 - \"undefined\" appears in rootdisk line in clone modal\n2095224 - affinity modal for non-privileged users\n2095529 - VM migration cancelation in kebab action should have shorter name\n2095530 - Column sizes in VM list view\n2095532 - Node column in VM list view is visible to non-privileged user\n2095537 - Utilization card information should display pie charts as current data and sparkline charts as overtime\n2095570 - Details tab of VM should not have Node info for non-privileged user\n2095573 - Disks created as environment or scripts should have proper label\n2095953 - VNC console controls layout\n2095955 - VNC console tabs\n2096166 - Template \"vm-template-example\" is binding with namespace \"default\"\n2096206 - Inconsistent capitalization in Template Actions\n2096208 - Templates in the catalog list is not sorted\n2096263 - Incorrectly displaying units for Disks size or Memory field in various places\n2096333 - virtualization overview, related operators title is not aligned\n2096492 - Cannot create vm from a cloned template if its boot source is edited\n2096502 - \"Restore template settings\" should be removed from template CPU editor\n2096510 - VM can be created without any disk\n2096511 - Template shows \"no Boot Source\" and label \"Source available\" at the same time\n2096620 - in templates list, edit boot reference kebab action opens a modal with different title\n2096781 - Remove boot source provider while edit boot source reference\n2096801 - vnc thumbnail in virtual machine overview should be active on page load\n2096845 - Windows template\u0027s scripts tab is crashed\n2097328 - virtctl guestfs shouldn\u0027t required uid = 0\n2097370 - missing titles for optional parameters in wizard customization page\n2097465 - 
Count is not updating for \u0027prometheusrule\u0027 component when metrics kubevirt_hco_out_of_band_modifications_count executed\n2097586 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2098134 - \"Workload profile\" column is not showing completely in template list\n2098135 - Workload is not showing correct in catalog after change the template\u0027s workload\n2098282 - Javascript error when changing boot source of custom template to be an uploaded file\n2099443 - No \"Quick create virtualmachine\" button for template \u0027vm-template-example\u0027\n2099533 - ConsoleQuickStart for HCO CR\u0027s VM is missing\n2099535 - The cdi-uploadproxy certificate url should be opened in a new tab\n2099539 - No storage option for upload while editing a disk\n2099566 - Cloudinit should be replaced by cloud-init in all places\n2099608 - \"DynamicB\" shows in vm-example disk size\n2099633 - Doc links needs to be updated\n2099639 - Remove user line from the ssh command section\n2099802 - Details card link shouldn\u0027t be hard-coded\n2100054 - Windows VM with WSL2 guest fails to migrate\n2100284 - Virtualization overview is crashed\n2100415 - HCO is taking too much time for reconciling kubevirt-plugin deployment\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2101164 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2101192 - AccessMode should stay on ReadWriteOnce while editing a disk with storage class HPP\n2101430 - Using CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2101454 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2101485 - Cloudinit should be replaced by cloud-init in all places\n2101628 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2101954 - [4.11]Smart clone and csi clone leaves tmp unbound PVC and ObjectTransfer\n2102076 - Using 
CLOUD_USER_PASSWORD in Templates parameters breaks VM review page\n2102116 - [e2e] elements on Template Scheduling tab are missing proper data-test-id\n2102117 - [e2e] elements on VM Scripts tab are missing proper data-test-id\n2102122 - non-priv user cannot load dataSource while edit template\u0027s rootdisk\n2102124 - Cannot add PVC boot source to template in \u0027Edit Boot Source Reference\u0027 view as a non-priv user\n2102125 - vm clone modal is displaying DV size instead of PVC size\n2102127 - Cannot add NIC to VM template as non-priv user\n2102129 - All templates are labeling \"source available\" in template list page\n2102131 - The number of hardware devices is not correct in vm overview tab\n2102135 - [dark mode] Number of alerts in Alerts card not visible enough in dark mode\n2102143 - vm clone modal is displaying DV size instead of PVC size\n2102256 - Add button moved to right\n2102448 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102543 - Add button moved to right\n2102544 - VM disk is deleted by uncheck \"Delete disks (1x)\" on delete modal\n2102545 - VM filter has two \"Other\" checkboxes which are triggered together\n2104617 - Storage status report \"OpenShift Data Foundation is not available\" even the operator is installed\n2106175 - All pages are crashed after visit Virtualization -\u003e Overview\n2106258 - All pages are crashed after visit Virtualization -\u003e Overview\n2110178 - [Docs] Text repetition in Virtual Disk Hot plug instructions\n2111359 - kubevirt plugin console is crashed after creating a vm with 2 nics\n2111562 - kubevirt plugin console crashed after visit vmi page\n2117872 - CVE-2022-1798 kubeVirt: Arbitrary file read on the host from KubeVirt VMs\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-2068"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-2068"
      },
      {
        "db": "PACKETSTORM",
        "id": "169435"
      },
      {
        "db": "PACKETSTORM",
        "id": "168150"
      },
      {
        "db": "PACKETSTORM",
        "id": "168387"
      },
      {
        "db": "PACKETSTORM",
        "id": "168182"
      },
      {
        "db": "PACKETSTORM",
        "id": "168282"
      },
      {
        "db": "PACKETSTORM",
        "id": "170165"
      },
      {
        "db": "PACKETSTORM",
        "id": "169668"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-2068",
        "trust": 2.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 1.1
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-319-01",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-2068",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169435",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168150",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168387",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168182",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168282",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170165",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169668",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168352",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170179",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168392",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2068"
      },
      {
        "db": "PACKETSTORM",
        "id": "169435"
      },
      {
        "db": "PACKETSTORM",
        "id": "168150"
      },
      {
        "db": "PACKETSTORM",
        "id": "168387"
      },
      {
        "db": "PACKETSTORM",
        "id": "168182"
      },
      {
        "db": "PACKETSTORM",
        "id": "168282"
      },
      {
        "db": "PACKETSTORM",
        "id": "170165"
      },
      {
        "db": "PACKETSTORM",
        "id": "169668"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "id": "VAR-202206-1428",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.416330645
  },
  "last_update_date": "2024-11-29T22:02:07.602000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Debian Security Advisories: DSA-5169-1 openssl -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6b57464ee127384d3d853e9cc99cf350"
      },
      {
        "title": "Amazon Linux AMI: ALAS-2022-1626",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1626"
      },
      {
        "title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2022-2068"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2022-1832",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1832"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2022-1831",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1831"
      },
      {
        "title": "Amazon Linux 2: ALASOPENSSL-SNAPSAFE-2023-001",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALASOPENSSL-SNAPSAFE-2023-001"
      },
      {
        "title": "Red Hat: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-2068"
      },
      {
        "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228917 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228913 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225818 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat Satellite Client security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235982 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: openssl security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226224 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226517 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226184 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Satellite 6.11.5.6 async security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235980 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-123",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-123"
      },
      {
        "title": "Red Hat: Important: Satellite 6.12.5.2 Async Security Update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20235979 - Security Advisory"
      },
      {
        "title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226422 - Security Advisory"
      },
      {
        "title": "Brocade Security Advisories: Access Denied",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=8efbc4133194fcddd0bca99df112b683"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226103 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-195",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-195"
      },
      {
        "title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226188 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226182 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226051 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226283 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226183 - Security Advisory"
      },
      {
        "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226507 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227055 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227058 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228840 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226024 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226714 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226290 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226348 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226345 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228841 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226346 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226430 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226370 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226271 - Security Advisory"
      },
      {
        "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226696 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226156 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228750 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226526 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230408 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228889 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228781 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory"
      },
      {
        "title": "Smart Check Scan-Report",
        "trust": 0.1,
        "url": "https://github.com/mawinkler/c1-cs-scan-result "
      },
      {
        "title": "Repository with scripts to verify system against CVE",
        "trust": 0.1,
        "url": "https://github.com/backloop-biz/Vulnerability_checker "
      },
      {
        "title": "https://github.com/jntass/TASSL-1.1.1",
        "trust": 0.1,
        "url": "https://github.com/jntass/TASSL-1.1.1 "
      },
      {
        "title": "Repository with scripts to verify system against CVE",
        "trust": 0.1,
        "url": "https://github.com/backloop-biz/CVE_checks "
      },
      {
        "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories",
        "trust": 0.1,
        "url": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories "
      },
      {
        "title": "OpenSSL-CVE-lib",
        "trust": 0.1,
        "url": "https://github.com/chnzzh/OpenSSL-CVE-lib "
      },
      {
        "title": "The Register",
        "trust": 0.1,
        "url": "https://www.theregister.co.uk/2022/06/27/openssl_304_memory_corruption_bug/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-78",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.2,
        "url": "https://www.openssl.org/news/secadv/20220621.txt"
      },
      {
        "trust": 1.2,
        "url": "https://www.debian.org/security/2022/dsa-5169"
      },
      {
        "trust": 1.1,
        "url": "https://security.netapp.com/advisory/ntap-20220707-0008/"
      },
      {
        "trust": 1.1,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 1.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2c9c35870601b4a44d86ddbf512b38df38285cfa"
      },
      {
        "trust": 1.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=9639817dac8bbbaa64d09efad7464ccc405527c7"
      },
      {
        "trust": 1.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/"
      },
      {
        "trust": 0.9,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.9,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.9,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.9,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.9,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-1897"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-1927"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-1785"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-32208"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-32206"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-31129"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-1650"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-34903"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1650"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24785"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0536"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-28327"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23806"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-27782"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24921"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-27776"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21698"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22576"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-27774"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23773"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24675"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23772"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-30629"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-2526"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-29154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-37434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/78.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/backloop-biz/vulnerability_checker"
      },
      {
        "trust": 0.1,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-319-01"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/alas-2022-1626.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7055"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2015-20107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29526"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24771"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24772"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29810"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23440"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0670"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23440"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6517"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41103"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6184"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32148"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30630"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1705"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28131"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30632"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30633"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/install_upgrade/installing-while-connected-online-mce"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6345"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30630"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1962"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-40674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28614"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23943"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32207"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22721"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26377"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:8841"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30522"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-40303"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-31813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42915"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22721"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-35252"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28330"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28615"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28330"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26377"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-40304"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23943"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30522"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32221"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/support/contracts.html"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/policies/secpolicy.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0686"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6429"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0512"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24448"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:8889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22628"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3515"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21628"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0924"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0908"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1055"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35527"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35525"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26373"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-20368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1048"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0617"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-39399"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0562"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0854"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22629"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29581"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1016"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2078"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22844"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21499"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-36946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42003"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0865"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27405"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-3709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0909"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0561"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30293"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27406"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0168"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1304"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21626"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28390"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26716"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27950"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27404"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23960"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3640"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30002"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0891"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1184"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2509"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25255"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26719"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42004"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1355"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28893"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38561"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38185"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35492"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1798"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1621"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4115"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2068"
      },
      {
        "db": "PACKETSTORM",
        "id": "169435"
      },
      {
        "db": "PACKETSTORM",
        "id": "168150"
      },
      {
        "db": "PACKETSTORM",
        "id": "168387"
      },
      {
        "db": "PACKETSTORM",
        "id": "168182"
      },
      {
        "db": "PACKETSTORM",
        "id": "168282"
      },
      {
        "db": "PACKETSTORM",
        "id": "170165"
      },
      {
        "db": "PACKETSTORM",
        "id": "169668"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2068"
      },
      {
        "db": "PACKETSTORM",
        "id": "169435"
      },
      {
        "db": "PACKETSTORM",
        "id": "168150"
      },
      {
        "db": "PACKETSTORM",
        "id": "168387"
      },
      {
        "db": "PACKETSTORM",
        "id": "168182"
      },
      {
        "db": "PACKETSTORM",
        "id": "168282"
      },
      {
        "db": "PACKETSTORM",
        "id": "170165"
      },
      {
        "db": "PACKETSTORM",
        "id": "169668"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-06-21T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-2068"
      },
      {
        "date": "2022-10-20T14:19:18",
        "db": "PACKETSTORM",
        "id": "169435"
      },
      {
        "date": "2022-08-25T15:22:18",
        "db": "PACKETSTORM",
        "id": "168150"
      },
      {
        "date": "2022-09-15T14:18:16",
        "db": "PACKETSTORM",
        "id": "168387"
      },
      {
        "date": "2022-08-25T15:29:18",
        "db": "PACKETSTORM",
        "id": "168182"
      },
      {
        "date": "2022-09-07T16:56:15",
        "db": "PACKETSTORM",
        "id": "168282"
      },
      {
        "date": "2022-12-08T21:28:21",
        "db": "PACKETSTORM",
        "id": "170165"
      },
      {
        "date": "2022-06-21T12:12:12",
        "db": "PACKETSTORM",
        "id": "169668"
      },
      {
        "date": "2022-09-13T15:42:14",
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "date": "2022-12-09T14:52:40",
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "date": "2022-09-15T14:20:18",
        "db": "PACKETSTORM",
        "id": "168392"
      },
      {
        "date": "2022-06-21T15:15:09.060000",
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-2068"
      },
      {
        "date": "2023-11-07T03:46:11.177000",
        "db": "NVD",
        "id": "CVE-2022-2068"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat Security Advisory 2022-7055-01",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "169435"
      }
    ],
    "trust": 0.1
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "overflow, code execution",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "170165"
      }
    ],
    "trust": 0.1
  }
}

var-202009-0596
Vulnerability from variot

An attacker could send a specially crafted packet that could cause CodeMeter (all versions prior to 7.10) to send back packets containing data from the heap. CodeMeter is vulnerable to improper shutdown and release of resources, and information may be obtained as a result. Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens, Germany, and is also an integral part of the HMI series. The system is mainly suitable for industries such as rail transit, building automation, and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of the DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants.

Multiple Siemens products are affected by this vulnerability.



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202009-0596",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "codemeter",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "wibu",
        "version": "7.10"
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": null
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": "7.10"
      },
      {
        "model": "information server sp1",
        "scope": "lte",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "\u003c=2019"
      },
      {
        "model": "simatic wincc oa",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.17"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sppa-s2000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.04"
      },
      {
        "model": "sppa-s2000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.06"
      },
      {
        "model": "sppa-t3000 r8.2 sp2",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sppa-s3000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.05"
      },
      {
        "model": "sppa-s3000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.04"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "cve": "CVE-2020-16233",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.0,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-16233",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.8,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "CNVD",
            "availabilityImpact": "NONE",
            "baseScore": 7.8,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 10.0,
            "id": "CNVD-2020-51240",
            "impactScore": 6.9,
            "integrityImpact": "NONE",
            "severity": "HIGH",
            "trust": 0.6,
            "vectorString": "AV:N/AC:L/Au:N/C:C/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-16233",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 7.5,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2020-16233",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-16233",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-16233",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNVD",
            "id": "CNVD-2020-51240",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202009-482",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "An attacker could send a specially crafted packet that could cause CodeMeter (all versions prior to 7.10) to send back packets containing data from the heap. CodeMeter is vulnerable to improper shutdown and release of resources, and information may be obtained as a result. Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens, Germany, and is also an integral part of the HMI series. The system is mainly suitable for industries such as rail transit, building automation, and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of the DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants. \n\r\n\r\nMultiple Siemens products are affected by this vulnerability",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-16233"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      }
    ],
    "trust": 2.16
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-16233",
        "trust": 3.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-20-203-01",
        "trust": 2.4
      },
      {
        "db": "JVN",
        "id": "JVNVU90770748",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94568336",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224",
        "trust": 0.8
      },
      {
        "db": "SIEMENS",
        "id": "SSA-455843",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.3",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022021806",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "id": "VAR-202009-0596",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      }
    ],
    "trust": 1.3593294842857142
  },
  "iot_taxonomy": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "category": [
          "ICS"
        ],
        "sub_category": null,
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      }
    ]
  },
  "last_update_date": "2024-11-23T20:22:22.621000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "CodeMeter",
        "trust": 0.8,
        "url": "https://www.wibu.com/products/codemeter.html"
      },
      {
        "title": "Patch for Various Siemens products release improper loopholes",
        "trust": 0.6,
        "url": "https://www.cnvd.org.cn/patchInfo/show/233350"
      },
      {
        "title": "ARC Security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=127903"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-404",
        "trust": 1.0
      },
      {
        "problemtype": "Improper shutdown and release of resources (CWE-404) [ Other ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16233"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu94568336/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90770748/"
      },
      {
        "trust": 0.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-09T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "date": "2021-03-24T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "date": "2020-09-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      },
      {
        "date": "2020-09-16T20:15:13.817000",
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-10T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51240"
      },
      {
        "date": "2022-03-11T06:04:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      },
      {
        "date": "2022-02-21T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      },
      {
        "date": "2024-11-21T05:06:59.540000",
        "db": "NVD",
        "id": "CVE-2020-16233"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "CodeMeter\u00a0 Improper Resource Shutdown and Release Vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011224"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-482"
      }
    ],
    "trust": 0.6
  }
}

var-202105-1325
Vulnerability from variot

In ISC DHCP 4.1-ESV-R1 -> 4.1-ESV-R16, ISC DHCP 4.4.0 -> 4.4.2 (Other branches of ISC DHCP (i.e., releases in the 4.0.x series or lower and releases in the 4.3.x series) are beyond their End-of-Life (EOL) and no longer supported by ISC. From inspection it is clear that the defect is also present in releases from those series, but they have not been officially tested for the vulnerability), The outcome of encountering the defect while reading a lease that will trigger it varies, according to: the component being affected (i.e., dhclient or dhcpd) whether the package was built as a 32-bit or 64-bit binary whether the compiler flag -fstack-protection-strong was used when compiling In dhclient, ISC has not successfully reproduced the error on a 64-bit system. However, on a 32-bit system it is possible to cause dhclient to crash when reading an improper lease, which could cause network connectivity problems for an affected system due to the absence of a running DHCP client process. In dhcpd, when run in DHCPv4 or DHCPv6 mode: if the dhcpd server binary was built for a 32-bit architecture AND the -fstack-protection-strong flag was specified to the compiler, dhcpd may exit while parsing a lease file containing an objectionable lease, resulting in lack of service to clients. Additionally, the offending lease and the lease immediately following it in the lease database may be improperly deleted. if the dhcpd server binary was built for a 64-bit architecture OR if the -fstack-protection-strong compiler flag was NOT specified, the crash will not occur, but it is possible for the offending lease and the lease which immediately followed it to be improperly deleted. There is a discrepancy between the code that handles encapsulated option information in leases transmitted "on the wire" and the code which reads and parses lease information after it has been written to disk storage. 
The highest threat from this vulnerability is to data confidentiality and integrity as well as service availability. (CVE-2021-25217). Bugs fixed (https://bugzilla.redhat.com/):

1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve 1945703 - "Guest OS Info" availability in VMI describe is flaky 1958816 - [2.6.z] KubeMacPool fails to start due to OOM likely caused by a high number of Pods running in the cluster 1963275 - migration controller null pointer dereference 1965099 - Live Migration double handoff to virt-handler causes connection failures 1965181 - CDI importer doesn't report AwaitingVDDK like it used to 1967086 - Cloning DataVolumes between namespaces fails while creating cdi-upload pod 1967887 - [2.6.6] nmstate is not progressing on a node and not configuring vlan filtering that causes an outage for VMs 1969756 - Windows VMs fail to start on air-gapped environments 1970372 - Virt-handler fails to verify container-disk 1973227 - segfault in virt-controller during pdb deletion 1974084 - 2.6.6 containers 1975212 - No Virtual Machine Templates Found [EDIT - all templates are marked as depracted] 1975727 - [Regression][VMIO][Warm] The third precopy does not end in warm migration 1977756 - [2.6.z] PVC keeps in pending when using hostpath-provisioner 1982760 - [v2v] no kind VirtualMachine is registered for version \"kubevirt.io/v1\" i... 1986989 - OpenShift Virtualization 2.6.z cannot be upgraded to 4.8.0 initially deployed starting with <= 4.8

  1. These packages include redhat-release-virtualization-host. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor

  1. Solution:

For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html

  1. ========================================================================= Ubuntu Security Notice USN-4969-2 May 27, 2021

isc-dhcp vulnerability

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 16.04 ESM
  • Ubuntu 14.04 ESM

Summary:

DHCP could be made to crash if it received specially crafted network traffic.

Software Description: - isc-dhcp: DHCP server and client

Details:

USN-4969-1 fixed a vulnerability in DHCP. This update provides the corresponding update for Ubuntu 14.04 ESM and 16.04 ESM.

Original advisory details:

Jon Franklin and Pawel Wieczorkiewicz discovered that DHCP incorrectly handled lease file parsing. A remote attacker could possibly use this issue to cause DHCP to crash, resulting in a denial of service.

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 16.04 ESM: isc-dhcp-client 4.3.3-5ubuntu12.10+esm1 isc-dhcp-server 4.3.3-5ubuntu12.10+esm1

Ubuntu 14.04 ESM: isc-dhcp-client 4.2.4-7ubuntu12.13+esm1 isc-dhcp-server 4.2.4-7ubuntu12.13+esm1

In general, a standard system update will make all the necessary changes. 7.7) - ppc64, ppc64le, s390x, x86_64

  1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Important: dhcp security update Advisory ID: RHSA-2021:2357-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:2357 Issue date: 2021-06-09 CVE Names: CVE-2021-25217 ==================================================================== 1. Summary:

An update for dhcp is now available for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64

  1. Description:

The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. The dhcp packages provide a relay agent and ISC DHCP service required to enable and administer DHCP on a network.

Security Fix(es):

  • dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient (CVE-2021-25217)
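The statement shape involved can be illustrated with a synthetic example: the defect class is triggered while parsing option statements whose value is a long run of colon-separated hex octets. The snippet below only generates a statement of that shape for inspection; the length, octet values, and option name are illustrative and it does not exploit anything by itself.

```shell
# Build a synthetic lease/config statement of the kind described above:
# an option whose value is a long run of colon-separated hex octets.
octets=$(printf '41:%.0s' $(seq 1 100))   # "41:41:...:41:" (100 octets)
octets=${octets%:}                        # drop the trailing colon
stmt="option vendor-encapsulated-options $octets;"
echo "$stmt"
echo "${#stmt}"                           # statement length: 335
```

A fixed dhcpd/dhclient parses such statements with consistent bounds checking between the on-the-wire and on-disk code paths.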

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  1. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  1. Bugs fixed (https://bugzilla.redhat.com/):

1963258 - CVE-2021-25217 dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient

  1. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: dhcp-4.2.5-83.el7_9.1.src.rpm

x86_64: dhclient-4.2.5-83.el7_9.1.x86_64.rpm dhcp-common-4.2.5-83.el7_9.1.x86_64.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-libs-4.2.5-83.el7_9.1.i686.rpm dhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

x86_64: dhcp-4.2.5-83.el7_9.1.x86_64.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-devel-4.2.5-83.el7_9.1.i686.rpm dhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source: dhcp-4.2.5-83.el7_9.1.src.rpm

x86_64: dhclient-4.2.5-83.el7_9.1.x86_64.rpm dhcp-common-4.2.5-83.el7_9.1.x86_64.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-libs-4.2.5-83.el7_9.1.i686.rpm dhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64: dhcp-4.2.5-83.el7_9.1.x86_64.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-devel-4.2.5-83.el7_9.1.i686.rpm dhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: dhcp-4.2.5-83.el7_9.1.src.rpm

ppc64: dhclient-4.2.5-83.el7_9.1.ppc64.rpm dhcp-4.2.5-83.el7_9.1.ppc64.rpm dhcp-common-4.2.5-83.el7_9.1.ppc64.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.ppc.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.ppc64.rpm dhcp-libs-4.2.5-83.el7_9.1.ppc.rpm dhcp-libs-4.2.5-83.el7_9.1.ppc64.rpm

ppc64le: dhclient-4.2.5-83.el7_9.1.ppc64le.rpm dhcp-4.2.5-83.el7_9.1.ppc64le.rpm dhcp-common-4.2.5-83.el7_9.1.ppc64le.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.ppc64le.rpm dhcp-libs-4.2.5-83.el7_9.1.ppc64le.rpm

s390x: dhclient-4.2.5-83.el7_9.1.s390x.rpm dhcp-4.2.5-83.el7_9.1.s390x.rpm dhcp-common-4.2.5-83.el7_9.1.s390x.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.s390.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.s390x.rpm dhcp-libs-4.2.5-83.el7_9.1.s390.rpm dhcp-libs-4.2.5-83.el7_9.1.s390x.rpm

x86_64: dhclient-4.2.5-83.el7_9.1.x86_64.rpm dhcp-4.2.5-83.el7_9.1.x86_64.rpm dhcp-common-4.2.5-83.el7_9.1.x86_64.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-libs-4.2.5-83.el7_9.1.i686.rpm dhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64: dhcp-debuginfo-4.2.5-83.el7_9.1.ppc.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.ppc64.rpm dhcp-devel-4.2.5-83.el7_9.1.ppc.rpm dhcp-devel-4.2.5-83.el7_9.1.ppc64.rpm

ppc64le: dhcp-debuginfo-4.2.5-83.el7_9.1.ppc64le.rpm dhcp-devel-4.2.5-83.el7_9.1.ppc64le.rpm

s390x: dhcp-debuginfo-4.2.5-83.el7_9.1.s390.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.s390x.rpm dhcp-devel-4.2.5-83.el7_9.1.s390.rpm dhcp-devel-4.2.5-83.el7_9.1.s390x.rpm

x86_64: dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-devel-4.2.5-83.el7_9.1.i686.rpm dhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: dhcp-4.2.5-83.el7_9.1.src.rpm

x86_64: dhclient-4.2.5-83.el7_9.1.x86_64.rpm dhcp-4.2.5-83.el7_9.1.x86_64.rpm dhcp-common-4.2.5-83.el7_9.1.x86_64.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-libs-4.2.5-83.el7_9.1.i686.rpm dhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

x86_64: dhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm dhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm dhcp-devel-4.2.5-83.el7_9.1.i686.rpm dhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
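When scripting the download-and-verify step, the expected file name can be derived from the advisory's version string; the helper below is just such a convenience sketch (the `ver`/`arch` variables are taken from this advisory's package list, and the commented commands are standard RHEL 7 tooling, shown here but not executed).

```shell
# Derive the package file name listed in this advisory for one arch,
# e.g. to feed a download/verify script.
ver="4.2.5-83.el7_9.1"
arch="x86_64"
pkg="dhcp-${ver}.${arch}.rpm"
echo "$pkg"            # dhcp-4.2.5-83.el7_9.1.x86_64.rpm
# On a real RHEL 7 host (not run here):
#   rpm -K "$pkg"      # check digests and the GPG signature
#   yum update dhcp    # apply the erratum
```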

  1. References:

https://access.redhat.com/security/cve/CVE-2021-25217 https://access.redhat.com/security/updates/classification/#important

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 8) - aarch64, noarch, ppc64le, s390x, x86_64


  1. Gentoo Linux Security Advisory GLSA 202305-22

                                       https://security.gentoo.org/

Severity: Normal Title: ISC DHCP: Multiple Vulnerabilities Date: May 03, 2023 Bugs: #875521, #792324 ID: 202305-22


Synopsis

Multiple vulnerabilities have been discovered in ISC DHCP, the worst of which could result in denial of service.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  net-misc/dhcp            < 4.4.3_p1               >= 4.4.3_p1

Description

Multiple vulnerabilities have been discovered in ISC DHCP. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All ISC DHCP users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/dhcp-4.4.3_p1"
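A rough way to script the "is my installed version below the fixed one" check is to compare version strings with GNU `sort -V`. This is only an approximation of portage's real comparator (Gentoo's `_pN` patch suffix is naively normalized to `.N` so `sort -V` can order it), and the installed version below is a hypothetical example.

```shell
# Approximate check of a dhcp version against the fixed 4.4.3_p1.
have="4.4.2"          # hypothetical installed version
fixed="4.4.3_p1"
norm() { printf '%s\n' "$1" | sed 's/_p/./'; }
oldest=$(printf '%s\n%s\n' "$(norm "$have")" "$(norm "$fixed")" | sort -V | head -n1)
if [ "$oldest" = "$(norm "$have")" ] && [ "$have" != "$fixed" ]; then
  echo "upgrade needed"
else
  echo "ok"
fi
```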

References

[ 1 ] CVE-2021-25217
      https://nvd.nist.gov/vuln/detail/CVE-2021-25217
[ 2 ] CVE-2022-2928
      https://nvd.nist.gov/vuln/detail/CVE-2022-2928
[ 3 ] CVE-2022-2929
      https://nvd.nist.gov/vuln/detail/CVE-2022-2929

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202305-22

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2023 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1325",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "ruggedcom rox rx1501",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "ontap select deploy administration utility",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "ruggedcom rox rx1512",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.3.0"
      },
      {
        "model": "ruggedcom rox rx1500",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "ruggedcom rox mx5000",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.3.0"
      },
      {
        "model": "solidfire \\\u0026 hci management node",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "dhcp",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "4.4.2"
      },
      {
        "model": "ruggedcom rox rx1500",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.3.0"
      },
      {
        "model": "ruggedcom rox rx1400",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "ruggedcom rox rx5000",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.3.0"
      },
      {
        "model": "ruggedcom rox rx1510",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.3.0"
      },
      {
        "model": "ruggedcom rox rx1511",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.3.0"
      },
      {
        "model": "ruggedcom rox rx1501",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.3.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "33"
      },
      {
        "model": "ruggedcom rox rx5000",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "ruggedcom rox rx1511",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "ruggedcom rox mx5000",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "ruggedcom rox rx1536",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "dhcp",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "4.4.0"
      },
      {
        "model": "dhcp",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "isc",
        "version": "4.1-esv"
      },
      {
        "model": "ruggedcom rox rx1512",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "ruggedcom rox rx1510",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "ruggedcom rox rx1524",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "2.15.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "9.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "163196"
      },
      {
        "db": "PACKETSTORM",
        "id": "163155"
      },
      {
        "db": "PACKETSTORM",
        "id": "163240"
      },
      {
        "db": "PACKETSTORM",
        "id": "163400"
      },
      {
        "db": "PACKETSTORM",
        "id": "163129"
      },
      {
        "db": "PACKETSTORM",
        "id": "163137"
      },
      {
        "db": "PACKETSTORM",
        "id": "163051"
      },
      {
        "db": "PACKETSTORM",
        "id": "163052"
      }
    ],
    "trust": 0.9
  },
  "cve": "CVE-2021-25217",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "ADJACENT_NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 3.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 6.5,
            "id": "CVE-2021-25217",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "LOW",
            "trust": 1.1,
            "vectorString": "AV:A/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "ADJACENT",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.4,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.8,
            "id": "CVE-2021-25217",
            "impactScore": 4.0,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "CHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-25217",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "security-officer@isc.org",
            "id": "CVE-2021-25217",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-25217",
            "trust": 0.1,
            "value": "LOW"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25217"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "In ISC DHCP 4.1-ESV-R1 -\u003e 4.1-ESV-R16, ISC DHCP 4.4.0 -\u003e 4.4.2 (Other branches of ISC DHCP (i.e., releases in the 4.0.x series or lower and releases in the 4.3.x series) are beyond their End-of-Life (EOL) and no longer supported by ISC. From inspection it is clear that the defect is also present in releases from those series, but they have not been officially tested for the vulnerability), The outcome of encountering the defect while reading a lease that will trigger it varies, according to: the component being affected (i.e., dhclient or dhcpd) whether the package was built as a 32-bit or 64-bit binary whether the compiler flag -fstack-protection-strong was used when compiling In dhclient, ISC has not successfully reproduced the error on a 64-bit system. However, on a 32-bit system it is possible to cause dhclient to crash when reading an improper lease, which could cause network connectivity problems for an affected system due to the absence of a running DHCP client process. In dhcpd, when run in DHCPv4 or DHCPv6 mode: if the dhcpd server binary was built for a 32-bit architecture AND the -fstack-protection-strong flag was specified to the compiler, dhcpd may exit while parsing a lease file containing an objectionable lease, resulting in lack of service to clients. Additionally, the offending lease and the lease immediately following it in the lease database may be improperly deleted. if the dhcpd server binary was built for a 64-bit architecture OR if the -fstack-protection-strong compiler flag was NOT specified, the crash will not occur, but it is possible for the offending lease and the lease which immediately followed it to be improperly deleted. There is a discrepancy between the code that handles encapsulated option information in leases transmitted \"on the wire\" and the code which reads and parses lease information after it has been written to disk storage. 
The highest threat from this vulnerability is to data confidentiality and integrity as well as service availability. (CVE-2021-25217). Bugs fixed (https://bugzilla.redhat.com/):\n\n1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve\n1945703 - \"Guest OS Info\" availability in VMI describe is flaky\n1958816 - [2.6.z] KubeMacPool fails to start due to OOM likely caused by a high number of Pods running in the cluster\n1963275 - migration controller null pointer dereference\n1965099 - Live Migration double handoff to virt-handler causes connection failures\n1965181 - CDI importer doesn\u0027t report AwaitingVDDK like it used to\n1967086 - Cloning DataVolumes between namespaces fails while creating cdi-upload pod\n1967887 - [2.6.6] nmstate is not progressing on a node and not configuring vlan filtering that causes an outage for VMs\n1969756 - Windows VMs fail to start on air-gapped environments\n1970372 - Virt-handler fails to verify container-disk\n1973227 - segfault in virt-controller during pdb deletion\n1974084 - 2.6.6 containers\n1975212 - No Virtual Machine Templates Found [EDIT - all templates are marked as depracted]\n1975727 - [Regression][VMIO][Warm] The third precopy does not end in warm migration\n1977756 - [2.6.z] PVC keeps in pending when using hostpath-provisioner\n1982760 - [v2v] no kind VirtualMachine is registered for version \\\"kubevirt.io/v1\\\" i... \n1986989 - OpenShift Virtualization 2.6.z cannot be upgraded to 4.8.0 initially deployed starting with \u003c= 4.8\n\n5. \nThese packages include redhat-release-virtualization-host. \nRHVH features a Cockpit user interface for monitoring the host\u0027s resources\nand\nperforming administrative tasks. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n4. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n5. =========================================================================\nUbuntu Security Notice USN-4969-2\nMay 27, 2021\n\nisc-dhcp vulnerability\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nDHCP could be made to crash if it received specially crafted network\ntraffic. \n\nSoftware Description:\n- isc-dhcp: DHCP server and client\n\nDetails:\n\nUSN-4969-1 fixed a vulnerability in DHCP. This update provides\nthe corresponding update for Ubuntu 14.04 ESM and 16.04 ESM. \n\n\nOriginal advisory details:\n\n Jon Franklin and Pawel Wieczorkiewicz discovered that DHCP incorrectly\n handled lease file parsing. 
A remote attacker could possibly use this issue\n to cause DHCP to crash, resulting in a denial of service. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n  isc-dhcp-client                 4.3.3-5ubuntu12.10+esm1\n  isc-dhcp-server                 4.3.3-5ubuntu12.10+esm1\n\nUbuntu 14.04 ESM:\n  isc-dhcp-client                 4.2.4-7ubuntu12.13+esm1\n  isc-dhcp-server                 4.2.4-7ubuntu12.13+esm1\n\nIn general, a standard system update will make all the necessary changes. 7.7) - ppc64, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: dhcp security update\nAdvisory ID:       RHSA-2021:2357-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:2357\nIssue date:        2021-06-09\nCVE Names:         CVE-2021-25217\n====================================================================\n1. Summary:\n\nAn update for dhcp is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 
7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe Dynamic Host Configuration Protocol (DHCP) is a protocol that allows\nindividual devices on an IP network to get their own network configuration\ninformation, including an IP address, a subnet mask, and a broadcast\naddress. The dhcp packages provide a relay agent and ISC DHCP service\nrequired to enable and administer DHCP on a network. \n\nSecurity Fix(es):\n\n* dhcp: stack-based buffer overflow when parsing statements with\ncolon-separated hex digits in config or lease files in dhcpd and dhclient\n(CVE-2021-25217)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963258 - CVE-2021-25217 dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\ndhcp-4.2.5-83.el7_9.1.src.rpm\n\nx86_64:\ndhclient-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-common-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-libs-4.2.5-83.el7_9.1.i686.rpm\ndhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\ndhcp-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-devel-4.2.5-83.el7_9.1.i686.rpm\ndhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\ndhcp-4.2.5-83.el7_9.1.src.rpm\n\nx86_64:\ndhclient-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-common-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-libs-4.2.5-83.el7_9.1.i686.rpm\ndhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\ndhcp-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-devel-4.2.5-83.el7_9.1.i686.rpm\ndhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\ndhcp-4.2.5-83.el7_9.1.src.rpm\n\nppc64:\ndhclient-4.2.5-83.el7_9.1.ppc64.rpm\ndhcp-4.2.5-83.el7_9.1.ppc64.rpm\ndhcp-common-4.2.5-83.el7_9.1.ppc64.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.ppc.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.ppc64.rpm\ndhcp-libs-4.2.5-83.el7_9.1.ppc.rpm\ndhcp-libs-4.2.5-83.el7_9.1.ppc64.rpm\n\nppc64le:\ndhclient-4.2.5-83.el7_9.1.ppc64le.rpm\ndhcp-4.2.5-83.el7_9.1.ppc64le.rpm\ndhcp-common-4.2.5-83.el7_9.1.ppc64le.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.ppc64le.rpm\ndhcp-libs-4.2.5-83.el7_9.1.ppc64le.rpm\n\ns390x:\ndhclient-4.2.5-83.el7_9.1.s390x.rpm\ndhcp-4.2.5-83.el7_9.1.s390x.rpm\ndhcp-common-4.2.5-83.el7_9.1.s390x.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.s390.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.s390x.rpm\ndhcp-libs-4.2.5-83.el7_9.1.s390.rpm\ndhcp-libs-4.2.5-83.el7_9.1.s390x.rpm\n\nx86_64:\ndhclient-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-common-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-libs-4.2.5-83.el7_9.1.i686.rpm\ndhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\ndhcp-debuginfo-4.2.5-83.el7_9.1.ppc.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.ppc64.rpm\ndhcp-devel-4.2.5-83.el7_9.1.ppc.rpm\ndhcp-devel-4.2.5-83.el7_9.1.ppc64.rpm\n\nppc64le:\ndhcp-debuginfo-4.2.5-83.el7_9.1.ppc64le.rpm\ndhcp-devel-4.2.5-83.el7_9.1.ppc64le.rpm\n\ns390x:\ndhcp-debuginfo-4.2.5-83.el7_9.1.s390.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.s390x.rpm\ndhcp-devel-4.2.5-83.el7_9.1.s390.rpm\ndhcp-devel-4.2.5-83.el7_9.1.s390x.rpm\n\nx86_64:\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-devel-4.2.5-83.el7_9.1.i686.rpm\ndhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\ndhcp-4.2.5-83.el7_9.1.src.rpm\n\nx86_64:\ndhclient-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-common-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-libs-4.2.5-83.el7_9.1.i686.rpm\ndhcp-libs-4.2.5-83.el7_9.1.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\ndhcp-debuginfo-4.2.5-83.el7_9.1.i686.rpm\ndhcp-debuginfo-4.2.5-83.el7_9.1.x86_64.rpm\ndhcp-devel-4.2.5-83.el7_9.1.i686.rpm\ndhcp-devel-4.2.5-83.el7_9.1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25217\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYMCeytzjgjWX9erEAQgPYw/+K6NTT5tvNy0WHRy46UioFuzIbxlMOPzm\nzXmk61B2Dgod7DCU3EbF9u7nSViaQds11pDCrTejH70WrqNQSaWMhsASgtNmQ42q\n0oVWQwqyB8mP/73BwYJQ84eZDGwsyqQf/9MO96g4c0jlZOAu9vSxvSflQ4DY8m9L\n0+pk3/zHOsUz3Za7Ns/1wa8pmq3hxAt0z6Z6ri0Ka8CEHg7W7ELGC67ih1BOcpP5\nmdWOSfTW+F1EzmerDW0eom09R/Ndfo/FdGeCbEq1K6kvcrPy4e/tsyBCquPYPFar\naTADxJPMObDTY0dJhqw1qZ5cERLnhJaj8GzWc0Ne2KIAFig/NcVhEZL8RtvrNWhO\nJIaVZ7zK6bi1VASVVIAP8yQzwdZFEbfMREOa705gMvXMz1Ux08YvsbrelD/LeJXe\n45C2+zGvM7KDd/AlrhopZPbBJI07tbNe8qWzFggJtBTMVg28i5K7DjFjvASFZFrV\n8nKdWae1GOEtH23fygGOoW4m0KkGWd1Tc/lte6Wy788KOa/yF3IQkWeTSo5KG33Q\nUHCzx6NzHyeAgW7K9QvvpIjfbxIAyBbebsIkhOhySjfsAp28lKkaZZRVF/sNWIvG\nGRibEMi366KUTR5AiTMAjHoYgIDzp7nywWiYBhf9SuNgqV3kG0Yz7fd1ac0+qcH5\nzPKanVJNoQs=9+pl\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202305-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: ISC DHCP: Multiple Vulnerabilities\n     Date: May 03, 2023\n     Bugs: #875521, #792324\n       ID: 202305-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC DHCP, the worst of\nwhich could result in denial of service. 
\n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-misc/dhcp              \u003c 4.4.3_p1                \u003e= 4.4.3_p1\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC DHCP. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll ISC DHCP users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-misc/dhcp-4.4.3_p1\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25217\n      https://nvd.nist.gov/vuln/detail/CVE-2021-25217\n[ 2 ] CVE-2022-2928\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2928\n[ 3 ] CVE-2022-2929\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2929\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202305-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2023 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-25217"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "163196"
      },
      {
        "db": "PACKETSTORM",
        "id": "163155"
      },
      {
        "db": "PACKETSTORM",
        "id": "163240"
      },
      {
        "db": "PACKETSTORM",
        "id": "163400"
      },
      {
        "db": "PACKETSTORM",
        "id": "162840"
      },
      {
        "db": "PACKETSTORM",
        "id": "162841"
      },
      {
        "db": "PACKETSTORM",
        "id": "163129"
      },
      {
        "db": "PACKETSTORM",
        "id": "163137"
      },
      {
        "db": "PACKETSTORM",
        "id": "163051"
      },
      {
        "db": "PACKETSTORM",
        "id": "163052"
      },
      {
        "db": "PACKETSTORM",
        "id": "172130"
      }
    ],
    "trust": 2.07
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-25217",
        "trust": 2.3
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-406691",
        "trust": 1.1
      },
      {
        "db": "OPENWALL",
        "id": "OSS-SECURITY/2021/05/26/6",
        "trust": 1.1
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-25217",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163789",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163196",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163155",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163240",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163400",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162840",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162841",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163129",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163137",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163051",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163052",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "172130",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25217"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "163196"
      },
      {
        "db": "PACKETSTORM",
        "id": "163155"
      },
      {
        "db": "PACKETSTORM",
        "id": "163240"
      },
      {
        "db": "PACKETSTORM",
        "id": "163400"
      },
      {
        "db": "PACKETSTORM",
        "id": "162840"
      },
      {
        "db": "PACKETSTORM",
        "id": "162841"
      },
      {
        "db": "PACKETSTORM",
        "id": "163129"
      },
      {
        "db": "PACKETSTORM",
        "id": "163137"
      },
      {
        "db": "PACKETSTORM",
        "id": "163051"
      },
      {
        "db": "PACKETSTORM",
        "id": "163052"
      },
      {
        "db": "PACKETSTORM",
        "id": "172130"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "id": "VAR-202105-1325",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.366531175
  },
  "last_update_date": "2024-11-29T21:52:01.308000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Debian CVElist Bug Report Logs: isc-dhcp: CVE-2021-25217: A buffer overrun in lease file parsing code can be used to exploit a common vulnerability shared by dhcpd and dhclient",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=b55bb445f71f0d88702845d3582e2b5c"
      },
      {
        "title": "Amazon Linux AMI: ALAS-2021-1510",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2021-1510"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2021-1654",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2021-1654"
      },
      {
        "title": "Red Hat: CVE-2021-25217",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2021-25217"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-25217 log"
      },
      {
        "title": "Palo Alto Networks Security Advisory: PAN-SA-2024-0001 Informational Bulletin: Impact of OSS CVEs in PAN-OS",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=palo_alto_networks_security_advisory\u0026qid=34f98e4f4344c97599fe2d33618956a7"
      },
      {
        "title": "Completion for lacework",
        "trust": 0.1,
        "url": "https://github.com/fbreton/lacework "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-119",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.2,
        "url": "https://security.gentoo.org/glsa/202305-22"
      },
      {
        "trust": 1.1,
        "url": "https://kb.isc.org/docs/cve-2021-25217"
      },
      {
        "trust": 1.1,
        "url": "http://www.openwall.com/lists/oss-security/2021/05/26/6"
      },
      {
        "trust": 1.1,
        "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00002.html"
      },
      {
        "trust": 1.1,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-406691.pdf"
      },
      {
        "trust": 1.1,
        "url": "https://security.netapp.com/advisory/ntap-20220325-0011/"
      },
      {
        "trust": 1.1,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/z2lb42jwiv4m4wdnxx5vgip26feywkif/"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5qi4dyc7j4bghew3nh4xhmwthyc36uk4/"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25217"
      },
      {
        "trust": 0.9,
        "url": "https://access.redhat.com/security/cve/cve-2021-25217"
      },
      {
        "trust": 0.9,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.9,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.9,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3560"
      },
      {
        "trust": 0.2,
        "url": "https://ubuntu.com/security/notices/usn-4969-1"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/119.html"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=989157"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/alas-2021-1510.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25039"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25712"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23240"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9951"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25037"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23239"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36242"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25037"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9948"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28935"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25034"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25035"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14866"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25038"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14345"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14866"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26137"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13543"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14360"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13584"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25042"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20201"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25042"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25038"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25041"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3119"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25036"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9983"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25036"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14344"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25035"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14345"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14344"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28211"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25039"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25040"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25041"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25034"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2420"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24489"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24489"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2519"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2554"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2555"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/isc-dhcp/4.3.5-3ubuntu7.3"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/isc-dhcp/4.4.1-2.1ubuntu5.20.04.2"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/isc-dhcp/4.4.1-2.2ubuntu6.1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/isc-dhcp/4.4.1-2.1ubuntu10.1"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-4969-2"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2405"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2418"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2357"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2359"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2929"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2928"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25217"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "163196"
      },
      {
        "db": "PACKETSTORM",
        "id": "163155"
      },
      {
        "db": "PACKETSTORM",
        "id": "163240"
      },
      {
        "db": "PACKETSTORM",
        "id": "163400"
      },
      {
        "db": "PACKETSTORM",
        "id": "162840"
      },
      {
        "db": "PACKETSTORM",
        "id": "162841"
      },
      {
        "db": "PACKETSTORM",
        "id": "163129"
      },
      {
        "db": "PACKETSTORM",
        "id": "163137"
      },
      {
        "db": "PACKETSTORM",
        "id": "163051"
      },
      {
        "db": "PACKETSTORM",
        "id": "163052"
      },
      {
        "db": "PACKETSTORM",
        "id": "172130"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25217"
      },
      {
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "db": "PACKETSTORM",
        "id": "163196"
      },
      {
        "db": "PACKETSTORM",
        "id": "163155"
      },
      {
        "db": "PACKETSTORM",
        "id": "163240"
      },
      {
        "db": "PACKETSTORM",
        "id": "163400"
      },
      {
        "db": "PACKETSTORM",
        "id": "162840"
      },
      {
        "db": "PACKETSTORM",
        "id": "162841"
      },
      {
        "db": "PACKETSTORM",
        "id": "163129"
      },
      {
        "db": "PACKETSTORM",
        "id": "163137"
      },
      {
        "db": "PACKETSTORM",
        "id": "163051"
      },
      {
        "db": "PACKETSTORM",
        "id": "163052"
      },
      {
        "db": "PACKETSTORM",
        "id": "172130"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-05-26T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-25217"
      },
      {
        "date": "2021-08-11T16:15:17",
        "db": "PACKETSTORM",
        "id": "163789"
      },
      {
        "date": "2021-06-17T18:09:00",
        "db": "PACKETSTORM",
        "id": "163196"
      },
      {
        "date": "2021-06-15T15:18:36",
        "db": "PACKETSTORM",
        "id": "163155"
      },
      {
        "date": "2021-06-22T19:32:24",
        "db": "PACKETSTORM",
        "id": "163240"
      },
      {
        "date": "2021-07-06T15:19:09",
        "db": "PACKETSTORM",
        "id": "163400"
      },
      {
        "date": "2021-05-27T13:30:32",
        "db": "PACKETSTORM",
        "id": "162840"
      },
      {
        "date": "2021-05-27T13:30:42",
        "db": "PACKETSTORM",
        "id": "162841"
      },
      {
        "date": "2021-06-14T15:49:07",
        "db": "PACKETSTORM",
        "id": "163129"
      },
      {
        "date": "2021-06-15T14:41:42",
        "db": "PACKETSTORM",
        "id": "163137"
      },
      {
        "date": "2021-06-09T13:43:37",
        "db": "PACKETSTORM",
        "id": "163051"
      },
      {
        "date": "2021-06-09T13:43:47",
        "db": "PACKETSTORM",
        "id": "163052"
      },
      {
        "date": "2023-05-03T15:37:18",
        "db": "PACKETSTORM",
        "id": "172130"
      },
      {
        "date": "2021-05-26T22:15:07.947000",
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-25217"
      },
      {
        "date": "2023-11-07T03:31:24.893000",
        "db": "NVD",
        "id": "CVE-2021-25217"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "162840"
      },
      {
        "db": "PACKETSTORM",
        "id": "162841"
      }
    ],
    "trust": 0.2
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat Security Advisory 2021-3119-01",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163789"
      }
    ],
    "trust": 0.1
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "overflow",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163196"
      },
      {
        "db": "PACKETSTORM",
        "id": "163155"
      },
      {
        "db": "PACKETSTORM",
        "id": "163240"
      },
      {
        "db": "PACKETSTORM",
        "id": "163400"
      },
      {
        "db": "PACKETSTORM",
        "id": "163129"
      },
      {
        "db": "PACKETSTORM",
        "id": "163137"
      },
      {
        "db": "PACKETSTORM",
        "id": "163051"
      },
      {
        "db": "PACKETSTORM",
        "id": "163052"
      }
    ],
    "trust": 0.8
  }
}

var-202207-0107
Vulnerability from variot

AES OCB mode for 32-bit x86 platforms using the AES-NI assembly optimised implementation will not encrypt the entirety of the data under some circumstances. This could reveal sixteen bytes of data that was preexisting in the memory that wasn't written. In the special case of "in place" encryption, sixteen bytes of the plaintext would be revealed. Since OpenSSL does not support OCB based cipher suites for TLS and DTLS, they are both unaffected. Fixed in OpenSSL 3.0.5 (Affected 3.0.0-3.0.4). Fixed in OpenSSL 1.1.1q (Affected 1.1.1-1.1.1p). When CVE-2022-1292 was fixed, it was not discovered that there are other places in the c_rehash script where the file names of certificates being hashed were possibly passed to a command executed through the shell. Some operating systems distribute this script in a manner where it is automatically executed. On these operating systems, this flaw allows a malicious user to execute arbitrary commands with the privileges of the script. (CVE-2022-2097). Summary:

Submariner 0.13 packages that fix security issues and bugs, as well as adds various enhancements that are now available for Red Hat Advanced Cluster Management for Kubernetes version 2.6. Description:

Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.

For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.

This advisory contains bug fixes and enhancements to the Submariner container images. Description:

Red Hat OpenShift Service Mesh is Red Hat's distribution of the Istio service mesh project, tailored for installation into an OpenShift Container Platform installation.

This advisory covers the RPM packages for the release. Solution:

The OpenShift Service Mesh Release Notes provide information on the features and known issues:

https://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html

  1. JIRA issues fixed (https://issues.jboss.org/):

OSSM-1105 - IOR doesn't support a host with namespace/ prefix
OSSM-1205 - Specifying logging parameter will make istio-ingressgateway and istio-egressgateway failed to start
OSSM-1668 - [Regression] jwksResolverCA field in SMCP is missing
OSSM-1718 - Istio Operator pauses reconciliation when gateway deployed to non-control plane namespace
OSSM-1775 - [Regression] Incorrect 3scale image specified for 2.0 control planes
OSSM-1800 - IOR should copy labels from Gateway to Route
OSSM-1805 - Reconcile SMCP when Kiali is not available
OSSM-1846 - SMCP fails to reconcile when enabling PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER
OSSM-1868 - Container release for Maistra 2.2.2

  1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Important: Node Maintenance Operator 4.11.1 security update Advisory ID: RHSA-2022:6188-01 Product: RHWA Advisory URL: https://access.redhat.com/errata/RHSA-2022:6188 Issue date: 2022-08-25 CVE Names: CVE-2022-1292 CVE-2022-1586 CVE-2022-1705 CVE-2022-1962 CVE-2022-2068 CVE-2022-2097 CVE-2022-28131 CVE-2022-30630 CVE-2022-30631 CVE-2022-30632 CVE-2022-30633 CVE-2022-32148 =====================================================================

  1. Summary:

An update for node-maintenance-must-gather-container, node-maintenance-operator-bundle-container, and node-maintenance-operator-container is now available for Node Maintenance Operator 4.11 for RHEL 8. This Operator is delivered by Red Hat Workload Availability.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

This is an updated release of the Node Maintenance Operator. The Node Maintenance Operator cordons off nodes from the rest of the cluster and drains all the pods from the nodes. By placing nodes under maintenance, administrators can proactively power down nodes, move workloads to other parts of the cluster, and ensure that workloads do not get interrupted.

Security Fix(es):

  • golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)

  • golang: net/http: improper sanitization of Transfer-Encoding header (CVE-2022-1705)

  • golang: go/parser: stack exhaustion in all Parse* functions (CVE-2022-1962)

  • golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)

  • golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)

  • golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)

  • golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)

  • golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working (CVE-2022-32148)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, see the CVE page(s) listed in the References section.
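The stack-exhaustion flaws listed above share one pattern: a recursive-descent parser consumes a stack frame for every nesting level of attacker-controlled input, so unbounded nesting becomes a denial of service. A minimal, illustrative sketch of the class and the usual depth-cap mitigation (the toy parser and the limit are hypothetical, not the Go standard-library code):

```python
# Illustrative recursive parser over balanced parentheses. Without the
# depth cap, input like "(" * 100_000 would exhaust the call stack --
# the same failure mode as the encoding/xml and go/parser CVEs above.
MAX_DEPTH = 64  # hypothetical limit; real parsers pick a safe bound

def parse_nested(s, i=0, depth=0):
    """Walk balanced parentheses, refusing pathological nesting."""
    if depth > MAX_DEPTH:
        raise ValueError("input nesting exceeds limit")
    while i < len(s):
        if s[i] == "(":
            i = parse_nested(s, i + 1, depth + 1)
        elif s[i] == ")":
            return i + 1
        else:
            i += 1
    return i

parse_nested("(a(b))")            # well-formed, shallow: accepted
try:
    parse_nested("(" * 10_000)    # rejected instead of crashing
except ValueError as err:
    print("rejected:", err)
```

The upstream Go fixes take the same shape: bound the recursion depth and fail cleanly rather than letting input size dictate stack usage.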

  1. Solution:

For details on how to apply this update, which includes the changes described in this advisory, see:

https://access.redhat.com/articles/11258

  1. Bugs fixed (https://bugzilla.redhat.com/):

2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal

  1. References:

https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1705
https://access.redhat.com/security/cve/CVE-2022-1962
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-28131
https://access.redhat.com/security/cve/CVE-2022-30630
https://access.redhat.com/security/cve/CVE-2022-30631
https://access.redhat.com/security/cve/CVE-2022-30632
https://access.redhat.com/security/cve/CVE-2022-30633
https://access.redhat.com/security/cve/CVE-2022-32148
https://access.redhat.com/security/updates/classification/#important

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:

OpenShift sandboxed containers support for OpenShift Container Platform provides users with built-in support for running Kata containers as an additional, optional runtime.

Space precludes documenting all of the updates to OpenShift sandboxed containers in this advisory. Description:

OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes. Bugs fixed (https://bugzilla.redhat.com/):

2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read

  1. JIRA issues fixed (https://issues.jboss.org/):

OADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig
OADP-154 - Ensure support for backing up resources based on different label selectors
OADP-194 - Remove the registry dependency from OADP
OADP-199 - Enable support for restore of existing resources
OADP-224 - Restore silently ignore resources if they exist - restore log not updated
OADP-225 - Restore doesn't update velero.io/backup-name when a resource is updated
OADP-234 - Implementation of incremental restore
OADP-324 - Add label to Expired backups failing garbage collection
OADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases
OADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it's unable to find the zone
OADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete
OADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot
OADP-528 - The volumesnapshotcontent is not removed for the synced backup
OADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10
OADP-538 - typo on noDefaultBackupLocation error on DPA CR
OADP-552 - Validate OADP with 4.11 and Pod Security Admissions
OADP-558 - Empty Failed Backup CRs can't be removed
OADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version
OADP-586 - registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly
OADP-592 - OADP must-gather add support for insecure tls
OADP-597 - BSL validation logs
OADP-598 - Data mover performance on backup blocks backup process
OADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl
OADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled
OADP-602 - Support GCP for openshift-velero-plugin registry
OADP-605 - [OCP 4.11] CSI restore fails with admission webhook "volumesnapshotclasses.snapshot.storage.k8s.io" denied
OADP-607 - DataMover: VSB is stuck on SnapshotBackupDone
OADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace
OADP-613 - DataMover: upstream documentation refers wrong CRs
OADP-637 - Restic backup fails with CA certificate
OADP-643 - [Data Mover] VSB and VSR names are not unique
OADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable
OADP-648 - Remove default limits for velero and restic pods
OADP-652 - Data mover VolSync pod errors with Noobaa
OADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace
OADP-660 - Data mover restic secret does not support Azure
OADP-698 - DataMover: volume-snapshot-mover pod points to upstream image
OADP-715 - Restic restore fails: restic-wait container continuously fails with "Not found: /restores//.velero/"
OADP-716 - Incremental restore: second restore of a namespace partially fails
OADP-736 - Data mover VSB always fails with volsync 0.5

  1. ========================================================================== Ubuntu Security Notice USN-6457-1 October 30, 2023

nodejs vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 22.04 LTS

Summary:

Several security issues were fixed in Node.js.

Software Description: - nodejs: An open-source, cross-platform JavaScript runtime environment.

Details:

Tavis Ormandy discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to cause a denial of service. (CVE-2022-0778)

Elison Niven discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. (CVE-2022-1292)

Chancen and Daniel Fiala discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. (CVE-2022-2068)

Alex Chernyakhovsky discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. (CVE-2022-2097)
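The command-injection issues above (CVE-2022-1292, CVE-2022-2068) stem from the bundled c_rehash script interpolating certificate file names into a shell command line. A minimal sketch of the bug class, using a hypothetical attacker-chosen file name; the safe form builds an argument vector so no shell ever parses the name:

```python
import shlex

# Hypothetical certificate file name under attacker control. If this
# string were spliced into a shell command line, the backquoted part
# would execute with the privileges of the script.
malicious_name = "cert`touch /tmp/pwned`.pem"

# Unsafe pattern (the class of bug fixed in c_rehash): string
# concatenation hands the name to a shell for re-parsing.
unsafe_cmdline = "openssl x509 -hash -noout -in " + malicious_name

# Safe pattern: an argument list; shlex.join shows the name would stay
# a single, inert argument even if rendered as a shell string.
safe_cmd = ["openssl", "x509", "-hash", "-noout", "-in", malicious_name]
print(shlex.join(safe_cmd))
```

In practice the advisories recommend going one step further: drop the obsolete c_rehash script entirely and use the `openssl rehash` command line tool.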

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 22.04 LTS: libnode-dev 12.22.9~dfsg-1ubuntu3.1 libnode72 12.22.9~dfsg-1ubuntu3.1 nodejs 12.22.9~dfsg-1ubuntu3.1 nodejs-doc 12.22.9~dfsg-1ubuntu3.1

In general, a standard system update will make all the necessary changes. OpenSSL Security Advisory [5 July 2022]

Heap memory corruption with RSA private key operation (CVE-2022-2274)

Severity: High

The OpenSSL 3.0.4 release introduced a serious bug in the RSA implementation for X86_64 CPUs supporting the AVX512IFMA instructions. This issue makes the RSA implementation with 2048 bit private keys incorrect on such machines and memory corruption will happen during the computation. As a consequence of the memory corruption an attacker may be able to trigger a remote code execution on the machine performing the computation.

SSL/TLS servers or other servers using 2048 bit RSA private keys running on machines supporting AVX512IFMA instructions of the X86_64 architecture are affected by this issue.

Note that on a vulnerable machine, proper testing of OpenSSL would fail and should be noticed before deployment.

This issue was reported to OpenSSL on 22nd June 2022 by Xi Ruoyao. The fix was developed by Xi Ruoyao.

The AES OCB issue (CVE-2022-2097) described at the start of this entry affects versions 1.1.1 and 3.0. It was addressed in the releases of 1.1.1q and 3.0.5 on the 5th July 2022.

OpenSSL 1.1.1 users should upgrade to 1.1.1q OpenSSL 3.0 users should upgrade to 3.0.5
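To apply the upgrade guidance above mechanically, here is a small, hypothetical helper that classifies an OpenSSL version string against the affected ranges stated in this entry (1.1.1 through 1.1.1p and 3.0.0 through 3.0.4 affected; 1.1.1q and 3.0.5 fixed):

```python
def needs_update(version: str) -> bool:
    """True if `version` falls in the affected ranges named above."""
    if version.startswith("3.0."):
        # 3.0.0-3.0.4 affected, fixed in 3.0.5
        return int(version.split(".")[2]) < 5
    if version.startswith("1.1.1"):
        # 1.1.1 through 1.1.1p affected, fixed in 1.1.1q; the letter
        # suffix sorts lexicographically ("" < "p" < "q")
        return version[len("1.1.1"):] < "q"
    return False  # other branches are out of scope for this advisory

print(needs_update("3.0.4"))   # True  - upgrade to 3.0.5
print(needs_update("1.1.1q"))  # False - already fixed
```

This only encodes the ranges quoted in the advisory text; it is not a general OpenSSL version parser.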

This issue was reported to OpenSSL on the 15th June 2022 by Alex Chernyakhovsky from Google. The fix was developed by Alex Chernyakhovsky, David Benjamin and Alejandro Sedeño from Google.

References

URL for this Security Advisory: https://www.openssl.org/news/secadv/20220705.txt

Note: the online version of the advisory may be updated with additional details over time.

For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html . Bugs fixed (https://bugzilla.redhat.com/):

2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects 2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS 2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays

  1. JIRA issues fixed (https://issues.jboss.org/):

LOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node

Clusters and applications are all visible and managed from a single console, with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/

Security fixes:

  • moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
  • vm2: Sandbox Escape in vm2 (CVE-2022-36067)

Bug fixes:

  • Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters (BZ# 2074547)

  • OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constrain (BZ# 2082254)

  • subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec (BZ# 2083659)

  • Yaml editor for creating vSphere cluster moves to next line after typing (BZ# 2086883)

  • Submariner addon status doesn't track all deployment failures (BZ# 2090311)

  • Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret (BZ# 2091170)

  • After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors (BZ# 2095481)

  • Enforce failed and report the violation after modified memory value in limitrange policy (BZ# 2100036)

  • Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)" (BZ# 2101577)

  • Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies (BZ# 2102273)

  • managed cluster is in "unknown" state for 120 mins after OADP restore (BZ# 2103653)

  • RHACM 2.5.2 images (BZ# 2104553)

  • Subscription UI does not allow binding to label with empty value (BZ# 2104961)

  • Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ# 2106069)

  • Region information is not available for Azure cloud in managedcluster CR (BZ# 2107134)

  • cluster uninstall log points to incorrect container name (BZ# 2107359)

  • ACM shows wrong path for Argo CD applicationset git generator (BZ# 2107885)

  • Single node checkbox not visible for 4.11 images (BZ# 2109134)

  • Unable to deploy hypershift cluster when enabling validate-cluster-security (BZ# 2109544)

  • Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application (BZ# 2110026)

  • After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating (BZ# 2117728)

  • pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)

  • ArgoCD and AppSet Applications do not deploy to local-cluster (BZ# 2124707)

Bugs fixed (https://bugzilla.redhat.com/):

2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters
2082254 - OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint
2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec
2086883 - Yaml editor for creating vSphere cluster moves to next line after typing
2090311 - Submariner addon status doesn't track all deployment failures
2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret
2095481 - After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors
2100036 - Enforce failed and report the violation after modified memory value in limitrange policy
2101577 - Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)"
2102273 - Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies
2103653 - managed cluster is in "unknown" state for 120 mins after OADP restore
2104553 - RHACM 2.5.2 images
2104961 - Subscription UI does not allow binding to label with empty value
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD
2107134 - Region information is not available for Azure cloud in managedcluster CR
2107359 - cluster uninstall log points to incorrect container name
2107885 - ACM shows wrong path for Argo CD applicationset git generator
2109134 - Single node checkbox not visible for 4.11 images
2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application
2117728 - After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating
2122292 - pods in CrashLoopBackoff on 3.11 managed cluster
2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster
2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0107",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "active iq unified manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "clustered data ontap antivirus connector",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1q"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "3.0.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "3.0.5"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168265"
      },
      {
        "db": "PACKETSTORM",
        "id": "168222"
      },
      {
        "db": "PACKETSTORM",
        "id": "168351"
      },
      {
        "db": "PACKETSTORM",
        "id": "168187"
      },
      {
        "db": "PACKETSTORM",
        "id": "169443"
      },
      {
        "db": "PACKETSTORM",
        "id": "168228"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168378"
      }
    ],
    "trust": 0.8
  },
  "cve": "CVE-2022-2097",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.0,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CVE-2022-2097",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-2097",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-2097",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-2097",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2097"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "AES OCB mode for 32-bit x86 platforms using the AES-NI assembly optimised implementation will not encrypt the entirety of the data under some circumstances. This could reveal sixteen bytes of data that was preexisting in the memory that wasn\u0027t written. In the special case of \"in place\" encryption, sixteen bytes of the plaintext would be revealed. Since OpenSSL does not support OCB based cipher suites for TLS and DTLS, they are both unaffected. Fixed in OpenSSL 3.0.5 (Affected 3.0.0-3.0.4). Fixed in OpenSSL 1.1.1q (Affected 1.1.1-1.1.1p). The issue in CVE-2022-1292 did not find other places in the `c_rehash` script where it possibly passed the file names of certificates being hashed to a command executed through the shell. Some operating systems distribute this script in a manner where it is automatically executed. On these operating systems, this flaw allows an malicious user to execute arbitrary commands with the privileges of the script. (CVE-2022-2097). Summary:\n\nSubmariner 0.13 packages that fix security issues and bugs, as well as adds\nvarious enhancements that are now available for Red Hat Advanced Cluster\nManagement for Kubernetes version 2.6. Description:\n\nSubmariner enables direct networking between pods and services on different\nKubernetes clusters that are either on-premises or in the cloud. \n\nFor more information about Submariner, see the Submariner open source\ncommunity website at: https://submariner.io/. \n\nThis advisory contains bug fixes and enhancements to the Submariner\ncontainer images. Description:\n\nRed Hat OpenShift Service Mesh is Red Hat\u0027s distribution of the Istio\nservice mesh project, tailored for installation into an OpenShift Container\nPlatform installation. \n\nThis advisory covers the RPM packages for the release. 
Solution:\n\nThe OpenShift Service Mesh Release Notes provide information on the\nfeatures and known issues:\n\nhttps://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html\n\n4. JIRA issues fixed (https://issues.jboss.org/):\n\nOSSM-1105 - IOR doesn\u0027t support a host with namespace/ prefix\nOSSM-1205 - Specifying logging parameter will make istio-ingressgateway and istio-egressgateway failed to start\nOSSM-1668 - [Regression] jwksResolverCA field in SMCP is missing\nOSSM-1718 - Istio Operator pauses reconciliation when gateway deployed to non-control plane namespace\nOSSM-1775 - [Regression] Incorrect 3scale image specified for 2.0 control planes\nOSSM-1800 - IOR should copy labels from Gateway to Route\nOSSM-1805 - Reconcile SMCP when Kiali is not available\nOSSM-1846 - SMCP fails to reconcile when enabling PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER\nOSSM-1868 - Container release for Maistra 2.2.2\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Important: Node Maintenance Operator 4.11.1 security update\nAdvisory ID:       RHSA-2022:6188-01\nProduct:           RHWA\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:6188\nIssue date:        2022-08-25\nCVE Names:         CVE-2022-1292 CVE-2022-1586 CVE-2022-1705 \n                   CVE-2022-1962 CVE-2022-2068 CVE-2022-2097 \n                   CVE-2022-28131 CVE-2022-30630 CVE-2022-30631 \n                   CVE-2022-30632 CVE-2022-30633 CVE-2022-32148 \n=====================================================================\n\n1. Summary:\n\nAn update for node-maintenance-must-gather-container,\nnode-maintenance-operator-bundle-container, and\nnode-maintenance-operator-container is now available for Node Maintenance\nOperator 4.11 for RHEL 8. This Operator is delivered by Red Hat Workload\nAvailability. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nThis is an updated release of the Node Maintenance Operator. The Node\nMaintenance Operator cordons off nodes from the rest of the cluster and\ndrains all the pods from the nodes. By placing nodes under maintenance,\nadministrators can proactively power down nodes, move workloads to other\nparts of the cluster, and ensure that workloads do not get interrupted. \n\nSecurity Fix(es):\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\n* golang: net/http: improper sanitization of Transfer-Encoding header\n(CVE-2022-1705)\n\n* golang: go/parser: stack exhaustion in all Parse* functions\n(CVE-2022-1962)\n\n* golang: encoding/xml: stack exhaustion in Decoder.Skip (CVE-2022-28131)\n\n* golang: io/fs: stack exhaustion in Glob (CVE-2022-30630)\n\n* golang: path/filepath: stack exhaustion in Glob (CVE-2022-30632)\n\n* golang: encoding/xml: stack exhaustion in Unmarshal (CVE-2022-30633)\n\n* golang: net/http/httputil: NewSingleHostReverseProxy - omit\nX-Forwarded-For not working (CVE-2022-32148)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, see the CVE page(s)\nlisted in the References section. \n\n3. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, see:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1705\nhttps://access.redhat.com/security/cve/CVE-2022-1962\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-28131\nhttps://access.redhat.com/security/cve/CVE-2022-30630\nhttps://access.redhat.com/security/cve/CVE-2022-30631\nhttps://access.redhat.com/security/cve/CVE-2022-30632\nhttps://access.redhat.com/security/cve/CVE-2022-30633\nhttps://access.redhat.com/security/cve/CVE-2022-32148\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwe6gdzjgjWX9erEAQiS+g/+IhfsKqfRH2EJsNNn/WFyeLJxogITZN4l\nW5egpFt9cNMXkx9RsKZR287l9vrT7BkhLsNRKkzWsYg1RMPEa36ko5Xf1sGchLHt\nmMLJ26mnolPtSVseJgdeczeaMZYo6xvSzx1lmV6MKJJZBAjkhddewYlbijSz8znf\n8T+yEG0kMNYzI0Mj8pLb6fldYyYVdKfLwFCXpqA9YxDAN38RtrJiF15R0MD8rhYT\nFsIpnthidpK6cKpHHkeOB3R7wN7Opjz92mEzwedFpTJT/gfIiOtCbgpCEjq4Ry3u\nrMn4ziM9CknQtk4KMjiJm3/Rv+8osFpWYLsitg4+t0DERCDMhTSCyhPoG6XD1EVH\n2T0sY5ZhvH1C9Y0fhCKyx7aJ/0iNsGB/uYgPCo9rkuTpvtfaDdsxhPOZ5kQO6+sN\na21rS1HtWXqXSWBaEQIJRat0HGsSowVsOa9YZc5eXPvedWiCzBCag6Fuqa6ht1wI\n+0EKC3O+2G7tk2wbm2mvueQmke9v93aA1ucNCWOE5V4GbbRy6yJKsQbVCErTa+YR\nOH/R8n6fBZdTcgPZvKng90Mg94Tkf6fauyTiwkESMiIR3qCMf4M7rC+jQMZnss9v\n+4XElYdV1K1f9S7TJ+YpueoXBJaPi+ASbLqAzPey712GAyo/LKIyzJQXkgVlMRF6\nCAU70Y4WQpQ=\n=+GUz\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nOpenShift sandboxed containers support for OpenShift Container Platform\nprovides users with built-in support for running Kata containers as an\nadditional, optional runtime. \n\nSpace precludes documenting all of the updates to OpenShift sandboxed\ncontainers in this advisory. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig\nOADP-154 - Ensure support for backing up resources based on different label selectors\nOADP-194 - Remove the registry dependency from OADP\nOADP-199 - Enable support for restore of existing resources\nOADP-224 - Restore silently ignore resources if they exist - restore log not updated\nOADP-225 - Restore doesn\u0027t update velero.io/backup-name when a resource is updated\nOADP-234 - Implementation of incremental restore\nOADP-324 - Add label to Expired backups failing garbage collection\nOADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases\nOADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it\u0027s unable to find the zone\nOADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete\nOADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot\nOADP-528 - The volumesnapshotcontent is not removed for the synced backup\nOADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10\nOADP-538 - typo on noDefaultBackupLocation error on DPA CR\nOADP-552 - Validate OADP with 4.11 and Pod Security Admissions\nOADP-558 - Empty Failed Backup CRs can\u0027t be removed\nOADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version\nOADP-586 - 
registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly\nOADP-592 - OADP must-gather add support for insecure tls\nOADP-597 - BSL validation logs\nOADP-598 - Data mover performance on backup blocks backup process\nOADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl\nOADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled\nOADP-602 - Support GCP for openshift-velero-plugin registry\nOADP-605 - [OCP 4.11] CSI restore fails with admission webhook \\\"volumesnapshotclasses.snapshot.storage.k8s.io\\\" denied\nOADP-607 - DataMover: VSB is stuck on SnapshotBackupDone\nOADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace\nOADP-613 - DataMover: upstream documentation refers wrong CRs\nOADP-637 - Restic backup fails with CA certificate\nOADP-643 - [Data Mover] VSB and VSR names are not unique\nOADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable\nOADP-648 - Remove default limits for velero and restic pods\nOADP-652 - Data mover VolSync pod errors with Noobaa\nOADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace\nOADP-660 - Data mover restic secret does not support Azure\nOADP-698 - DataMover: volume-snapshot-mover pod points to upstream image\nOADP-715 - Restic restore fails: restic-wait container continuously fails with \"Not found: /restores/\u003cpod-volume\u003e/.velero/\u003crestore-UID\u003e\"\nOADP-716 - Incremental restore: second restore of a namespace partially fails\nOADP-736 - Data mover VSB always fails with volsync 0.5\n\n6. 
==========================================================================\nUbuntu Security Notice USN-6457-1\nOctober 30, 2023\n\nnodejs vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in Node.js. \n\nSoftware Description:\n- nodejs: An open-source, cross-platform JavaScript runtime environment. \n\nDetails:\n\nTavis Ormandy discovered that Node.js incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to cause a\ndenial of service. (CVE-2022-0778)\n\nElison Niven discovered that Node.js incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. (CVE-2022-1292)\n\nChancen and Daniel Fiala discovered that Node.js incorrectly handled certain\ninputs. If a user or an automated system were tricked into opening a specially\ncrafted input file, a remote attacker could possibly use this issue to execute\narbitrary code. (CVE-2022-2068)\n\nAlex Chernyakhovsky discovered that Node.js incorrectly handled certain\ninputs. If a user or an automated system were tricked into opening a specially\ncrafted input file, a remote attacker could possibly use this issue to execute\narbitrary code. 
(CVE-2022-2097)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n   libnode-dev                     12.22.9~dfsg-1ubuntu3.1\n   libnode72                       12.22.9~dfsg-1ubuntu3.1\n   nodejs                          12.22.9~dfsg-1ubuntu3.1\n   nodejs-doc                      12.22.9~dfsg-1ubuntu3.1\n\nIn general, a standard system update will make all the necessary changes. OpenSSL Security Advisory [5 July 2022]\n=======================================\n\nHeap memory corruption with RSA private key operation (CVE-2022-2274)\n=====================================================================\n\nSeverity: High\n\nThe OpenSSL 3.0.4 release introduced a serious bug in the RSA\nimplementation for X86_64 CPUs supporting the AVX512IFMA instructions. \nThis issue makes the RSA implementation with 2048 bit private keys\nincorrect on such machines and memory corruption will happen during\nthe computation. As a consequence of the memory corruption an attacker\nmay be able to trigger a remote code execution on the machine performing\nthe computation. \n\nSSL/TLS servers or other servers using 2048 bit RSA private keys running\non machines supporting AVX512IFMA instructions of the X86_64 architecture\nare affected by this issue. \n\nNote that on a vulnerable machine, proper testing of OpenSSL would fail and\nshould be noticed before deployment. \n\nThis issue was reported to OpenSSL on 22nd June 2022 by Xi Ruoyao. The\nfix was developed by Xi Ruoyao. \n\nThis issue affects versions 1.1.1 and 3.0.  It was addressed in the\nreleases of 1.1.1q and 3.0.5 on the 5th July 2022. \n\nOpenSSL 1.1.1 users should upgrade to 1.1.1q\nOpenSSL 3.0 users should upgrade to 3.0.5\n\nThis issue was reported to OpenSSL on the 15th June 2022 by Alex\nChernyakhovsky from Google. The fix was developed by Alex Chernyakhovsky,\nDavid Benjamin and Alejandro Sede\u00f1o from Google. 
\n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20220705.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. \n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS\n2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-3293 - log-file-metric-exporter container has not limits exhausting the resources of the node\n\n6. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes:\n\n* moment: inefficient parsing algorithim resulting in DoS (CVE-2022-31129)\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fixes:\n\n* Submariner Globalnet e2e tests failed on MTU between On-Prem to Public\nclusters (BZ# 2074547)\n\n* OCP 4.11 - Install fails because of: pods\n\"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate\nagainst any security context constrain (BZ# 2082254)\n\n* subctl gather fails to gather libreswan data if CableDriver field is\nmissing/empty in Submariner Spec (BZ# 2083659)\n\n* Yaml editor for creating vSphere cluster moves to next line after typing\n(BZ# 2086883)\n\n* Submariner addon status doesn\u0027t track all deployment failures (BZ#\n2090311)\n\n* Unable to deploy Hypershift operator on MCE hub using 
ManagedClusterAddOn\nwithout including s3 secret (BZ# 2091170)\n\n* After switching to ACM 2.5 the managed clusters log \"unable to create\nClusterClaim\" errors (BZ# 2095481)\n\n* Enforce failed and report the violation after modified memory value in\nlimitrange policy (BZ# 2100036)\n\n* Creating an application fails with \"This application has no subscription\nmatch selector (spec.selector.matchExpressions)\" (BZ# 2101577)\n\n* Inconsistent cluster resource statuses between \"All Subscription\"\ntopology and individual topologies (BZ# 2102273)\n\n* managed cluster is in \"unknown\" state for 120 mins after OADP restore\n\n* RHACM 2.5.2 images (BZ# 2104553)\n\n* Subscription UI does not allow binding to label with empty value (BZ#\n2104961)\n\n* Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ#\n2106069)\n\n* Region information is not available for Azure cloud in managedcluster CR\n(BZ# 2107134)\n\n* cluster uninstall log points to incorrect container name (BZ# 2107359)\n\n* ACM shows wrong path for Argo CD applicationset git generator (BZ#\n2107885)\n\n* Single node checkbox not visible for 4.11 images (BZ# 2109134)\n\n* Unable to deploy hypershift cluster when enabling\nvalidate-cluster-security (BZ# 2109544)\n\n* Deletion of Application (including app related resources) from the\nconsole fails to delete PlacementRule for the application (BZ# 20110026)\n\n* After the creation by a policy of job or deployment (in case the object\nis missing)ACM is trying to add new containers instead of updating (BZ#\n2117728)\n\n* pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)\n\n* ArgoCD and AppSet Applications do not deploy to local-cluster (BZ#\n2124707)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters\n2082254 - OCP 4.11 - Install fails because of: pods \"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate against any security context constraint\n2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec\n2086883 - Yaml editor for creating vSphere cluster moves to next line after typing\n2090311 - Submariner addon status doesn\u0027t track all deployment failures\n2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret\n2095481 - After switching to ACM 2.5 the managed clusters log \"unable to create ClusterClaim\" errors\n2100036 - Enforce failed and report the violation after modified memory value in limitrange policy\n2101577 - Creating an application fails with \"This application has no subscription match selector (spec.selector.matchExpressions)\"\n2102273 - Inconsistent cluster resource statuses between \"All Subscription\" topology and individual topologies\n2103653 - managed cluster is in \"unknown\" state for 120 mins after OADP restore\n2104553 - RHACM 2.5.2 images\n2104961 - Subscription UI does not allow binding to label with empty value\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD\n2107134 - Region information is not available for Azure cloud in managedcluster CR\n2107359 - cluster uninstall log points to incorrect container name\n2107885 - ACM shows wrong path for Argo CD applicationset git generator\n2109134 - Single node checkbox not visible for 4.11 images\n2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application\n2117728 - After the creation by a policy of job or deployment (in case the object is 
missing) ACM is trying to add new containers instead of updating\n2122292 - pods in CrashLoopBackoff on 3.11 managed cluster\n2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-2097"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-2097"
      },
      {
        "db": "PACKETSTORM",
        "id": "168265"
      },
      {
        "db": "PACKETSTORM",
        "id": "168222"
      },
      {
        "db": "PACKETSTORM",
        "id": "168351"
      },
      {
        "db": "PACKETSTORM",
        "id": "168187"
      },
      {
        "db": "PACKETSTORM",
        "id": "169443"
      },
      {
        "db": "PACKETSTORM",
        "id": "168228"
      },
      {
        "db": "PACKETSTORM",
        "id": "175432"
      },
      {
        "db": "PACKETSTORM",
        "id": "169666"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168378"
      }
    ],
    "trust": 1.89
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-2097",
        "trust": 2.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 1.1
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-017-03",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-2097",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168265",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168222",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168351",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168187",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169443",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168228",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "175432",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169666",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "170179",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168378",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2097"
      },
      {
        "db": "PACKETSTORM",
        "id": "168265"
      },
      {
        "db": "PACKETSTORM",
        "id": "168222"
      },
      {
        "db": "PACKETSTORM",
        "id": "168351"
      },
      {
        "db": "PACKETSTORM",
        "id": "168187"
      },
      {
        "db": "PACKETSTORM",
        "id": "169443"
      },
      {
        "db": "PACKETSTORM",
        "id": "168228"
      },
      {
        "db": "PACKETSTORM",
        "id": "175432"
      },
      {
        "db": "PACKETSTORM",
        "id": "169666"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168378"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "id": "VAR-202207-0107",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-29T21:02:33.755000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Amazon Linux 2: ALAS2-2023-1974",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-1974"
      },
      {
        "title": "Red Hat: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-2097"
      },
      {
        "title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0"
      },
      {
        "title": "Red Hat: Moderate: openssl security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225818 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: openssl security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226224 - Security Advisory"
      },
      {
        "title": "Debian Security Advisories: DSA-5343-1 openssl -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b6a11b827fe9cfaea9c113b2ad37856f"
      },
      {
        "title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226517 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226184 - Security Advisory"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-147",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-147"
      },
      {
        "title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226422 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226103 - Security Advisory"
      },
      {
        "title": "Brocade Security Advisories: Access Denied",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=38e06d13217149784c0941a3098b8989"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-195",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-195"
      },
      {
        "title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226188 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226182 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226051 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226283 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226183 - Security Advisory"
      },
      {
        "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226507 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227055 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227058 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226024 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226714 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226290 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226348 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226345 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226346 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226430 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226370 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226271 - Security Advisory"
      },
      {
        "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226696 - Security Advisory"
      },
      {
        "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Command Suite, Hitachi Automation Director, Hitachi Configuration Manager, Hitachi Infrastructure Analytics Advisor and Hitachi Ops Center",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2023-126"
      },
      {
        "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226156 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228750 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226526 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230408 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228889 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228781 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory"
      },
      {
        "title": "https://github.com/jntass/TASSL-1.1.1",
        "trust": 0.1,
        "url": "https://github.com/jntass/TASSL-1.1.1 "
      },
      {
        "title": "BIF - The Fairwinds Base Image Finder Client",
        "trust": 0.1,
        "url": "https://github.com/FairwindsOps/bif "
      },
      {
        "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories",
        "trust": 0.1,
        "url": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories "
      },
      {
        "title": "GitHub Actions CI App Pipeline",
        "trust": 0.1,
        "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc "
      },
      {
        "title": "https://github.com/cdupuis/image-api",
        "trust": 0.1,
        "url": "https://github.com/cdupuis/image-api "
      },
      {
        "title": "OpenSSL-CVE-lib",
        "trust": 0.1,
        "url": "https://github.com/chnzzh/OpenSSL-CVE-lib "
      },
      {
        "title": "PoC in GitHub",
        "trust": 0.1,
        "url": "https://github.com/nomi-sec/PoC-in-GitHub "
      },
      {
        "title": "PoC in GitHub",
        "trust": 0.1,
        "url": "https://github.com/manas3c/CVE-POC "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-327",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.2,
        "url": "https://www.openssl.org/news/secadv/20220705.txt"
      },
      {
        "trust": 1.1,
        "url": "https://security.netapp.com/advisory/ntap-20220715-0011/"
      },
      {
        "trust": 1.1,
        "url": "https://security.gentoo.org/glsa/202210-02"
      },
      {
        "trust": 1.1,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 1.1,
        "url": "https://www.debian.org/security/2023/dsa-5343"
      },
      {
        "trust": 1.1,
        "url": "https://lists.debian.org/debian-lts-announce/2023/02/msg00019.html"
      },
      {
        "trust": 1.1,
        "url": "https://security.netapp.com/advisory/ntap-20230420-0008/"
      },
      {
        "trust": 1.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=a98f339ddd7e8f487d6e0088d4a9a42324885a93"
      },
      {
        "trust": 1.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=919925673d6c9cfed3c1085497f5dfbbed5fc431"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/v6567jerrhhjw2gngjgkdrnhr7snpzk7/"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/r6ck57nbqftpumxapjurcgxuyt76nqak/"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20240621-0006/"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.8,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.8,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32206"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32208"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-30630"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-30632"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-1962"
      },
      {
        "trust": 0.4,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-1785"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-1897"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-1927"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-29154"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-32148"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-1705"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-30629"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-28131"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-30633"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-2526"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24675"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-24675"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-30635"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30633"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30630"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21698"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-26691"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-34903"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/327.html"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2023-1974.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/fairwindsops/bif"
      },
      {
        "trust": 0.1,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-23-017-03"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2022/alas-2022-195.html"
      },
      {
        "trust": 0.1,
        "url": "https://submariner.io/getting-started/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38561"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6346"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://submariner.io/."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/add-ons/submariner#submariner-deploy-console"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6283"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-31107"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6430"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6188"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32148"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/sandboxed_containers/sandboxed-containers-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0391"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7058"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/sandboxed_containers/upgrade-sandboxed-containers.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2832"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-40674"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2015-20107"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2832"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28327"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6290"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28327"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-6457-1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/nodejs/12.22.9~dfsg-1ubuntu3.1"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2274"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/policies/secpolicy.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24448"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:8889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22628"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3515"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21628"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0924"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0908"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1055"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35527"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35525"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26373"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-20368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1048"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0617"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-39399"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0562"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0854"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22629"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29581"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1016"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2078"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22844"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21499"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-36946"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42003"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0865"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27405"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-3709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0909"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0561"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30293"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27406"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0168"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1304"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21626"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28390"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26716"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27950"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27404"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23960"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3640"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30002"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0891"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1184"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2509"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26700"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25255"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26719"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-42004"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-37434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1355"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22662"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28893"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6507"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#critical"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32250"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-36067"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1012"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-31129"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2097"
      },
      {
        "db": "PACKETSTORM",
        "id": "168265"
      },
      {
        "db": "PACKETSTORM",
        "id": "168222"
      },
      {
        "db": "PACKETSTORM",
        "id": "168351"
      },
      {
        "db": "PACKETSTORM",
        "id": "168187"
      },
      {
        "db": "PACKETSTORM",
        "id": "169443"
      },
      {
        "db": "PACKETSTORM",
        "id": "168228"
      },
      {
        "db": "PACKETSTORM",
        "id": "175432"
      },
      {
        "db": "PACKETSTORM",
        "id": "169666"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168378"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-2097"
      },
      {
        "db": "PACKETSTORM",
        "id": "168265"
      },
      {
        "db": "PACKETSTORM",
        "id": "168222"
      },
      {
        "db": "PACKETSTORM",
        "id": "168351"
      },
      {
        "db": "PACKETSTORM",
        "id": "168187"
      },
      {
        "db": "PACKETSTORM",
        "id": "169443"
      },
      {
        "db": "PACKETSTORM",
        "id": "168228"
      },
      {
        "db": "PACKETSTORM",
        "id": "175432"
      },
      {
        "db": "PACKETSTORM",
        "id": "169666"
      },
      {
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "db": "PACKETSTORM",
        "id": "168378"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-07-05T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-2097"
      },
      {
        "date": "2022-09-07T16:37:33",
        "db": "PACKETSTORM",
        "id": "168265"
      },
      {
        "date": "2022-09-01T16:33:07",
        "db": "PACKETSTORM",
        "id": "168222"
      },
      {
        "date": "2022-09-13T15:41:58",
        "db": "PACKETSTORM",
        "id": "168351"
      },
      {
        "date": "2022-08-26T14:31:21",
        "db": "PACKETSTORM",
        "id": "168187"
      },
      {
        "date": "2022-10-20T14:21:57",
        "db": "PACKETSTORM",
        "id": "169443"
      },
      {
        "date": "2022-09-01T16:34:06",
        "db": "PACKETSTORM",
        "id": "168228"
      },
      {
        "date": "2023-10-31T13:11:25",
        "db": "PACKETSTORM",
        "id": "175432"
      },
      {
        "date": "2022-07-05T12:12:12",
        "db": "PACKETSTORM",
        "id": "169666"
      },
      {
        "date": "2022-12-09T14:52:40",
        "db": "PACKETSTORM",
        "id": "170179"
      },
      {
        "date": "2022-09-14T15:08:07",
        "db": "PACKETSTORM",
        "id": "168378"
      },
      {
        "date": "2022-07-05T11:15:08.340000",
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-2097"
      },
      {
        "date": "2024-06-21T19:15:23.083000",
        "db": "NVD",
        "id": "CVE-2022-2097"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "175432"
      },
      {
        "db": "PACKETSTORM",
        "id": "169666"
      }
    ],
    "trust": 0.2
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat Security Advisory 2022-6346-01",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168265"
      }
    ],
    "trust": 0.1
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "arbitrary",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "175432"
      }
    ],
    "trust": 0.1
  }
}

var-202011-0840
Vulnerability from variot

Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
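The defense class against this kind of SSRF is to validate redirect targets before following them, rejecting locations that point at loopback, private, or link-local address ranges. The sketch below is illustrative only — the function name and network list are assumptions, not part of axios or any other library — and a real implementation would also resolve hostnames and re-check the resulting addresses:

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative restricted ranges; a production list would be broader
# (IPv6 ranges, carrier-grade NAT, etc.).
RESTRICTED_NETS = [
    ipaddress.ip_network("127.0.0.0/8"),     # loopback
    ipaddress.ip_network("10.0.0.0/8"),      # private
    ipaddress.ip_network("172.16.0.0/12"),   # private
    ipaddress.ip_network("192.168.0.0/16"),  # private
    ipaddress.ip_network("169.254.0.0/16"),  # link-local / cloud metadata
]

def redirect_is_safe(location: str) -> bool:
    """Return False when a redirect target is a literal restricted IP.

    Hypothetical helper sketching the mitigation for
    CVE-2020-28168-style proxy bypass via redirects.
    """
    host = urlparse(location).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP: real code would resolve
        # it via DNS and re-check; this sketch lets it through.
        return True
    return not any(addr in net for net in RESTRICTED_NETS)
```

In an HTTP client, this check would run inside the redirect-following loop, before each `Location` header is fetched; a client that instead hands the redirected request straight to the network (skipping its configured proxy) exhibits exactly the bypass described above.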

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202011-0840",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "axios",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "axios",
        "version": "0.19.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "axios",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "axios",
        "version": "0.21.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "axios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "axios",
        "version": "0.21.0"
      },
      {
        "model": "axios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "axios",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens reported these vulnerabilities to CISA.",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2020-28168",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-28168",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.9,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.2,
            "id": "CVE-2020-28168",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 5.9,
            "baseSeverity": "Medium",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2020-28168",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-28168",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-28168",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202011-650",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-28168",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-28168"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-28168"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28168"
      }
    ],
    "trust": 1.71
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-28168",
        "trust": 3.3
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.6
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.4
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-277-02",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90178687",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151",
        "trust": 0.8
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28168",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-28168"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "id": "VAR-202011-0840",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-23T19:35:59.972000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Requests\u00a0that\u00a0follow\u00a0a\u00a0redirect\u00a0are\u00a0not\u00a0passing\u00a0via\u00a0the\u00a0proxy\u00a0#3369",
        "trust": 0.8,
        "url": "https://github.com/axios/axios/issues/3369"
      },
      {
        "title": "Axios Fixes for code issue vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=134944"
      },
      {
        "title": "Debian CVElist Bug Report Logs: node-axios: CVE-2020-28168",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=073b117b4a58cf2da488286e32905713"
      },
      {
        "title": "IBM: Security Bulletin: IBM App Connect Enterprise Certified Container may be vulnerable to a Server-Side Request Forgery vulnerability (CVE-2020-28168)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=40b72bb161d1b7da9de5abec310d3cb1"
      },
      {
        "title": "Django-Voice-Converter-with-Yandex-Speech-kit",
        "trust": 0.1,
        "url": "https://github.com/art610/Django-Voice-Converter-with-Yandex-Speech-kit "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-28168"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-918",
        "trust": 1.0
      },
      {
        "problemtype": "Server-side request forgery (CWE-918) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.6,
        "url": "https://github.com/axios/axios/issues/3369"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28168"
      },
      {
        "trust": 1.4,
        "url": "https://lists.apache.org/thread.html/r25d53acd06f29244b8a103781b0339c5e7efee9099a4d52f0c230e4a@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.4,
        "url": "https://lists.apache.org/thread.html/r954d80fd18e9dafef6e813963eb7e08c228151c2b6268ecd63b35d1f@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.4,
        "url": "https://lists.apache.org/thread.html/rdfd2901b8b697a3f6e2c9c6ecc688fd90d7f881937affb5144d61d6e@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/rdfd2901b8b697a3f6e2c9c6ecc688fd90d7f881937affb5144d61d6e%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r954d80fd18e9dafef6e813963eb7e08c228151c2b6268ecd63b35d1f%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.0,
        "url": "https://lists.apache.org/thread.html/r25d53acd06f29244b8a103781b0339c5e7efee9099a4d52f0c230e4a%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu99475301/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90178687/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-277-02"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-affect-ibm-cloud-pak-for-automation/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-app-connect-enterprise-certified-container-may-be-vulnerable-to-a-server-side-request-forgery-vulnerability-cve-2020-28168/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/node-js-axios-information-disclosure-via-server-side-request-forgery-34243"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2020-28168"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-11-06T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-28168"
      },
      {
        "date": "2021-06-21T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "date": "2020-11-06T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      },
      {
        "date": "2020-11-06T20:15:13.163000",
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-28168"
      },
      {
        "date": "2024-10-07T01:03:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      },
      {
        "date": "2022-09-19T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      },
      {
        "date": "2024-11-21T05:22:25.573000",
        "db": "NVD",
        "id": "CVE-2020-28168"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Axios\u00a0NPM\u00a0 Server-side request forgery vulnerability in package",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-013151"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202011-650"
      }
    ],
    "trust": 0.6
  }
}

var-202009-1544
Vulnerability from variot

Protocol encryption can be easily broken for CodeMeter (All versions prior to 6.90 are affected, including Version 6.90 or newer only if CodeMeter Runtime is running as server) and the server accepts external connections, which may allow an attacker to remotely communicate with the CodeMeter API. CodeMeter Contains a cryptographic vulnerability.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants.

Many Siemens products are affected by this vulnerability. Attackers can exploit it to communicate with the CodeMeter API remotely
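
One practical exposure check follows from the description above: the vulnerability matters only when the CodeMeter server accepts external connections. Below is a minimal sketch that probes whether a host accepts TCP connections on CodeMeter's default network port (22350 is the well-known default; the host name and any non-standard port are assumptions to adjust per deployment):

```python
import socket

# Default CodeMeter network port; adjust if the server was reconfigured.
CODEMETER_PORT = 22350

def port_is_open(host, port=CODEMETER_PORT, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A True result from an untrusted network segment means the CodeMeter API is
# remotely reachable, i.e. the precondition for this vulnerability holds.
```

If the port is reachable from outside the trusted network, firewalling it off (or not running CodeMeter Runtime as a network server) removes the remote attack surface described above.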

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202009-1544",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "codemeter",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "wibu",
        "version": "6.90"
      },
      {
        "model": "codemeter",
        "scope": null,
        "trust": 0.8,
        "vendor": "wibu",
        "version": null
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": null
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": "6.90"
      },
      {
        "model": "information server sp1",
        "scope": "lte",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "\u003c=2019"
      },
      {
        "model": "simatic wincc oa",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.17"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sppa-s2000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.04"
      },
      {
        "model": "sppa-s2000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.06"
      },
      {
        "model": "sppa-t3000 r8.2 sp2",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sppa-s3000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.05"
      },
      {
        "model": "sppa-s3000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.04"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "cve": "CVE-2020-14517",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 7.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-14517",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "HIGH",
            "trust": 1.8,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "CNVD",
            "availabilityImpact": "COMPLETE",
            "baseScore": 9.7,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CNVD-2020-51242",
            "impactScore": 9.5,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 0.6,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.8,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-14517",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 9.8,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2020-14517",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-14517",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-14517",
            "trust": 0.8,
            "value": "Critical"
          },
          {
            "author": "CNVD",
            "id": "CNVD-2020-51242",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202009-489",
            "trust": 0.6,
            "value": "CRITICAL"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Protocol encryption can be easily broken for CodeMeter (All versions prior to 6.90 are affected, including Version 6.90 or newer only if CodeMeter Runtime is running as server) and the server accepts external connections, which may allow an attacker to remotely communicate with the CodeMeter API. CodeMeter Contains a cryptographic vulnerability.Information is obtained, information is tampered with, and service is disrupted  (DoS) It may be put into a state. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants. \n\r\n\r\nMany Siemens products have security vulnerabilities. Attackers can use the vulnerability to communicate with CodeMeter API remotely",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-14517"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      }
    ],
    "trust": 2.16
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-14517",
        "trust": 3.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-20-203-01",
        "trust": 2.4
      },
      {
        "db": "JVN",
        "id": "JVNVU90770748",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94568336",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222",
        "trust": 0.8
      },
      {
        "db": "SIEMENS",
        "id": "SSA-455843",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.3",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022021806",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "id": "VAR-202009-1544",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      }
    ],
    "trust": 1.3593294842857142
  },
  "iot_taxonomy": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "category": [
          "ICS"
        ],
        "sub_category": null,
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      }
    ]
  },
  "last_update_date": "2024-11-23T21:26:24.181000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "CodeMeter",
        "trust": 0.8,
        "url": "https://www.wibu.com/products/codemeter.html"
      },
      {
        "title": "Patch for Vulnerabilities in insufficient encryption strength of many Siemens products",
        "trust": 0.6,
        "url": "https://www.cnvd.org.cn/patchInfo/show/233344"
      },
      {
        "title": "ARC  and MATIO Security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=127910"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-326",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-327",
        "trust": 1.0
      },
      {
        "problemtype": "Inadequate encryption strength (CWE-326) [ Other ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14517"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu94568336/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90770748/"
      },
      {
        "trust": 0.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-10T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "date": "2021-03-24T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "date": "2020-09-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      },
      {
        "date": "2020-09-16T20:15:13.647000",
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-10T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51242"
      },
      {
        "date": "2022-03-15T05:10:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      },
      {
        "date": "2022-02-21T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      },
      {
        "date": "2024-11-21T05:03:26.437000",
        "db": "NVD",
        "id": "CVE-2020-14517"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "CodeMeter\u00a0 Vulnerability in cryptography",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011222"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "encryption problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-489"
      }
    ],
    "trust": 0.6
  }
}

var-202203-0665
Vulnerability from variot

In BIND 9.16.11 through 9.16.26, 9.17.0 through 9.18.0, and versions 9.16.11-S1 through 9.16.26-S1 of the BIND Supported Preview Edition, specifically crafted TCP streams can cause connections to BIND to remain in CLOSE_WAIT status for an indefinite period of time, even after the client has terminated the connection. This issue results in BIND consuming resources, leading to a denial of service (CVE-2022-0396). Separately, bogus NS records supplied by forwarders may be cached and used by named if it needs to recurse for any reason, causing it to obtain and pass on potentially incorrect answers. This flaw allows a remote attacker to manipulate cache results with incorrect records, leading to queries made to the wrong servers and possibly resulting in false information received on the client's end (CVE-2021-25220).

==========================================================================
Ubuntu Security Notice USN-5332-1
March 17, 2022

bind9 vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 21.10
  • Ubuntu 20.04 LTS
  • Ubuntu 18.04 LTS

Summary:

Several security issues were fixed in Bind.

Software Description:
- bind9: Internet Domain Name Server

Details:

Xiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind incorrectly handled certain bogus NS records when using forwarders. A remote attacker could possibly use this issue to manipulate cache results. This issue only affected Ubuntu 21.10. (CVE-2021-25220)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 21.10: bind9 1:9.16.15-1ubuntu1.2

Ubuntu 20.04 LTS: bind9 1:9.16.1-0ubuntu2.10

Ubuntu 18.04 LTS: bind9 1:9.11.3+dfsg-1ubuntu1.17

In general, a standard system update will make all the necessary changes.

For the oldstable distribution (buster), this problem has been fixed in version 1:9.11.5.P4+dfsg-5.1+deb10u7.

For the stable distribution (bullseye), this problem has been fixed in version 1:9.16.27-1~deb11u1.

We recommend that you upgrade your bind9 packages.

For the detailed security status of bind9 please refer to its security tracker page at: https://security-tracker.debian.org/tracker/bind9
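
Whether a given upstream BIND release falls inside the affected ranges quoted at the top of this entry can be checked mechanically. A minimal sketch, assuming plain upstream x.y.z version strings (distribution version strings such as 1:9.16.15-1ubuntu1.2 and the -S1 Supported Preview builds are not handled here):

```python
# Affected upstream ranges for CVE-2022-0396, as quoted in this entry.
# Sketch only: handles plain "x.y.z" versions, not distro or -S1 variants.
AFFECTED_RANGES = (
    ((9, 16, 11), (9, 16, 26)),
    ((9, 17, 0), (9, 18, 0)),
)

def parse_version(version):
    """Turn '9.16.23' into a comparable (9, 16, 23) tuple."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version):
    """True if the upstream version lies inside any affected range (inclusive)."""
    v = parse_version(version)
    return any(lo <= v <= hi for lo, hi in AFFECTED_RANGES)
```

Tuple comparison makes the inclusive range test a one-liner; the same pattern extends to other CVEs by editing `AFFECTED_RANGES`.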

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202210-25


                                       https://security.gentoo.org/

Severity: Low
Title: ISC BIND: Multiple Vulnerabilities
Date: October 31, 2022
Bugs: #820563, #835439, #872206
ID: 202210-25


Synopsis

Multiple vulnerabilities have been discovered in ISC BIND, the worst of which could result in denial of service.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------
  1  net-dns/bind            < 9.16.33                  >= 9.16.33
  2  net-dns/bind-tools      < 9.16.33                  >= 9.16.33
-------------------------------------------------------------------

Description

Multiple vulnerabilities have been discovered in ISC BIND. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All ISC BIND users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-9.16.33"

All ISC BIND-tools users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-tools-9.16.33"

References

[ 1 ] CVE-2021-25219
      https://nvd.nist.gov/vuln/detail/CVE-2021-25219
[ 2 ] CVE-2021-25220
      https://nvd.nist.gov/vuln/detail/CVE-2021-25220
[ 3 ] CVE-2022-0396
      https://nvd.nist.gov/vuln/detail/CVE-2022-0396
[ 4 ] CVE-2022-2795
      https://nvd.nist.gov/vuln/detail/CVE-2022-2795
[ 5 ] CVE-2022-2881
      https://nvd.nist.gov/vuln/detail/CVE-2022-2881
[ 6 ] CVE-2022-2906
      https://nvd.nist.gov/vuln/detail/CVE-2022-2906
[ 7 ] CVE-2022-3080
      https://nvd.nist.gov/vuln/detail/CVE-2022-3080
[ 8 ] CVE-2022-38177
      https://nvd.nist.gov/vuln/detail/CVE-2022-38177
[ 9 ] CVE-2022-38178
      https://nvd.nist.gov/vuln/detail/CVE-2022-38178

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202210-25

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5 .

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: bind security update
Advisory ID:       RHSA-2022:8068-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:8068
Issue date:        2022-11-15
CVE Names:         CVE-2021-25220 CVE-2022-0396
====================================================================
1. Summary:

An update for bind is now available for Red Hat Enterprise Linux 9.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat CodeReady Linux Builder (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64
Red Hat Enterprise Linux AppStream (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64

  3. Description:

The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly.

Security Fix(es):

  • bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)

  • bind: DoS from specifically crafted TCP packets (CVE-2022-0396)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.1 Release Notes linked from the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

After installing the update, the BIND daemon (named) will be restarted automatically.

  5. Bugs fixed (https://bugzilla.redhat.com/):

2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability
2064513 - CVE-2022-0396 bind: DoS from specifically crafted TCP packets
2104863 - bind-doc is not shipped to public

  6. Package List:

Red Hat Enterprise Linux AppStream (v. 9):

Source: bind-9.16.23-5.el9_1.src.rpm

aarch64:
bind-9.16.23-5.el9_1.aarch64.rpm
bind-chroot-9.16.23-5.el9_1.aarch64.rpm
bind-debuginfo-9.16.23-5.el9_1.aarch64.rpm
bind-debugsource-9.16.23-5.el9_1.aarch64.rpm
bind-dnssec-utils-9.16.23-5.el9_1.aarch64.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm
bind-libs-9.16.23-5.el9_1.aarch64.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm
bind-utils-9.16.23-5.el9_1.aarch64.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm

noarch:
bind-dnssec-doc-9.16.23-5.el9_1.noarch.rpm
bind-license-9.16.23-5.el9_1.noarch.rpm
python3-bind-9.16.23-5.el9_1.noarch.rpm

ppc64le:
bind-9.16.23-5.el9_1.ppc64le.rpm
bind-chroot-9.16.23-5.el9_1.ppc64le.rpm
bind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
bind-debugsource-9.16.23-5.el9_1.ppc64le.rpm
bind-dnssec-utils-9.16.23-5.el9_1.ppc64le.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
bind-libs-9.16.23-5.el9_1.ppc64le.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
bind-utils-9.16.23-5.el9_1.ppc64le.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm

s390x:
bind-9.16.23-5.el9_1.s390x.rpm
bind-chroot-9.16.23-5.el9_1.s390x.rpm
bind-debuginfo-9.16.23-5.el9_1.s390x.rpm
bind-debugsource-9.16.23-5.el9_1.s390x.rpm
bind-dnssec-utils-9.16.23-5.el9_1.s390x.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm
bind-libs-9.16.23-5.el9_1.s390x.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.s390x.rpm
bind-utils-9.16.23-5.el9_1.s390x.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm

x86_64:
bind-9.16.23-5.el9_1.x86_64.rpm
bind-chroot-9.16.23-5.el9_1.x86_64.rpm
bind-debuginfo-9.16.23-5.el9_1.x86_64.rpm
bind-debugsource-9.16.23-5.el9_1.x86_64.rpm
bind-dnssec-utils-9.16.23-5.el9_1.x86_64.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm
bind-libs-9.16.23-5.el9_1.x86_64.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.x86_64.rpm
bind-utils-9.16.23-5.el9_1.x86_64.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm

Red Hat CodeReady Linux Builder (v. 9):

aarch64:
bind-debuginfo-9.16.23-5.el9_1.aarch64.rpm
bind-debugsource-9.16.23-5.el9_1.aarch64.rpm
bind-devel-9.16.23-5.el9_1.aarch64.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm

noarch:
bind-doc-9.16.23-5.el9_1.noarch.rpm

ppc64le:
bind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
bind-debugsource-9.16.23-5.el9_1.ppc64le.rpm
bind-devel-9.16.23-5.el9_1.ppc64le.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm

s390x:
bind-debuginfo-9.16.23-5.el9_1.s390x.rpm
bind-debugsource-9.16.23-5.el9_1.s390x.rpm
bind-devel-9.16.23-5.el9_1.s390x.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.s390x.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm

x86_64:
bind-debuginfo-9.16.23-5.el9_1.i686.rpm
bind-debuginfo-9.16.23-5.el9_1.x86_64.rpm
bind-debugsource-9.16.23-5.el9_1.i686.rpm
bind-debugsource-9.16.23-5.el9_1.x86_64.rpm
bind-devel-9.16.23-5.el9_1.i686.rpm
bind-devel-9.16.23-5.el9_1.x86_64.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.i686.rpm
bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm
bind-libs-9.16.23-5.el9_1.i686.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.i686.rpm
bind-libs-debuginfo-9.16.23-5.el9_1.x86_64.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.i686.rpm
bind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  7. References:

https://access.redhat.com/security/cve/CVE-2021-25220
https://access.redhat.com/security/cve/CVE-2022-0396
https://access.redhat.com/security/updates/classification/#moderate
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBY3PhLdzjgjWX9erEAQhVSw/9HlIwMZZuRgTsbY2yARvJ+sRk08hViRo6
++sV0vMtt3ym5eQES1al4uwAFbVH3B+EZLVuox02PnKVvIM35QnzVFxSa24HToTp
l3tl+c9QnDwx3VGceX9og5o/ezSKqT8UeMQF/gamcB5kwGbbeb+Gp7cpSyXsmjB1
h418DMq/BBE1kLx2MAmIAn/r8x8ISsRbk3j96VEtLrQDtbSKCrE7jmQMaGRB4NhK
4pcgEdcVC6mpBIBRSoLqSVvY9cEdbWqB2LBKArSic/GS2RFfXiSTbPP+kHhd8WHF
0pHQpQa2CXqWuoyrk4cmlvyqmp+C1oCuwsjUWm3dIouIpLU3P1PH3Xua+DMcHfNl
z3wW5E8hihVQ7taw/c6jKMlIrPVzdNM7zfdqV4PBoMQ6y6nPDP23wNGIBMIArjO/
n841K1Lzp1vrChLKgtYOK4H/s6Fbtb/+fe6Q5wOVPPEeksfoKzjJjZj/J7J+RymH
Bd6n+f9iMQzOkj9zb6cgrvt2aLcr29XHfcCRH81i/CEPAEFGT86qOXqIZO0+qV/u
qhHDKy3rLqYsOR4BlwhFhovUGCt8rBJ8LOiZlUTxzNG4PNze4F1hG1d0qzYQv0Iw
zfOrgT8NGDmGCt2nwtmy813NDmzVegwrS7w0ayLzpcwcJMVOoO0nKi5kzX1slEyu
rbPwX0ROLTo=
=0klO
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202203-0665",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "h500e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "bind",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.16.11"
      },
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "bind",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.18.0"
      },
      {
        "model": "h300e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "bind",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.17.0"
      },
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "bind",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.16.27"
      },
      {
        "model": "h700e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "bind",
        "scope": null,
        "trust": 0.8,
        "vendor": "isc",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "esmpro/serveragent",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens reported these vulnerabilities to CISA.",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2022-0396",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2022-0396",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "LOW",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-0396",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "OTHER",
            "availabilityImpact": "Low",
            "baseScore": 5.3,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "JVNDB-2022-001799",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-0396",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "security-officer@isc.org",
            "id": "CVE-2022-0396",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-0396",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202203-1543",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-0396",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "BIND 9.16.11 -\u003e 9.16.26, 9.17.0 -\u003e 9.18.0 and versions 9.16.11-S1 -\u003e 9.16.26-S1 of the BIND Supported Preview Edition. Specifically crafted TCP streams can cause connections to BIND to remain in CLOSE_WAIT status for an indefinite period of time, even after the client has terminated the connection. BIND , even after the client closes the connection. Bogus NS records supplied by the forwarders may be cached and used by name if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. This flaw allows a remote malicious user to manipulate cache results with incorrect records, leading to queries made to the wrong servers, possibly resulting in false information received on the client\u0027s end. This issue results in BIND consuming resources, leading to a denial of service. (CVE-2022-0396). ==========================================================================\nUbuntu Security Notice USN-5332-1\nMarch 17, 2022\n\nbind9 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in Bind. \n\nSoftware Description:\n- bind9: Internet Domain Name Server\n\nDetails:\n\nXiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind\nincorrectly handled certain bogus NS records when using forwarders. A\nremote attacker could possibly use this issue to manipulate cache results. This issue only affected\nUbuntu 21.10. 
(CVE-2022-0396)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n  bind9                           1:9.16.15-1ubuntu1.2\n\nUbuntu 20.04 LTS:\n  bind9                           1:9.16.1-0ubuntu2.10\n\nUbuntu 18.04 LTS:\n  bind9                           1:9.11.3+dfsg-1ubuntu1.17\n\nIn general, a standard system update will make all the necessary changes. \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1:9.11.5.P4+dfsg-5.1+deb10u7. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1:9.16.27-1~deb11u1. \n\nWe recommend that you upgrade your bind9 packages. \n\nFor the detailed security status of bind9 please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/bind9\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmI010UACgkQEMKTtsN8\nTjbp3xAAil38qfAIdNkaIxY2bauvTyZDWzr6KUjph0vzmLEoAFQ3bysVSGlCnZk9\nIgdyfPRWQ+Bjau1/dlhNYaTlnQajbeyvCXfJcjRRgtUDCp7abZcOcb1WDu8jWLGW\niRtKsvKKrTKkIou5LgDlyqZyf6OzjgRdwtm86GDPQiCaSEpmbRt+APj5tkIA9R1G\nELWuZsjbIraBU0TsNfOalgNpAWtSBayxKtWB69J8rxUV69JI194A4AJ0wm9SPpFV\nG/TzlyHp1dUZJRLNmZOZU/dq4pPsXzh9I4QCg1kJWsVHe2ycAJKho6hr5iy43fNl\nMuokfI9YnU6/9SjHrQAWp1X/6MYCR8NieJ933W89/Zb8eTjTZC8EQGo6fkA287G8\nglQOrJHMQyV+b97lT67+ioTHNzTEBXTih7ZDeC1TlLqypCNYhRF/ll0Hx/oeiJFU\nrbjh2Og9huhD5JH8z8YAvY2g81e7KdPxazuKJnQpxGutqddCuwBvyI9fovYrah9W\nbYD6rskLZM2x90RI2LszHisl6FV5k37PaczamlRqGgbbMb9YlnDFjJUbM8rZZgD4\n+8u/AkHq2+11pTtZ40NYt1gpdidmIC/gzzha2TfZCHMs44KPMMdH+Fid1Kc6/Cq8\nQygtL4M387J9HXUrlN7NDUOrDVuVqfBG+ve3i9GCZzYjwtajTAQ=\n=6st2\n-----END PGP SIGNATURE-----\n. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202210-25\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n    Title: ISC BIND: Multiple Vulnerabilities\n     Date: October 31, 2022\n     Bugs: #820563, #835439, #872206\n       ID: 202210-25\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC BIND, the worst of\nwhich could result in denial of service. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-dns/bind               \u003c 9.16.33                  \u003e= 9.16.33\n  2  net-dns/bind-tools         \u003c 9.16.33                  \u003e= 9.16.33\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC BIND. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll ISC BIND users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-9.16.33\"\n\nAll ISC BIND-tools users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-tools-9.16.33\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25219\n      https://nvd.nist.gov/vuln/detail/CVE-2021-25219\n[ 2 ] CVE-2021-25220\n      https://nvd.nist.gov/vuln/detail/CVE-2021-25220\n[ 3 ] CVE-2022-0396\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0396\n[ 4 ] CVE-2022-2795\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2795\n[ 5 ] CVE-2022-2881\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2881\n[ 6 ] CVE-2022-2906\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2906\n[ 7 ] CVE-2022-3080\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3080\n[ 8 ] CVE-2022-38177\n      https://nvd.nist.gov/vuln/detail/CVE-2022-38177\n[ 9 ] CVE-2022-38178\n      https://nvd.nist.gov/vuln/detail/CVE-2022-38178\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-25\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: bind security update\nAdvisory ID:       RHSA-2022:8068-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:8068\nIssue date:        2022-11-15\nCVE Names:         CVE-2021-25220 CVE-2022-0396\n====================================================================\n1. Summary:\n\nAn update for bind is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe Berkeley Internet Name Domain (BIND) is an implementation of the Domain\nName System (DNS) protocols. BIND includes a DNS server (named); a resolver\nlibrary (routines for applications to use when interfacing with DNS); and\ntools for verifying that the DNS server is operating correctly. \n\nSecurity Fix(es):\n\n* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)\n\n* bind: DoS from specifically crafted TCP packets (CVE-2022-0396)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 9.1 Release Notes linked from the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nAfter installing the update, the BIND daemon (named) will be restarted\nautomatically. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability\n2064513 - CVE-2022-0396 bind: DoS from specifically crafted TCP packets\n2104863 - bind-doc is not shipped to public\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 9):\n\nSource:\nbind-9.16.23-5.el9_1.src.rpm\n\naarch64:\nbind-9.16.23-5.el9_1.aarch64.rpm\nbind-chroot-9.16.23-5.el9_1.aarch64.rpm\nbind-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-debugsource-9.16.23-5.el9_1.aarch64.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.aarch64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-libs-9.16.23-5.el9_1.aarch64.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-utils-9.16.23-5.el9_1.aarch64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\n\nnoarch:\nbind-dnssec-doc-9.16.23-5.el9_1.noarch.rpm\nbind-license-9.16.23-5.el9_1.noarch.rpm\npython3-bind-9.16.23-5.el9_1.noarch.rpm\n\nppc64le:\nbind-9.16.23-5.el9_1.ppc64le.rpm\nbind-chroot-9.16.23-5.el9_1.ppc64le.rpm\nbind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-debugsource-9.16.23-5.el9_1.ppc64le.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.ppc64le.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-libs-9.16.23-5.el9_1.ppc64le.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-utils-9.16.23-5.el9_1.ppc64le.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\n\ns390x:\nbind-9.16.23-5.el9_1.s390x.rpm\nbind-chroot-9.16.23-5.el9_1.s390x.rpm\nbind-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-debugsource-9.16.23-5.el9_1.s390x.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.s390x.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-libs-9.16.23-5.el9_1.s390x.rpm\nbind-libs-debuginfo-9.16.23-5.
el9_1.s390x.rpm\nbind-utils-9.16.23-5.el9_1.s390x.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\n\nx86_64:\nbind-9.16.23-5.el9_1.x86_64.rpm\nbind-chroot-9.16.23-5.el9_1.x86_64.rpm\nbind-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-debugsource-9.16.23-5.el9_1.x86_64.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.x86_64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-libs-9.16.23-5.el9_1.x86_64.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-utils-9.16.23-5.el9_1.x86_64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\n\nRed Hat CodeReady Linux Builder (v. 9):\n\naarch64:\nbind-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-debugsource-9.16.23-5.el9_1.aarch64.rpm\nbind-devel-9.16.23-5.el9_1.aarch64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\n\nnoarch:\nbind-doc-9.16.23-5.el9_1.noarch.rpm\n\nppc64le:\nbind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-debugsource-9.16.23-5.el9_1.ppc64le.rpm\nbind-devel-9.16.23-5.el9_1.ppc64le.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\n\ns390x:\nbind-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-debugsource-9.16.23-5.el9_1.s390x.rpm\nbind-devel-9.16.23-5.el9_1.s390x.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\n\nx86_64:\nbind-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-debugsource-9.16.23-5.el9_1.i686.rpm\nbind-debugsource-9.16.23-5.el9_1.x86_64.rpm\nbind-devel-9.16.23-5.el9_1.i686.rpm\nbind-devel-9.16.23-5.el9_1.x86_64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-libs-9.16.23-5.el9_1.i686.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-libs-debuginfo-9.16.23
-5.el9_1.x86_64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25220\nhttps://access.redhat.com/security/cve/CVE-2022-0396\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY3PhLdzjgjWX9erEAQhVSw/9HlIwMZZuRgTsbY2yARvJ+sRk08hViRo6\n++sV0vMtt3ym5eQES1al4uwAFbVH3B+EZLVuox02PnKVvIM35QnzVFxSa24HToTp\nl3tl+c9QnDwx3VGceX9og5o/ezSKqT8UeMQF/gamcB5kwGbbeb+Gp7cpSyXsmjB1\nh418DMq/BBE1kLx2MAmIAn/r8x8ISsRbk3j96VEtLrQDtbSKCrE7jmQMaGRB4NhK\n4pcgEdcVC6mpBIBRSoLqSVvY9cEdbWqB2LBKArSic/GS2RFfXiSTbPP+kHhd8WHF\n0pHQpQa2CXqWuoyrk4cmlvyqmp+C1oCuwsjUWm3dIouIpLU3P1PH3Xua+DMcHfNl\nz3wW5E8hihVQ7taw/c6jKMlIrPVzdNM7zfdqV4PBoMQ6y6nPDP23wNGIBMIArjO/\nn841K1Lzp1vrChLKgtYOK4H/s6Fbtb/+fe6Q5wOVPPEeksfoKzjJjZj/J7J+RymH\nBd6n+f9iMQzOkj9zb6cgrvt2aLcr29XHfcCRH81i/CEPAEFGT86qOXqIZO0+qV/u\nqhHDKy3rLqYsOR4BlwhFhovUGCt8rBJ8LOiZlUTxzNG4PNze4F1hG1d0qzYQv0Iw\nzfOrgT8NGDmGCt2nwtmy813NDmzVegwrS7w0ayLzpcwcJMVOoO0nKi5kzX1slEyu\nrbPwX0ROLTo=0klO\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      }
    ],
    "trust": 2.16
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-0396",
        "trust": 3.8
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU98927070",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "166354",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169773",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169587",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169894",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031701",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031728",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022041925",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022032124",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1149",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1180",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5750",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1719",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1160",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0396",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169261",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "id": "VAR-202203-0665",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-23T19:37:55.535000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "DoS\u00a0from\u00a0specifically\u00a0crafted\u00a0TCP\u00a0packets NEC NEC Product security information",
        "trust": 0.8,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/NYD7US4HZRFUGAJ66ZTHFBYVP5N3OQBY/"
      },
      {
        "title": "ISC BIND Remediation of resource management error vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=186055"
      },
      {
        "title": "Ubuntu Security Notice: USN-5332-1: Bind vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5332-1"
      },
      {
        "title": "Red Hat: Moderate: bind security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228068 - Security Advisory"
      },
      {
        "title": "Debian Security Advisories: DSA-5105-1 bind9 -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=16d84b908a424f50b3236db9219500e3"
      },
      {
        "title": "Arch Linux Advisories: [ASA-202204-5] bind: denial of service",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=ASA-202204-5"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2022-0396"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-166",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-166"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-138",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-138"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-404",
        "trust": 1.0
      },
      {
        "problemtype": "Improper shutdown and release of resources (CWE-404) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://kb.isc.org/v1/docs/cve-2022-0396"
      },
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202210-25"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20220408-0001/"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0396"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu98927070/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-0396"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-0396/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166354/ubuntu-security-notice-usn-5332-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169894/red-hat-security-advisory-2022-8068-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/isc-bind-denial-of-service-via-keep-response-order-tcp-connection-slots-37817"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031728"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1160"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169773/red-hat-security-advisory-2022-7643-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1180"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169587/gentoo-linux-security-advisory-202210-25.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022041925"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1719"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5750"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031701"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022032124"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1149"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25220"
      },
      {
        "trust": 0.2,
        "url": "https://ubuntu.com/security/notices/usn-5332-1"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-25220"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.2,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/404.html"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/2022/dsa-5105"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2022/alas-2022-166.html"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.1-0ubuntu2.10"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.15-1ubuntu1.2"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.11.3+dfsg-1ubuntu1.17"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/bind9"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7643"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38178"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2906"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2795"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3080"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38177"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:8068"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-03-23T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "date": "2022-05-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "date": "2022-03-17T15:54:20",
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "date": "2022-03-28T19:12:00",
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "date": "2022-11-08T13:49:24",
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "date": "2022-10-31T14:50:53",
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "date": "2022-11-16T16:09:16",
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "date": "2022-03-16T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      },
      {
        "date": "2022-03-23T11:15:08.380000",
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-11-16T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0396"
      },
      {
        "date": "2022-09-20T06:14:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      },
      {
        "date": "2022-11-17T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      },
      {
        "date": "2024-11-21T06:38:32.280000",
        "db": "NVD",
        "id": "CVE-2022-0396"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "BIND vulnerability in which connections remain indefinitely in CLOSE_WAIT status",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001799"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "resource management error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1543"
      }
    ],
    "trust": 0.6
  }
}

var-202411-0476
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application does not properly invalidate sessions when the associated user is deleted or disabled, or when their permissions are modified. This could allow an authenticated attacker to continue performing malicious actions even after their user account has been disabled. Siemens' SINEC INS contains a session expiration vulnerability. Information may be obtained and information may be tampered with.
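The root cause described above (CWE-613: sessions outliving the account they were issued for) can be illustrated with a minimal sketch. This is not SINEC INS code; the class and method names are hypothetical. The expected server-side behavior is that whenever an account is deleted, disabled, or re-permissioned, every session bound to it is revoked in the same code path:

```python
class SessionStore:
    """Minimal in-memory session store keyed by opaque tokens."""

    def __init__(self):
        self._sessions = {}  # token -> user_id

    def create(self, token, user_id):
        self._sessions[token] = user_id

    def is_valid(self, token):
        return token in self._sessions

    def invalidate_user(self, user_id):
        # Revoke every session for the user. Call this from the same
        # code path that deletes/disables the account or edits its
        # permissions, so no previously issued token survives the change.
        stale = [t for t, u in self._sessions.items() if u == user_id]
        for token in stale:
            del self._sessions[token]
```

The vulnerable behavior corresponds to omitting the `invalidate_user` step: tokens issued before the account change keep authenticating successfully.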



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202411-0476",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "cve": "CVE-2024-46892",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 8.1,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2024-46892",
            "impactScore": 5.2,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "NONE",
            "baseScore": 4.9,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 1.2,
            "id": "CVE-2024-46892",
            "impactScore": 3.6,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:H/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 8.1,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2024-46892",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "Low",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2024-46892",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2024-46892",
            "trust": 1.0,
            "value": "Medium"
          },
          {
            "author": "NVD",
            "id": "CVE-2024-46892",
            "trust": 0.8,
            "value": "High"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly invalidate sessions when the associated user is deleted or disabled or their permissions are modified. This could allow an authenticated attacker to continue performing malicious actions even after their user account has been disabled. Siemens\u0027 SINEC INS contains a session expiration vulnerability.Information may be obtained and information may be tampered with",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2024-46892",
        "trust": 2.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-915275",
        "trust": 1.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-319-08",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96191615",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "id": "VAR-202411-0476",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-16T19:59:23.265000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-613",
        "trust": 1.0
      },
      {
        "problemtype": "Inappropriate session deadline (CWE-613) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96191615/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2024-46892"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-319-08"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "date": "2024-11-12T13:15:09.940000",
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T07:58:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      },
      {
        "date": "2024-11-13T23:13:06.400000",
        "db": "NVD",
        "id": "CVE-2024-46892"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Session expiration vulnerability in Siemens\u0027 SINEC INS",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012785"
      }
    ],
    "trust": 0.8
  }
}

var-202312-0206
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The RADIUS configuration mechanism of affected products does not correctly check uploaded certificates. A malicious admin could upload a crafted certificate, resulting in a denial-of-service condition or potentially the ability to issue commands at system level. Siemens' SINEC INS contains an OS command injection vulnerability. Information may be obtained, information may be tampered with, and service operation may be interrupted (DoS).
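The injection pattern here mirrors the c_rehash issue quoted at the top of this page: attacker-controlled certificate file names or contents reaching a shell. A defensive sketch in Python (illustrative only; the function name and allow-list are assumptions, not SINEC INS code) combines an allow-list on the uploaded file name with argv-style command construction, so shell metacharacters are never interpreted:

```python
import re

# Allow-list: only plain file names, no path separators or shell metacharacters.
_SAFE_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def build_cert_check_cmd(filename):
    """Return the argv for verifying an uploaded certificate.

    Two defenses against the injection pattern described above:
    1. reject any file name outside a strict allow-list, so characters
       like ';' or '$(' never enter the command at all;
    2. return an argv list for subprocess.run(...) WITHOUT shell=True,
       so even an unexpected character is passed as a literal argument
       rather than being parsed by a shell.
    """
    if not _SAFE_NAME.match(filename):
        raise ValueError(f"rejected unsafe filename: {filename!r}")
    return ["openssl", "x509", "-noout", "-in", filename]
```

Executing the returned list via `subprocess.run(cmd)` (no `shell=True`) keeps the file name out of any shell parsing, which is exactly the fix applied to `c_rehash`'s string-interpolated shell commands.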



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0206",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "cve": "CVE-2023-48428",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "HIGH",
            "baseScore": 7.2,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.2,
            "id": "CVE-2023-48428",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "OTHER",
            "availabilityImpact": "High",
            "baseScore": 7.2,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "JVNDB-2023-019616",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "High",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2023-48428",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "OTHER",
            "id": "JVNDB-2023-019616",
            "trust": 0.8,
            "value": "High"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The radius configuration mechanism of affected products does not correctly check uploaded certificates. A malicious admin could upload a crafted certificate resulting in a denial-of-service condition or potentially issue commands on system level. Siemens\u0027 SINEC INS for, OS A command injection vulnerability exists.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48428"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-48428",
        "trust": 2.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-077170",
        "trust": 1.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-348-16",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU98271228",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "id": "VAR-202312-0206",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T13:09:14.254000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-78",
        "trust": 1.0
      },
      {
        "problemtype": "OS Command injection (CWE-78) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu98271228/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-48428"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-348-16"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-01-15T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "date": "2023-12-12T12:15:14.873000",
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-01-15T02:20:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      },
      {
        "date": "2023-12-14T19:38:27.703000",
        "db": "NVD",
        "id": "CVE-2023-48428"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens\u0027 \u00a0SINEC\u00a0INS\u00a0 In \u00a0OS\u00a0 Command injection vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019616"
      }
    ],
    "trust": 0.8
  }
}

var-202309-0672
Vulnerability from variot

A heap buffer overflow vulnerability in the Wibu CodeMeter Runtime network service, up to and including version 7.60b, allows an unauthenticated remote attacker to achieve remote code execution and gain full access to the host system. CodeMeter Runtime from Wibu-Systems AG is embedded in products from multiple vendors, including those listed below, which are therefore also affected by this out-of-bounds write vulnerability. Exploitation may allow information to be obtained or tampered with, and may interrupt service operation (DoS). PSS(R)CAPE is transmission and distribution network protection simulation software. PSS(R)E is a power system simulation and analysis tool for transmission operation and planning. PSS(R)ODMS is a CIM-based network model management tool with network analysis capabilities for planning and operational planning of transmission utilities. SIMATIC PCS neo is a distributed control system (DCS). SIMATIC WinCC Open Architecture (OA) is part of the SIMATIC HMI family. It is designed for applications that require a high degree of customer-specific adaptability, large or complex applications, and projects with specific system requirements or functionality. SIMIT Simulation Platform allows factory settings to be simulated so that failures can be predicted at an early planning stage. SINEC INS (Infrastructure Network Services) is a web-based application that combines various network services in one tool. SINEMA Remote Connect is a management platform for remote networks that allows simple management of VPN tunnel connections between headquarters, service technicians, and installed machines or plants.

Siemens industrial products that embed Wibu-Systems CodeMeter are affected by a heap buffer overflow vulnerability caused by a failure to perform correct bounds checks. An attacker could exploit this vulnerability to trigger a buffer overflow and execute arbitrary code on the system.
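CodeMeter Runtime is proprietary, so the exact parsing flaw is not public. As a generic illustration of the missing-bounds-check class of bug, the sketch below (hypothetical message format) parses a length-prefixed network record and explicitly rejects a declared length that exceeds the received buffer; in a native parser, copying that many bytes without this check is exactly what produces a heap out-of-bounds read or write.

```python
import struct


def parse_record(buf: bytes) -> bytes:
    """Parse a big-endian length-prefixed record: 4-byte length, then payload.

    The declared length is validated against the bytes actually received.
    Omitting this check in a C parser and memcpy-ing declared_len bytes
    would read or write past the heap allocation (heap buffer overflow).
    """
    if len(buf) < 4:
        raise ValueError("truncated header")
    (declared_len,) = struct.unpack_from(">I", buf, 0)
    if declared_len > len(buf) - 4:
        # Attacker-controlled length larger than the buffer: reject it
        # instead of trusting it as a copy size.
        raise ValueError("declared length exceeds buffer")
    return buf[4 : 4 + declared_len]
```

Under this assumed format, `parse_record(b"\x00\x00\x00\x03abcde")` returns `b"abc"`, while a record that claims more payload than was sent raises `ValueError` rather than overrunning the buffer.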



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202309-0672",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "oseon",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "3.0.22"
      },
      {
        "model": "tubedesign",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "14.06.150"
      },
      {
        "model": "programmingtube",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "4.6.3"
      },
      {
        "model": "trutopsfab",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "15.00.23.00"
      },
      {
        "model": "teczonebend",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "23.06.01"
      },
      {
        "model": "trutopsweld",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "9.0.28148.1"
      },
      {
        "model": "trutops cell sw48",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "02.26.0"
      },
      {
        "model": "trutopsprint",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "01.00"
      },
      {
        "model": "trutops",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "08.00"
      },
      {
        "model": "e-mobility charging suite",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "phoenixcontact",
        "version": "1.7.0"
      },
      {
        "model": "module type package designer",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "phoenixcontact",
        "version": "1.2.0"
      },
      {
        "model": "trutopsfab",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "22.8.25"
      },
      {
        "model": "trutopsfab storage smallstore",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "14.06.20"
      },
      {
        "model": "activation wizard",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "phoenixcontact",
        "version": "1.6"
      },
      {
        "model": "trutops",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "12.01.00.00"
      },
      {
        "model": "tubedesign",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "08.00"
      },
      {
        "model": "iol-conf",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "phoenixcontact",
        "version": "1.7.0"
      },
      {
        "model": "trutopsboost",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "06.00.23.00"
      },
      {
        "model": "topscalculation",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "22.00.00"
      },
      {
        "model": "trutopsprint",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "00.06.00"
      },
      {
        "model": "trutops cell classic",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "09.09.02"
      },
      {
        "model": "programmingtube",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "1.0.1"
      },
      {
        "model": "trutopsboost",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "16.0.22"
      },
      {
        "model": "fl network manager",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "phoenixcontact",
        "version": "7.0"
      },
      {
        "model": "teczonebend",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "18.02.r8"
      },
      {
        "model": "trutops mark 3d",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "06.01"
      },
      {
        "model": "codemeter runtime",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "wibu",
        "version": "7.60c"
      },
      {
        "model": "trutopsprintmultilaserassistant",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "01.02"
      },
      {
        "model": "trumpflicenseexpert",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "1.5.2"
      },
      {
        "model": "trutops mark 3d",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "01.00"
      },
      {
        "model": "module type package designer",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "phoenixcontact",
        "version": "1.2.0"
      },
      {
        "model": "plcnext engineer",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "phoenixcontact",
        "version": "2023.6"
      },
      {
        "model": "trumpflicenseexpert",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "1.11.1"
      },
      {
        "model": "trutopsweld",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "7.0.198.241"
      },
      {
        "model": "trutops cell sw48",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "01.00"
      },
      {
        "model": "tops unfold",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "05.03.00.00"
      },
      {
        "model": "oseon",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "1.0.0"
      },
      {
        "model": "topscalculation",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "14.00"
      },
      {
        "model": "trutopsfab storage smallstore",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "trumpf",
        "version": "20.04.20.00"
      },
      {
        "model": "trutopsweld",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "programmingtube",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "codemeter runtime",
        "scope": null,
        "trust": 0.8,
        "vendor": "wibu",
        "version": null
      },
      {
        "model": "trutopsboost",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutopsprintmultilaserassistant",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutopsprint",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "oseon",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutops cell sw48",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutopsfab",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "tops unfold",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutops mark 3d",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutopsfab storage smallstore",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "tubedesign",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutops",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trumpflicenseexpert",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "topscalculation",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "teczonebend",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "trutops cell classic",
        "scope": null,
        "trust": 0.8,
        "vendor": "trumpf",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "simit simulation platform",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sinema remote connect",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "simatic wincc oa",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v3.17"
      },
      {
        "model": "simatic wincc oa",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v3.18"
      },
      {
        "model": "pss cape",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v14\u003cv14.2023-08-23"
      },
      {
        "model": "pss cape",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v15\u003cv15.0.22"
      },
      {
        "model": "pss e",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v34\u003cv34.9.6"
      },
      {
        "model": "pss odms",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v13.0"
      },
      {
        "model": "pss odms",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v13.1\u003cv13.1.12.1"
      },
      {
        "model": "simatic pcs neo",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v3"
      },
      {
        "model": "simatic pcs neo",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v4"
      },
      {
        "model": "simatic wincc oa p006",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v3.19\u003cv3.19"
      },
      {
        "model": "pss e",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "v35"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "cve": "CVE-2023-3935",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "HIGH",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "CNVD",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.6,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 4.9,
            "id": "CNVD-2023-69811",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 0.6,
            "vectorString": "AV:N/AC:H/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "info@cert.vde.com",
            "availabilityImpact": "HIGH",
            "baseScore": 9.8,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2023-3935",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "OTHER",
            "availabilityImpact": "High",
            "baseScore": 9.8,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "JVNDB-2023-012536",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "info@cert.vde.com",
            "id": "CVE-2023-3935",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2023-3935",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "OTHER",
            "id": "JVNDB-2023-012536",
            "trust": 0.8,
            "value": "Critical"
          },
          {
            "author": "CNVD",
            "id": "CNVD-2023-69811",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A heap buffer overflow vulnerability in Wibu CodeMeter Runtime network service up to version 7.60b allows an unauthenticated, remote attacker to achieve RCE and gain full access of the host system. Wibu-Systems AG of CodeMeter Runtime Products from multiple vendors, such as the following, contain out-of-bounds write vulnerabilities.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. PSS(R)CAPE is a transmission and distribution network protection simulation software. PSS(R)E is a power system simulation and analysis tool for transmission operation and planning. PSS(R)ODMS is a CIM-based network model management tool with network analysis capabilities for planning and operational planning of transmission utilities. SIMATIC PCS neo is a distributed control system (DCS). SIMATIC WinCC Open Architecture (OA) is part of the SIMATIC HMI family. It is designed for applications requiring a high degree of customer-specific adaptability, large or complex applications, and projects that impose specific system requirements or functionality. SIMIT Simulation Platform allows simulating factory settings to predict failures at an early planning stage. SINEC INS (Infrastructure Network Services) is a web-based application that combines various network services in one tool. SINEMA Remote Connect is a management platform for remote networks that allows simple management of tunnel connections (VPN) between headquarters, service technicians and installed machines or plants. \n\r\n\r\nSiemens Industrial product WIBU system CodeMeter has a heap buffer overflow vulnerability, which is caused by failure to perform correct boundary checks. An attacker could exploit this vulnerability to cause a buffer overflow and execute arbitrary code on the system",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-3935"
      }
    ],
    "trust": 2.25
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-3935",
        "trust": 3.3
      },
      {
        "db": "CERT@VDE",
        "id": "VDE-2023-031",
        "trust": 1.9
      },
      {
        "db": "CERT@VDE",
        "id": "VDE-2023-030",
        "trust": 1.8
      },
      {
        "db": "JVN",
        "id": "JVNVU92598492",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU92008538",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU98137233",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-004-01",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-320-03",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-257-06",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536",
        "trust": 0.8
      },
      {
        "db": "SIEMENS",
        "id": "SSA-240541",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-3935",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-3935"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "id": "VAR-202309-0672",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      }
    ],
    "trust": 1.1685151266666667
  },
  "iot_taxonomy": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "category": [
          "ICS"
        ],
        "sub_category": null,
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      }
    ]
  },
  "last_update_date": "2024-08-14T12:13:07.282000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Patch for Siemens Industrial product WIBU system CodeMeter heap buffer overflow vulnerability",
        "trust": 0.6,
        "url": "https://www.cnvd.org.cn/patchInfo/show/460931"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "Out-of-bounds writing (CWE-787) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://cdn.wibu.com/fileadmin/wibu_downloads/security_advisories/advisorywibu-230704-01-v3.0.pdf"
      },
      {
        "trust": 1.9,
        "url": "https://cert.vde.com/en/advisories/vde-2023-031/"
      },
      {
        "trust": 1.8,
        "url": "https://cert.vde.com/en/advisories/vde-2023-030/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu98137233/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu92598492/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu92008538/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-3935"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-257-06"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-320-03"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-004-01"
      },
      {
        "trust": 0.6,
        "url": "https://cert-portal.siemens.com/productcert/html/ssa-240541.html"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-3935"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "db": "VULMON",
        "id": "CVE-2023-3935"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-14T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "date": "2023-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2023-3935"
      },
      {
        "date": "2023-12-18T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "date": "2023-09-13T14:15:09.147000",
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-15T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2023-69811"
      },
      {
        "date": "2023-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2023-3935"
      },
      {
        "date": "2024-01-09T02:47:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      },
      {
        "date": "2024-01-25T20:24:58.783000",
        "db": "NVD",
        "id": "CVE-2023-3935"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Out-of-bounds write vulnerability in Wibu-Systems AG CodeMeter Runtime and in products from multiple vendors that embed it",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-012536"
      }
    ],
    "trust": 0.8
  }
}
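
The root cause of CVE-2023-3935 above is an out-of-bounds write (CWE-787): a copy driven by an attacker-controlled length that is never checked against the destination buffer. The sketch below illustrates the missing check in a generic length-prefixed record parser; the protocol is hypothetical and for illustration only, not CodeMeter's actual wire format or code.

```python
import struct

def parse_record(buf: bytes) -> bytes:
    """Parse one record: 4-byte big-endian payload length, then payload.

    The declared length is validated against the real buffer size before
    any copy -- exactly the class of bounds check whose absence turns a
    parser into a heap buffer overflow (CWE-787).
    """
    if len(buf) < 4:
        raise ValueError("truncated header")
    (declared,) = struct.unpack(">I", buf[:4])
    payload = buf[4:]
    if declared > len(payload):
        # A native parser that trusted `declared` here would copy past the
        # end of a heap allocation sized from the data actually received.
        raise ValueError("declared length exceeds available data")
    return payload[:declared]
```

In a memory-safe language the unchecked case raises an exception; in C or C++ the same mistake silently corrupts adjacent heap memory, which is what makes this class of flaw exploitable for remote code execution.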

var-202108-1941
Vulnerability from variot

axios is vulnerable to inefficient regular expression complexity (ReDoS). This resource exhaustion vulnerability in axios may result in a denial-of-service (DoS) condition. Pillow is a Python-based image processing library. No further information about this vulnerability is currently available; follow CNNVD or vendor announcements for updates. Relevant releases/architectures:

2.0 - ppc64le, s390x, x86_64

  1. Solution:

The OpenShift Service Mesh release notes provide information on the features and known issues:

https://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html

  1. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  2. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:0055

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Security Fix(es):

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
  • grafana: Snapshot authentication bypass (CVE-2021-39226)
  • golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
  • nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
  • golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
  • grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
  • grafana: directory traversal vulnerability (CVE-2021-43813)
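
The axios issue listed above (CVE-2021-3749, regular expression denial of service in the trim function) belongs to a well-understood class: a regex with nested quantifiers backtracks exponentially on near-matching input. The following is an illustrative sketch only; the pattern shown is a textbook backtracking-prone regex, not axios's actual trim expression.

```python
import re

# Illustrative backtracking-prone pattern -- NOT the actual axios regex.
# The nested quantifier (\s+)+ forces the engine to try every way of
# splitting a run of whitespace between the inner and outer repeats.
VULNERABLE = re.compile(r"^(\s+)+$")

def safe_trim(s: str) -> str:
    """Linear-time trim; str.strip() avoids the regex engine entirely."""
    return s.strip()

# An almost-matching input: all whitespace except the final character.
# Each extra space roughly doubles the backtracking work before the
# engine can report failure; large inputs stall the process (DoS).
adversarial = " " * 18 + "x"
assert VULNERABLE.match(adversarial) is None   # fails only after heavy backtracking
assert VULNERABLE.match(" " * 18) is not None  # matching input is fast
assert safe_trim("  payload  ") == "payload"
```

The general mitigation is the same one the axios fix took: avoid nested-quantifier regexes on untrusted input and prefer a linear-time operation such as the built-in trim.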

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64

The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x

The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le

The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
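
Each release image above is pinned by its sha256 digest rather than a mutable tag. An OCI-style digest is simply the SHA-256 hash of the manifest bytes, so any client can recompute and compare it. A minimal sketch of that check, with placeholder manifest content (not a real release manifest):

```python
import hashlib

def registry_digest(manifest_bytes: bytes) -> str:
    """Return an OCI-style digest string ("sha256:<hex>") for manifest bytes."""
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

def verify(manifest_bytes: bytes, pinned: str) -> bool:
    # Identical bytes yield an identical digest; any tampering changes it.
    return registry_digest(manifest_bytes) == pinned

manifest = b'{"schemaVersion": 2}'   # placeholder data for illustration
pinned = registry_digest(manifest)
assert verify(manifest, pinned)
assert not verify(manifest + b" ", pinned)
```

Pinning by digest is what makes `oc adm release info` output trustworthy: the reference identifies exact content, not whatever a tag currently points at.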

All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Solution:

For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1808240 - Always return metrics value for pods under the user's namespace 1815189 - feature flagged UI does not always become available after operator installation 1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters 1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly 1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal 1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered 1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback 1880738 - origin e2e test deletes original worker 1882983 - oVirt csi driver should refuse to provision RWX and ROX PV 1886450 - Keepalived router id check not documented for RHV/VMware IPI 1889488 - The metrics endpoint for the Scheduler is not protected by RBAC 1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom 1896474 - Path based routing is broken for some combinations 1897431 - CIDR support for additional network attachment with the bridge CNI plug-in 1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes 1907433 - Excessive logging in image operator 1909906 - The router fails with PANIC error when stats port already in use 1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words 1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true) 1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource 1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1926522 - oc adm catalog does not clean temporary files 1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. 1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown 1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users 1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x 1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade 1937085 - RHV UPI inventory playbook missing guarantee_memory 1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion 1938236 - vsphere-problem-detector does not support overriding log levels via storage CR 1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods 1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer 1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s] 1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays. 
1943363 - [ovn] CNO should gracefully terminate ovn-northd 1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17 1948080 - authentication should not set Available=False APIServices_Error with 503s 1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set 1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0 1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer 1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs 1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container 1955300 - Machine config operator reports unavailable for 23m during upgrade 1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set 1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set 1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters 1956496 - Needs SR-IOV Docs Upstream 1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret 1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid 1956964 - upload a boot-source to OpenShift virtualization using the console 1957547 - [RFE]VM name is not auto filled in dev console 1958349 - ovn-controller doesn't release the memory after cluster-density run 1959352 - [scale] failed to get pod annotation: timed out waiting for annotations 1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not 1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial] 1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects 1961391 - String updates 1961509 - DHCP daemon pod should have CPU and memory 
requests set but not limits 1962066 - Edit machine/machineset specs not working 1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1963053 - oc whoami --show-console should show the web console URL, not the server api URL 1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters 1964327 - Support containers with name:tag@digest 1964789 - Send keys and disconnect does not work for VNC console 1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7 1966445 - Unmasking a service doesn't work if it masked using MCO 1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead 1966521 - kube-proxy's userspace implementation consumes excessive CPU 1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up 1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount 1970218 - MCO writes incorrect file contents if compression field is specified 1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel] 1970805 - Cannot create build when docker image url contains dir structure 1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io 1972827 - image registry does not remain available during upgrade 1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror 1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run 1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established 1976301 - [ci] e2e-azure-upi is permafailing 1976399 - During the upgrade from OpenShift 
4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. 1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform 1976894 - Unidling a StatefulSet does not work as expected 1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases 1977414 - Build Config timed out waiting for condition 400: Bad Request 1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus 1978528 - systemd-coredump started and failed intermittently for unknown reasons 1978581 - machine-config-operator: remove runlevel from mco namespace 1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable 1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9 1979966 - OCP builds always fail when run on RHEL7 nodes 1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading 1981549 - Machine-config daemon does not recover from broken Proxy configuration 1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel] 1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues 1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page 1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands 1982662 - Workloads - DaemonSets - Add storage: i18n misses 1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters 1983758 - upgrades are failing on disruptive tests 1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma" 1984592 - global pull secret not working in OCP4.7.4+ for additional private registries 1985073 - new-in-4.8 
ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs 1985486 - Cluster Proxy not used during installation on OSP with Kuryr 1985724 - VM Details Page missing translations 1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted 1985933 - Downstream image registry recommendation 1985965 - oVirt CSI driver does not report volume stats 1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding" 1986237 - "MachineNotYetDeleted" in Pending state , alert not fired 1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid" 1986302 - console continues to fetch prometheus alert and silences for normal user 1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI 1986338 - error creating list of resources in Import YAML 1986502 - yaml multi file dnd duplicates previous dragged files 1986819 - fix string typos for hot-plug disks 1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure 1987136 - Declare operatorframework.io/arch. 
labels for all operators 1987257 - Go-http-client user-agent being used for oc adm mirror requests 1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold 1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP 1988406 - SSH key dropped when selecting "Customize virtual machine" in UI 1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade 1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs 1989438 - expected replicas is wrong 1989502 - Developer Catalog is disappearing after short time 1989843 - 'More' and 'Show Less' functions are not translated on several page 1990014 - oc debug does not work for Windows pods 1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created 1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page 1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar 1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI 1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi- symlinks 1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var 1990625 - Ironic agent registers with SLAAC address with privacy-stable 1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time 1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
1991573 - Enable typescript strictNullCheck on network-policies files 1991641 - Baremetal Cluster Operator still Available After Delete Provisioning 1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator 1991819 - Misspelled word "ocurred" in oc inspect cmd 1991942 - Alignment and spacing fixes 1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked 1992453 - The configMap failed to save on VM environment tab 1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab 1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab 1992509 - Could not customize boot source due to source PVC not found 1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines 1992580 - storageProfile should stay with the same value by check/uncheck the apply button 1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply 1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios 1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed) 1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing 1994094 - Some hardcodes are detected at the code level in OpenShift console components 1994142 - Missing required cloud config fields for IBM Cloud 1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools 1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart 1995335 - [SCALE] ovnkube CNI: remove ovs flows check 1995493 - Add Secret to workload button and Actions button are not aligned on secret details page 1995531 - Create RDO-based Ironic image to be promoted to OKD 1995545 - Project drop-down 
amalgamates inside main screen while creating storage system for odf-operator 1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs 1995924 - CMO should report Upgradeable: false when HA workload is incorrectly spread 1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole 1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN 1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down 1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page 1996647 - Provide more useful degraded message in auth operator on DNS errors 1996736 - Large number of 501 lr-policies in INCI2 env 1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes 1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP 1996928 - Enable default operator indexes on ARM 1997028 - prometheus-operator update removes env var support for thanos-sidecar 1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used 1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller. 
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI 1997269 - Have to refresh console to install kube-descheduler 1997478 - Storage operator is not available after reboot cluster instances 1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 1997967 - storageClass is not reserved from default wizard to customize wizard 1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order 1998038 - [e2e][automation] add tests for UI for VM disk hot-plug 1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus 1998174 - Create storageclass gp3-csi after install ocp cluster on aws 1998183 - "r: Bad Gateway" info is improper 1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected 1998377 - Filesystem table head is not full displayed in disk tab 1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory 1998519 - Add fstype when create localvolumeset instance on web console 1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses 1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page 1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable 1999091 - Console update toast notification can appear multiple times 1999133 - removing and recreating static pod manifest leaves pod in error state 1999246 - .indexignore is not ingore when oc command load dc configuration 1999250 - ArgoCD in GitOps operator can't manage namespaces 1999255 - ovnkube-node always crashes out the first time it starts 1999261 - ovnkube-node log spam (and security token leak?) 
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page 1999314 - console-operator is slow to mark Degraded as False once console starts working 1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck) 1999556 - "master" pool should be updated before the CVO reports available at the new version occurred 1999578 - AWS EFS CSI tests are constantly failing 1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages 1999619 - cloudinit is malformatted if a user sets a password during VM creation flow 1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow 1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined 1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub) 1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource 1999771 - revert "force cert rotation every couple days for development" in 4.10 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace. 
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig --image= -- "
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirement settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reset to “” when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentation link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two titles 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVirt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashes on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim column value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn’t enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as “BestEffort” qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error .
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store , backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking"Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size’s vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region (‘cn-hangzhou’) selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn’t triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 -
installing a cluster on aws, gcp always fails with "Error: Incompatible provider version" 2034248 - GPU/Host device modal is too small 2034257 - regular user Create VM missing permissions alert 2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial] 2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere 2034300 - Du validator policy is NonCompliant after DU configuration completed 2034319 - Negation constraint is not validating packages 2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology 2034350 - The CNO should implement the Whereabouts IP reconciliation cron job 2034362 - update description of disk interface 2034398 - The Whereabouts IPPools CRD should include the podref field 2034409 - Default CatalogSources should be pointing to 4.10 index images 2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics 2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode 2034460 - Summary: cloud-network-config-controller does not account for different environment 2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true 2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly 2034493 - Change cluster version operator log level 2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list 2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6 2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer 2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART 2034537 - Update team 2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds 2034563 - [Azure] create machine with wrong 
ephemeralStorageLocation value success 2034577 - Current OVN gateway mode should be reflected on node annotation as well 2034621 - context menu not popping up for application group 2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10 2034624 - Warn about unsupported CSI driver in vsphere operator 2034647 - missing volumes list in snapshot modal 2034648 - Rebase openshift-controller-manager to 1.23 2034650 - Rebase openshift/builder to 1.23 2034705 - vSphere: storage e2e tests logging configuration data 2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail. 2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment 2034785 - ptpconfig with summary_interval cannot be applied 2034823 - RHEL9 should be starred in template list 2034838 - An external router can inject routes if no service is added 2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent 2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty 2034881 - Cloud providers components should use K8s 1.23 dependencies 2034884 - ART cannot build the image because it tries to download controller-gen 2034889 - oc adm prune deployments does not work 2034898 - Regression in recently added Events feature 2034957 - update openshift-apiserver to kube 1.23.1 2035015 - ClusterLogForwarding CR remains stuck remediating forever 2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster 2035141 - [RFE] Show GPU/Host devices in template's details tab 2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC 2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting 2035199 - IPv6 support in mtu-migration-dispatcher.yaml 2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing 2035250 - Peering with ebgp peer over multi-hops doesn't 
work 2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices 2035315 - invalid test cases for AWS passthrough mode 2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env 2035321 - Add Sprint 211 translations 2035326 - [ExternalCloudProvider] installation with additional network on workers fails 2035328 - Ccoctl does not ignore credentials request manifest marked for deletion 2035333 - Kuryr orphans ports on 504 errors from Neutron 2035348 - Fix two grammar issues in kubevirt-plugin.json strings 2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets 2035409 - OLM E2E test depends on operator package that's no longer published 2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address 2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1' 2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster 2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page 2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers 2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class 2035602 - [e2e][automation] add tests for Virtualization Overview page cards 2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly 2035704 - RoleBindings list page filter doesn't apply 2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing. 
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failure
2035772 - AccessMode and VolumeMode are not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential request in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a" shows "Error: Peer netns reference is invalid" after creating test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - Newly added cloud-network-config operator doesn't support aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with a valid mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enabling multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings are being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log levels in ibm-vpc-block-csi-controller are hard coded
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color does not match knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver] [Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configuring a custom certificate for the default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver] [Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation: openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page becomes blank
2038768 - All the filters on the Observe->Targets page don't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After rebooting the egress node, the egressip was lost from the egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support running pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP] cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After rebooting the egress node, the egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade] Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
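Entry 2038240 above hinges on the two representations of the same file mode: Ignition/MachineConfig file `mode` fields take a plain integer, so the octal permission 0777 has to be written as decimal 511 (and 0644 as 420). As a minimal sketch of that conversion — the `decimalMode` helper is illustrative, not part of any OpenShift API:

```go
package main

import (
	"fmt"
	"strconv"
)

// decimalMode converts a familiar octal permission string such as "0644"
// into the plain decimal integer expected by an Ignition/MachineConfig
// "mode" field. Illustrative helper only.
func decimalMode(octal string) (int64, error) {
	// Parse the string explicitly as base 8.
	return strconv.ParseInt(octal, 8, 32)
}

func main() {
	for _, s := range []string{"0644", "0777"} {
		d, err := decimalMode(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("octal %s -> decimal %d\n", s, d) // 0644 -> 420, 0777 -> 511
	}
}
```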
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if one network parameter is left empty
2039619 - [AWS] In-tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when creating an image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when opening operator hub detail page (and maybe others as well)
2039756 - React missing key warning when opening KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to a non-existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated. This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pick up change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Changing AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because of missing podman dependencies
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more than one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbound when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normally on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed to add the same bgp advertisement twice in BGP address pool
2042265 - [IBM] "--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Editing a machineset with the same machine deletion hook name succeeds
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug 2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after a single egressIP for a netnamespace is switched to the other node of the HA pair after the first egress node is shut down
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io] Cluster common user uses the storageclass without the parameter "csi.storage.k8s.io/fstype" to create a pvc; the pod starts successfully but writing data to the pod's volume fails with "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spams "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there aren't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud] "--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because the Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console 2047455 - [IBM Cloud] Update custom image os type 2047496 - Add image digest feature 2047779 - do not degrade cluster if storagepolicy creation fails 2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2047929 - use lease for leader election 2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel] 2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services 2048048 - Application tab in User Preferences dropdown menus are too wide. 2048050 - Topology list view items are not highlighted on keyboard navigation 2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value 2048413 - Bond CNI: Failed to attach Bond NAD to pod 2048443 - Image registry operator panics when finalizes config deletion 2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-* 2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt 2048598 - Web terminal view is broken 2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure 2048891 - Topology page is crashed 2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class 2049043 - Cannot create VM from template 2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used 2049886 - Placeholder bug for OCP 4.10.0 metadata release 2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning 2050189 - [aws-efs-csi-driver] Merge upstream changes since 
v1.3.2 2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0 2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension' 2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s] 2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members 2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes 2050370 - alert data for burn budget needs to be updated to prevent regression 2050393 - ZTP missing support for local image registry and custom machine config 2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud 2050737 - Remove metrics and events for master port offsets 2050801 - Vsphere upi tries to access vsphere during manifests generation phase 2050883 - Logger object in LSO does not log source location accurately 2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit 2052062 - Whereabouts should implement client-go 1.22+ 2052125 - [4.10] Crio appears to be coredumping in some scenarios 2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config 2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests 2052598 - kube-scheduler should use configmap lease 2052599 - kube-controller-manger should use configmap lease 2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh 2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid 2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop 2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. 2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1 2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch 2052756 - [4.10] PVs are not being cleaned up after PVC deletion 2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index 2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format" 2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs 2053268 - inability to detect static lifecycle failure 2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates 2053323 - OpenShift-Ansible BYOH Unit Tests are Broken 2053339 - Remove dev preview badge from IBM FlashSystem deployment windows 2053751 - ztp-site-generate container is missing convenience entrypoint 2053945 - [4.10] Failed to apply sriov policy on intel nics 2054109 - Missing "app" label 2054154 - RoleBinding in project without subject is causing "Project access" page to fail 2054244 - Latest pipeline run should be listed on the top of the pipeline run list 2054288 - console-master-e2e-gcp-console is broken 2054562 - DPU network operator 4.10 branch need to sync with master 2054897 - Unable to deploy hw-event-proxy operator 2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently 2055358 - Summary Interval Hardcoded in PTP Operator 
if Set in the Global Body Instead of Command Line 2055371 - Remove Check which enforces summary_interval must match logSyncInterval 2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11 2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API 2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured 2056479 - ovirt-csi-driver-node pods are crashing intermittently 2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope" 2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes" 2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs 2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation 2056948 - post 1.23 rebase: regression in service-load balancer reliability 2057438 - Service Level Agreement (SLA) always show 'Unknown' 2057721 - Fix Proxy support in RHACM 2.4.2 2057724 - Image creation fails when NMstateConfig CR is empty 2058641 - [4.10] Pod density test causing problems when using kube-burner 2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install 2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials 2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc

  1. References:

https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

Summary:

The Migration Toolkit for Containers (MTC) 1.6.0 is now available.

Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Solution:

For details on how to install and use MTC, refer to:

https://docs.openshift.com/container-platform/4.8/migration_toolkit_for_containers/installing-mtc.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1878824 - Web console is not accessible when deployed on OpenShift cluster on IBM Cloud 1887526 - "Stage" pods fail when migrating from classic OpenShift source cluster on IBM Cloud with block storage 1899562 - MigMigration custom resource does not display an error message when a migration fails because of volume mount error 1936886 - Service account token of existing remote cluster cannot be updated by using the web console 1936894 - "Ready" status of MigHook and MigPlan custom resources is not synchronized automatically 1949117 - "Migration plan resources" page displays a permanent error message when a migration plan is deleted from the backend 1951869 - MigPlan custom resource does not detect invalid source cluster reference 1968621 - Paused deployment config causes a migration to hang 1970338 - Parallel migrations fail because the initial backup is missing 1974737 - Migration plan name length in the "Migration plan" wizard is not validated 1975369 - "Debug view" link text on "Migration plans" page can be improved 1975372 - Destination namespace in MigPlan custom resource is not validated 1976895 - Namespace mapping cannot be changed using the Migration Plan wizard 1981810 - "Excluded" resources are not excluded from the migration 1982026 - Direct image migration fails if the source URI contains a double slash ("//") 1994985 - Web console crashes when a MigPlan custom resource is created with an empty namespaces list 1996169 - When "None" is selected as the target storage class in the web console, the setting is ignored and the default storage class is used 1996627 - MigPlan custom resource displays a "PvUsageAnalysisFailed" warning after a successful PVC migration 1996784 - "Migration resources" tree on the "Migration details" page is not displayed 1996902 - "Select all" checkbox on the "Namespaces" page of the "Migration plan" wizard remains selected after a namespace is unselected 1996904 - "Migration" dialogs on the "Migration plans" page display 
inconsistent capitalization 1996906 - "Migration details" page link is displayed for a migration plan with no associated migrations 1996938 - Search function on "Migration plans" page displays no results 1997051 - Indirect migration from MTC 1.5.1 to 1.6.0 fails during "StageBackup" phase 1997127 - Direct volume migration "retry" feature does not work correctly after a network failure 1997173 - Migration of custom resource definitions to OpenShift Container Platform 4.9 fails because of API version incompatibility 1997180 - "migration-log-reader" pod does not log invalid Rsync options 1997665 - Selected PVCs in the "State migration" dialog are reset because of background polling 1997694 - "Update operator" link on the "Clusters" page is incorrect 1997827 - "Migration plan" wizard displays PVC names incorrectly formatted after running state migration 1998062 - Rsync pod uses upstream image 1998283 - "Migration step details" link on the "Migrations" page does not work 1998550 - "Migration plan" wizard does not support certain screen resolutions 1998581 - "Migration details" link on "Migration plans" page displays "latestIsFailed" error 1999113 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration 1999381 - MigPlan custom resource displays "Stage completed with warnings" status after successful migration 1999528 - Position of the "Add migration plan" button is different from the other "Add" buttons 1999765 - "Migrate" button on "State migration" dialog is enabled when no PVCs are selected 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 2000205 - "Options" menu on the "Migration details" page displays incorrect items 2000218 - Validation incorrectly blocks namespace mapping if a source cluster namespace is the same as the destination namespace 2000243 - "Migration plan" wizard does not allow a migration within the same cluster 2000644 - Invalid migration plan causes 
"controller" pod to crash 2000875 - State migration status on "Migrations" page displays "Stage succeeded" message 2000979 - "clusterIPs" parameter of "service" object can cause Velero errors 2001089 - Direct volume migration fails because of missing CA path configuration 2001173 - Migration plan requires two clusters 2001786 - Migration fails during "Stage Backup" step because volume path on host not found 2001829 - Migration does not complete when the namespace contains a cron job with a PVC 2001941 - Fixing PVC conflicts in state migration plan using the web console causes the migration to run twice 2002420 - "Stage" pod not created for completed application pod, causing the "mig-controller" to stall 2002608 - Migration of unmounted PVC fails during "StageBackup" phase 2002897 - Rollback migration does not complete when the namespace contains a cron job 2003603 - "View logs" dialog displays the "--selector" option, which does not print all logs 2004601 - Migration plan status on "Migration plans" page is "Ready" after migration completed with warnings 2004923 - Web console displays "New operator version available" notification for incorrect operator 2005143 - Combining Rsync and Stunnel in a single pod can degrade performance 2006316 - Web console cannot create migration plan in a proxy environment 2007175 - Web console cannot be launched in a proxy environment

  1. JIRA issues fixed (https://issues.jboss.org/):

MIG-785 - Search for "Crane" in the Operator Hub should display the Migration Toolkit for Containers

  1. Description:

The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:

OpenShift Dedicated support

RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.

Use OpenShift OAuth server as an identity provider

If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.

Enhancements for CI outputs

Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.

Runtime Class policy criteria

Users can now use RHACS to define the container runtime configuration that may be used to run a pod’s containers using the Runtime Class policy criteria.

Bug Fixes The release of RHACS 3.67 includes the following bug fixes:

• Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles. This has been fixed.

• Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of the apk package in the image rather than the apk-tools package. This issue has been fixed.

System changes The release of RHACS 3.67 includes the following system changes:

• Scanner now identifies vulnerabilities in Ubuntu 21.10 images.

• The Port exposure method policy criteria now include route as an exposure method.

• The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.

• The OpenShift Compliance Operator integration now supports using TailoredProfiles.

• The RHACS Jenkins plugin now provides additional security information.

• When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.

• The default uid:gid pair for the Scanner image is now 65534:65534.

• RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.

• In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.

• You can now format the output of the following roxctl CLI commands in table, csv, or JSON format: image scan, image check, and deployment check.

• You can now use a regular expression for the deployment name while specifying policy exclusions.

  3. Solution:

To take advantage of these new features, fixes, and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.

Bugs fixed (https://bugzilla.redhat.com/):

1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1978144 - CVE-2021-32690 helm: information disclosure vulnerability 1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability 2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) 2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API
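Several advisories above track CVE-2021-3749, a regular-expression denial of service in axios's trim helper. The bug class can be sketched in Python (an illustration of the pattern shape involved, not axios's actual code): a trim built on a backtracking alternation degrades quadratically on inputs with a long whitespace run, while the built-in trim runs in linear time.

```python
import re

# A trim of the same shape as the vulnerable helper: on input like
# "a" + " " * n + "b", the engine retries `\s+$` from every position
# in the whitespace run, giving O(n^2) behaviour — the ReDoS class
# described for CVE-2021-3749.
_TRIM_RE = re.compile(r"^\s+|\s+$")

def regex_trim(s: str) -> str:
    return _TRIM_RE.sub("", s)

def safe_trim(s: str) -> str:
    # The safe replacement: the language's built-in linear-time trim
    # (str.strip() here; String.prototype.trim in JavaScript).
    return s.strip()

assert regex_trim("  padded  ") == safe_trim("  padded  ") == "padded"
```

Both functions agree on well-formed input; only the regex version's running time blows up on hostile input, which is why this kind of fix simply drops the regex.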

  1. JIRA issues fixed (https://issues.jboss.org/):

RHACS-65 - Release RHACS 3.67.0

  1. Clusters and applications are all visible and managed from a single console, with security policy built in. See the following Release Notes documentation, which will be updated shortly, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/

Security fixes:

  • CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method

  • CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack

  • CVE-2021-32627: redis: Integer overflow issue with Streams

  • CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure

  • CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser

  • CVE-2021-32675: redis: Denial of service via Redis Standard Protocol (RESP) request

  • CVE-2021-32687: redis: Integer overflow issue with intsets

  • CVE-2021-32690: helm: information disclosure vulnerability

  • CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite

  • CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite

  • CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name

  • CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow

  • CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings

  • CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim function

  • CVE-2021-41099: redis: Integer overflow issue with strings
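The two nodejs-tar entries above (CVE-2021-32803 and CVE-2021-32804) are both instances of archive path traversal: extraction trusted member names containing symlinks or absolute paths. A minimal Python sketch of the defensive check (illustrating the general technique, not node-tar's actual fix; symlink targets would need the same treatment in a full solution):

```python
import io
import os.path
import tarfile

def is_within_directory(base: str, target: str) -> bool:
    # Resolve the candidate path and require it to stay under base.
    base = os.path.abspath(base)
    target = os.path.abspath(target)
    return os.path.commonpath([base, target]) == base

def safe_members(tar: tarfile.TarFile, dest: str):
    # Reject absolute paths and ".." traversal before extraction.
    for member in tar.getmembers():
        if member.name.startswith("/") or not is_within_directory(
            dest, os.path.join(dest, member.name)
        ):
            raise ValueError(f"blocked unsafe member: {member.name}")
        yield member

# Build an in-memory archive containing a traversal attempt.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="../evil.txt")
    data = b"pwned"
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

with tarfile.open(fileobj=buf, mode="r") as tar:
    try:
        list(safe_members(tar, "/tmp/extract"))
        blocked = False
    except ValueError:
        blocked = True
assert blocked
```

Recent Python releases add built-in tarfile extraction filters (PEP 706) for exactly this; the explicit check above just makes the rule visible.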

Bug fixes:

  • RFE ACM Application management UI doesn't reflect object status (Bugzilla #1965321)

  • RHACM 2.4 files (Bugzilla #1983663)

  • Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4 (Bugzilla #1993366)

  • submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla #1994668)

  • ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb (Bugzilla #2000274)

  • pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2003915)

  • InfraEnv condition does not reflect the actual error message (Bugzilla #2009204, #2010030)

  • Flaky test point to a nil pointer conditions list (Bugzilla #2010175)

  • InfraEnv status shows 'Failed to create image: internal error' (Bugzilla #2010272)

  • subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla #2013157)

  • pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2014084)

  • Bugs fixed (https://bugzilla.redhat.com/):

1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name 1965321 - RFE ACM Application management UI doesn't reflect object status 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1978144 - CVE-2021-32690 helm: information disclosure vulnerability 1983663 - RHACM 2.4.0 images 1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite 1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite 1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4 1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot 1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow 1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings 1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function 2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb 2003915 - pre-network-manager-config failed due to timeout when static config is used 2009204 - InfraEnv condition does not reflect the actual error message 2010030 - InfraEnv condition does not reflect the actual error message 2010175 - Flaky test point to a nil pointer conditions list 2010272 - InfraEnv status shows 'Failed to create image: internal error 2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets 2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request 2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser 2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure 2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams 2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack 2011020 - CVE-2021-41099 redis: Integer overflow issue with 
strings 2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks 2014084 - pre-network-manager-config failed due to timeout when static config is used



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202108-1941",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "axios",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "axios",
        "version": "0.21.1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "goldengate",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.1"
      },
      {
        "model": "goldengate",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.7.0.0.0"
      },
      {
        "model": "axios",
        "scope": null,
        "trust": 0.8,
        "vendor": "axios",
        "version": null
      },
      {
        "model": "axios",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "axios",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens reported these vulnerabilities to CISA.",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2021-3749",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "COMPLETE",
            "baseScore": 7.8,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "CVE-2021-3749",
            "impactScore": 6.9,
            "integrityImpact": "NONE",
            "severity": "HIGH",
            "trust": 1.9,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "security@huntr.dev",
            "availabilityImpact": "HIGH",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2021-3749",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.8,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.0"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2021-3749",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-3749",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "security@huntr.dev",
            "id": "CVE-2021-3749",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-3749",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202104-975",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202108-2780",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-3749",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
        "data": "axios is vulnerable to Inefficient Regular Expression Complexity. A resource exhaustion vulnerability exists in axios that may result in a denial-of-service (DoS) condition. Relevant releases/architectures:\n\n2.0 - ppc64le, s390x, x86_64\n\n3. Solution:\n\nThe OpenShift Service Mesh release notes provide information on the\nfeatures and known issues:\n\nhttps://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html\n\n5.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID:       RHSA-2022:0056-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:0056\nIssue date:        2022-03-10\nCVE Names:         CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n                   CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n                   CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n                   CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n                   CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n                   CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n                   CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n                   CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n                   CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n                   CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n                   CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n                   CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n           
        CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n                   CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n                   CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n                   CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n                   CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n                   CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n                   CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n                   CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n                   CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n                   CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n                   CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n                   CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n                   CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n                   CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n                   CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n                   CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n                   CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n                   CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n                   CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n                   CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n                   CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n                   CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n                   CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n                   CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n                   CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for  additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. 
\n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t resilient to WAL replays. 
\n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String updates\n1961509 - DHCP daemon 
pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established\n1976301 - [ci] 
e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n1976674 - CCO didn\u0027t set Upgradeable to False when cco mode is configured to Manual on azure platform\n1976894 - Unidling a StatefulSet does not work as expected\n1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases\n1977414 - Build Config timed out waiting for condition 400: Bad Request\n1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus\n1978528 - systemd-coredump started and failed intermittently for unknown reasons\n1978581 - machine-config-operator: remove runlevel from mco namespace\n1979562 - Cluster operators: don\u0027t show messages when neither progressing, degraded or unavailable\n1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9\n1979966 - OCP builds always fail when run on RHEL7 nodes\n1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading\n1981549 - Machine-config daemon does not recover from broken Proxy configuration\n1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]\n1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues\n1982063 - \u0027Control Plane\u0027  is not translated in Simplified Chinese language in Home-\u003eOverview page\n1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands\n1982662 - Workloads - DaemonSets - Add storage: i18n misses\n1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE \"*/secrets/encryption-config\" on single node clusters\n1983758 - upgrades are failing on disruptive tests\n1983964 - Need Device plugin configuration for the NIC \"needVhostNet\" \u0026 
\"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with \"Unable 
to connect to the server\"\n1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs\n1989438 - expected replicas is wrong\n1989502 - Developer Catalog is disappearing after short time\n1989843 - \u0027More\u0027 and \u0027Show Less\u0027 functions are not translated on several page\n1990014 - oc debug \u003cpod-name\u003e does not work for Windows pods\n1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created\n1990193 - \u0027more\u0027 and \u0027Show Less\u0027  is not being translated on Home -\u003e Search page\n1990255 - Partial or all of the Nodes/StorageClasses don\u0027t appear back on UI after text is removed from search bar\n1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI\n1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks\n1990556 - get-resources.sh doesn\u0027t honor the no_proxy settings even with no_proxy var\n1990625 - Ironic agent registers with SLAAC address with privacy-stable\n1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time\n1991067 - github.com can not be resolved inside pods where cluster is running on openstack. 
\n1991573 - Enable typescript strictNullCheck on network-policies files\n1991641 - Baremetal Cluster Operator still Available After Delete Provisioning\n1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator\n1991819 - Misspelled word \"ocurred\"  in oc inspect cmd\n1991942 - Alignment and spacing fixes\n1992414 - Two rootdisks show on storage step if \u0027This is a CD-ROM boot source\u0027  is checked\n1992453 - The configMap failed to save on VM environment tab\n1992466 - The button \u0027Save\u0027 and \u0027Reload\u0027 are not translated on vm environment tab\n1992475 - The button \u0027Open console in New Window\u0027 and \u0027Disconnect\u0027 are not translated on vm console tab\n1992509 - Could not customize boot source due to source PVC not found\n1992541 - all the alert rules\u0027 annotations \"summary\" and \"description\" should comply with the OpenShift alerting guidelines\n1992580 - storageProfile should stay with the same value by check/uncheck the apply button\n1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply\n1992777 - [IBMCLOUD] Default \"ibm_iam_authorization_policy\" is not working as expected in all scenarios\n1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)\n1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing\n1994094 - Some hardcodes are detected at the code level in OpenShift console components\n1994142 - Missing required cloud config fields for IBM Cloud\n1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools\n1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart\n1995335 - [SCALE] ovnkube CNI: remove ovs flows check\n1995493 - Add Secret to workload button and Actions button are not aligned on secret details 
page\n1995531 - Create RDO-based Ironic image to be promoted to OKD\n1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator\n1995887 - [OVN]After reboot egress node,  lr-policy-list was not correct, some duplicate records or missed internal IPs\n1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread\n1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole\n1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN\n1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down\n1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page\n1996647 - Provide more useful degraded message in auth operator on DNS errors\n1996736 - Large number of 501 lr-policies in INCI2 env\n1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes\n1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP\n1996928 - Enable default operator indexes on ARM\n1997028 - prometheus-operator update removes env var support for thanos-sidecar\n1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used\n1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller. 
\n1997245 - \"Subscription already exists in openshift-storage namespace\" error message is seen while installing odf-operator via UI\n1997269 - Have to refresh console to install kube-descheduler\n1997478 - Storage operator is not available after reboot cluster instances\n1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n1997967 - storageClass is not reserved from default wizard to customize wizard\n1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order\n1998038 - [e2e][automation] add tests for UI for VM disk hot-plug\n1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus\n1998174 - Create storageclass gp3-csi  after install ocp cluster on aws\n1998183 - \"r: Bad Gateway\" info is improper\n1998235 - Firefox warning: Cookie \u201ccsrf-token\u201d will be soon rejected\n1998377 - Filesystem table head is not full displayed in disk tab\n1998378 - Virtual Machine is \u0027Not available\u0027 in Home -\u003e Overview -\u003e Cluster inventory\n1998519 - Add fstype when create localvolumeset instance on web console\n1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses\n1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page\n1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable\n1999091 - Console update toast notification can appear multiple times\n1999133 - removing and recreating static pod manifest leaves pod in error state\n1999246 - .indexignore is not ingore when oc command load dc configuration\n1999250 - ArgoCD in GitOps operator can\u0027t manage namespaces\n1999255 - ovnkube-node always crashes out the first time it starts\n1999261 - ovnkube-node log spam (and security token leak?)\n1999309 - While installing odf-operator via UI, web console update pop-up 
navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to “” when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart 'x% used' is in English
2015549 - Observe - Metrics: Column heading and pagination text is in English
2015557 - Workloads - DeploymentConfigs : Error message is in English
2015568 - Compute - Nodes : CPU column's values are in English
2015635 - Storage operator fails causing installation to fail on ASH
2015660 - "Finishing boot source customization" screen should not use term "patched"
2015793 - [hypershift] The collect-profiles job's pods should run on the control-plane node
2015806 - Metrics view in Deployment reports "Forbidden" when not cluster-admin
2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning
2015837 - OS_CLOUD overwrites install-config's platform.openstack.cloud
2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch
2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail
2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)
2016008 - [4.10] Bootimage bump tracker
2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver
2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator
2016054 - No e2e CI presubmit configured for release component cluster-autoscaler
2016055 - No e2e CI presubmit configured for release component console
2016058 - openshift-sync does not synchronise in "ose-jenkins:v4.8"
2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager
2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers
2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters.
2016179 - Add Sprint 208 translations
2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager
2016235 - should update to 7.5.11 for grafana resources version label
2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails
2016334 - shiftstack: SRIOV nic reported as not supported
2016352 - Some pods start before CA resources are present
2016367 - Empty task box is getting created for a pipeline without finally task
2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts
2016438 - Feature flag gating is missing in few extensions contributed via knative plugin
2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc
2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets
2016453 - Complete i18n for GaugeChart defaults
2016479 - iface-id-ver is not getting updated for existing lsp
2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear
2016951 - dynamic actions list is not disabling "open console" for stopped vms
2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available
2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances
2017016 - [REF] Virtualization menu
2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn
2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly
2017130 - t is not a function error navigating to details page
2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue
2017244 - ovirt csi operator static files creation is in the wrong order
2017276 - [4.10] Volume mounts not created with the correct security context
2017327 - When run opm index prune failed with error removing operator package cic-operator FOREIGN KEY constraint failed.
2017427 - NTO does not restart TuneD daemon when profile application is taking too long
2017535 - Broken Argo CD link image on GitOps Details Page
2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references
2017564 - On-prem prepender dispatcher script overwrites DNS search settings
2017565 - CCMO does not handle additionalTrustBundle on Azure Stack
2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice
2017606 - [e2e][automation] add test to verify send key for VNC console
2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes
2017656 - VM IP address is "undefined" under VM details -> ssh field
2017663 - SSH password authentication is disabled when public key is not supplied
2017680 - [gcp] Couldn't enable support for instances with GPUs on GCP
2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set
2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource
2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults
2017761 - [e2e][automation] dummy bug for 4.9 test dependency
2017872 - Add Sprint 209 translations
2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances
2017879 - Add Chinese translation for "alternate"
2017882 - multus: add handling of pod UIDs passed from runtime
2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods
2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI
2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS
2018094 - the tooltip length is limited
2018152 - CNI pod is not restarted when It cannot start servers due to ports being used
2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time
2018234 - user settings are saved in local storage instead of on cluster
2018264 - Delete Export button doesn't work in topology sidebar (general issue with unknown CSV?)
2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)
2018275 - Topology graph doesn't show context menu for Export CSV
2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked
2018380 - Migrate docs links to access.redhat.com
2018413 - Error: context deadline exceeded, OCP 4.8.9
2018428 - PVC is deleted along with VM even with "Delete Disks" unchecked
2018445 - [e2e][automation] enhance tests for downstream
2018446 - [e2e][automation] move tests to different level
2018449 - [e2e][automation] add test about create/delete network attachment definition
2018490 - [4.10] Image provisioning fails with file name too long
2018495 - Fix typo in internationalization README
2018542 - Kernel upgrade does not reconcile DaemonSet
2018880 - Get 'No datapoints found.' when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit
2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes
2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950
2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10
2018985 - The rootdisk size is 15Gi of windows VM in customize wizard
2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync.
2019096 - Update SRO leader election timeout to support SNO
2019129 - SRO in operator hub points to wrong repo for README
2019181 - Performance profile does not apply
2019198 - ptp offset metrics are not named according to the log output
2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest
2019284 - Stop action should not in the action list while VMI is not running
2019346 - zombie processes accumulation and Argument list too long
2019360 - [RFE] Virtualization Overview page
2019452 - Logger object in LSO appends to existing logger recursively
2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect
2019634 - Pause and migration is enabled in action list for a user who has view only permission
2019636 - Actions in VM tabs should be disabled when user has view only permission
2019639 - "Take snapshot" should be disabled while VM image is still been importing
2019645 - Create button is not removed on "Virtual Machines" page for view only user
2019646 - Permission error should pop-up immediately while clicking "Create VM" button on template page for view only user
2019647 - "Remove favorite" and "Create new Template" should be disabled in template action list for view only user
2019717 - cant delete VM with un-owned pvc attached
2019722 - The shared-resource-csi-driver-node pod runs as "BestEffort" qosClass
2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as "Always"
2019744 - [RFE] Suggest users to download newest RHEL 8 version
2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level
2019827 - Display issue with top-level menu items running demo plugin
2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded
2019886 - Kuryr unable to finish ports recovery upon controller restart
2019948 - [RFE] Restructring Virtualization links
2019972 - The Nodes section doesn't display the csr of the nodes that are trying to join the cluster
2019977 - Installer doesn't validate region causing binary to hang with a 60 minute timeout
2019986 - Dynamic demo plugin fails to build
2019992 - instance:node_memory_utilisation:ratio metric is incorrect
2020001 - Update dockerfile for demo dynamic plugin to reflect dir change
2020003 - MCD does not regard "dangling" symlinks as a files, attempts to write through them on next backup, resulting in "not writing through dangling symlink" error and degradation.
2020107 - cluster-version-operator: remove runlevel from CVO namespace
2020153 - Creation of Windows high performance VM fails
2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn't be public
2020250 - Replacing deprecated ioutil
2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build
2020275 - ClusterOperators link in console returns blank page during upgrades
2020377 - permissions error while using tcpdump option with must-gather
2020489 - coredns_dns metrics don't include the custom zone metrics data due to CoreDNS prometheus plugin is not defined
2020498 - "Show PromQL" button is disabled
2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature
2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI
2020664 - DOWN subports are not cleaned up
2020904 - When trying to create a connection from the Developer view between VMs, it fails
2021016 - 'Prometheus Stats' of dashboard 'Prometheus Overview' miss data on console compared with Grafana
2021017 - 404 page not found error on knative eventing page
2021031 - QE - Fix the topology CI scripts
2021048 - [RFE] Added MAC Spoof check
2021053 - Metallb operator presented as community operator
2021067 - Extensive number of requests from storage version operator in cluster
2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes
2021135 - [azure-file-csi-driver] "make unit-test" returns non-zero code, but tests pass
2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node
2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating
2021152 - imagePullPolicy is "Always" for ptp operator images
2021191 - Project admins should be able to list available network attachment defintions
2021205 - Invalid URL in git import form causes validation to not happen on URL change
2021322 - cluster-api-provider-azure should populate purchase plan information
2021337 - Dynamic Plugins: ResourceLink doesn't render when passed a groupVersionKind
2021364 - Installer requires invalid AWS permission s3:GetBucketReplication
2021400 - Bump documentationBaseURL to 4.10
2021405 - [e2e][automation] VM creation wizard Cloud Init editor
2021433 - "[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified" test fail permanently on disconnected
2021466 - [e2e][automation] Windows guest tool mount
2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver
2021551 - Build is not recognizing the USER group from an s2i image
2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character
2021629 - api request counts for current hour are incorrect
2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page
2021693 - Modals assigned modal-lg class are no longer the correct width
2021724 - Observe > Dashboards: Graph lines are not visible when obscured by other lines
2021731 - CCO occasionally down, reporting networksecurity.googleapis.com API as disabled
2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags
2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem
2022053 - dpdk application with vhost-net is not able to start
2022114 - Console logging every proxy request
2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)
2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long
2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error.
2022447 - ServiceAccount in manifests conflicts with OLM
2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules.
2022509 - getOverrideForManifest does not check manifest.GVK.Group
2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache
2022612 - no namespace field for "Kubernetes / Compute Resources / Namespace (Pods)" admin console dashboard
2022627 - Machine object not picking up external FIP added to an openstack vm
2022646 - configure-ovs.sh failure - Error: unknown connection 'WARN:'
2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox
2022801 - Add Sprint 210 translations
2022811 - Fix kubelet log rotation file handle leak
2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations
2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2022880 - Pipeline renders with minor visual artifact with certain task dependencies
2022886 - Incorrect URL in operator description
2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config
2023060 - [e2e][automation] Windows VM with CDROM migration
2023077 - [e2e][automation] Home Overview Virtualization status
2023090 - [e2e][automation] Examples of Import URL for VM templates
2023102 - [e2e][automation] Cloudinit disk of VM from custom template
2023216 - ACL for a deleted egressfirewall still present on node join switch
2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9
2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy
2023342 - SCC admission should take ephemeralContainers into account
2023356 - Devfiles can't be loaded in Safari on macOS (403 - Forbidden)
2023434 - Update Azure Machine Spec API to accept Marketplace Images
2023500 - Latency experienced while waiting for volumes to attach to node
2023522 - can't remove package from index: database is locked
2023560 - "Network Attachment Definitions" has no project field on the top in the list view
2023592 - [e2e][automation] add mac spoof check for nad
2023604 - ACL violation when deleting a provisioning-configuration resource
2023607 - console returns blank page when normal user without any projects visit Installed Operators page
2023638 - Downgrade support level for extended control plane integration to Dev Preview
2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10
2023675 - Changing CNV Namespace
2023779 - Fix Patch 104847 in 4.9
2023781 - initial hardware devices is not loading in wizard
2023832 - CCO updates lastTransitionTime for non-Status changes
2023839 - Bump recommended FCOS to 34.20211031.3.0
2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly
2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from "registry:5000" repository
2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8
2024055 - External DNS added extra prefix for the TXT record
2024108 - Occasionally node remains in SchedulingDisabled state even after update has been completed sucessfully
2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json
2024199 - 400 Bad Request error for some queries for the non admin user
2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode
2024262 - Sample catalog is not displayed when one API call to the backend fails
2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability
2024316 - modal about support displays wrong annotation
2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected
2024399 - Extra space is in the translated text of "Add/Remove alternate service" on Create Route page
2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view
2024493 - Observe > Alerting > Alerting rules page throws error trying to destructure undefined
2024515 - test-blocker: Ceph-storage-plugin tests failing
2024535 - hotplug disk missing OwnerReference
2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image
2024547 - Detail page is breaking for namespace store, backing store and bucket class.
2024551 - KMS resources not getting created for IBM FlashSystem storage
2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel
2024613 - pod-identity-webhook starts without tls
2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
2024665 - Bindable services are not shown on topology
2024731 - linuxptp container: unnecessary checking of interfaces
2024750 - i18n some remaining OLM items
2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured
2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack
2024841 - test Keycloak with latest tag
2024859 - Not able to deploy an existing image from private image registry using developer console
2024880 - Egress IP breaks when network policies are applied
2024900 - Operator upgrade kube-apiserver
2024932 - console throws "Unauthorized" error after logging out
2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up
2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick
2025230 - ClusterAutoscalerUnschedulablePods should not be a warning
2025266 - CreateResource route has exact prop which need to be removed
2025301 - [e2e][automation] VM actions availability in different VM states
2025304 - overwrite storage section of the DV spec instead of the pvc section
2025431 - [RFE]Provide specific windows source link
2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36
2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node
2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn't work for ExternalTrafficPolicy=local
2025481 - Update VM Snapshots UI
2025488 - [DOCS] Update the doc for nmstate operator installation
2025592 - ODC 4.9 supports invalid devfiles only
2025765 - It should not try to load from storageProfile after unchecking "Apply optimized StorageProfile settings"
2025767 - VMs orphaned during machineset scaleup
2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns "kubevirt-hyperconverged" while using customize wizard
2025788 - [IPI on azure] Pre-check on IPI Azure, should check VM Size's vCPUsAvailable instead of vCPUs for the sku.
2025821 - Make "Network Attachment Definitions" available to regular user
2025823 - The console nav bar ignores plugin separator in existing sections
2025830 - CentOS capitalizaion is wrong
2025837 - Warn users that the RHEL URL expire
2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*
2025903 - [UI] RoleBindings tab doesn't show correct rolebindings
2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2026178 - OpenShift Alerting Rules Style-Guide Compliance
2026209 - Updation of task is getting failed (tekton hub integration)
2026223 - Internal error occurred: failed calling webhook "ptpconfigvalidationwebhook.openshift.io"
2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates
2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct
2026352 - Kube-Scheduler revision-pruner fail during install of new cluster
2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment
2026383 - Error when rendering custom Grafana dashboard through ConfigMap
2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation
2026396 - Cachito Issues: sriov-network-operator Image build failure
2026488 - openshift-controller-manager - delete event is repeating pathologically
2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity of alerts defined.
2026560 - Cluster-version operator does not remove unrecognized volume mounts
2026699 - fixed a bug with missing metadata
2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator
2026898 - Description/details are missing for Local Storage Operator
2027132 - Use the specific icon for Fedora and CentOS template
2027238 - "Node Exporter / USE Method / Cluster" CPU utilization graph shows incorrect legend
2027272 - KubeMemoryOvercommit alert should be human readable
2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group
2027288 - Devfile samples can't be loaded after fixing it on Safari (redirect caching issue)
2027299 - The status of checkbox component is not revealed correctly in code
2027311 - K8s watch hooks do not work when fetching core resources
2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation
2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don't use the downstream images
2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation
2027498 - [IBMCloud] SG Name character length limitation
2027501 - [4.10] Bootimage bump tracker
2027524 - Delete Application doesn't delete Channels or Brokers
2027563 - e2e/add-flow-ci.feature fix accessibility violations
2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges
2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions
2027685 - openshift-cluster-csi-drivers pods crashing on PSI
2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced
2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string
2027917 - No settings in hostfirmwaresettings and schema objects for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver] Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
\n2034766 - Special Resource Operator(SRO) -  no cert-manager pod created in dual stack environment\n2034785 - ptpconfig with summary_interval cannot be applied\n2034823 - RHEL9 should be starred in template list\n2034838 - An external router can inject routes if no service is added\n2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent\n2034879 - Lifecycle hook\u0027s name and owner shouldn\u0027t be allowed to be empty\n2034881 - Cloud providers components should use K8s 1.23 dependencies\n2034884 - ART cannot build the image because it tries to download controller-gen\n2034889 - `oc adm prune deployments` does not work\n2034898 - Regression in recently added Events feature\n2034957 - update openshift-apiserver to kube 1.23.1\n2035015 - ClusterLogForwarding CR remains stuck remediating forever\n2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster\n2035141 - [RFE] Show GPU/Host devices in template\u0027s details tab\n2035146 - \"kubevirt-plugin~PVC cannot be empty\" shows on add-disk modal while adding existing PVC\n2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting\n2035199 - IPv6 support in mtu-migration-dispatcher.yaml\n2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing\n2035250 - Peering with ebgp peer over multi-hops doesn\u0027t work\n2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices\n2035315 - invalid test cases for AWS passthrough mode\n2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env\n2035321 - Add Sprint 211 translations\n2035326 - [ExternalCloudProvider] installation with additional network on workers fails\n2035328 - Ccoctl does not ignore credentials request manifest marked for deletion\n2035333 - Kuryr orphans ports on 504 errors from Neutron\n2035348 - Fix two grammar issues in kubevirt-plugin.json strings\n2035393 - oc set data 
--dry-run=server  makes persistent changes to configmaps and secrets\n2035409 - OLM E2E test depends on operator package that\u0027s no longer published\n2035439 - SDN  Automatic assignment EgressIP on GCP returned node IP adress not egressIP address\n2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to \u0027ecs-cn-hangzhou.aliyuncs.com\u0027 timeout, although the specified region is \u0027us-east-1\u0027\n2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster\n2035467 - UI: Queried metrics can\u0027t be ordered on Oberve-\u003eMetrics page\n2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers\n2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class\n2035602 - [e2e][automation] add tests for Virtualization Overview page cards\n2035703 - Roles -\u003e RoleBindings tab doesn\u0027t show RoleBindings correctly\n2035704 - RoleBindings list page filter doesn\u0027t apply\n2035705 - Azure \u0027Destroy cluster\u0027 get stuck when the cluster resource group is already not existing. 
\n2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed\n2035772 - AccessMode and VolumeMode is not reserved for customize wizard\n2035847 - Two dashes in the Cronjob / Job pod name\n2035859 - the output of opm render doesn\u0027t contain  olm.constraint which is defined in dependencies.yaml\n2035882 - [BIOS setting values] Create events for all invalid settings in spec\n2035903 - One redundant capi-operator credential requests in \u201coc adm extract --credentials-requests\u201d\n2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen\n2035927 - Cannot enable HighNodeUtilization scheduler profile\n2035933 - volume mode and access mode are empty in customize wizard review tab\n2035969 - \"ip a \" shows \"Error: Peer netns reference is invalid\" after create test pods\n2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation\n2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error\n2036029 - New added cloud-network-config operator doesn\u2019t supported aws sts format credential\n2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend\n2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes\n2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23\n2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23\n2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments\n2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists\n2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected\n2036826 - `oc adm prune deployments` can prune the RC/RS\n2036827 - The 
ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform\n2036861 - kube-apiserver is degraded while enable multitenant\n2036937 - Command line tools page shows wrong download ODO link\n2036940 - oc registry login fails if the file is empty or stdout\n2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container\n2036989 - Route URL copy to clipboard button wraps to a separate line by itself\n2036990 - ZTP \"DU Done inform policy\" never becomes compliant on multi-node clusters\n2036993 - Machine API components should use Go lang version 1.17\n2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log. \n2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api\n2037073 - Alertmanager container fails to start because of startup probe never being successful\n2037075 - Builds do not support CSI volumes\n2037167 - Some log level in ibm-vpc-block-csi-controller are hard code\n2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles\n2037182 - PingSource badge color is not matched with knativeEventing color\n2037203 - \"Running VMs\" card is too small in Virtualization Overview\n2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly\n2037237 - Add \"This is a CD-ROM boot source\" to customize wizard\n2037241 - default TTL for noobaa cache buckets should be 0\n2037246 - Cannot customize auto-update boot source\n2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately\n2037288 - Remove stale image reference\n2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources\n2037483 - Rbacs for Pods within the CBO should be more restrictive\n2037484 - Bump dependencies to k8s 1.23\n2037554 - Mismatched wave number error message should include the wave numbers that are in 
conflict\n2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]\n2037635 - impossible to configure custom certs for default console route in ingress config\n2037637 - configure custom certificate for default console route doesn\u0027t take effect for OCP \u003e= 4.8\n2037638 - Builds do not support CSI volumes as volume sources\n2037664 - text formatting issue in Installed Operators list table\n2037680 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037689 - [IPI on Alibabacloud] sometimes operator \u0027cloud-controller-manager\u0027 tells empty VERSION, due to conflicts on listening tcp :8080\n2037801 - Serverless installation is failing on CI jobs for e2e tests\n2037813 - Metal Day 1 Networking -  networkConfig Field Only Accepts String Format\n2037856 - use lease for leader election\n2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10\n2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests\n2037904 - upgrade operator deployment failed due to memory limit too low for manager container\n2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]\n2038034 - non-privileged user cannot see auto-update boot source\n2038053 - Bump dependencies to k8s 1.23\n2038088 - Remove ipa-downloader references\n2038160 - The `default` project missed the annotation : openshift.io/node-selector: \"\"\n2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional\n2038196 - must-gather is missing collecting some metal3 resources\n2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)\n2038253 - Validator Policies are long lived\n2038272 - Failures to build a PreprovisioningImage are not 
reported\n2038384 - Azure Default Instance Types are Incorrect\n2038389 - Failing test: [sig-arch] events should not repeat pathologically\n2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket\n2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips\n2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained\n2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect\n2038663 - update kubevirt-plugin OWNERS\n2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via \"oc adm groups new\"\n2038705 - Update ptp reviewers\n2038761 - Open Observe-\u003eTargets page, wait for a while, page become blank\n2038768 - All the filters on the Observe-\u003eTargets page can\u0027t work\n2038772 - Some monitors failed to display on Observe-\u003eTargets page\n2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node\n2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces\n2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard\n2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation\n2038864 - E2E tests fail because multi-hop-net was not created\n2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console\n2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured\n2038968 - Move feature gates from a carry patch to openshift/api\n2039056 - Layout issue with breadcrumbs on API explorer page\n2039057 - Kind column is not wide enough in API explorer page\n2039064 - Bulk Import e2e test flaking at a high rate\n2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled\n2039085 - Cloud 
credential operator configuration failing to apply in hypershift/ROKS clusters\n2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost\n2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy\n2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator\n2039170 - [upgrade]Error shown on registry operator \"missing the cloud-provider-config configmap\" after upgrade\n2039227 - Improve image customization server parameter passing during installation\n2039241 - Improve image customization server parameter passing during installation\n2039244 - Helm Release revision history page crashes the UI\n2039294 - SDN controller metrics cannot be consumed correctly by prometheus\n2039311 - oc Does Not Describe Build CSI Volumes\n2039315 - Helm release list page should only fetch secrets for deployed charts\n2039321 - SDN controller metrics are not being consumed by prometheus\n2039330 - Create NMState button doesn\u0027t work in OperatorHub web console\n2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations\n2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters. 
\n2039359 - `oc adm prune deployments` can\u0027t prune the RS  where the associated Deployment no longer exists\n2039382 - gather_metallb_logs does not have execution permission\n2039406 - logout from rest session after vsphere operator sync is finished\n2039408 - Add GCP region northamerica-northeast2 to allowed regions\n2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration\n2039425 - No need to set KlusterletAddonConfig CR applicationManager-\u003eenabled: true in RAN ztp deployment\n2039491 - oc - git:// protocol used in unit tests\n2039516 - Bump OVN to ovn21.12-21.12.0-25\n2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate\n2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled\n2039541 - Resolv-prepender script duplicating entries\n2039586 - [e2e] update centos8 to centos stream8\n2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty\n2039619 - [AWS] In tree provisioner storageclass aws disk type should contain \u0027gp3\u0027 and csi provisioner storageclass default aws disk type should be \u0027gp3\u0027\n2039670 - Create PDBs for control plane components\n2039678 - Page goes blank when create image pull secret\n2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported\n2039743 - React missing key warning when open operator hub detail page (and maybe others as well)\n2039756 - React missing key warning when open KnativeServing details\n2039770 - Observe dashboard doesn\u0027t react on time-range changes after browser reload when perspective is changed in another tab\n2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard\n2039781 - [GSS] OBC is not visible by admin of a Project on Console\n2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector\n2039868 - Insights 
Advisor widget is not in the disabled state when the Insights Operator is disabled\n2039880 - Log level too low for control plane metrics\n2039919 - Add E2E test for router compression feature\n2039981 - ZTP for standard clusters installs stalld on master nodes\n2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead\n2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced\n2040143 - [IPI on Alibabacloud] suggest to remove region \"cn-nanjing\" or provide better error message\n2040150 - Update ConfigMap keys for IBM HPCS\n2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth\n2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository\n2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp\n2040376 - \"unknown instance type\" error for supported m6i.xlarge instance\n2040394 - Controller: enqueue the failed configmap till services update\n2040467 - Cannot build ztp-site-generator container image\n2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn\u0027t take affect in OpenShift 4\n2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps\n2040535 - Auto-update boot source is not available in customize wizard\n2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name\n2040603 - rhel worker scaleup playbook failed because missing some dependency of podman\n2040616 - rolebindings page doesn\u0027t load for normal users\n2040620 - [MAPO] Error pulling MAPO image on installation\n2040653 - Topology sidebar warns that another component is updated while rendering\n2040655 - User settings update fails when selecting application in topology sidebar\n2040661 - Different react warnings about updating state on unmounted components when leaving topology\n2040670 - Permafailing CI job: 
periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation\n2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi\n2040694 - Three upstream HTTPClientConfig struct fields missing in the operator\n2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers\n2040710 - cluster-baremetal-operator cannot update BMC subscription CR\n2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms\n2040782 - Import YAML page blocks input with more then one generateName attribute\n2040783 - The Import from YAML summary page doesn\u0027t show the resource name if created via generateName attribute\n2040791 - Default PGT policies must be \u0027inform\u0027 to integrate with the Lifecycle Operator\n2040793 - Fix snapshot e2e failures\n2040880 - do not block upgrades if we can\u0027t connect to vcenter\n2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10\n2041093 - autounattend.xml missing\n2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates\n2041319 - [IPI on Alibabacloud] installation in region \"cn-shanghai\" failed, due to \"Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped\"\n2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23\n2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller\n2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener\n2041441 - Provision volume with size 3000Gi even if sizeRange: \u0027[10-2000]GiB\u0027 in storageclass on IBM cloud\n2041466 - Kubedescheduler version is missing from the operator logs\n2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses\n2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR  is missing 
(controller and speaker pods)\n2041492 - Spacing between resources in inventory card is too small\n2041509 - GCP Cloud provider components should use K8s 1.23 dependencies\n2041510 - cluster-baremetal-operator doesn\u0027t run baremetal-operator\u0027s subscription webhook\n2041541 - audit: ManagedFields are dropped using API not annotation\n2041546 - ovnkube: set election timer at RAFT cluster creation time\n2041554 - use lease for leader election\n2041581 - KubeDescheduler operator log shows \"Use of insecure cipher detected\"\n2041583 - etcd and api server cpu mask interferes with a guaranteed workload\n2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure\n2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation\n2041620 - bundle CSV alm-examples does not parse\n2041641 - Fix inotify leak and kubelet retaining memory\n2041671 - Delete templates leads to 404 page\n2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category\n2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled\n2041750 - [IPI on Alibabacloud] trying \"create install-config\" with region \"cn-wulanchabu (China (Ulanqab))\" (or \"ap-southeast-6 (Philippines (Manila))\", \"cn-guangzhou (China (Guangzhou))\") failed due to invalid endpoint\n2041763 - The Observe \u003e Alerting pages no longer have their default sort order applied\n2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken\n2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied\n2041882 - cloud-network-config operator can\u0027t work normal on GCP workload identity cluster\n2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases\n2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist\n2041971 - [vsphere] Reconciliation of 
mutating webhooks didn\u0027t happen\n2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile\n2041999 - [PROXY] external dns pod cannot recognize custom proxy CA\n2042001 - unexpectedly found multiple load balancers\n2042029 - kubedescheduler fails to install completely\n2042036 - [IBMCLOUD] \"openshift-install explain installconfig.platform.ibmcloud\" contains not yet supported custom vpc parameters\n2042049 - Seeing warning related to unrecognized feature gate in kubescheduler \u0026 KCM logs\n2042059 - update discovery burst to reflect lots of CRDs on openshift clusters\n2042069 - Revert toolbox to rhcos-toolbox\n2042169 - Can not delete egressnetworkpolicy in Foreground propagation\n2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool\n2042265 - [IBM]\"--scale-down-utilization-threshold\" doesn\u0027t work on IBMCloud\n2042274 - Storage API should be used when creating a PVC\n2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection\n2042366 - Lifecycle hooks should be independently managed\n2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway\n2042382 - [e2e][automation] CI takes more then 2 hours to run\n2042395 - Add prerequisites for active health checks test\n2042438 - Missing rpms in openstack-installer image\n2042466 - Selection does not happen when switching from Topology Graph to List View\n2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver\n2042567 - insufficient info on CodeReady Containers configuration\n2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk\n2042619 - Overview page of the console is broken for hypershift clusters\n2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running\n2042711 - [IBMCloud] Machine Deletion Hook cannot work on 
IBMCloud\n2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud\n2042770 - [IPI on Alibabacloud] with vpcID \u0026 vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly\n2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)\n2042851 - Create template from SAP HANA template flow - VM is created instead of a new template\n2042906 - Edit machineset with same machine deletion hook name succeed\n2042960 - azure-file CI fails with \"gid(0) in storageClass and pod fsgroup(1000) are not equal\"\n2043003 - [IPI on Alibabacloud] \u0027destroy cluster\u0027 of a failed installation (bug2041694) stuck after \u0027stage=Nat gateways\u0027\n2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n2043043 - Cluster Autoscaler should use K8s 1.23 dependencies\n2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)\n2043078 - Favorite system projects not visible in the project selector after toggling \"Show default projects\". \n2043117 - Recommended operators links are erroneously treated as external\n2043130 - Update CSI sidecars to the latest release for 4.10\n2043234 - Missing validation when creating several BGPPeers with the same peerAddress\n2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler\n2043254 - crio does not bind the security profiles directory\n2043296 - Ignition fails when reusing existing statically-keyed LUKS volume\n2043297 - [4.10] Bootimage bump tracker\n2043316 - RHCOS VM fails to boot on Nutanix AOS\n2043446 - Rebase aws-efs-utils to the latest upstream version. \n2043556 - Add proper ci-operator configuration to ironic and ironic-agent images\n2043577 - DPU network operator\n2043651 - Fix bug with exp. 
backoff working correcly when setting nextCheck in vsphere operator\n2043675 - Too many machines deleted by cluster autoscaler when scaling down\n2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation\n2043709 - Logging flags no longer being bound to command line\n2043721 - Installer bootstrap hosts using outdated kubelet containing bugs\n2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather\n2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23\n2043780 - Bump router to k8s.io/api 1.23\n2043787 - Bump cluster-dns-operator to k8s.io/api 1.23\n2043801 - Bump CoreDNS to k8s.io/api 1.23\n2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown\n2043961 - [OVN-K] If pod creation fails, retry doesn\u0027t work as expected. \n2044201 - Templates golden image parameters names should be supported\n2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]\n2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter \u201ccsi.storage.k8s.io/fstype\u201d create pvc,pod successfully but write data to the pod\u0027s volume failed of \"Permission denied\"\n2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects\n2044347 - Bump to kubernetes 1.23.3\n2044481 - collect sharedresource cluster scoped instances with must-gather\n2044496 - Unable to create hardware events subscription - failed to add finalizers\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2044680 - Additional libovsdb performance and resource consumption fixes\n2044704 - Observe \u003e Alerting pages should not show runbook links in 4.10\n2044717 - [e2e] improve tests for upstream test environment\n2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to  attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error  \"unable to pull manifest from example.com/busy.box:v5  invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716\nhttps://access.redhat.com/security/cve/CVE-2021-44717\nhttps://access.redhat.com/security/cve/CVE-2022-0532\nhttps://access.redhat.com/security/cve/CVE-2022-21673\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL\n0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne\neGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM\nCEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF\naDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC\nY/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp\nsQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO\nRDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN\nrs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry\nbSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z\n7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT\nb5PUYUBIZLc=\n=GUDA\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.0 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/4.8/migration_toolkit_for_con\ntainers/installing-mtc.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1878824 - Web console is not accessible when deployed on OpenShift cluster on IBM Cloud\n1887526 - \"Stage\" pods fail when migrating from classic OpenShift source cluster on IBM Cloud with block storage\n1899562 - MigMigration custom resource does not display an error message when a migration fails because of volume mount error\n1936886 - Service account token of existing remote cluster cannot be updated by using the web console\n1936894 - \"Ready\" status of MigHook and MigPlan custom resources is not synchronized automatically\n1949117 - \"Migration plan resources\" page displays a permanent error message when a migration plan is deleted from the backend\n1951869 - MigPlan custom resource does not detect invalid source cluster reference\n1968621 - Paused deployment config causes a migration to hang\n1970338 - Parallel migrations fail because the initial backup is missing\n1974737 - Migration plan name length in the \"Migration plan\" wizard is not validated\n1975369 - \"Debug view\" link text on \"Migration plans\" page can be improved\n1975372 - Destination namespace in MigPlan custom resource is not validated\n1976895 - Namespace mapping cannot be changed using the Migration Plan wizard\n1981810 - \"Excluded\" resources are not excluded from the migration\n1982026 - Direct image migration fails if the source URI contains a double slash (\"//\")\n1994985 - Web console crashes when a MigPlan custom resource is created with an empty namespaces list\n1996169 - When \"None\" is selected as the target storage class in the web console, the setting is ignored and the default storage class is used\n1996627 - MigPlan custom resource displays a \"PvUsageAnalysisFailed\" warning after a successful PVC migration\n1996784 - \"Migration resources\" tree on the \"Migration details\" page is not displayed\n1996902 - \"Select all\" checkbox on the \"Namespaces\" page of the \"Migration plan\" wizard remains selected after a 
namespace is unselected\n1996904 - \"Migration\" dialogs on the \"Migration plans\" page display inconsistent capitalization\n1996906 - \"Migration details\" page link is displayed for a migration plan with no associated migrations\n1996938 - Search function on \"Migration plans\" page displays no results\n1997051 - Indirect migration from MTC 1.5.1 to 1.6.0 fails during \"StageBackup\" phase\n1997127 - Direct volume migration \"retry\" feature does not work correctly after a network failure\n1997173 - Migration of custom resource definitions to OpenShift Container Platform 4.9 fails because of API version incompatibility\n1997180 - \"migration-log-reader\" pod does not log invalid Rsync options\n1997665 - Selected PVCs in the \"State migration\" dialog are reset because of background polling\n1997694 - \"Update operator\" link on the \"Clusters\" page is incorrect\n1997827 - \"Migration plan\" wizard displays PVC names incorrectly formatted after running state migration\n1998062 - Rsync pod uses upstream image\n1998283 - \"Migration step details\" link on the \"Migrations\" page does not work\n1998550 - \"Migration plan\" wizard does not support certain screen resolutions\n1998581 - \"Migration details\" link on \"Migration plans\" page displays \"latestIsFailed\" error\n1999113 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n1999381 - MigPlan custom resource displays \"Stage completed with warnings\" status after successful migration\n1999528 - Position of the \"Add migration plan\" button is different from the other \"Add\" buttons\n1999765 - \"Migrate\" button on \"State migration\" dialog is enabled when no PVCs are selected\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2000205 - \"Options\" menu on the \"Migration details\" page displays incorrect items\n2000218 - Validation incorrectly blocks namespace mapping if a source cluster namespace is 
the same as the destination namespace\n2000243 - \"Migration plan\" wizard does not allow a migration within the same cluster\n2000644 - Invalid migration plan causes \"controller\" pod to crash\n2000875 - State migration status on \"Migrations\" page displays \"Stage succeeded\" message\n2000979 - \"clusterIPs\" parameter of \"service\" object can cause Velero errors\n2001089 - Direct volume migration fails because of missing CA path configuration\n2001173 - Migration plan requires two clusters\n2001786 - Migration fails during \"Stage Backup\" step because volume path on host not found\n2001829 - Migration does not complete when the namespace contains a cron job with a PVC\n2001941 - Fixing PVC conflicts in state migration plan using the web console causes the migration to run twice\n2002420 - \"Stage\" pod not created for completed application pod, causing the \"mig-controller\" to stall\n2002608 - Migration of unmounted PVC fails during \"StageBackup\" phase\n2002897 - Rollback migration does not complete when the namespace contains a cron job\n2003603 - \"View logs\" dialog displays the \"--selector\" option, which does not print all logs\n2004601 - Migration plan status on \"Migration plans\" page is \"Ready\" after migration completed with warnings\n2004923 - Web console displays \"New operator version available\" notification for incorrect operator\n2005143 - Combining Rsync and Stunnel in a single pod can degrade performance\n2006316 - Web console cannot create migration plan in a proxy environment\n2007175 - Web console cannot be launched in a proxy environment\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nMIG-785 - Search for \"Crane\" in the Operator Hub should display the Migration Toolkit for Containers\n\n6. 
Description:\n\nThe release of RHACS 3.67 provides the following new features, bug fixes,\nsecurity patches and system changes:\n\nOpenShift Dedicated support\n\nRHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on\nAmazon Web Services and Google Cloud Platform. Use OpenShift OAuth server as an identity provider\nIf you are using RHACS with OpenShift, you can now configure the built-in\nOpenShift OAuth server as an identity provider for RHACS. Enhancements for CI outputs\nRed Hat has improved the usability of RHACS CI integrations. CI outputs now\nshow additional detailed information about the vulnerabilities and the\nsecurity policies responsible for broken builds. Runtime Class policy criteria\nUsers can now use RHACS to define the container runtime configuration that\nmay be used to run a pod\u2019s containers using the Runtime Class policy\ncriteria. \n\nBug Fixes\nThe release of RHACS 3.67 includes the following bug fixes:\n\n1. Previously, when using RHACS with the Compliance Operator integration,\nRHACS did not respect or populate Compliance Operator TailoredProfiles. \nThis has been fixed. Previously, the Alpine Linux package manager (APK) in Image policy\nlooked for the presence of apk package in the image rather than the\napk-tools package. This issue has been fixed. \n\nSystem changes\nThe release of RHACS 3.67 includes the following system changes:\n\n1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images. The Port exposure method policy criteria now include route as an\nexposure method. The OpenShift: Kubeadmin Secret Accessed security policy now allows the\nOpenShift Compliance Operator to check for the existence of the Kubeadmin\nsecret without creating a violation. The OpenShift Compliance Operator integration now supports using\nTailoredProfiles. The RHACS Jenkins plugin now provides additional security information. 
When you enable the environment variable ROX_NETWORK_ACCESS_LOG for\nCentral, the logs contain the Request URI and X-Forwarded-For header\nvalues. The default uid:gid pair for the Scanner image is now 65534:65534. RHACS adds a new default Scope Manager role that includes minimum\npermissions to create and modify access scopes. In addition to manually uploading vulnerability definitions in offline\nmode, you can now upload definitions in online mode. You can now format the output of the following roxctl CLI commands in\ntable, csv, or JSON format: image scan, image check \u0026 deployment check\n12. You can now use a regular expression for the deployment name while\nspecifying policy exclusions\n\n3. Solution:\n\nTo take advantage of these new features, fixes and changes, please upgrade\nRed Hat Advanced Cluster Security for Kubernetes to version 3.67. Bugs fixed (https://bugzilla.redhat.com/):\n\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nRHACS-65 - Release RHACS 3.67.0\n\n6. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method\n\n* CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack\n\n* CVE-2021-32627: redis: Integer overflow issue with Streams\n\n* CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure\n\n* CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser\n\n* CVE-2021-32675: redis: Denial of service via Redis Standard Protocol\n(RESP) request\n\n* CVE-2021-32687: redis: Integer overflow issue with intsets\n\n* CVE-2021-32690: helm: information disclosure vulnerability\n\n* CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing\narbitrary file creation and overwrite\n\n* CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization\nallowing arbitrary file creation and overwrite\n\n* CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are\nfollowed by a pointer to a root domain name\n\n* CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow\n\n* CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings\n\n* CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim\nfunction\n\n* CVE-2021-41099: redis: Integer overflow issue with strings\n\nBug fixes:\n\n* RFE ACM Application management UI doesn\u0027t reflect object status (Bugzilla\n#1965321)\n\n* RHACM 2.4 files (Bugzilla #1983663)\n\n* Hive Operator CrashLoopBackOff when deploying ACM with latest downstream\n2.4 (Bugzilla #1993366)\n\n* submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla\n#1994668)\n\n* ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to\nmulticluster pod in clb (Bugzilla #2000274)\n\n* pre-network-manager-config 
failed due to timeout when static config is\nused (Bugzilla #2003915)\n\n* InfraEnv condition does not reflect the actual error message (Bugzilla\n#2009204, 2010030)\n\n* Flaky test point to a nil pointer conditions list (Bugzilla #2010175)\n\n* InfraEnv status shows \u0027Failed to create image: internal error (Bugzilla\n#2010272)\n\n* subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla\n#2013157)\n\n* pre-network-manager-config failed due to timeout when static config is\nused (Bugzilla #2014084)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name\n1965321 - RFE ACM Application management UI doesn\u0027t reflect object status\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1983663 - RHACM 2.4.0 images\n1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite\n1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite\n1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4\n1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot\n1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow\n1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb\n2003915 - pre-network-manager-config failed due to timeout when static config is used\n2009204 - InfraEnv condition does not reflect the actual error message\n2010030 - InfraEnv condition does not reflect the actual error message\n2010175 - Flaky test point to a nil pointer conditions list\n2010272 - InfraEnv 
status shows \u0027Failed to create image: internal error\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks\n2014084 - pre-network-manager-config failed due to timeout when static config is used\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-3749"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "db": "PACKETSTORM",
        "id": "166643"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164342"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "164948"
      }
    ],
    "trust": 2.7
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-3749",
        "trust": 3.8
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-277-02",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90178687",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "166643",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164342",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2021041363",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1025",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4059",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1504",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3247",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3878",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021093012",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021120334",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-3749",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166279",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165129",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164948",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "PACKETSTORM",
        "id": "166643"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164342"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "164948"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "id": "VAR-202108-1941",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-23T20:17:08.168000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Security\u00a0fix\u00a0for\u00a0ReDoS\u00a0(#3980)",
        "trust": 0.8,
        "url": "https://github.com/axios/axios/commit/5b457116e31db0e88fede6c428e969e87f290929"
      },
      {
        "title": "Axios Security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=161088"
      },
      {
        "title": "Red Hat: Important: Red Hat OpenShift Service Mesh 2.0.9 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221276 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220056 - Security Advisory"
      },
      {
        "title": "node-red-contrib-graphql",
        "trust": 0.1,
        "url": "https://github.com/rgstephens/node-red-contrib-graphql "
      },
      {
        "title": "Axios Regular Expression Denial Of Service Attack",
        "trust": 0.1,
        "url": "https://github.com/T-Guerrero/axios-redos "
      },
      {
        "title": "https://github.com/broxus/ton-wallet-crystal-browser-extension",
        "trust": 0.1,
        "url": "https://github.com/broxus/ton-wallet-crystal-browser-extension "
      },
      {
        "title": "geidai-ikoi (\u85dd\u5927\u30aa\u30f3\u30e9\u30a4\u30f3\u61a9\u3044)",
        "trust": 0.1,
        "url": "https://github.com/MaySoMusician/geidai-ikoi "
      },
      {
        "title": "Seal Security Patches",
        "trust": 0.1,
        "url": "https://github.com/seal-community/patches "
      },
      {
        "title": "PoC in GitHub",
        "trust": 0.1,
        "url": "https://github.com/manas3c/CVE-POC "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-1333",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-400",
        "trust": 1.0
      },
      {
        "problemtype": "Resource exhaustion (CWE-400) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.7,
        "url": "https://github.com/axios/axios/commit/5b457116e31db0e88fede6c428e969e87f290929"
      },
      {
        "trust": 1.7,
        "url": "https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3749"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/r7324ecc35b8027a51cb6ed629490fcd3b2d7cf01c424746ed5744bf1%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/rfc5c478053ff808671aef170f3d9fc9d05cc1fab8fb64431edc66103%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/r216f0fd0a3833856d6a6a1fada488cadba45f447d87010024328ccf2%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/r3ae6d2654f92c5851bdb73b35e96b0e4e3da39f28ac7a1b15ae3aab8%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/ra15d63c54dc6474b29f72ae4324bcb03038758545b3ab800845de7a1%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/r74d0b359408fff31f87445261f0ee13bdfcac7d66f6b8e846face321%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/rc263bfc5b53afcb7e849605478d73f5556eb0c00d1f912084e407289%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/r4bf1b32983f50be00f9752214c1b53738b621be1c2b0dbd68c7f2391%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/r075d464dce95cd13c03ff9384658edcccd5ab2983b82bfc72b62bb10%40%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://lists.apache.org/thread.html/rfa094029c959da0f7c8cd7dc9c4e59d21b03457bf0cedf6c93e1bb0a%40%3cdev.druid.apache.org%3e"
      },
      {
        "trust": 1.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu99475301/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90178687/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-277-02"
      },
      {
        "trust": 0.8,
        "url": "https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/"
      },
      {
        "trust": 0.8,
        "url": "https://lists.apache.org/thread/3ss0n5d2mf2k9rvjywnbmmzrjlo6fhyr"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021041363"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/rc263bfc5b53afcb7e849605478d73f5556eb0c00d1f912084e407289@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/ra15d63c54dc6474b29f72ae4324bcb03038758545b3ab800845de7a1@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/rfa094029c959da0f7c8cd7dc9c4e59d21b03457bf0cedf6c93e1bb0a@%3cdev.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r7324ecc35b8027a51cb6ed629490fcd3b2d7cf01c424746ed5744bf1@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r74d0b359408fff31f87445261f0ee13bdfcac7d66f6b8e846face321@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r4bf1b32983f50be00f9752214c1b53738b621be1c2b0dbd68c7f2391@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r3ae6d2654f92c5851bdb73b35e96b0e4e3da39f28ac7a1b15ae3aab8@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r075d464dce95cd13c03ff9384658edcccd5ab2983b82bfc72b62bb10@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/r216f0fd0a3833856d6a6a1fada488cadba45f447d87010024328ccf2@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://lists.apache.org/thread.html/rfc5c478053ff808671aef170f3d9fc9d05cc1fab8fb64431edc66103@%3ccommits.druid.apache.org%3e"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164342/red-hat-security-advisory-2021-3694-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166643/red-hat-security-advisory-2022-1276-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1025"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021093012"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3878"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6526104"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4059"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6514811"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3247"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021120334"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1504"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6516466"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-29923"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22924"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22922"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22923"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-32690"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/1333.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/rgstephens/node-red-contrib-graphql"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21654"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43825"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1276"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28852"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24726"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43825"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23606"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21654"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24726"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21655"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29482"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36221"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21655"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36221"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23606"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28851"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9802"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30762"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9895"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8625"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8812"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3899"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8819"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3867"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9893"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3902"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3900"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30761"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9805"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8820"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9807"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8769"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9850"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27781"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0055"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9803"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3885"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15503"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10018"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25660"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8835"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8764"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8844"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3865"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3864"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14391"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3862"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0056"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3901"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39226"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3895"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11793"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9894"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8816"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8771"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9806"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8743"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9915"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8815"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8783"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20807"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-9952"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8766"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3868"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8846"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-3894"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25677"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30666"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-8782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/migration_toolkit_for_con"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38201"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38201"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3694"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37576"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27304"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20266"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27645"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39293"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33574"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4902"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-35942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20673"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27304"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3801"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-0512"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32803"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32626"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4618"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36385"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32675"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32804"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41099"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3656"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32804"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32628"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3711"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23017"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33938"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32803"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "PACKETSTORM",
        "id": "166643"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164342"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "164948"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "db": "PACKETSTORM",
        "id": "166643"
      },
      {
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "db": "PACKETSTORM",
        "id": "164342"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "164948"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-08-31T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "date": "2022-07-26T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "date": "2022-04-08T15:05:23",
        "db": "PACKETSTORM",
        "id": "166643"
      },
      {
        "date": "2022-03-11T16:38:38",
        "db": "PACKETSTORM",
        "id": "166279"
      },
      {
        "date": "2021-09-30T16:27:16",
        "db": "PACKETSTORM",
        "id": "164342"
      },
      {
        "date": "2021-12-02T16:06:16",
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "date": "2021-11-12T17:01:04",
        "db": "PACKETSTORM",
        "id": "164948"
      },
      {
        "date": "2021-04-13T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "date": "2021-08-31T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      },
      {
        "date": "2021-08-31T11:15:07.890000",
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-3749"
      },
      {
        "date": "2024-10-07T01:05:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      },
      {
        "date": "2021-04-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "date": "2022-09-19T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      },
      {
        "date": "2024-11-21T06:22:19.837000",
        "db": "NVD",
        "id": "CVE-2021-3749"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202108-2780"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "axios\u00a0 Resource exhaustion vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-011290"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      }
    ],
    "trust": 0.6
  }
}

var-202102-1490
Vulnerability from variot

OpenSSL 1.0.2 supports SSLv2. If a client attempts to negotiate SSLv2 with a server that is configured to support both SSLv2 and more recent SSL and TLS versions then a check is made for a version rollback attack when unpadding an RSA signature. Clients that support SSL or TLS versions greater than SSLv2 are supposed to use a special form of padding. A server that supports greater than SSLv2 is supposed to reject connection attempts from a client where this special form of padding is present, because this indicates that a version rollback has occurred (i.e. both client and server support greater than SSLv2, and yet this is the version that is being requested). The implementation of this padding check inverted the logic so that the connection attempt is accepted if the padding is present, and rejected if it is absent. This means that such a server will accept a connection if a version rollback attack has occurred. Further, the server will erroneously reject a connection if a normal SSLv2 connection attempt is made. Only OpenSSL 1.0.2 servers from version 1.0.2s to 1.0.2x are affected by this issue. In order to be vulnerable a 1.0.2 server must: 1) have configured SSLv2 support at compile time (this is off by default), 2) have configured SSLv2 support at runtime (this is off by default), 3) have configured SSLv2 ciphersuites (these are not in the default ciphersuite list). OpenSSL 1.1.1 does not have SSLv2 support and therefore is not vulnerable to this issue. The underlying error is in the implementation of the RSA_padding_check_SSLv23() function. This also affects the RSA_SSLV23_PADDING padding mode used by various other functions. Although 1.1.1 does not support SSLv2 the RSA_padding_check_SSLv23() function still exists, as does the RSA_SSLV23_PADDING padding mode. Applications that directly call that function or use that padding mode will encounter this issue.
However, since there is no support for the SSLv2 protocol in 1.1.1, this is considered a bug and not a security issue in that version. OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.0.2y (Affected 1.0.2s-1.0.2x). There is a security-level vulnerability in OpenSSL; information may be tampered with. Pillow is a Python-based image processing library. There is currently no further information about this vulnerability; follow CNNVD or vendor announcements for updates. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
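The inverted padding check described above can be modeled with a small, self-contained sketch. This is illustrative Python, not OpenSSL's actual C implementation; the helper names are invented for the example. In RSA_SSLV23_PADDING, a client that supports protocols newer than SSLv2 sets the last eight non-zero padding bytes to 0x03, so seeing that marker during an SSLv2 handshake signals a rollback and should cause a rejection:

```python
# Illustrative model of the CVE-2021-23839 logic inversion.
# In RSA_SSLV23_PADDING, a client that supports protocols newer than
# SSLv2 sets the last eight non-zero padding bytes to 0x03.  A server
# that also supports newer protocols must REJECT an SSLv2 connection
# whose padding carries this marker, because it indicates a rollback.

ROLLBACK_MARKER = b"\x03" * 8

def has_rollback_marker(padding: bytes) -> bool:
    """True if the padding ends with the eight-byte 0x03 marker."""
    return padding[-8:] == ROLLBACK_MARKER

def accept_sslv2_fixed(padding: bytes) -> bool:
    # Correct behaviour: reject the connection when the marker is present.
    return not has_rollback_marker(padding)

def accept_sslv2_buggy(padding: bytes) -> bool:
    # OpenSSL 1.0.2s-1.0.2x inverted the test: connections carrying the
    # rollback marker were accepted, and normal SSLv2 ones were rejected.
    return has_rollback_marker(padding)

normal = bytes(range(1, 9))    # ordinary random-looking padding tail
rollback = ROLLBACK_MARKER     # padding tail sent by a newer client

assert accept_sslv2_fixed(normal) and not accept_sslv2_fixed(rollback)
# The buggy check gives exactly the opposite answers:
assert accept_sslv2_buggy(rollback) and not accept_sslv2_buggy(normal)
```

The sketch makes the advisory's two symptoms concrete: a rollback attempt is accepted, and a legitimate SSLv2 attempt is erroneously rejected.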

===================================================================== Red Hat Security Advisory

Synopsis: Important: Red Hat Advanced Cluster Management for Kubernetes version 2.3 Advisory ID: RHSA-2021:3016-01 Product: Red Hat ACM Advisory URL: https://access.redhat.com/errata/RHSA-2021:3016 Issue date: 2021-08-05 CVE Names: CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-2708 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20934 CVE-2019-25013 CVE-2020-1730 CVE-2020-8231 CVE-2020-8284 CVE-2020-8285 CVE-2020-8286 CVE-2020-8927 CVE-2020-11668 CVE-2020-13434 CVE-2020-15358 CVE-2020-27618 CVE-2020-28196 CVE-2020-28469 CVE-2020-28500 CVE-2020-28851 CVE-2020-28852 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3326 CVE-2021-3377 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-3560 CVE-2021-20271 CVE-2021-20305 CVE-2021-21272 CVE-2021-21309 CVE-2021-21321 CVE-2021-21322 CVE-2021-23337 CVE-2021-23343 CVE-2021-23346 CVE-2021-23362 CVE-2021-23364 CVE-2021-23368 CVE-2021-23369 CVE-2021-23382 CVE-2021-23383 CVE-2021-23839 CVE-2021-23840 CVE-2021-23841 CVE-2021-25217 CVE-2021-27219 CVE-2021-27292 CVE-2021-27358 CVE-2021-28092 CVE-2021-28918 CVE-2021-29418 CVE-2021-29477 CVE-2021-29478 CVE-2021-29482 CVE-2021-32399 CVE-2021-33033 CVE-2021-33034 CVE-2021-33502 CVE-2021-33623 CVE-2021-33909 CVE-2021-33910 =====================================================================

  1. Summary:

Red Hat Advanced Cluster Management for Kubernetes 2.3.0 General Availability release images, which fix several bugs and security issues.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE links in the References section.

  2. Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs and security issues. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/

Security:

  • fastify-reply-from: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21321)

  • fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21322)

  • nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)

  • redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)

  • redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)

  • nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

  • nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

  • golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)

  • golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

  • nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)

  • oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)

  • redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)

  • nodejs-lodash: command injection via template (CVE-2021-23337)

  • nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)

  • browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)

  • nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)

  • nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)

  • nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)

  • nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)

  • openssl: integer overflow in CipherUpdate (CVE-2021-23840)

  • openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)

  • nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)

  • grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)

  • nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)

  • nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)

  • ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)

  • normalize-url: ReDoS for data URLs (CVE-2021-33502)

  • nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)

  • nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)

  • html-parse-stringify: Regular Expression DoS (CVE-2021-23346)

  • openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
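Several of the entries above are instances of well-known bug classes. The oras zip-slip issue (CVE-2021-21272), for example, arises when archive entry names containing `..` are allowed to escape the extraction directory. A hedged sketch of the standard defence follows (illustrative code, not the actual oras fix):

```python
import os
import zipfile

def safe_extract(zf: zipfile.ZipFile, dest: str) -> None:
    """Extract only entries that resolve inside dest (zip-slip defence)."""
    dest_root = os.path.realpath(dest)
    for name in zf.namelist():
        # Resolve the would-be target path and verify it is still inside
        # the destination directory before extracting the entry.
        target = os.path.realpath(os.path.join(dest_root, name))
        if os.path.commonpath([dest_root, target]) != dest_root:
            raise ValueError(f"blocked zip-slip entry: {name!r}")
        zf.extract(name, dest_root)
```

The key design choice is to canonicalize both paths with `realpath` before comparing, so `..` components and symlinks cannot smuggle the target outside the extraction root.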

For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
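Many of the fixed issues (lodash, postcss, ua-parser-js, trim-newlines, normalize-url) are regular-expression denial-of-service bugs. The mechanism, catastrophic backtracking from nested quantifiers, can be shown with a tiny bounded demo; the pattern below is a textbook example and is not taken from any of the affected libraries:

```python
import re
import time

# Catastrophic backtracking: the nested quantifiers in (a+)+$ make the
# engine try exponentially many ways to split the run of 'a's before
# concluding the string does not match.
EVIL = re.compile(r"(a+)+$")

def match_time(n: int) -> float:
    """Time one failing match attempt against n 'a's plus a 'b'."""
    s = "a" * n + "b"          # guaranteed non-match
    t0 = time.perf_counter()
    assert EVIL.match(s) is None
    return time.perf_counter() - t0

# Adding one character roughly doubles the work, which is why a short
# attacker-controlled string (a User-Agent header, a URL, a data URI)
# can pin a CPU core.
print(f"n=10: {match_time(10):.6f}s  n=18: {match_time(18):.6f}s")
```

The practical mitigations are the ones the fixed packages adopted: rewrite the pattern without nested quantifiers, bound the input length before matching, or use a linear-time regex engine.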

Bugs:

  • RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)

  • cluster became offline after apiserver health check (BZ# 1942589)

  3. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing

  4. Bugs fixed (https://bugzilla.redhat.com/):

1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters

  5. References:

https://access.redhat.com/security/cve/CVE-2016-10228
https://access.redhat.com/security/cve/CVE-2017-14502
https://access.redhat.com/security/cve/CVE-2018-20843
https://access.redhat.com/security/cve/CVE-2018-1000858
https://access.redhat.com/security/cve/CVE-2019-2708
https://access.redhat.com/security/cve/CVE-2019-9169
https://access.redhat.com/security/cve/CVE-2019-13050
https://access.redhat.com/security/cve/CVE-2019-13627
https://access.redhat.com/security/cve/CVE-2019-14889
https://access.redhat.com/security/cve/CVE-2019-15903
https://access.redhat.com/security/cve/CVE-2019-19906
https://access.redhat.com/security/cve/CVE-2019-20454
https://access.redhat.com/security/cve/CVE-2019-20934
https://access.redhat.com/security/cve/CVE-2019-25013
https://access.redhat.com/security/cve/CVE-2020-1730
https://access.redhat.com/security/cve/CVE-2020-8231
https://access.redhat.com/security/cve/CVE-2020-8284
https://access.redhat.com/security/cve/CVE-2020-8285
https://access.redhat.com/security/cve/CVE-2020-8286
https://access.redhat.com/security/cve/CVE-2020-8927
https://access.redhat.com/security/cve/CVE-2020-11668
https://access.redhat.com/security/cve/CVE-2020-13434
https://access.redhat.com/security/cve/CVE-2020-15358
https://access.redhat.com/security/cve/CVE-2020-27618
https://access.redhat.com/security/cve/CVE-2020-28196
https://access.redhat.com/security/cve/CVE-2020-28469
https://access.redhat.com/security/cve/CVE-2020-28500
https://access.redhat.com/security/cve/CVE-2020-28851
https://access.redhat.com/security/cve/CVE-2020-28852
https://access.redhat.com/security/cve/CVE-2020-29361
https://access.redhat.com/security/cve/CVE-2020-29362
https://access.redhat.com/security/cve/CVE-2020-29363
https://access.redhat.com/security/cve/CVE-2021-3326
https://access.redhat.com/security/cve/CVE-2021-3377
https://access.redhat.com/security/cve/CVE-2021-3449
https://access.redhat.com/security/cve/CVE-2021-3450
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3560
https://access.redhat.com/security/cve/CVE-2021-20271
https://access.redhat.com/security/cve/CVE-2021-20305
https://access.redhat.com/security/cve/CVE-2021-21272
https://access.redhat.com/security/cve/CVE-2021-21309
https://access.redhat.com/security/cve/CVE-2021-21321
https://access.redhat.com/security/cve/CVE-2021-21322
https://access.redhat.com/security/cve/CVE-2021-23337
https://access.redhat.com/security/cve/CVE-2021-23343
https://access.redhat.com/security/cve/CVE-2021-23346
https://access.redhat.com/security/cve/CVE-2021-23362
https://access.redhat.com/security/cve/CVE-2021-23364
https://access.redhat.com/security/cve/CVE-2021-23368
https://access.redhat.com/security/cve/CVE-2021-23369
https://access.redhat.com/security/cve/CVE-2021-23382
https://access.redhat.com/security/cve/CVE-2021-23383
https://access.redhat.com/security/cve/CVE-2021-23839
https://access.redhat.com/security/cve/CVE-2021-23840
https://access.redhat.com/security/cve/CVE-2021-23841
https://access.redhat.com/security/cve/CVE-2021-25217
https://access.redhat.com/security/cve/CVE-2021-27219
https://access.redhat.com/security/cve/CVE-2021-27292
https://access.redhat.com/security/cve/CVE-2021-27358
https://access.redhat.com/security/cve/CVE-2021-28092
https://access.redhat.com/security/cve/CVE-2021-28918
https://access.redhat.com/security/cve/CVE-2021-29418
https://access.redhat.com/security/cve/CVE-2021-29477
https://access.redhat.com/security/cve/CVE-2021-29478
https://access.redhat.com/security/cve/CVE-2021-29482
https://access.redhat.com/security/cve/CVE-2021-32399
https://access.redhat.com/security/cve/CVE-2021-33033
https://access.redhat.com/security/cve/CVE-2021-33034
https://access.redhat.com/security/cve/CVE-2021-33502
https://access.redhat.com/security/cve/CVE-2021-33623
https://access.redhat.com/security/cve/CVE-2021-33909
https://access.redhat.com/security/cve/CVE-2021-33910
https://access.redhat.com/security/updates/classification/#important

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYQyKDNzjgjWX9erEAQhAWQ//fU2h/y+76CVkExXChhgJ779lC9Ec1f+X 6yw1b2WCHcztbTwyRtZw90dvIA1rNIDBrd83jIwfzsXzxEfGcCTriOmotHKX44+4 w6uPpmPSOBTsXB/yV/kvbPWpUKkahITC2uvjaInzO2zMmUQ2ntNGpvPu7BbFLmL1 oHMVIZaJ+zrPifwPhGqlp3rAkYe6uGobdvwtrOMXw8L5VnJor+35xLjos5k30IlC 4lftpWm9cD4oozdb5hw4A0i8fyAvue4hzpmgPfUJ6bngux8wycYhPGiRJR1HX03T MSXsWNBtqXNcB7r/GGqen73rr/eyyqsqfJ7+l8Uu7ph5cjk04foZcMqg+rz/1xne gVPkWcUJT8j7BH2sO8qiMdfYNl3+xNqPI9MtPEI8K/eiwynwETZqsKnEGIyhcTcX xe08Io2jV3jlnpQO/SBcvpKyzcqhDOuNBH2ozhn7Ka68WIMk2OuWempQcyDlWizO 1UbgoiMVb0hlP0APVpJKNtpfFCjBzFC24gWSAOPTep3vzA418Sn/moCJupM+3PPA QIzkGAt9f7sffI0JEg0JPEy0/aTmfsPm7XeR6DG+xF7o1nfy1SOcf+tcnPD0K+z8 8fS0uUMB/wO2s5yQ1TctsYzL9S5HRwMtnq7qKwWq9ItYzdQB4pcmyK1WgJAHVAtf Omk9Hj44tdI= =X9lR -----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

OpenSSL Security Advisory [16 February 2021]
============================================

Null pointer deref in X509_issuer_and_serial_hash() (CVE-2021-23841)

Severity: Moderate

The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However, it fails to correctly handle errors that may occur while parsing the issuer field (which can happen if the issuer field is maliciously constructed). This may result in a NULL pointer dereference and a crash, leading to a potential denial-of-service attack.

This issue was reported to OpenSSL on 15th December 2020 by Tavis Ormandy from Google. The fix was developed by Matt Caswell.
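The failure mode above is a missing error check before use. As a minimal sketch (function and parameter names here are hypothetical, not OpenSSL's C API; the real code hashes the DER-encoded issuer and serial with MD5), the defensive pattern the fix applies looks like this:

```python
import hashlib

def issuer_and_serial_hash(issuer, serial):
    """Model of the fixed behaviour: refuse to hash unvalidated input.

    In the vulnerable code path, a malformed issuer field made parsing
    fail, leaving a NULL pointer that was then dereferenced. The guard
    below models the fix: report the error to the caller instead of
    crashing.
    """
    if issuer is None or serial is None:
        return None  # parse error: signal failure, do not dereference
    digest = hashlib.md5()
    digest.update(issuer.encode("utf-8"))
    digest.update(serial.encode("utf-8"))
    return digest.hexdigest()
```

Callers must then treat a failure result as an error rather than a valid hash, which is exactly the check the vulnerable code omitted.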

Incorrect SSLv2 rollback protection (CVE-2021-23839)

Severity: Low

OpenSSL 1.0.2 supports SSLv2. If a client attempts to negotiate SSLv2 with a server configured to support both SSLv2 and more recent SSL and TLS versions, a check for a version rollback attack is made when unpadding an RSA signature: clients that support protocol versions greater than SSLv2 use a special form of padding, and a server supporting greater than SSLv2 is supposed to reject connection attempts where that padding is present, because it indicates a rollback has occurred. The implementation of this padding check inverted the logic, so the connection attempt is accepted if the padding is present and rejected if it is absent. Such a server will therefore accept a connection on which a version rollback attack has occurred, and will erroneously reject a normal SSLv2 connection attempt. Only OpenSSL 1.0.2 servers from version 1.0.2s to 1.0.2x are affected, and only if SSLv2 support (off by default at both compile time and runtime) and SSLv2 ciphersuites (not in the default ciphersuite list) have been configured. OpenSSL 1.1.1 does not have SSLv2 support and is not vulnerable to this issue.

This issue was reported to OpenSSL on 21st January 2021 by D. Katz and Joel Luellwitz from Trustwave. The fix was developed by Matt Caswell.
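The rollback check described above failed because a single boolean test was inverted. A minimal model (hypothetical function names; the real check lives in OpenSSL's C implementation of RSA_padding_check_SSLv23()) of the correct versus the vulnerable decision:

```python
def accept_connection_fixed(rollback_padding_present: bool) -> bool:
    # Correct logic: the special padding means both peers support a
    # protocol newer than SSLv2, so an SSLv2 handshake carrying it
    # indicates a version rollback attack and must be rejected.
    return not rollback_padding_present

def accept_connection_buggy(rollback_padding_present: bool) -> bool:
    # CVE-2021-23839: the check was inverted, so a rollback attempt
    # (padding present) is accepted and a genuine SSLv2-only client
    # (padding absent) is erroneously rejected.
    return rollback_padding_present
```

The two functions disagree on every input, which matches the advisory's description: the buggy server both accepts rolled-back connections and breaks legitimate SSLv2 clients.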

Integer overflow in CipherUpdate (CVE-2021-23840)

Severity: Low

Calls to EVP_CipherUpdate, EVP_EncryptUpdate and EVP_DecryptUpdate may overflow the output length argument in some cases where the input length is close to the maximum permissible length for an integer on the platform. In such cases the return value from the function call will be 1 (indicating success), but the output length value will be negative. This could cause applications to behave incorrectly or crash.

This issue was reported to OpenSSL on 13th December 2020 by Paul Kehrer. The fix was developed by Matt Caswell.
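The overflow above is plain C signed-integer arithmetic on the output length. The following is an arithmetic sketch only, not OpenSSL's actual code: the 16-byte block size and the exact expression are illustrative assumptions, used to show how an input length near INT_MAX wraps the reported output length negative even though the call "succeeds".

```python
INT32_MAX = 2**31 - 1

def to_int32(value):
    """Wrap an arbitrary Python int into C's 32-bit signed int range."""
    return (value + 2**31) % 2**32 - 2**31

def cipher_update_outl(inl, block_size=16):
    """Model the bug: the output length is computed as the input length
    plus up to one block of previously buffered data. When inl is close
    to INT_MAX, the 32-bit sum wraps and the reported length is negative
    while the (modelled) return code would still indicate success."""
    return to_int32(inl + block_size)
```

For example, `cipher_update_outl(100)` gives a sane 116, while `cipher_update_outl(INT32_MAX)` wraps negative, which is why the advisory warns that applications trusting the length can misbehave or crash.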

References

URL for this Security Advisory: https://www.openssl.org/news/secadv/20210216.txt

Note: the online version of the advisory may be updated with additional details over time.

For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1490",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2s"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.9.0.0.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.3.1.2"
      },
      {
        "model": "enterprise manager ops center",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.4.0.0"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.5.0.0.0"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.3.5"
      },
      {
        "model": "openssl",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2x"
      },
      {
        "model": "zfs storage appliance kit",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.8"
      },
      {
        "model": "jd edwards world security",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "a9.4"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.2.1.4.0"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.0.0.2"
      },
      {
        "model": "enterprise manager for storage management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "13.4.0.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.2.1.3.0"
      },
      {
        "model": "oracle graalvm",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30aa\u30e9\u30af\u30eb",
        "version": null
      },
      {
        "model": "oracle enterprise manager ops center",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30aa\u30e9\u30af\u30eb",
        "version": null
      },
      {
        "model": "openssl",
        "scope": null,
        "trust": 0.8,
        "vendor": "openssl",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens reported these vulnerabilities to CISA.",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2021-23839",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-23839",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 3.7,
            "baseSeverity": "LOW",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.2,
            "id": "CVE-2021-23839",
            "impactScore": 1.4,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:L/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 3.7,
            "baseSeverity": "Low",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2021-23839",
            "impactScore": null,
            "integrityImpact": "Low",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:L/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-23839",
            "trust": 1.0,
            "value": "LOW"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-23839",
            "trust": 0.8,
            "value": "Low"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202104-975",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202102-1230",
            "trust": 0.6,
            "value": "LOW"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-23839",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "OpenSSL 1.0.2 supports SSLv2. If a client attempts to negotiate SSLv2 with a server that is configured to support both SSLv2 and more recent SSL and TLS versions then a check is made for a version rollback attack when unpadding an RSA signature. Clients that support SSL or TLS versions greater than SSLv2 are supposed to use a special form of padding. A server that supports greater than SSLv2 is supposed to reject connection attempts from a client where this special form of padding is present, because this indicates that a version rollback has occurred (i.e. both client and server support greater than SSLv2, and yet this is the version that is being requested). The implementation of this padding check inverted the logic so that the connection attempt is accepted if the padding is present, and rejected if it is absent. This means that such as server will accept a connection if a version rollback attack has occurred. Further the server will erroneously reject a connection if a normal SSLv2 connection attempt is made. Only OpenSSL 1.0.2 servers from version 1.0.2s to 1.0.2x are affected by this issue. In order to be vulnerable a 1.0.2 server must: 1) have configured SSLv2 support at compile time (this is off by default), 2) have configured SSLv2 support at runtime (this is off by default), 3) have configured SSLv2 ciphersuites (these are not in the default ciphersuite list) OpenSSL 1.1.1 does not have SSLv2 support and therefore is not vulnerable to this issue. The underlying error is in the implementation of the RSA_padding_check_SSLv23() function. This also affects the RSA_SSLV23_PADDING padding mode used by various other functions. Although 1.1.1 does not support SSLv2 the RSA_padding_check_SSLv23() function still exists, as does the RSA_SSLV23_PADDING padding mode. Applications that directly call that function or use that padding mode will encounter this issue. 
However since there is no support for the SSLv2 protocol in 1.1.1 this is considered a bug and not a security issue in that version. OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.0.2y (Affected 1.0.2s-1.0.2x). OpenSSL There is a security level vulnerability in.Information may be tampered with. Pillow is a Python-based image processing library. \nThere is currently no information about this vulnerability, please feel free to follow CNNVD or manufacturer announcements. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Important: Red Hat Advanced Cluster Management for Kubernetes version 2.3\nAdvisory ID:       RHSA-2021:3016-01\nProduct:           Red Hat ACM\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:3016\nIssue date:        2021-08-05\nCVE Names:         CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 \n                   CVE-2018-1000858 CVE-2019-2708 CVE-2019-9169 \n                   CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 \n                   CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 \n                   CVE-2019-20934 CVE-2019-25013 CVE-2020-1730 \n                   CVE-2020-8231 CVE-2020-8284 CVE-2020-8285 \n                   CVE-2020-8286 CVE-2020-8927 CVE-2020-11668 \n                   CVE-2020-13434 CVE-2020-15358 CVE-2020-27618 \n                   CVE-2020-28196 CVE-2020-28469 CVE-2020-28500 \n                   CVE-2020-28851 CVE-2020-28852 CVE-2020-29361 \n                   CVE-2020-29362 CVE-2020-29363 CVE-2021-3326 \n                   CVE-2021-3377 CVE-2021-3449 CVE-2021-3450 \n                   CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 \n                   CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 \n                   CVE-2021-3560 
CVE-2021-20271 CVE-2021-20305 \n                   CVE-2021-21272 CVE-2021-21309 CVE-2021-21321 \n                   CVE-2021-21322 CVE-2021-23337 CVE-2021-23343 \n                   CVE-2021-23346 CVE-2021-23362 CVE-2021-23364 \n                   CVE-2021-23368 CVE-2021-23369 CVE-2021-23382 \n                   CVE-2021-23383 CVE-2021-23839 CVE-2021-23840 \n                   CVE-2021-23841 CVE-2021-25217 CVE-2021-27219 \n                   CVE-2021-27292 CVE-2021-27358 CVE-2021-28092 \n                   CVE-2021-28918 CVE-2021-29418 CVE-2021-29477 \n                   CVE-2021-29478 CVE-2021-29482 CVE-2021-32399 \n                   CVE-2021-33033 CVE-2021-33034 CVE-2021-33502 \n                   CVE-2021-33623 CVE-2021-33909 CVE-2021-33910 \n=====================================================================\n\n1. Summary:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 General\nAvailability release images, which fix several bugs and security issues. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE links in the References section. \n\n2. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs and security issues. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nSecurity:\n\n* fastify-reply-from: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21321)\n\n* fastify-http-proxy: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21322)\n\n* nodejs-netmask: improper input validation of octal input data\n(CVE-2021-28918)\n\n* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)\n\n* redis: Integer overflow via COPY command for large intsets\n(CVE-2021-29478)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing\n- -u- extension (CVE-2020-28851)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)\n\n* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)\n\n* redis: integer overflow when configurable limit for maximum supported\nbulk input size is too big on 32-bit platforms (CVE-2021-21309)\n\n* nodejs-lodash: command injection via template (CVE-2021-23337)\n\n* nodejs-hosted-git-info: Regular Expression denial of service via\nshortcutMatch in fromUrl() (CVE-2021-23362)\n\n* browserslist: parsing of invalid queries could result in Regular\nExpression Denial of Service (ReDoS) (CVE-2021-23364)\n\n* nodejs-postcss: Regular expression denial of service during source map\nparsing (CVE-2021-23368)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-postcss: ReDoS via getAnnotationURL() and 
loadAnnotation() in\nlib/previous-map.js (CVE-2021-23382)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\n* nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n(CVE-2021-27292)\n\n* grafana: snapshot feature allow an unauthenticated remote attacker to\ntrigger a DoS via a remote API call (CVE-2021-27358)\n\n* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)\n\n* nodejs-netmask: incorrectly parses an IP address that has octal integer\nwith invalid character (CVE-2021-29418)\n\n* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n(CVE-2021-29482)\n\n* normalize-url: ReDoS for data URLs (CVE-2021-33502)\n\n* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\n* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)\n\n* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)\n\nFor more details about the security issues, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npages listed in the References section. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that 
has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-2708\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20934\nhttps://access.redhat.com/security/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-8231\nhttps://access.redhat.com/security/cve/CVE-2020-8284\nhttps://access.redhat.com/security/cve/CVE-2020-8285\nhttps://access.redhat.com/security/cve/CVE-2020-8286\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-28196\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28851\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3377\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.
redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3560\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21272\nhttps://access.redhat.com/security/cve/CVE-2021-21309\nhttps://access.redhat.com/security/cve/CVE-2021-21321\nhttps://access.redhat.com/security/cve/CVE-2021-21322\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23343\nhttps://access.redhat.com/security/cve/CVE-2021-23346\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23364\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23369\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-23383\nhttps://access.redhat.com/security/cve/CVE-2021-23839\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-25217\nhttps://access.redhat.com/security/cve/CVE-2021-27219\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-27358\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-28918\nhttps://access.redhat.com/security/cve/CVE-2021-29418\nhttps://access.redhat.com/security/cve/CVE-2021-29477\nhttps://access.redhat.com/security/cve/CVE-2021-29478\nhttps://access.redhat.com/security/cve/CVE-2021-29482\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE
-2021-33033\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33502\nhttps://access.redhat.com/security/cve/CVE-2021-33623\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/cve/CVE-2021-33910\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQyKDNzjgjWX9erEAQhAWQ//fU2h/y+76CVkExXChhgJ779lC9Ec1f+X\n6yw1b2WCHcztbTwyRtZw90dvIA1rNIDBrd83jIwfzsXzxEfGcCTriOmotHKX44+4\nw6uPpmPSOBTsXB/yV/kvbPWpUKkahITC2uvjaInzO2zMmUQ2ntNGpvPu7BbFLmL1\noHMVIZaJ+zrPifwPhGqlp3rAkYe6uGobdvwtrOMXw8L5VnJor+35xLjos5k30IlC\n4lftpWm9cD4oozdb5hw4A0i8fyAvue4hzpmgPfUJ6bngux8wycYhPGiRJR1HX03T\nMSXsWNBtqXNcB7r/GGqen73rr/eyyqsqfJ7+l8Uu7ph5cjk04foZcMqg+rz/1xne\ngVPkWcUJT8j7BH2sO8qiMdfYNl3+xNqPI9MtPEI8K/eiwynwETZqsKnEGIyhcTcX\nxe08Io2jV3jlnpQO/SBcvpKyzcqhDOuNBH2ozhn7Ka68WIMk2OuWempQcyDlWizO\n1UbgoiMVb0hlP0APVpJKNtpfFCjBzFC24gWSAOPTep3vzA418Sn/moCJupM+3PPA\nQIzkGAt9f7sffI0JEg0JPEy0/aTmfsPm7XeR6DG+xF7o1nfy1SOcf+tcnPD0K+z8\n8fS0uUMB/wO2s5yQ1TctsYzL9S5HRwMtnq7qKwWq9ItYzdQB4pcmyK1WgJAHVAtf\nOmk9Hj44tdI=\n=X9lR\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. OpenSSL Security Advisory [16 February 2021]\n============================================\n\nNull pointer deref in X509_issuer_and_serial_hash() (CVE-2021-23841)\n====================================================================\n\nSeverity: Moderate\n\nThe OpenSSL public API function X509_issuer_and_serial_hash() attempts to\ncreate a unique hash value based on the issuer and serial number data contained\nwithin an X509 certificate. 
However it fails to correctly handle any errors\nthat may occur while parsing the issuer field (which might occur if the issuer\nfield is maliciously constructed). This may subsequently result in a NULL\npointer deref and a crash leading to a potential denial of service attack. \n\nThis issue was reported to OpenSSL on 15th December 2020 by Tavis Ormandy from\nGoogle. The fix was developed by Matt Caswell. \n\nIncorrect SSLv2 rollback protection (CVE-2021-23839)\n====================================================\n\nSeverity: Low\n\nOpenSSL 1.0.2 supports SSLv2. \n\nThis issue was reported to OpenSSL on 21st January 2021 by D. Katz and Joel\nLuellwitz from Trustwave. The fix was developed by Matt Caswell. \n\nInteger overflow in CipherUpdate (CVE-2021-23840)\n=================================================\n\nSeverity: Low\n\nCalls to EVP_CipherUpdate, EVP_EncryptUpdate and EVP_DecryptUpdate may overflow\nthe output length argument in some cases where the input length is close to the\nmaximum permissable length for an integer on the platform. In such cases the\nreturn value from the function call will be 1 (indicating success), but the\noutput length value will be negative. This could cause applications to behave\nincorrectly or crash. \n\nThis issue was reported to OpenSSL on 13th December 2020 by Paul Kehrer. The fix\nwas developed by Matt Caswell. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20210216.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. \n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-23839"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "169676"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-23839",
        "trust": 3.5
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "PULSESECURE",
        "id": "SA44846",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94508446",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872",
        "trust": 0.8
      },
      {
        "db": "CS-HELP",
        "id": "SB2021041363",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.0636",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2259.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1502",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2657",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021041501",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071618",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021092209",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23839",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163747",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169676",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "169676"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "id": "VAR-202102-1490",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-23T21:25:00.398000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Oracle\u00a0Critical\u00a0Patch\u00a0Update\u00a0Advisory\u00a0-\u00a0April\u00a02021 Mitsubishi Electric Mitsubishi Electric Corporation",
        "trust": 0.8,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=30919ab80a478f2d81f2e9acdcca3fa4740cd547"
      },
      {
        "title": "OpenSSL Fixes for encryption problem vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=142768"
      },
      {
        "title": "IBM: Security Bulletin: Vulnerabilities in OpenSSL affect AIX (CVE-2021-23839, CVE-2021-23840, and CVE-2021-23841)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3d5f5025c65711c2d9489cd9fe502978"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-23839 log"
      },
      {
        "title": "IBM: Security Bulletin: IBM MQ for HP NonStop Server is affected by OpenSSL vulnerabilities  CVE-2021-23839, CVE-2021-23840 and CVE-2021-23841",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=9ff59b7038a3eb3a3ff198d62d8029d1"
      },
      {
        "title": "IBM: Security Bulletin:  Multiple OpenSSL Vulnerabilities Affect  IBM Connect:Direct for HP NonStop",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=10390d4e672c305fd00ed46b83871274"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2021-1608",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2021-1608"
      },
      {
        "title": "Siemens Security Advisories: Siemens Security Advisory",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/Live-Hack-CVE/CVE-2021-23839 "
      },
      {
        "title": "CVE-2021-23839",
        "trust": 0.1,
        "url": "https://github.com/PwnCast/CVE-2021-23839 "
      },
      {
        "title": "tekton-image-scan-trivy",
        "trust": 0.1,
        "url": "https://github.com/vinamra28/tekton-image-scan-trivy "
      },
      {
        "title": "TASSL-1.1.1k",
        "trust": 0.1,
        "url": "https://github.com/jntass/TASSL-1.1.1k "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/scholarnishu/Trivy-by-AquaSecurity "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/fredrkl/trivy-demo "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-327",
        "trust": 1.0
      },
      {
        "problemtype": "Inappropriate cryptographic strength (CWE-326) [NVD evaluation]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://www.openssl.org/news/secadv/20210216.txt"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20210219-0009/"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
      },
      {
        "trust": 1.7,
        "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44846"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23839"
      },
      {
        "trust": 1.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=30919ab80a478f2d81f2e9acdcca3fa4740cd547"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20240621-0006/"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu94508446/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/"
      },
      {
        "trust": 0.7,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-aix-cve-2021-23839-cve-2021-23840-and-cve-2021-23841-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021041363"
      },
      {
        "trust": 0.6,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=30919ab80a478f2d81f2e9acdcca3fa4740cd547"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-vulnerability-was-identified-and-remediated-in-the-ibm-maas360-cloud-extender-v2-103-000-051-and-modules/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-openssl-affect-ibm-tivoli-netcool-system-service-monitors-application-service-monitors/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-openssl-vulnerabilities-affect-ibm-connectdirect-for-hp-nonstop/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1502"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2657"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-websphere-mq-for-hp-nonstop-server-is-affected-by-multiple-openssl-vulnerabilities-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-mq-for-hp-nonstop-server-is-affected-by-openssl-vulnerabilities-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.0636"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021041501"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-sterling-connectexpress-for-unix-is-affected-by-multiple-vulnerabilities-in-openssl-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilites-affect-engineering-lifecycle-management-and-ibm-engineering-products/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021092209"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071618"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-affect-ibm-sdk-for-node-js-in-ibm-cloud-5/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerability-in-openssl-affects-ibm-rational-clearcase-cve-2020-1971-cve-2021-23839-cve-2021-23840-cve-2021-23841-cve-2021-23839-cve-2021-23840-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-aix-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/openssl-1-0-2-read-write-access-via-sslv2-rollback-protection-bypass-34596"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-vulnerabilites-impacting-aspera-high-speed-transfer-server-aspera-high-speed-transfer-endpoint-aspera-desktop-client-4-0-and-earlier-cve-2021-23839-cve-2021-23840-cve/"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-ibm-integration-bus-and-ibm-app-connect-enterprise-v11-cve-2021-23839-cve-2021-23840/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-ibm-integration-bus-and-ibm-app-connect-enterprise-v11-cve-2021-23839-cve-2021-23840-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-vulnerabilites-impacting-aspera-high-speed-transfer-server-aspera-high-speed-transfer-endpoint-aspera-desktop-client-4-0-and-earlier-cve-2021-23839-cve-2021-23840-cve-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2259.2"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-security-vulnerabilities-fixed-in-openssl-as-shipped-with-ibm-security-verify-products/"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/327.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/live-hack-cve/cve-2021-23839"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/pwncast/cve-2021-23839"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29418"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33034"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28092"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23364"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21309"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28851"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25217"
      },
      {
        "trust": 0.1,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3016"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3377"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21272"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29477"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27292"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29478"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23839"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/support/contracts.html"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/policies/secpolicy.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "169676"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "169676"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-02-16T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "date": "2021-11-09T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "date": "2021-08-06T14:02:37",
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "date": "2021-02-16T12:12:12",
        "db": "PACKETSTORM",
        "id": "169676"
      },
      {
        "date": "2021-04-13T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "date": "2021-02-16T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      },
      {
        "date": "2021-02-16T17:15:13.190000",
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-23839"
      },
      {
        "date": "2022-09-20T06:06:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      },
      {
        "date": "2021-04-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "date": "2022-09-19T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      },
      {
        "date": "2024-11-21T05:51:55.003000",
        "db": "NVD",
        "id": "CVE-2021-23839"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1230"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Cryptographic strength vulnerability in OpenSSL",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-003872"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      }
    ],
    "trust": 0.6
  }
}

var-202207-0378
Vulnerability from variot

A cryptographic vulnerability exists on Node.js on Linux in versions of 18.x prior to 18.40.0 which allowed a default path for openssl.cnf that might be accessible under some circumstances to a non-admin user instead of /etc/ssl, as was the case in versions prior to the upgrade to OpenSSL 3. Node.js from the Node.js Foundation, and products from multiple other vendors, are vulnerable to uncontrolled search path elements. Information may be tampered with. Node.js July 7th 2022 Security Releases: Attempt to read openssl.cnf from /home/iojs/build/ upon startup. When Node.js starts on Linux-based systems, it attempts to read /home/iojs/build/ws/out/Release/obj.target/deps/openssl/openssl.cnf, which ordinarily doesn't exist. On some shared systems an attacker may be able to create this file and therefore affect the default OpenSSL configuration for other users.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202405-29


                                       https://security.gentoo.org/

 Severity: Low
    Title: Node.js: Multiple Vulnerabilities
     Date: May 08, 2024
     Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614
       ID: 202405-29


Synopsis

Multiple vulnerabilities have been discovered in Node.js.

Background

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All Node.js 20 users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"

All Node.js 18 users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"

All Node.js 16 users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"
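After upgrading, it can be useful to confirm the installed version meets the advisory's minimum. A minimal sketch, not part of the advisory, using GNU `sort -V` for version comparison; the version number below is the advisory's Node.js 18 line, and in practice `installed` would come from `node --version | sed 's/^v//'`:

```shell
# Hedged helper: compare an installed Node.js version string against the
# fixed version from the advisory using GNU `sort -V` version ordering.
installed="18.17.1"   # illustrative; take from `node --version` on a real host
fixed="18.17.1"
# If the fixed version sorts first (or is equal), the installed version
# is at least the fixed one.
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n 1)
if [ "$lowest" = "$fixed" ]; then
  echo "not vulnerable: $installed >= $fixed"
else
  echo "vulnerable: $installed < $fixed"
fi
```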

References

[ 1 ] CVE-2020-7774 https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672 https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883 https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884 https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918 https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930 https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931 https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939 https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940 https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602 https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786 https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256 https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202405-29

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5
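The startup probe described in this record can be checked locally. A minimal sketch: the path is taken verbatim from the advisory text, and whether a given Node.js build actually reads it is an assumption to verify per system.

```shell
# Local check for the planted-config scenario described above. If the probed
# openssl.cnf already exists on a shared system, another user may have placed
# it there to influence the default OpenSSL configuration.
probe="/home/iojs/build/ws/out/Release/obj.target/deps/openssl/openssl.cnf"
if [ -e "$probe" ]; then
  echo "warning: $probe exists and may override the default OpenSSL config"
else
  echo "ok: $probe is not present"
fi
```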

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0378",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.0.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.5.0"
      },
      {
        "model": "node.js",
        "scope": null,
        "trust": 0.8,
        "vendor": "node js",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Gentoo",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "178512"
      }
    ],
    "trust": 0.1
  },
  "cve": "CVE-2022-32222",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-32222",
            "impactScore": 1.4,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 5.3,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2022-32222",
            "impactScore": null,
            "integrityImpact": "Low",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-32222",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-32222",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202207-682",
            "trust": 0.6,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A cryptographic vulnerability exists on Node.js on linux in versions of 18.x prior to 18.40.0 which allowed a default path for openssl.cnf that might be accessible under some circumstances to a non-admin user instead of /etc/ssl as was the case in versions prior to the upgrade to OpenSSL 3. Node.js Foundation of Node.js Products from multiple other vendors are vulnerable to uncontrolled search path elements.Information may be tampered with. Node.js July 7th 2022 Security Releases: Attempt to read openssl.cnf from /home/iojs/build/ upon startup. When Node.js starts on linux based systems, it attempts to read /home/iojs/build/ws/out/Release/obj.target/deps/openssl/openssl.cnf, which ordinarily doesn\u0027t exist. On some shared systems an attacker may be able create this file and therefore affect the default OpenSSL configuration for other users. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n    Title: Node.js: Multiple Vulnerabilities\n     Date: May 08, 2024\n     Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n       ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n      https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n      https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] 
CVE-2022-3602\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n      https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n      
https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-32222"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32222"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      }
    ],
    "trust": 1.8
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-32222",
        "trust": 3.4
      },
      {
        "db": "HACKERONE",
        "id": "1695596",
        "trust": 2.4
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242",
        "trust": 0.8
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071338",
        "trust": 0.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-682",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32222",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "178512",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32222"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "id": "VAR-202207-0378",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:52:43.493000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-310",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-427",
        "trust": 1.0
      },
      {
        "problemtype": "Uncontrolled search path elements (CWE-427) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://hackerone.com/reports/1695596"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222"
      },
      {
        "trust": 0.7,
        "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/"
      },
      {
        "trust": 0.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 0.6,
        "url": "https://security.netapp.com/advisory/ntap-20220915-0001/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-32222"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-32222/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202405-29"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32222"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32222"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-06T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "date": "2024-05-09T15:46:44",
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "date": "2022-07-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      },
      {
        "date": "2022-07-14T15:15:08.437000",
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-06T08:23:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      },
      {
        "date": "2023-07-25T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      },
      {
        "date": "2023-07-24T13:16:33.287000",
        "db": "NVD",
        "id": "CVE-2022-32222"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Uncontrolled Search Path Element vulnerability in Node.js Foundation's Node.js and products from other vendors",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013242"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-682"
      }
    ],
    "trust": 0.6
  }
}

var-202207-0587
Vulnerability from variot

The llhttp parser <v14.20.1, <v16.17.1 and <v18.9.1 in the http module in Node.js does not correctly parse and validate Transfer-Encoding headers, which can lead to HTTP Request Smuggling (HRS). llhttp, and products from other vendors that embed it, are affected by this HTTP request smuggling vulnerability; information may be obtained and tampered with. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update
Advisory ID:       RHSA-2022:6389-01
Product:           Red Hat Software Collections
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6389
Issue date:        2022-09-08
CVE Names:         CVE-2022-32212 CVE-2022-32213 CVE-2022-32214
                   CVE-2022-32215 CVE-2022-33987
====================================================================
1. Summary:

An update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now available for Red Hat Software Collections.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64
Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64

3. Description:

Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.

The following packages have been upgraded to a later upstream version: rh-nodejs14-nodejs (14.20.0).

Security Fix(es):

  • nodejs: DNS rebinding in --inspect via invalid IP addresses (CVE-2022-32212)

  • nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding (CVE-2022-32213)

  • nodejs: HTTP request smuggling due to improper delimiting of header fields (CVE-2022-32214)

  • nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding (CVE-2022-32215)

  • got: missing verification of requested URLs allows redirects to UNIX sockets (CVE-2022-33987)
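
The Transfer-Encoding smuggling class fixed above comes down to two HTTP parsers disagreeing on how a request body is framed. The following is a minimal illustrative Python sketch, not Node.js/llhttp source; `framing_decision` is a hypothetical helper written for this example:

```python
# Illustrative sketch only -- not Node.js/llhttp code. It shows why a
# lenient Transfer-Encoding parser in front of (or behind) a strict one
# is a request-smuggling primitive: the two sides disagree on where one
# request ends and the next begins.

def framing_decision(headers, strict=True):
    """Decide body framing ('chunked' or 'content-length') for a request.

    `headers` is a list of (name, value) tuples. Hypothetical helper
    for illustration.
    """
    te = [v for n, v in headers if n.lower() == "transfer-encoding"]
    cl = [v for n, v in headers if n.lower() == "content-length"]
    if strict:
        # RFC 9112 behaviour: reject ambiguous or malformed framing outright.
        if te and cl:
            raise ValueError("ambiguous framing: Transfer-Encoding + Content-Length")
        if te:
            if te[-1] != "chunked":
                raise ValueError("malformed Transfer-Encoding: %r" % te[-1])
            return "chunked"
        return "content-length"
    # Lenient parser: tolerates padding/casing and quietly prefers chunked.
    if any("chunked" in v.lower() for v in te):
        return "chunked"
    return "content-length"

ambiguous = [("Transfer-Encoding", " chunked "), ("Content-Length", "4")]

# A strict parser refuses the ambiguous request...
try:
    framing_decision(ambiguous, strict=True)
    raise AssertionError("strict parser should have rejected this")
except ValueError:
    pass

# ...while a lenient one silently frames by chunked encoding. When a proxy
# and an origin server make opposite choices here, the attacker controls
# where the "next" request starts -- that is HTTP request smuggling.
assert framing_decision(ambiguous, strict=False) == "chunked"
```

When both ends of a proxy chain apply the strict branch, the malformed header is rejected before it can cause a framing disagreement; that is the effect of the parser fixes listed above.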

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

  • rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Bugs fixed (https://bugzilla.redhat.com/):

2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets
2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses
2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding
2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields
2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding
2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]

6. Package List:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):

Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

ppc64le: rh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm

s390x: rh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm

x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm

Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):

Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2022-32212
https://access.redhat.com/security/cve/CVE-2022-32213
https://access.redhat.com/security/cve/CVE-2022-32214
https://access.redhat.com/security/cve/CVE-2022-32215
https://access.redhat.com/security/cve/CVE-2022-33987
https://access.redhat.com/security/updates/classification/#moderate

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/ ODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm VScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ bAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF IPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq +62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM 4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M 3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91 BYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI nBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX bcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz hGdWoRKL34w\xcePC -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 9) - aarch64, noarch, ppc64le, s390x, x86_64

3. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Debian Security Advisory DSA-5326-1                   security@debian.org
https://www.debian.org/security/                                  Aron Xu
January 24, 2023                      https://www.debian.org/security/faq


Package        : nodejs
CVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215
                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548

Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.
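
The "bypass of host IP address validation" class mentioned above typically arises when one component parses a host string leniently (accepting legacy shorthands) while another parses it strictly. A minimal Python sketch under that assumption follows; it is illustrative only, and `is_loopback_host` is a hypothetical helper, not Debian or Node.js code:

```python
import ipaddress

def is_loopback_host(host):
    """Return True only for a strictly-parsed loopback address.

    Hypothetical helper for illustration. Legacy shorthands such as
    "127.1" (which inet_aton and browsers resolve to 127.0.0.1) are
    rejected instead of silently normalised; accepting them in one
    component but not another is what makes validation bypassable.
    """
    try:
        ip = ipaddress.ip_address(host)  # strict: full dotted-quad or RFC 4291 form
    except ValueError:
        return False
    return ip.is_loopback

assert is_loopback_host("127.0.0.1")
assert is_loopback_host("::1")
assert not is_loopback_host("127.1")      # shorthand rejected, not remapped
assert not is_loopback_host("localhost")  # names must be resolved separately
```

The safe pattern is to parse once, strictly, and make every later check operate on the parsed address rather than the raw string.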

For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.

We recommend that you upgrade your nodejs packages.

For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29


                                       https://security.gentoo.org/

Severity: Low
Title: Node.js: Multiple Vulnerabilities
Date: May 08, 2024
Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614
ID: 202405-29


Synopsis

Multiple vulnerabilities have been discovered in Node.js.

Background

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.

Affected packages

Package          Vulnerable    Unaffected
---------------  ------------  ------------
net-libs/nodejs  < 16.20.2     >= 16.20.2

Description

Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All Node.js 20 users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"

All Node.js 18 users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"

All Node.js 16 users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"

References

[ 1 ] CVE-2020-7774 https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672 https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883 https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884 https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918 https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930 https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931 https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939 https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940 https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602 https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786 https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256 https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202405-29

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0587",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.17.1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.9.1"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.0.0"
      },
      {
        "model": "management center",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "stormshield",
        "version": "3.3.2"
      },
      {
        "model": "llhttp",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "2.1.5"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.14.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.0.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.12.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.13.0"
      },
      {
        "model": "llhttp",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "6.0.7"
      },
      {
        "model": "llhttp",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "6.0.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.20.1"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.15.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "37"
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "management center",
        "scope": null,
        "trust": 0.8,
        "vendor": "stormshield",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "llhttp",
        "scope": null,
        "trust": 0.8,
        "vendor": "llhttp",
        "version": null
      },
      {
        "model": "node.js",
        "scope": null,
        "trust": 0.8,
        "vendor": "node js",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      }
    ],
    "trust": 0.5
  },
  "cve": "CVE-2022-32213",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-32213",
            "impactScore": 2.5,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.5,
            "baseSeverity": "Medium",
            "confidentialityImpact": "Low",
            "exploitabilityScore": null,
            "id": "CVE-2022-32213",
            "impactScore": null,
            "integrityImpact": "Low",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-32213",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-32213",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202207-683",
            "trust": 0.6,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The llhttp parser \u003cv14.20.1, \u003cv16.17.1 and \u003cv18.9.1 in the http module in Node.js does not correctly parse and validate Transfer-Encoding headers and can lead to HTTP Request Smuggling (HRS). llhttp of llhttp For products from other vendors, HTTP There is a vulnerability related to request smuggling.Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update\nAdvisory ID:       RHSA-2022:6389-01\nProduct:           Red Hat Software Collections\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:6389\nIssue date:        2022-09-08\nCVE Names:         CVE-2022-32212 CVE-2022-32213 CVE-2022-32214\n                   CVE-2022-32215 CVE-2022-33987\n====================================================================\n1. Summary:\n\nAn update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now\navailable for Red Hat Software Collections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nrh-nodejs14-nodejs (14.20.0). 
\n\nSecurity Fix(es):\n\n* nodejs: DNS rebinding in --inspect via invalid IP addresses\n(CVE-2022-32212)\n\n* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n(CVE-2022-32213)\n\n* nodejs: HTTP request smuggling due to improper delimiting of header\nfields (CVE-2022-32214)\n\n* nodejs: HTTP request smuggling due to incorrect parsing of multi-line\nTransfer-Encoding (CVE-2022-32215)\n\n* got: missing verification of requested URLs allows redirects to UNIX\nsockets (CVE-2022-33987)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets\n2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses\n2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding\n2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields\n2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 
7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nppc64le:\nrh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm\n\ns390x:\nrh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32212\nhttps://access.redhat.com/security/cve/CVE-2022-32213\nhttps://access.redhat.com/security/cve/CVE-2022-32214\nhttps://access.redhat.com/security/cve/CVE-2022-32215\nhttps://access.redhat.com/security/cve/CVE-2022-33987\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/\nODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm\nVScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ\nbAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF\nIPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq\n+62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM\n4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M\n3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91\nBYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI\nnBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX\nbcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz\nhGdWoRKL34w\\xcePC\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1                   security@debian.org\nhttps://www.debian.org/security/                                  Aron Xu\nJanuary 24, 2023                      https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : nodejs\nCVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. 
\n\nWe recommend that you upgrade your nodejs packages. \n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n    Title: Node.js: Multiple Vulnerabilities\n     Date: May 08, 2024\n     Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n       ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. \n\nAffected packages\n================\nPackage          Vulnerable    Unaffected\n---------------  ------------  ------------\nnet-libs/nodejs  \u003c 16.20.2     \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n      https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n      https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] 
CVE-2022-3602\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n      https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n      
https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-32213"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32213"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      }
    ],
    "trust": 2.34
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-32213",
        "trust": 4.0
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 2.4
      },
      {
        "db": "HACKERONE",
        "id": "1524555",
        "trust": 2.4
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-017-03",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90782730",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168305",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169410",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168442",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168359",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170727",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3673",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3488",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3505",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3487",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4136",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4101",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3586",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4681",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071827",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071338",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072639",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072522",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071612",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-683",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32213",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168358",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "178512",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32213"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "id": "VAR-202207-0587",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:04:24.035000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-444",
        "trust": 1.0
      },
      {
        "problemtype": "HTTP Request Smuggling (CWE-444) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.5,
        "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/"
      },
      {
        "trust": 2.4,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 2.4,
        "url": "https://hackerone.com/reports/1524555"
      },
      {
        "trust": 2.4,
        "url": "https://www.debian.org/security/2023/dsa-5326"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213"
      },
      {
        "trust": 1.4,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/"
      },
      {
        "trust": 1.4,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/"
      },
      {
        "trust": 1.4,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/"
      },
      {
        "trust": 1.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32213"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90782730/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212"
      },
      {
        "trust": 0.6,
        "url": "https://security.netapp.com/advisory/ntap-20220915-0001/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3505"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168305/red-hat-security-advisory-2022-6389-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072522"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168442/red-hat-security-advisory-2022-6595-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168359/red-hat-security-advisory-2022-6448-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4681"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072639"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4101"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3673"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4136"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3487"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071827"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3586"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3488"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071612"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169410/red-hat-security-advisory-2022-6985-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-32213/"
      },
      {
        "trust": 0.5,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32214"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32212"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-33987"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32215"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-33987"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6389"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6985"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29244"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7788"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29244"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7788"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6448"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/nodejs"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202405-29"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32213"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32213"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-07T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "date": "2022-09-08T14:41:32",
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "date": "2022-10-18T22:30:49",
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "date": "2022-09-21T13:47:04",
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "date": "2022-09-13T15:43:41",
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "date": "2022-09-13T15:43:55",
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "date": "2023-01-25T16:09:12",
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "date": "2024-05-09T15:46:44",
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "date": "2022-07-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      },
      {
        "date": "2022-07-14T15:15:08.287000",
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-07T08:25:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      },
      {
        "date": "2023-02-01T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      },
      {
        "date": "2023-11-07T03:47:46.473000",
        "db": "NVD",
        "id": "CVE-2022-32213"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "llhttp in llhttp and in products from multiple other vendors: HTTP Request Smuggling Vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013368"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "environmental issue",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-683"
      }
    ],
    "trust": 0.6
  }
}

var-202009-1545
Vulnerability from variot

Multiple memory corruption vulnerabilities exist in CodeMeter (all versions prior to 7.10), where the packet parser does not verify length fields. An attacker could send specially crafted packets to exploit these vulnerabilities. CodeMeter contains a vulnerability in which a buffer is accessed with an improper length value; exploitation may lead to information disclosure, information tampering, and denial of service (DoS). Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens, Germany, and an integral part of its HMI series. The system is mainly used in industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the SPPA-T2000 nuclear DCS system. SPPA-S3000 simulates the automation components of the SPPA-T3000 DCS system. SPPA-T3000 is a distributed control system, mainly used in fossil-fuel and large renewable energy power plants.

Many Siemens products have memory corruption vulnerabilities
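The root cause described above is the classic CWE-805 pattern: the packet parser trusts an attacker-controlled length field instead of checking it against the size of the data actually received. A minimal, hypothetical sketch of the flawed versus the checked approach (this is not CodeMeter's actual protocol or code):

```python
import struct

def parse_payload_unchecked(packet: bytes) -> bytes:
    # Read a 2-byte big-endian length field, then trust it blindly.
    # Python slicing silently truncates, but the equivalent C code
    # (memcpy(dst, packet + 2, length)) would read past the buffer.
    (length,) = struct.unpack_from(">H", packet, 0)
    return packet[2:2 + length]

def parse_payload_checked(packet: bytes) -> bytes:
    # Verify the declared length against what was actually received
    # before using it.
    (length,) = struct.unpack_from(">H", packet, 0)
    if length > len(packet) - 2:
        raise ValueError("declared length exceeds packet size")
    return packet[2:2 + length]

# Crafted packet: declares 0xFFFF bytes of payload but carries only 4.
crafted = struct.pack(">H", 0xFFFF) + b"ABCD"
```

In a memory-unsafe implementation, the unchecked variant turns the crafted packet into an out-of-bounds read or write, which matches the information-disclosure, tampering, and DoS impacts listed for this entry.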



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202009-1545",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "codemeter",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "wibu",
        "version": "7.10"
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": "7.10"
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": null
      },
      {
        "model": "information server sp1",
        "scope": "lte",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "\u003c=2019"
      },
      {
        "model": "simatic wincc oa",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.17"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sppa-s2000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.04"
      },
      {
        "model": "sppa-s2000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.06"
      },
      {
        "model": "sppa-t3000 r8.2 sp2",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sppa-s3000",
        "scope": "eq",
        "trust": 0.6,
        "vendor": "siemens",
        "version": "3.05"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "cve": "CVE-2020-14509",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 7.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-14509",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "HIGH",
            "trust": 1.9,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "CNVD",
            "availabilityImpact": "COMPLETE",
            "baseScore": 10.0,
            "confidentialityImpact": "COMPLETE",
            "exploitabilityScore": 10.0,
            "id": "CNVD-2020-51245",
            "impactScore": 10.0,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 0.6,
            "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.8,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-14509",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 9.8,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2020-14509",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-14509",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-14509",
            "trust": 0.8,
            "value": "Critical"
          },
          {
            "author": "CNVD",
            "id": "CNVD-2020-51245",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202009-491",
            "trust": 0.6,
            "value": "CRITICAL"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-14509",
            "trust": 0.1,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-14509"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Multiple memory corruption vulnerabilities exist in CodeMeter (All versions prior to 7.10) where the packet parser mechanism does not verify length fields. An attacker could send specially crafted packets to exploit these vulnerabilities. CodeMeter There is a vulnerability in accessing the buffer with an improper length value.Information is obtained, information is tampered with, and service is disrupted  (DoS) It may be put into a state. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants. \n\r\n\r\nMany Siemens products have memory corruption vulnerabilities",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-14509"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-14509"
      }
    ],
    "trust": 2.25
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-14509",
        "trust": 3.9
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-20-203-01",
        "trust": 2.5
      },
      {
        "db": "JVN",
        "id": "JVNVU90770748",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94568336",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219",
        "trust": 0.8
      },
      {
        "db": "SIEMENS",
        "id": "SSA-455843",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.3",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022021806",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-14509",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-14509"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "id": "VAR-202009-1545",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      }
    ],
    "trust": 1.3593294842857142
  },
  "iot_taxonomy": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "category": [
          "ICS"
        ],
        "sub_category": null,
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      }
    ]
  },
  "last_update_date": "2024-11-23T20:27:55.134000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "CodeMeter",
        "trust": 0.8,
        "url": "https://www.wibu.com/products/codemeter.html"
      },
      {
        "title": "Patch for Memory corruption vulnerabilities in many Siemens products",
        "trust": 0.6,
        "url": "https://www.cnvd.org.cn/patchInfo/show/233335"
      },
      {
        "title": "ARC  and MATIO Security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=127912"
      },
      {
        "title": "Siemens Security Advisories: Siemens Security Advisory",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=6161645a91c3d669954a802b5a5a2baf"
      },
      {
        "title": "Threatpost",
        "trust": 0.1,
        "url": "https://threatpost.com/severe-industrial-bugs-takeover-critical-systems/159068/"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-14509"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-Other",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-805",
        "trust": 1.0
      },
      {
        "problemtype": "Accessing the buffer with improper length values (CWE-805) [ Other ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.5,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14509"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu94568336/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90770748/"
      },
      {
        "trust": 0.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/805.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://exchange.xforce.ibmcloud.com/vulnerabilities/187940"
      },
      {
        "trust": 0.1,
        "url": "https://threatpost.com/severe-industrial-bugs-takeover-critical-systems/159068/"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-14509"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-14509"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-10T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "date": "2020-09-16T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-14509"
      },
      {
        "date": "2021-03-24T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "date": "2020-09-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      },
      {
        "date": "2020-09-16T20:15:13.380000",
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-10T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51245"
      },
      {
        "date": "2020-09-22T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-14509"
      },
      {
        "date": "2022-03-15T05:02:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      },
      {
        "date": "2022-02-21T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      },
      {
        "date": "2024-11-21T05:03:25.453000",
        "db": "NVD",
        "id": "CVE-2020-14509"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "CodeMeter\u00a0 Vulnerability in accessing buffers with improper length values in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011219"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-491"
      }
    ],
    "trust": 0.6
  }
}

var-202207-0588
Vulnerability from variot

The llhttp parser (<v14.20.1, <v16.17.1 and <v18.9.1) in the http module in Node.js does not correctly handle multi-line Transfer-Encoding headers. This can lead to HTTP Request Smuggling (HRS). llhttp, and products from other vendors that embed it, are therefore affected by an HTTP request smuggling vulnerability: information may be obtained and information may be tampered with.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update
Advisory ID:       RHSA-2022:6389-01
Product:           Red Hat Software Collections
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6389
Issue date:        2022-09-08
CVE Names:         CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-33987
====================================================================
1. Summary:

An update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now available for Red Hat Software Collections.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64
Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64

3. Description:

Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.

The following packages have been upgraded to a later upstream version: rh-nodejs14-nodejs (14.20.0).

Security Fix(es):

  • nodejs: DNS rebinding in --inspect via invalid IP addresses (CVE-2022-32212)

  • nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding (CVE-2022-32213)

  • nodejs: HTTP request smuggling due to improper delimiting of header fields (CVE-2022-32214)

  • nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding (CVE-2022-32215)

  • got: missing verification of requested URLs allows redirects to UNIX sockets (CVE-2022-33987)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

  • rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Bugs fixed (https://bugzilla.redhat.com/):

2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets 2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses 2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding 2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields 2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding 2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]

6. Package List:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):

Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

ppc64le: rh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm

s390x: rh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm

x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm

Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):

Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2022-32212 https://access.redhat.com/security/cve/CVE-2022-32213 https://access.redhat.com/security/cve/CVE-2022-32214 https://access.redhat.com/security/cve/CVE-2022-32215 https://access.redhat.com/security/cve/CVE-2022-33987 https://access.redhat.com/security/updates/classification/#moderate

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/ ODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm VScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ bAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF IPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq +62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM 4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M 3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91 BYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI nBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX bcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz hGdWoRKL34w\xcePC -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 9) - aarch64, noarch, ppc64le, s390x, x86_64
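Underlying the advisories in this entry is one parsing disagreement: what value a folded (multi-line) Transfer-Encoding header actually carries. A simplified, hypothetical illustration (not llhttp's actual code) of how a parser that drops continuation lines and one that folds them per RFC 9112 reach different framing decisions:

```python
RAW = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.test\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b" , identity\r\n"  # obs-fold continuation of the previous header
    b"Content-Length: 5\r\n"
    b"\r\n"
)

def headers_dropping_obs_fold(raw: bytes) -> dict:
    # Flawed view: continuation lines are ignored, so the header
    # value appears to be plain "chunked".
    headers = {}
    for line in raw.split(b"\r\n"):
        if b":" in line and not line.startswith((b" ", b"\t")):
            name, _, value = line.partition(b":")
            headers[name.strip().lower()] = value.strip()
    return headers

def headers_folding_obs_fold(raw: bytes) -> dict:
    # RFC 9112 view: a line starting with SP/HTAB continues the
    # previous header field's value.
    headers = {}
    last = None
    for line in raw.split(b"\r\n"):
        if line.startswith((b" ", b"\t")) and last is not None:
            headers[last] += b" " + line.strip()
        elif b":" in line:
            name, _, value = line.partition(b":")
            last = name.strip().lower()
            headers[last] = value.strip()
    return headers
```

One endpoint sees Transfer-Encoding `chunked`, the other sees `chunked , identity`; a front-end proxy and a back-end server that fall on different sides of this disagreement no longer agree on where the message body ends, which is exactly the condition HTTP request smuggling exploits.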

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Debian Security Advisory DSA-5326-1                   security@debian.org
https://www.debian.org/security/                                  Aron Xu
January 24, 2023                      https://www.debian.org/security/faq


Package        : nodejs
CVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548

Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.

For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.

We recommend that you upgrade your nodejs packages.

For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . ========================================================================== Ubuntu Security Notice USN-6491-1 November 21, 2023

nodejs vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 22.04 LTS
  • Ubuntu 20.04 LTS
  • Ubuntu 18.04 LTS (Available with Ubuntu Pro)

Summary:

Several security issues were fixed in Node.js.

Software Description:
- nodejs: An open-source, cross-platform JavaScript runtime environment.

Details:

Axel Chong discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. (CVE-2022-32212)

Zeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213, CVE-2022-32214, CVE-2022-32215)

It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)

It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-43548)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 22.04 LTS: libnode-dev 12.22.9~dfsg-1ubuntu3.2 libnode72 12.22.9~dfsg-1ubuntu3.2 nodejs 12.22.9~dfsg-1ubuntu3.2 nodejs-doc 12.22.9~dfsg-1ubuntu3.2

Ubuntu 20.04 LTS: libnode-dev 10.19.0~dfsg-3ubuntu1.3 libnode64 10.19.0~dfsg-3ubuntu1.3 nodejs 10.19.0~dfsg-3ubuntu1.3 nodejs-doc 10.19.0~dfsg-3ubuntu1.3

Ubuntu 18.04 LTS (Available with Ubuntu Pro): nodejs 8.10.0~dfsg-2ubuntu0.4+esm4 nodejs-dev 8.10.0~dfsg-2ubuntu0.4+esm4 nodejs-doc 8.10.0~dfsg-2ubuntu0.4+esm4

In general, a standard system update will make all the necessary changes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202405-29


                                       https://security.gentoo.org/

Severity: Low Title: Node.js: Multiple Vulnerabilities Date: May 08, 2024 Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614 ID: 202405-29


Synopsis

Multiple vulnerabilities have been discovered in Node.js.

Background

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.

Affected packages

Package          Vulnerable    Unaffected
---------------  ------------  ------------
net-libs/nodejs  < 16.20.2     >= 16.20.2

Description

Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All Node.js 20 users should upgrade to the latest version:

=net-libs/nodejs-20">
  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"

All Node.js 18 users should upgrade to the latest version:

=net-libs/nodejs-18">
  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"

All Node.js 16 users should upgrade to the latest version:

=net-libs/nodejs-16">
  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"

References

[ 1 ] CVE-2020-7774    https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672    https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883   https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884   https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918   https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930   https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931   https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939   https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940   https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959  https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960  https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701  https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712  https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134  https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135  https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531  https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532  https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533  https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778   https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602   https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786   https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824  https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212  https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213  https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214  https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215  https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222  https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255  https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256  https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948  https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949  https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548  https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581  https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582  https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583  https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584  https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586  https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587  https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588  https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589  https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590  https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002  https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003  https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004  https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005  https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006  https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558  https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559  https://nvd.nist.gov/vuln/detail/CVE-2023-32559

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202405-29

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0588",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "llhttp",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "14.20.1"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.20.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.0.0"
      },
      {
        "model": "llhttp",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "16.0.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.5.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.16.0"
      },
      {
        "model": "management center",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "stormshield",
        "version": "3.3.2"
      },
      {
        "model": "llhttp",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "16.17.1"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.14.0"
      },
      {
        "model": "llhttp",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "18.9.1"
      },
      {
        "model": "llhttp",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "14.0.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.12.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.13.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.0.0"
      },
      {
        "model": "llhttp",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "18.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.15.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "37"
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "management center",
        "scope": null,
        "trust": 0.8,
        "vendor": "stormshield",
        "version": null
      },
      {
        "model": "node.js",
        "scope": null,
        "trust": 0.8,
        "vendor": "node js",
        "version": null
      },
      {
        "model": "llhttp",
        "scope": null,
        "trust": 0.8,
        "vendor": "llhttp",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      }
    ],
    "trust": 0.5
  },
  "cve": "CVE-2022-32215",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-32215",
            "impactScore": 2.5,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.5,
            "baseSeverity": "Medium",
            "confidentialityImpact": "Low",
            "exploitabilityScore": null,
            "id": "CVE-2022-32215",
            "impactScore": null,
            "integrityImpact": "Low",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-32215",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-32215",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202207-678",
            "trust": 0.6,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The llhttp parser \u003cv14.20.1, \u003cv16.17.1 and \u003cv18.9.1 in the http module in Node.js does not correctly handle multi-line Transfer-Encoding headers. This can lead to HTTP Request Smuggling (HRS). llhttp of llhttp For products from other vendors, HTTP There is a vulnerability related to request smuggling.Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update\nAdvisory ID:       RHSA-2022:6389-01\nProduct:           Red Hat Software Collections\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:6389\nIssue date:        2022-09-08\nCVE Names:         CVE-2022-32212 CVE-2022-32213 CVE-2022-32214\n                   CVE-2022-32215 CVE-2022-33987\n====================================================================\n1. Summary:\n\nAn update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now\navailable for Red Hat Software Collections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nrh-nodejs14-nodejs (14.20.0). 
\n\nSecurity Fix(es):\n\n* nodejs: DNS rebinding in --inspect via invalid IP addresses\n(CVE-2022-32212)\n\n* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n(CVE-2022-32213)\n\n* nodejs: HTTP request smuggling due to improper delimiting of header\nfields (CVE-2022-32214)\n\n* nodejs: HTTP request smuggling due to incorrect parsing of multi-line\nTransfer-Encoding (CVE-2022-32215)\n\n* got: missing verification of requested URLs allows redirects to UNIX\nsockets (CVE-2022-33987)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets\n2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses\n2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding\n2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields\n2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 
7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nppc64le:\nrh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm\n\ns390x:\nrh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32212\nhttps://access.redhat.com/security/cve/CVE-2022-32213\nhttps://access.redhat.com/security/cve/CVE-2022-32214\nhttps://access.redhat.com/security/cve/CVE-2022-32215\nhttps://access.redhat.com/security/cve/CVE-2022-33987\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/\nODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm\nVScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ\nbAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF\nIPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq\n+62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM\n4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M\n3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91\nBYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI\nnBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX\nbcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz\nhGdWoRKL34w\\xcePC\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1                   security@debian.org\nhttps://www.debian.org/security/                                  Aron Xu\nJanuary 24, 2023                      https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : nodejs\nCVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. 
\n\nWe recommend that you upgrade your nodejs packages. \n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. ==========================================================================\nUbuntu Security Notice USN-6491-1\nNovember 21, 2023\n\nnodejs vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS (Available with Ubuntu Pro)\n\nSummary:\n\nSeveral security issues were fixed in Node.js. \n\nSoftware Description:\n- nodejs: An open-source, cross-platform JavaScript runtime environment. \n\nDetails:\n\nAxel Chong discovered that Node.js incorrectly handled certain inputs. 
If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. (CVE-2022-32212)\n\nZeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213,\nCVE-2022-32214, CVE-2022-32215)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. 
(CVE-2022-43548)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n   libnode-dev                     12.22.9~dfsg-1ubuntu3.2\n   libnode72                       12.22.9~dfsg-1ubuntu3.2\n   nodejs                          12.22.9~dfsg-1ubuntu3.2\n   nodejs-doc                      12.22.9~dfsg-1ubuntu3.2\n\nUbuntu 20.04 LTS:\n   libnode-dev                     10.19.0~dfsg-3ubuntu1.3\n   libnode64                       10.19.0~dfsg-3ubuntu1.3\n   nodejs                          10.19.0~dfsg-3ubuntu1.3\n   nodejs-doc                      10.19.0~dfsg-3ubuntu1.3\n\nUbuntu 18.04 LTS (Available with Ubuntu Pro):\n   nodejs                          8.10.0~dfsg-2ubuntu0.4+esm4\n   nodejs-dev                      8.10.0~dfsg-2ubuntu0.4+esm4\n   nodejs-doc                      8.10.0~dfsg-2ubuntu0.4+esm4\n\nIn general, a standard system update will make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n    Title: Node.js: Multiple Vulnerabilities\n     Date: May 08, 2024\n     Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n       ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. 
\n\nAffected packages\n================\nPackage          Vulnerable    Unaffected\n---------------  ------------  ------------\nnet-libs/nodejs  \u003c 16.20.2     \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n      https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n      https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 
14 ] CVE-2021-39134\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n      https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n      
https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-32215"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32215"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-32215",
        "trust": 4.1
      },
      {
        "db": "HACKERONE",
        "id": "1501679",
        "trust": 2.4
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 2.4
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-017-03",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90782730",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168305",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169410",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168442",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168358",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170727",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3673",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3488",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3505",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3487",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4136",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4101",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3586",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4681",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071827",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071338",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072639",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072522",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071612",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-678",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32215",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "175817",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "178512",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168359",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32215"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "id": "VAR-202207-0588",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-29T22:19:57.824000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-444",
        "trust": 1.0
      },
      {
        "problemtype": "HTTP Request Smuggling (CWE-444) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.5,
        "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/"
      },
      {
        "trust": 2.4,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 2.4,
        "url": "https://hackerone.com/reports/1501679"
      },
      {
        "trust": 2.4,
        "url": "https://www.debian.org/security/2023/dsa-5326"
      },
      {
        "trust": 1.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215"
      },
      {
        "trust": 1.4,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/"
      },
      {
        "trust": 1.4,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/"
      },
      {
        "trust": 1.4,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/"
      },
      {
        "trust": 1.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32215"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90782730/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213"
      },
      {
        "trust": 0.6,
        "url": "https://security.netapp.com/advisory/ntap-20220915-0001/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3505"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168305/red-hat-security-advisory-2022-6389-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072522"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168442/red-hat-security-advisory-2022-6595-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168358/red-hat-security-advisory-2022-6449-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4681"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-32215/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072639"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4101"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3673"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4136"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3487"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071827"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3586"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3488"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071612"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169410/red-hat-security-advisory-2022-6985-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338"
      },
      {
        "trust": 0.5,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32214"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32213"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32212"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-33987"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-33987"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6389"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6985"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29244"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7788"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29244"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7788"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6449"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/nodejs"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/nodejs/12.22.9~dfsg-1ubuntu3.2"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-6491-1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/nodejs/10.19.0~dfsg-3ubuntu1.3"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202405-29"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6448"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32215"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32215"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-06T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "date": "2022-09-08T14:41:32",
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "date": "2022-10-18T22:30:49",
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "date": "2022-09-21T13:47:04",
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "date": "2022-09-13T15:43:41",
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "date": "2023-01-25T16:09:12",
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "date": "2023-11-21T16:00:44",
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "date": "2024-05-09T15:46:44",
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "date": "2022-09-13T15:43:55",
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "date": "2022-07-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      },
      {
        "date": "2022-07-14T15:15:08.387000",
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-06T08:23:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      },
      {
        "date": "2023-02-01T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      },
      {
        "date": "2023-11-07T03:47:46.577000",
        "db": "NVD",
        "id": "CVE-2022-32215"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "llhttp\u00a0 of \u00a0llhttp\u00a0 in products from other multiple vendors \u00a0HTTP\u00a0 Request Smuggling Vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013243"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "environmental issue",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-678"
      }
    ],
    "trust": 0.6
  }
}

var-202312-0207
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). Affected products do not properly validate the certificate of the configured UMC server. This could allow an attacker to intercept credentials that are sent to the UMC server as well as to manipulate responses, potentially allowing an attacker to escalate privileges. Siemens' SINEC INS contains a certificate validation vulnerability. Information may be obtained or tampered with, and service operation may be interrupted (DoS).
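The flaw described above is the classic CWE-295 pattern: a TLS client that accepts whatever certificate the peer presents, allowing a man-in-the-middle to harvest credentials and forge responses. As a minimal sketch (generic Python, not tied to the Siemens product or its actual fix), the remediation is to keep both hostname checking and certificate verification enabled on the client context:

```python
import ssl

def make_verified_context(ca_file=None):
    """Build a TLS context that actually verifies the server certificate.

    The vulnerability class above is the opposite pattern: the client
    trusts any certificate from the (supposed) UMC server, so an
    attacker in the network path can intercept credentials.
    ca_file is an illustrative parameter for a custom trust anchor;
    by default the system CA store is used.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    # Both checks must stay enabled; disabling either one reintroduces
    # the vulnerability class (CWE-295, improper certificate validation).
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

A connection made through such a context (for example via `ctx.wrap_socket(sock, server_hostname="umc.example.net")`) fails closed when the certificate chain or hostname does not match, rather than silently accepting an attacker's certificate.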



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0207",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "cve": "CVE-2023-48427",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.8,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2023-48427",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "HIGH",
            "baseScore": 8.1,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.2,
            "id": "CVE-2023-48427",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 9.8,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2023-48427",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2023-48427",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2023-48427",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2023-48427",
            "trust": 0.8,
            "value": "Critical"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected products do not properly validate the certificate of the configured UMC server. This could allow an attacker to intercept credentials that are sent to the UMC server as well as to manipulate responses, potentially allowing an attacker to escalate privileges. Siemens\u0027 SINEC INS Exists in a certificate validation vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-48427",
        "trust": 2.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-077170",
        "trust": 1.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-348-16",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU98271228",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "id": "VAR-202312-0207",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:09:22.349000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-295",
        "trust": 1.0
      },
      {
        "problemtype": "Illegal certificate verification (CWE-295) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu98271228/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-48427"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-348-16"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-01-15T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "date": "2023-12-12T12:15:14.677000",
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-01-15T02:20:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      },
      {
        "date": "2023-12-14T20:07:17.240000",
        "db": "NVD",
        "id": "CVE-2023-48427"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens\u0027 \u00a0SINEC\u00a0INS\u00a0 Certificate validation vulnerabilities in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-019617"
      }
    ],
    "trust": 0.8
  }
}
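The CVSS v3.1 base, impact, and exploitability scores recorded in entries like the one above are derived mechanically from the vector string. A minimal sketch of the scope-unchanged scoring arithmetic (metric weights and the Roundup function are taken from the CVSS v3.1 specification; the function names here are illustrative, not part of any library):

```python
import math

# CVSS v3.1 metric weights (subset: scope-unchanged vectors only)
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """Round up to one decimal place, as defined in the CVSS v3.1 spec appendix."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    """Base score for a scope-unchanged CVSS:3.1 vector."""
    w = WEIGHTS
    iss = 1 - (1 - w["CIA"][c]) * (1 - w["CIA"][i]) * (1 - w["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * w["AV"][av] * w["AC"][ac] * w["PR"][pr] * w["UI"][ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

For AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H this yields 9.8, and for the AC:H variant 8.1, matching the two scores stored in this entry. Scope-changed vectors (such as the 9.9 entry below) use a different impact formula and PR weights, omitted here.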

var-202312-0205
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The REST API of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the API. The server will automatically restart.
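The class of flaw described above, a REST endpoint that never bounds the length of incoming parameters, is typically mitigated by validating sizes before any further processing. A minimal, generic sketch (the limit and function name are hypothetical, not taken from SINEC INS):

```python
MAX_PARAM_LEN = 256  # hypothetical per-parameter limit

def oversized_params(params: dict) -> list:
    """Return the names of parameters whose values exceed the length limit.

    A handler would reject the request (e.g. with HTTP 400) when this list
    is non-empty, instead of passing unbounded input to the backend.
    """
    return [name for name, value in params.items()
            if len(str(value)) > MAX_PARAM_LEN]
```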

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0205",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  },
  "cve": "CVE-2023-48430",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "LOW",
            "baseScore": 2.7,
            "baseSeverity": "LOW",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 1.2,
            "id": "CVE-2023-48430",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2023-48430",
            "trust": 1.0,
            "value": "LOW"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2023-48430",
            "trust": 1.0,
            "value": "LOW"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The REST API of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the API. The server will automatically restart.",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ],
    "trust": 1.0
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "SIEMENS",
        "id": "SSA-077170",
        "trust": 1.0
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48430",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  },
  "id": "VAR-202312-0205",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:45:16.918000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-392",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.0,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-12-12T12:15:15.433000",
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-12-14T19:37:28.207000",
        "db": "NVD",
        "id": "CVE-2023-48430"
      }
    ]
  }
}

var-202411-0481
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application does not properly sanitize user-provided paths for SFTP-based file uploads and downloads. This could allow an authenticated remote attacker to manipulate arbitrary files on the filesystem and achieve arbitrary code execution on the device. Siemens' SINEC INS contains a path traversal vulnerability. Information may be obtained, information may be tampered with, and service operation may be interrupted (DoS).
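Path traversal of this kind arises when a user-supplied path is joined to a base directory without normalization. A common defensive pattern, sketched here generically (this is not SINEC INS code; the names are illustrative):

```python
import os.path

def resolve_transfer_path(base_dir: str, user_path: str) -> str:
    """Join user_path under base_dir, refusing any path that escapes it.

    Normalizes the joined path and checks it is still inside base_dir,
    which rejects both "../" sequences and absolute user paths.
    """
    base = os.path.normpath(base_dir)
    candidate = os.path.normpath(os.path.join(base, user_path))
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("path escapes the transfer root")
    return candidate
```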

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202411-0481",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "cve": "CVE-2024-46888",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.9,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.1,
            "id": "CVE-2024-46888",
            "impactScore": 6.0,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "CHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 9.9,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2024-46888",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "Low",
            "scope": "Changed",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2024-46888",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2024-46888",
            "trust": 1.0,
            "value": "Critical"
          },
          {
            "author": "NVD",
            "id": "CVE-2024-46888",
            "trust": 0.8,
            "value": "Critical"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly sanitize user provided paths for SFTP-based file up- and downloads. This could allow an authenticated remote attacker to manipulate arbitrary files on the filesystem and achieve arbitrary code execution on the device. Siemens\u0027 SINEC INS Exists in a past traversal vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2024-46888",
        "trust": 2.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-915275",
        "trust": 1.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96191615",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "id": "VAR-202411-0481",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-16T22:09:55.581000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-22",
        "trust": 1.0
      },
      {
        "problemtype": "Path traversal (CWE-22) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96191615/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2024-46888"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "date": "2024-11-12T13:15:08.927000",
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T02:31:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      },
      {
        "date": "2024-11-13T23:11:24.570000",
        "db": "NVD",
        "id": "CVE-2024-46888"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens\u0027 \u00a0SINEC\u00a0INS\u00a0 Past traversal vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012662"
      }
    ],
    "trust": 0.8
  }
}

var-202210-0037
Vulnerability from variot

A weak randomness in WebCrypto keygen vulnerability exists in Node.js 18 due to a change with EntropySource() in SecretKeyGenTraits::DoKeyGen() in src/crypto/crypto_keygen.cc. There are two problems with this: 1) It does not check the return value; it assumes EntropySource() always succeeds, but it can (and sometimes will) fail. 2) The random data returned by EntropySource() may not be cryptographically strong and therefore not suitable as keying material. Node.js from the Node.js Foundation, and products from multiple other vendors, contain a vulnerability in the use of a weak PRNG. Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Important: nodejs:16 security update
Advisory ID:       RHSA-2022:6964-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6964
Issue date:        2022-10-17
CVE Names:         CVE-2022-35255 CVE-2022-35256
====================================================================
1. Summary:

An update for the nodejs:16 module is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64

3. Description:

Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.

The following packages have been upgraded to a later upstream version: nodejs 16.

Security Fix(es):

  • nodejs: weak randomness in WebCrypto keygen (CVE-2022-35255)

  • nodejs: HTTP Request Smuggling due to incorrect parsing of header fields (CVE-2022-35256)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Bugs fixed (https://bugzilla.redhat.com/):

2130517 - CVE-2022-35255 nodejs: weak randomness in WebCrypto keygen
2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields

6. Package List:

Red Hat Enterprise Linux AppStream (v. 8):

Source:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.src.rpm
nodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.src.rpm
nodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.src.rpm

aarch64:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.aarch64.rpm

noarch:
nodejs-docs-16.17.1-1.module+el8.6.0+16848+a483195a.noarch.rpm
nodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.noarch.rpm
nodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.noarch.rpm

ppc64le:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.ppc64le.rpm

s390x:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.s390x.rpm

x86_64:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2022-35255
https://access.redhat.com/security/cve/CVE-2022-35256
https://access.redhat.com/security/updates/classification/#important

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBY01tM9zjgjWX9erEAQgRRw/8DdK1QObq3so9+4ybaPFjCpdytAyNFy2E vrWNb7xRSO8myrQJ3cspxWMgRgfjMeJYPL8MT7iolW0SMWPd3uNMIh6ej3nK6zo+ BqHGgPBB2+knIF9ApMxW+2OpQAl4j0ICOeyLinqUXsyzDqPUOdW5kgNIPog668tc VsxB2Lt7pAJcpNkmwx6gvU5aZ6rWOUeNKyjAnat5AJPUx+NbtOtFWymivlPKCNWg bcGktfXz22tAixuEih9pC+YrPbJ++AHg5lZbK35uHBeGe7i9OdhbH8lbGrV5+0Vo 3DOlVTvuofjPZr0Ft50ChMsgsc/3pmBTXZOEfLrNHIMlJ2sHsP/3ZQ4hUmYYI3xs BF6HmgS4d3rEybSyXjqkQHKvSEi8KxBcs0y8RrvZeEUOfwTPwdaWKIhlzzn3lGYm a4iPlYzfCTfV4h2YdLvNE0hcOeaChiPVWvVxb9aV9XUW2ibWyHPSlJpBoP1UjMW4 8T0tYn6hUUWhWWT4cra5ipEjCmU9YfhdFsjoqKS/KFNA7kD94NSqWcbPs+3XnKbT l2IjXb8aBpn2Yykq1u4t12VEJCnKeTEUt43/LAlXW1mkNV3OQ2bPl2qwdEPTQxDP WBoK9aPtqD6W3VyuNza3VItmZKYw7nHtZL40YpvbdA6XtmlHZF6bFEiLdSwNduaV jippDtM0Pgw=vFcS -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512


Debian Security Advisory DSA-5326-1                   security@debian.org
https://www.debian.org/security/                                  Aron Xu
January 24, 2023                      https://www.debian.org/security/faq


Package        : nodejs
CVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215
                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548

Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.
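The "weak randomness setup" mentioned above comes down to key material drawn from a source that is not cryptographically strong, or whose failure goes unchecked. A generic Python sketch of the safe pattern (illustrative only; this is not Node.js's actual fix, and `generate_secret_key` is a hypothetical helper):

```python
import secrets

def generate_secret_key(num_bytes: int = 32) -> bytes:
    # secrets.token_bytes draws from the OS CSPRNG and raises an error
    # on failure instead of silently returning weak bytes, avoiding both
    # failure modes of an unchecked entropy source.
    return secrets.token_bytes(num_bytes)

key = generate_secret_key()
print(len(key))  # 32
```

By contrast, seeding a non-cryptographic PRNG (such as Python's `random` module) and using its output as keying material would reproduce the class of weakness these advisories describe.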

For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.

We recommend that you upgrade your nodejs packages.

For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29


                                       https://security.gentoo.org/

 Severity: Low
    Title: Node.js: Multiple Vulnerabilities
     Date: May 08, 2024
     Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614
       ID: 202405-29


Synopsis
=======
Multiple vulnerabilities have been discovered in Node.js.

Background
=========
Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.

Affected packages
================
Package          Vulnerable    Unaffected
---------------  ------------  ------------
net-libs/nodejs  < 16.20.2     >= 16.20.2

Description
==========
Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.

Impact
=====
Please review the referenced CVE identifiers for details.

Workaround
=========
There is no known workaround at this time.

Resolution
=========
All Node.js 20 users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"

All Node.js 18 users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"

All Node.js 16 users should upgrade to the latest version:

  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"

References
=========
[ 1 ] CVE-2020-7774
      https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672
      https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883
      https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884
      https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918
      https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930
      https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931
      https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939
      https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940
      https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959
      https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960
      https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701
      https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712
      https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134
      https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135
      https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531
      https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532
      https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533
      https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778
      https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602
      https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786
      https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824
      https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212
      https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213
      https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214
      https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215
      https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222
      https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255
      https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256
      https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948
      https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949
      https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548
      https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581
      https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582
      https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583
      https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584
      https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586
      https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587
      https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588
      https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589
      https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590
      https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002
      https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003
      https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004
      https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005
      https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006
      https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558
      https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559
      https://nvd.nist.gov/vuln/detail/CVE-2023-32559

Availability
===========
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202405-29

Concerns?
========
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License
======
Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5
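The affected/unaffected boundaries in the advisory above are plain dotted-version comparisons. A minimal sketch (hypothetical helper, not part of any advisory tooling; assumes simple dotted-integer version strings) of checking an installed Node.js version against the 16.20.2 fix level:

```python
def parse_version(v: str) -> tuple:
    # "16.20.1" -> (16, 20, 1)
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed: str = "16.20.2") -> bool:
    # Vulnerable when the installed version sorts strictly below the fix
    return parse_version(installed) < parse_version(fixed)

print(is_vulnerable("16.20.1"))  # True: below the fix level
print(is_vulnerable("16.20.2"))  # False: at the fix level
```

Real package managers use richer version grammars (epochs, suffixes such as `~dfsg`), so this comparison is only valid for plain `major.minor.patch` strings.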



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202210-0037",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.17.1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.9.1"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.0.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "15.14.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.12.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "15.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.13.0"
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "node.js",
        "scope": null,
        "trust": 0.8,
        "vendor": "node js",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "Siemens",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      }
    ],
    "trust": 0.3
  },
  "cve": "CVE-2022-35255",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 9.1,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-35255",
            "impactScore": 5.2,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 9.1,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-35255",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-35255",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-35255",
            "trust": 0.8,
            "value": "Critical"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202210-1268",
            "trust": 0.6,
            "value": "CRITICAL"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A weak randomness in WebCrypto keygen vulnerability exists in Node.js 18 due to a change with EntropySource() in SecretKeyGenTraits::DoKeyGen() in src/crypto/crypto_keygen.cc. There are two problems with this: 1) It does not check the return value, it assumes EntropySource() always succeeds, but it can (and sometimes will) fail. 2) The random data returned by EntropySource() may not be cryptographically strong and therefore not suitable as keying material. Node.js from the Node.js Foundation, as well as products from multiple other vendors, contains a vulnerability in the use of a weak PRNG. Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: nodejs:16 security update\nAdvisory ID:       RHSA-2022:6964-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:6964\nIssue date:        2022-10-17\nCVE Names:         CVE-2022-35255 CVE-2022-35256\n====================================================================\n1. Summary:\n\nAn update for the nodejs:16 module is now available for Red Hat Enterprise\nLinux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nnodejs 16. 
\n\nSecurity Fix(es):\n\n* nodejs: weak randomness in WebCrypto keygen (CVE-2022-35255)\n\n* nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n(CVE-2022-35256)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2130517 - CVE-2022-35255 nodejs: weak randomness in WebCrypto keygen\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\nSource:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.src.rpm\nnodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.src.rpm\nnodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.src.rpm\n\naarch64:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.aarch64.rpm\n\nnoarch:\nnodejs-docs-16.17.1-1.module+el8.6.0+16848+a483195a.noarch.rpm\nnodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.noarch.rpm\nnodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.noarch.rpm\n\nppc64le:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnp
m-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.ppc64le.rpm\n\ns390x:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.s390x.rpm\n\nx86_64:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-35255\nhttps://access.redhat.com/security/cve/CVE-2022-35256\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY01tM9zjgjWX9erEAQgRRw/8DdK1QObq3so9+4ybaPFjCpdytAyNFy2E\nvrWNb7xRSO8myrQJ3cspxWMgRgfjMeJYPL8MT7iolW0SMWPd3uNMIh6ej3nK6zo+\nBqHGgPBB2+knIF9ApMxW+2OpQAl4j0ICOeyLinqUXsyzDqPUOdW5kgNIPog668tc\nVsxB2Lt7pAJcpNkmwx6gvU5aZ6rWOUeNKyjAnat5AJPUx+NbtOtFWymivlPKCNWg\nbcGktfXz22tAixuEih9pC+YrPbJ++AHg5lZbK35uHBeGe7i9OdhbH8lbGrV5+0Vo\n3DOlVTvuofjPZr0Ft50ChMsgsc/3pmBTXZOEfLrNHIMlJ2sHsP/3ZQ4hUmYYI3xs\nBF6HmgS4d3rEybSyXjqkQHKvSEi8KxBcs0y8RrvZeEUOfwTPwdaWKIhlzzn3lGYm\na4iPlYzfCTfV4h2YdLvNE0hcOeaChiPVWvVxb9aV9XUW2ibWyHPSlJpBoP1UjMW4\n8T0tYn6hUUWhWWT4cra5ipEjCmU9YfhdFsjoqKS/KFNA7kD94NSqWcbPs+3XnKbT\nl2IjXb8aBpn2Yykq1u4t12VEJCnKeTEUt43/LAlXW1mkNV3OQ2bPl2qwdEPTQxDP\nWBoK9aPtqD6W3VyuNza3VItmZKYw7nHtZL40YpvbdA6XtmlHZF6bFEiLdSwNduaV\njippDtM0Pgw=vFcS\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1                   security@debian.org\nhttps://www.debian.org/security/                                  Aron Xu\nJanuary 24, 2023                      https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : nodejs\nCVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. 
\n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n    Title: Node.js: Multiple Vulnerabilities\n     Date: May 08, 2024\n     Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n       ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. \n\nAffected packages\n================\nPackage          Vulnerable    Unaffected\n---------------  ------------  ------------\nnet-libs/nodejs  \u003c 16.20.2     \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n      https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n      https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] 
CVE-2022-3602\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n      https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n      
https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-35255"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-35255"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      }
    ],
    "trust": 2.16
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-35255",
        "trust": 3.8
      },
      {
        "db": "HACKERONE",
        "id": "1690000",
        "trust": 2.4
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 2.4
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-017-03",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90782730",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "169408",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169779",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170727",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5146",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-35255",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168757",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "178512",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35255"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "id": "VAR-202210-0037",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T13:12:40.970000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Node.js Fixing measures for security feature vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=216854"
      },
      {
        "title": "Red Hat: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-35255"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35255"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-338",
        "trust": 1.0
      },
      {
        "problemtype": "Use of cryptographically weak PRNG (CWE-338) [NVD evaluation]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 2.4,
        "url": "https://hackerone.com/reports/1690000"
      },
      {
        "trust": 2.4,
        "url": "https://security.netapp.com/advisory/ntap-20230113-0002/"
      },
      {
        "trust": 2.4,
        "url": "https://www.debian.org/security/2023/dsa-5326"
      },
      {
        "trust": 1.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90782730/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169408/red-hat-security-advisory-2022-6963-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5146"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169779/red-hat-security-advisory-2022-7821-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-35255/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-35255"
      },
      {
        "trust": 0.3,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.3,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-35256"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6963"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6964"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7821"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/nodejs"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202405-29"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35255"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35255"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-17T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "date": "2022-10-18T22:30:35",
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "date": "2022-10-18T14:27:29",
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "date": "2022-11-08T13:50:31",
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "date": "2023-01-25T16:09:12",
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "date": "2024-05-09T15:46:44",
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "date": "2022-10-18T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      },
      {
        "date": "2022-12-05T22:15:10.513000",
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-17T08:21:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      },
      {
        "date": "2023-02-01T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      },
      {
        "date": "2023-03-01T15:03:19.287000",
        "db": "NVD",
        "id": "CVE-2022-35255"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Use of cryptographically weak PRNG vulnerability in Node.js from the Node.js Foundation and in products from multiple other vendors",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-022576"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "security feature problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1268"
      }
    ],
    "trust": 0.6
  }
}

var-202301-0545
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product could potentially read and write arbitrary files from and to the device's file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS contains a path traversal vulnerability. Information may be disclosed or tampered with, and a denial-of-service (DoS) condition may result.
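Path traversal of this kind is typically mitigated by canonicalising the requested path and checking that it stays inside an allowed base directory. A minimal, hypothetical Python sketch of that check (not SINEC INS code; `safe_join` and the paths are illustrative):

```python
import os

def safe_join(base_dir: str, user_path: str) -> str:
    """Resolve user_path inside base_dir, rejecting path traversal.

    Canonicalise both paths, then refuse any result that escapes the
    base directory (e.g. via "../" sequences or symlinks).
    """
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path traversal attempt: {user_path!r}")
    return candidate

# A benign name stays under the base directory:
print(safe_join("/var/www/files", "report.txt"))
# "../" sequences that climb out of it are rejected:
try:
    safe_join("/var/www/files", "../../etc/passwd")
except ValueError as err:
    print("blocked:", err)
```

Comparing canonicalised paths catches both literal `../` sequences and symlink-based escapes, which a simple substring check on the raw input would miss.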

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202301-0545",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0 sp2 update 1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "cve": "CVE-2022-45092",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-45092",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "HIGH",
            "baseScore": 9.9,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.1,
            "id": "CVE-2022-45092",
            "impactScore": 6.0,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-45092",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "Low",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-45092",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2022-45092",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-45092",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202301-654",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS contains a path traversal vulnerability. Information may be disclosed or tampered with, and a denial-of-service (DoS) condition may result.",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-45092",
        "trust": 3.2
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 1.6
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-017-03",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90782730",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "id": "VAR-202301-0545",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:24:25.883000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "SSA-332410",
        "trust": 0.8,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "title": "Siemens SINEC NMS Repair measures for path traversal vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=221640"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-22",
        "trust": 1.0
      },
      {
        "problemtype": "Path traversal (CWE-22) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90782730/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-45092"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-45092/"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-05-16T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "date": "2023-01-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      },
      {
        "date": "2023-01-10T12:15:23.453000",
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-05-16T03:29:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      },
      {
        "date": "2023-01-16T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      },
      {
        "date": "2023-01-14T00:47:06.117000",
        "db": "NVD",
        "id": "CVE-2022-45092"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Path traversal vulnerability in SINEC\u00a0INS",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001808"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "path traversal",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-654"
      }
    ],
    "trust": 0.6
  }
}

var-202301-0547
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product could potentially inject commands into the dhcpd configuration of the affected product. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS contains a command injection vulnerability. Information may be obtained, information may be tampered with, and service operation may be interrupted (DoS).
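The flaw class here (CWE-77) can be sketched as follows. This is a hypothetical illustration, not SINEC INS internals: the function and field names are invented, and the point is that a web-form value spliced verbatim into dhcpd.conf lets an input containing `";` close the quoted option and inject new configuration directives, which in ISC dhcpd can include command-executing stanzas.

```python
import re

def dhcpd_option_line(domain_name: str) -> str:
    """Render one dhcpd.conf line, validating the value first.

    Without validation, an input such as
        example.com"; on commit { execute("/bin/sh", "-c", "..."); } "
    would escape the quoted option value and inject arbitrary
    directives into the generated configuration.
    """
    # Allow-list validation: hostname characters only -- no quotes,
    # semicolons, braces or whitespace that could escape the config
    # context.
    if not re.fullmatch(r'[A-Za-z0-9](?:[A-Za-z0-9.-]{0,251}[A-Za-z0-9])?',
                        domain_name):
        raise ValueError(f'rejected dhcpd option value: {domain_name!r}')
    return f'option domain-name "{domain_name}";'
```

The design choice worth noting: validating against an allow-list of known-good characters is far more robust than trying to escape or strip the metacharacters of the target configuration language.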

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202301-0547",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0 sp2 update 1"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "cve": "CVE-2022-45094",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-45094",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "ADJACENT",
            "author": "productcert@siemens.com",
            "availabilityImpact": "HIGH",
            "baseScore": 8.4,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.7,
            "id": "CVE-2022-45094",
            "impactScore": 6.0,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:A/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-45094",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "Low",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-45094",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2022-45094",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-45094",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202301-661",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially inject commands into the dhcpd configuration of the affected product. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS Contains a command injection vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-45094",
        "trust": 3.2
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 1.6
      },
      {
        "db": "JVN",
        "id": "JVNVU90782730",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-017-03",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "id": "VAR-202301-0547",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T13:06:58.516000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "SSA-332410",
        "trust": 0.8,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "title": "Siemens SINEC NMS Fixes for command injection vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=221646"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-77",
        "trust": 1.0
      },
      {
        "problemtype": "Command injection (CWE-77) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90782730/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-45094"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-45094/"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-05-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "date": "2023-01-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      },
      {
        "date": "2023-01-10T12:15:23.590000",
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-05-12T04:41:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      },
      {
        "date": "2023-01-16T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      },
      {
        "date": "2023-01-14T00:43:06.910000",
        "db": "NVD",
        "id": "CVE-2022-45094"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Command injection vulnerability in SINEC\u00a0INS",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001790"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "command injection",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-661"
      }
    ],
    "trust": 0.6
  }
}

var-202012-1420
Vulnerability from variot

The package ua-parser-js before 0.7.23 is vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see the linked commit for more info). ua-parser-js contains a resource exhaustion vulnerability. Service operation may be interrupted (DoS).
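The mechanism behind ReDoS is catastrophic backtracking, which can be demonstrated with a textbook evil regex (this is an illustrative pattern, not one of the actual ua-parser-js expressions): nested quantifiers give a backtracking engine exponentially many ways to partition the input before the final mismatch forces failure.

```python
import re
import time

# Nested quantifiers: the engine can split a run of n 'a's into groups
# in ~2**n ways, and the trailing 'b' makes it try all of them before
# reporting "no match".
EVIL = re.compile(r'^(a+)+$')

def timed_match(n: int):
    subject = 'a' * n + 'b'      # near-miss input: never matches
    start = time.perf_counter()
    result = EVIL.match(subject)  # backtracks exponentially in n
    return result, time.perf_counter() - start

# Adding one 'a' roughly doubles the work; n around 25-30 already takes
# seconds on CPython's backtracking engine, which is the denial of
# service: one crafted User-Agent string pins a CPU core.
```

A parser exposed to attacker-controlled strings (like a User-Agent header) should avoid such patterns or bound matching time, e.g. by using possessive/atomic constructs or a linear-time engine such as RE2.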

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202012-1420",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "ua-parser-js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "ua parser js",
        "version": "0.7.23"
      },
      {
        "model": "ua-parser-js",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "faisalman",
        "version": "0.7.23"
      },
      {
        "model": "ua-parser-js",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "faisalman",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens reported these vulnerabilities to CISA.",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2020-7793",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-7793",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-7793",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "OTHER",
            "availabilityImpact": "High",
            "baseScore": 7.5,
            "baseSeverity": "High",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "JVNDB-2020-014179",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-7793",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "report@snyk.io",
            "id": "CVE-2020-7793",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-7793",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202012-978",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-7793",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-7793"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The package ua-parser-js before 0.7.23 are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see linked commit for more info). ua-parser-js Exists in a resource exhaustion vulnerability.Service operation interruption (DoS) It may be in a state",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-7793"
      }
    ],
    "trust": 1.71
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-7793",
        "trust": 3.3
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179",
        "trust": 0.8
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2555",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022052615",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-7793",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-7793"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "id": "VAR-202012-1420",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-23T21:29:05.361000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Fix\u00a0ReDoS\u00a0vulnerabilities\u00a0reported\u00a0by\u00a0Snyk",
        "trust": 0.8,
        "url": "https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18"
      },
      {
        "title": "ua-parser-js Remediation of resource management error vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=137311"
      },
      {
        "title": "awesome-redos-security",
        "trust": 0.1,
        "url": "https://github.com/engn33r/awesome-redos-security "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-7793"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-Other",
        "trust": 1.0
      },
      {
        "problemtype": "Resource exhaustion (CWE-400) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.7,
        "url": "https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithubfaisalman-1050388"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-js-uaparserjs-1023599"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1050387"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7793"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu99475301/index.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2555"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-7793"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2020-7793"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-12-11T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-7793"
      },
      {
        "date": "2021-08-04T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "date": "2020-12-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      },
      {
        "date": "2020-12-11T14:15:11.283000",
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-7793"
      },
      {
        "date": "2022-09-20T05:31:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      },
      {
        "date": "2022-09-19T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      },
      {
        "date": "2024-11-21T05:37:48.890000",
        "db": "NVD",
        "id": "CVE-2020-7793"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "ua-parser-js\u00a0 Resource exhaustion vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-014179"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202012-978"
      }
    ],
    "trust": 0.6
  }
}

var-202210-0043
Vulnerability from variot

The llhttp parser in the http module in Node v18.7.0 does not correctly handle header fields that are not terminated with CRLF. This may result in HTTP Request Smuggling. 8) - aarch64, noarch, ppc64le, s390x, x86_64
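The termination issue above can be sketched with a toy parser pair (illustration only, not the llhttp code): a header line ended with a bare LF instead of CRLF is read as one malformed field by a strict parser but as two fields by a lenient one. When a front-end proxy and a back-end server disagree this way, a field such as Transfer-Encoding is honoured by only one of them, which is the request-smuggling primitive.

```javascript
// Hypothetical strict vs. lenient header splitting -- not code from llhttp.
function splitHeaderBlock(block, lenient) {
  if (lenient) {
    // Lenient: accept bare LF or CRLF as a line terminator.
    return block.split("\n").map((line) => line.replace(/\r$/, ""));
  }
  // Strict: only CRLF terminates a header line (RFC 9112 grammar).
  return block.split("\r\n");
}

// "Content-Length: 0" is terminated with a bare LF, no CR.
const raw = "Content-Length: 0\nTransfer-Encoding: chunked";

const strict = splitHeaderBlock(raw, false);   // 1 (malformed) field
const lenient = splitHeaderBlock(raw, true);   // 2 distinct fields
```

Here `strict` contains a single line while `lenient` sees both `Content-Length` and `Transfer-Encoding`; two hops disagreeing on which framing header applies is what lets a second request be smuggled past the front end.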

The following packages have been upgraded to a later upstream version: nodejs 16. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Moderate: rh-nodejs14-nodejs security update Advisory ID: RHSA-2022:7044-01 Product: Red Hat Software Collections Advisory URL: https://access.redhat.com/errata/RHSA-2022:7044 Issue date: 2022-10-19 CVE Names: CVE-2021-44531 CVE-2021-44532 CVE-2021-44533 CVE-2021-44906 CVE-2022-21824 CVE-2022-35256 ==================================================================== 1. Summary:

An update for rh-nodejs14-nodejs is now available for Red Hat Software Collections.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64

  3. Description:

Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.

Security Fix(es):

  • nodejs: Improper handling of URI Subject Alternative Names (CVE-2021-44531)

  • nodejs: Certificate Verification Bypass via String Injection (CVE-2021-44532)

  • nodejs: Incorrect handling of certificate subject and issuer fields (CVE-2021-44533)

  • minimist: prototype pollution (CVE-2021-44906)

  • nodejs: HTTP Request Smuggling due to incorrect parsing of header fields (CVE-2022-35256)

  • nodejs: Prototype pollution via console.table properties (CVE-2022-21824)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
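The two prototype-pollution entries above (minimist, console.table) share one mechanism; a minimal generic sketch (a hypothetical `unsafeMerge` helper, not code from either package) shows how an attacker-controlled `__proto__` key in parsed input reaches `Object.prototype`:

```javascript
// Hypothetical deep-merge with no key filtering -- the pattern behind
// prototype-pollution bugs. Hardened merges skip the keys
// "__proto__", "constructor" and "prototype".
function unsafeMerge(target, src) {
  for (const key of Object.keys(src)) {
    if (src[key] !== null && typeof src[key] === "object") {
      // For key "__proto__", target[key] resolves to Object.prototype,
      // so the recursion writes onto the global prototype.
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      unsafeMerge(target[key], src[key]);
    } else {
      target[key] = src[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own key,
// so Object.keys() enumerates it.
const payload = JSON.parse('{"__proto__": {"polluted": "yes"}}');
unsafeMerge({}, payload);

const victim = {};
// victim.polluted is now "yes": every object inherits the injected property.
```

The fix pattern in patched libraries is simply to reject or skip the three dangerous keys before assignment.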

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  5. Bugs fixed (https://bugzilla.redhat.com/):

2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names 2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection 2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields 2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties 2066009 - CVE-2021-44906 minimist: prototype pollution 2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields

  6. Package List:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):

Source: rh-nodejs14-nodejs-14.20.1-2.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.1-2.el7.noarch.rpm

ppc64le: rh-nodejs14-nodejs-14.20.1-2.el7.ppc64le.rpm rh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.ppc64le.rpm rh-nodejs14-nodejs-devel-14.20.1-2.el7.ppc64le.rpm rh-nodejs14-npm-6.14.17-14.20.1.2.el7.ppc64le.rpm

s390x: rh-nodejs14-nodejs-14.20.1-2.el7.s390x.rpm rh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.s390x.rpm rh-nodejs14-nodejs-devel-14.20.1-2.el7.s390x.rpm rh-nodejs14-npm-6.14.17-14.20.1.2.el7.s390x.rpm

x86_64: rh-nodejs14-nodejs-14.20.1-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.1-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.1.2.el7.x86_64.rpm

Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):

Source: rh-nodejs14-nodejs-14.20.1-2.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.1-2.el7.noarch.rpm

x86_64: rh-nodejs14-nodejs-14.20.1-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.1-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.1.2.el7.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  7. References:

https://access.redhat.com/security/cve/CVE-2021-44531 https://access.redhat.com/security/cve/CVE-2021-44532 https://access.redhat.com/security/cve/CVE-2021-44533 https://access.redhat.com/security/cve/CVE-2021-44906 https://access.redhat.com/security/cve/CVE-2022-21824 https://access.redhat.com/security/cve/CVE-2022-35256 https://access.redhat.com/security/updates/classification/#moderate

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBY1Bkk9zjgjWX9erEAQh9DQ//dSOPbtnYD3f9AvLUnQpnJb7OyGisGpPW von8hNiTCD5J3FP2DlY3/wGX9H1g2BXmuwpojS/sh17E2+sHldBTMk5kxT8bkBkB ZWnmIwqA1PfjAO4FEc7MtePJXsqCrBne63Bpo7k3ALc4hHtP2BEMkjA4ZOJJDl82 ydj74PPr0uVuZAn0jcLKsIPq1OmUW9jNuzY0p5uqhXKVP4XfFWfpi2dd34Nej+dv RbSABk5jZ0R6bQlPOdG4bI8vevvmhkeAqkcWgHWBZ9n34SFdiGKFdxUI3+SM2zvl tB7zuDc9rsLnF7DLZq3HVG3eOVdxJ1MKwap89iQrmQCy1kz4iq3hZbAKJHIjLTEy gWpwYI9nCamIsNwYB1pUM5RexkKTPKDRttZh9hff2RO9QCvdnecw3386blkhsb8s XJMAywflJeBrTnMPQ9tSNx60CgGI8JkU40RtnfwwS5yS1upd56jYbL+W4CzbZmzd bj48/l+fl3Ny0bGZ6QAG0ZWrH0eTs6hL/xYKFu2Z7jDteP9ITE1kSKeISjE/G0Rb Hjjp6sfEiR07PEJx2/Lne+o5JvCGu7wviT2SnJIfjX9C056CtO4IjRXEqdPqZqYq 3+T1AOLM1M2vu55WagYhnTtfGefIj5EScstARXZjz5pF0dQyhNZNO+p/S0coNUWz y4v1DFKlYtA=JvnP -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512


Debian Security Advisory DSA-5326-1 security@debian.org https://www.debian.org/security/ Aron Xu January 24, 2023 https://www.debian.org/security/faq


Package : nodejs CVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548

Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.

For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.

We recommend that you upgrade your nodejs packages.

For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . Bugs fixed (https://bugzilla.redhat.com/):

2066009 - CVE-2021-44906 minimist: prototype pollution 2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields 2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function 2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address 2142823 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z] 2150323 - CVE-2022-24999 express: "qs" prototype poisoning causes the hang of the node process 2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service 2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability 2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check 2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS 2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule 2172217 - CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable 2175828 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z]

6



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202210-0043",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.17.1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.9.1"
      },
      {
        "model": "llhttp",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "llhttp",
        "version": "6.0.10"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.0.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.20.1"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.15.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.12.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.14.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.0.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.13.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169437"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "171839"
      },
      {
        "db": "PACKETSTORM",
        "id": "171666"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2022-35256",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 3.9,
            "id": "CVE-2022-35256",
            "impactScore": 2.5,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-35256",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202210-1266",
            "trust": 0.6,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The llhttp parser in the http module in Node v18.7.0 does not correctly handle header fields that are not terminated with CLRF. This may result in HTTP Request Smuggling. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nThe following packages have been upgraded to a later upstream version:\nnodejs 16. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: rh-nodejs14-nodejs security update\nAdvisory ID:       RHSA-2022:7044-01\nProduct:           Red Hat Software Collections\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:7044\nIssue date:        2022-10-19\nCVE Names:         CVE-2021-44531 CVE-2021-44532 CVE-2021-44533\n                   CVE-2021-44906 CVE-2022-21824 CVE-2022-35256\n====================================================================\n1. Summary:\n\nAn update for rh-nodejs14-nodejs is now available for Red Hat Software\nCollections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. 
\n\nSecurity Fix(es):\n\n* nodejs: Improper handling of URI Subject Alternative Names\n(CVE-2021-44531)\n\n* nodejs: Certificate Verification Bypass via String Injection\n(CVE-2021-44532)\n\n* nodejs: Incorrect handling of certificate subject and issuer fields\n(CVE-2021-44533)\n\n* minimist: prototype pollution (CVE-2021-44906)\n\n* nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n(CVE-2022-35256)\n\n* nodejs: Prototype pollution via console.table properties (CVE-2022-21824)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names\n2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection\n2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields\n2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 
7):\n\nSource:\nrh-nodejs14-nodejs-14.20.1-2.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.1-2.el7.noarch.rpm\n\nppc64le:\nrh-nodejs14-nodejs-14.20.1-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-devel-14.20.1-2.el7.ppc64le.rpm\nrh-nodejs14-npm-6.14.17-14.20.1.2.el7.ppc64le.rpm\n\ns390x:\nrh-nodejs14-nodejs-14.20.1-2.el7.s390x.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.s390x.rpm\nrh-nodejs14-nodejs-devel-14.20.1-2.el7.s390x.rpm\nrh-nodejs14-npm-6.14.17-14.20.1.2.el7.s390x.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.1-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.1-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.1.2.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nrh-nodejs14-nodejs-14.20.1-2.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.1-2.el7.noarch.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.1-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.1-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.1-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.1.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-44531\nhttps://access.redhat.com/security/cve/CVE-2021-44532\nhttps://access.redhat.com/security/cve/CVE-2021-44533\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-21824\nhttps://access.redhat.com/security/cve/CVE-2022-35256\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY1Bkk9zjgjWX9erEAQh9DQ//dSOPbtnYD3f9AvLUnQpnJb7OyGisGpPW\nvon8hNiTCD5J3FP2DlY3/wGX9H1g2BXmuwpojS/sh17E2+sHldBTMk5kxT8bkBkB\nZWnmIwqA1PfjAO4FEc7MtePJXsqCrBne63Bpo7k3ALc4hHtP2BEMkjA4ZOJJDl82\nydj74PPr0uVuZAn0jcLKsIPq1OmUW9jNuzY0p5uqhXKVP4XfFWfpi2dd34Nej+dv\nRbSABk5jZ0R6bQlPOdG4bI8vevvmhkeAqkcWgHWBZ9n34SFdiGKFdxUI3+SM2zvl\ntB7zuDc9rsLnF7DLZq3HVG3eOVdxJ1MKwap89iQrmQCy1kz4iq3hZbAKJHIjLTEy\ngWpwYI9nCamIsNwYB1pUM5RexkKTPKDRttZh9hff2RO9QCvdnecw3386blkhsb8s\nXJMAywflJeBrTnMPQ9tSNx60CgGI8JkU40RtnfwwS5yS1upd56jYbL+W4CzbZmzd\nbj48/l+fl3Ny0bGZ6QAG0ZWrH0eTs6hL/xYKFu2Z7jDteP9ITE1kSKeISjE/G0Rb\nHjjp6sfEiR07PEJx2/Lne+o5JvCGu7wviT2SnJIfjX9C056CtO4IjRXEqdPqZqYq\n3+T1AOLM1M2vu55WagYhnTtfGefIj5EScstARXZjz5pF0dQyhNZNO+p/S0coNUWz\ny4v1DFKlYtA=JvnP\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1                   security@debian.org\nhttps://www.debian.org/security/                                  Aron Xu\nJanuary 24, 2023                      https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : nodejs\nCVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. 
\n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address\n2142823 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z]\n2150323 - CVE-2022-24999 express: \"qs\" prototype poisoning causes the hang of the node process\n2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service\n2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability\n2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check\n2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS\n2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule\n2172217 - CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable\n2175828 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z]\n\n6",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-35256"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-35256"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169437"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "171839"
      },
      {
        "db": "PACKETSTORM",
        "id": "171666"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-35256",
        "trust": 2.4
      },
      {
        "db": "HACKERONE",
        "id": "1675191",
        "trust": 1.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 1.6
      },
      {
        "db": "PACKETSTORM",
        "id": "169408",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169437",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170727",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169781",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6632",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.1926",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5146",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-35256",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168757",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169779",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171839",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "171666",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35256"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169437"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "171839"
      },
      {
        "db": "PACKETSTORM",
        "id": "171666"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "id": "VAR-202210-0043",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-29T22:13:29.754000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Node.js Remediation measures for environmental problem vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=219729"
      },
      {
        "title": "Red Hat: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-35256"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35256"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-444",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.6,
        "url": "https://www.debian.org/security/2023/dsa-5326"
      },
      {
        "trust": 1.6,
        "url": "https://hackerone.com/reports/1675191"
      },
      {
        "trust": 1.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-35256"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256"
      },
      {
        "trust": 0.6,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.6,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169408/red-hat-security-advisory-2022-6963-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.1926"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169781/red-hat-security-advisory-2022-7830-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5146"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169437/red-hat-security-advisory-2022-7044-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6632"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-35256/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-35255"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-44906"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-44533"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-44531"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-44532"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21824"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2023-23918"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-35065"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-43548"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24999"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38900"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4904"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2023-23920"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25881"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-4904"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25881"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-38900"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6963"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6964"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7044"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/nodejs"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7821"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:1742"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:1533"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23920"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35256"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169437"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "171839"
      },
      {
        "db": "PACKETSTORM",
        "id": "171666"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-35256"
      },
      {
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "db": "PACKETSTORM",
        "id": "169437"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "db": "PACKETSTORM",
        "id": "171839"
      },
      {
        "db": "PACKETSTORM",
        "id": "171666"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-10-18T22:30:35",
        "db": "PACKETSTORM",
        "id": "169408"
      },
      {
        "date": "2022-10-18T14:27:29",
        "db": "PACKETSTORM",
        "id": "168757"
      },
      {
        "date": "2022-10-20T14:20:24",
        "db": "PACKETSTORM",
        "id": "169437"
      },
      {
        "date": "2023-01-25T16:09:12",
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "date": "2022-11-08T13:50:31",
        "db": "PACKETSTORM",
        "id": "169779"
      },
      {
        "date": "2023-04-12T16:57:08",
        "db": "PACKETSTORM",
        "id": "171839"
      },
      {
        "date": "2023-04-03T17:32:27",
        "db": "PACKETSTORM",
        "id": "171666"
      },
      {
        "date": "2022-10-18T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      },
      {
        "date": "2022-12-05T22:15:10.570000",
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-04-04T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      },
      {
        "date": "2023-05-12T13:30:33.190000",
        "db": "NVD",
        "id": "CVE-2022-35256"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Node.js Environmental problem loophole",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "environmental issue",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202210-1266"
      }
    ],
    "trust": 0.6
  }
}

var-202005-0397
Vulnerability from variot

json-c through 0.14 has an integer overflow and out-of-bounds write via a large JSON file, as demonstrated by printbuf_memappend. This may result in a denial-of-service (DoS) condition. Summary:

An update is now available for OpenShift Logging 5.1. Bugs fixed (https://bugzilla.redhat.com/):

1944888 - CVE-2021-21409 netty: Request smuggling via content-length header
2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data
2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way
2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value

5. JIRA issues fixed (https://issues.jboss.org/):

LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable

6. Bugs fixed (https://bugzilla.redhat.com/):

1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic

5. =========================================================================
Ubuntu Security Notice USN-4360-4
May 28, 2020

json-c vulnerability

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 20.04 LTS
  • Ubuntu 19.10
  • Ubuntu 18.04 LTS
  • Ubuntu 16.04 LTS
  • Ubuntu 14.04 ESM
  • Ubuntu 12.04 ESM

Summary:

json-c could be made to execute arbitrary code if it received a specially crafted JSON file.

Software Description:
- json-c: JSON manipulation library

Details:

USN-4360-1 fixed a vulnerability in json-c. The security fix introduced a memory leak that was reverted in USN-4360-2 and USN-4360-3. This update provides the correct fix for CVE-2020-12762.

Original advisory details:

It was discovered that json-c incorrectly handled certain JSON files. An attacker could possibly use this issue to execute arbitrary code.
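
The arithmetic behind this bug class can be shown in isolation: a printbuf-style buffer grows by doubling a signed 32-bit size field, and for a requested length just over half of INT32_MAX the doubling wraps negative, so a later signed bounds check misjudges the buffer and an out-of-bounds write follows. A minimal Python model of that wrap (a simplified sketch with illustrative names, not the actual json-c source):

```python
import ctypes

def doubled_size(needed, size=1024):
    """Simplified model of printbuf-style growth: double a signed 32-bit
    size until it covers the requested length (illustrative, not json-c)."""
    while size < needed and size > 0:
        size = ctypes.c_int32(size * 2).value  # wrap like a C signed int32
    return size

INT32_MAX = 2**31 - 1
# A length just over half of INT32_MAX (e.g. derived from a huge JSON
# string) makes the doubling wrap to a negative value.
wrapped = doubled_size(INT32_MAX // 2 + 2)
print(wrapped)  # -2147483648: a wrap-unaware "new size" check misjudges this
```

The upstream fix bounds the growth before doubling; the point here is only that the doubled value can silently become negative and defeat a naive size check.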

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 20.04 LTS: libjson-c4 0.13.1+dfsg-7ubuntu0.3

Ubuntu 19.10: libjson-c4 0.13.1+dfsg-4ubuntu0.3

Ubuntu 18.04 LTS: libjson-c3 0.12.1-1.3ubuntu0.3

Ubuntu 16.04 LTS: libjson-c2 0.11-4ubuntu2.6 libjson0 0.11-4ubuntu2.6

Ubuntu 14.04 ESM: libjson-c2 0.11-3ubuntu1.2+esm3 libjson0 0.11-3ubuntu1.2+esm3

Ubuntu 12.04 ESM: libjson0 0.9-1ubuntu1.4
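
Whether an installed package already meets one of the fixed versions listed above can be checked mechanically. A rough sketch (the ordering key is a naive approximation of dpkg version comparison, and the installed version is hard-coded for illustration; on a real system it would come from `dpkg-query -W -f='${Version}' libjson-c3`):

```python
import re

def version_key(v):
    # Split into digit and non-digit runs so digits compare numerically.
    # A rough approximation of dpkg ordering, sufficient for the version
    # strings in this advisory.
    return [int(t) if t.isdigit() else t for t in re.findall(r"\d+|\D+", v)]

fixed = "0.12.1-1.3ubuntu0.3"       # fixed Ubuntu 18.04 LTS version above
installed = "0.12.1-1.3ubuntu0.2"   # hypothetical installed version
needs_upgrade = version_key(installed) < version_key(fixed)
print("upgrade required" if needs_upgrade else "patched")
```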

In general, a standard system update will make all the necessary changes. Bugs fixed (https://bugzilla.redhat.com/):

1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1997017 - unprivileged client fails to get guest agent data
1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed
2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount
2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import
2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if vmio import is removed
2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion
2007336 - 4.8.3 containers
2007776 - Failed to Migrate Windows VM with CDROM (readonly)
2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13
2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted
2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to vlan-filtering issues
2026881 - [4.8.3] vlan-filtering is getting applied on veth ports

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Moderate: Red Hat OpenShift Container Storage 4.8.5 Security and Bug Fix Update
Advisory ID: RHSA-2021:4845-01
Product: Red Hat OpenShift Container Storage
Advisory URL: https://access.redhat.com/errata/RHSA-2021:4845
Issue date: 2021-11-29
CVE Names: CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594
           CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838
           CVE-2020-8037 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155
           CVE-2020-16135 CVE-2020-24370 CVE-2020-26301 CVE-2020-28493
           CVE-2021-3200 CVE-2021-3426 CVE-2021-3445 CVE-2021-3572
           CVE-2021-3580 CVE-2021-3778 CVE-2021-3796 CVE-2021-3800
           CVE-2021-20095 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266
           CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23840
           CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-28957
           CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084
           CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574
           CVE-2021-42771
====================================================================

1. Summary:

An update is now available for Red Hat OpenShift Container Storage 4.8.5 on Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
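
The base score is a deterministic function of the vector. For example, this entry records CVSS:3.1 AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H with a base score of 7.8 for CVE-2020-12762; the v3.1 base formula reproduces it (weights and rounding per the CVSS v3.1 specification):

```python
# CVSS v3.1 metric weights for AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
AV, AC, PR, UI = 0.55, 0.77, 0.85, 0.62   # Local, Low, None, Required
C = I = A = 0.56                          # High C/I/A impact

def roundup(x):
    # "Round up to one decimal" as defined in CVSS v3.1 Appendix A.
    n = int(round(x * 100000))
    return n / 100000 if n % 10000 == 0 else (n // 10000 + 1) / 10

iss = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * iss                       # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI
base = roundup(min(impact + exploitability, 10))
print(base)  # 7.8, matching the recorded score
```

The intermediate values also match the entry: the impact sub-score rounds to 5.9 and the exploitability sub-score to 1.8.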

3. Description:

Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Container Storage is highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provides a multicloud data management service with an S3 compatible API.

Security Fix(es):

  • nodejs-ssh2: Command injection by calling vulnerable method with untrusted input (CVE-2020-26301)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
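
The command-injection class fixed here can be illustrated generically (this is not the ssh2 code path; the file name below is a hypothetical attacker-controlled value):

```python
import shlex

# Hypothetical attacker-controlled input embedding a shell metacharacter.
untrusted = "file.txt; echo INJECTED"

# Vulnerable pattern: splicing the value into a shell command line, e.g.
#   subprocess.run(f"ls {untrusted}", shell=True)
# would make the shell execute the injected `echo INJECTED`.

# Safer patterns: pass an argument vector with no shell involved, e.g.
#   subprocess.run(["ls", untrusted])
# or neutralize metacharacters by quoting before any shell sees the value:
quoted = shlex.quote(untrusted)
print(quoted)  # the whole string becomes one single-quoted literal argument
```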

Bug Fix(es):

  • Previously, when the namespace store target was deleted, no alert was sent to the namespace bucket because of an issue in calculating the namespace bucket health. With this update, the issue in calculating the namespace bucket health is fixed and alerts are triggered as expected. (BZ#1993873)

  • Previously, the Multicloud Object Gateway (MCG) components performed slowly and there was a lot of pressure on the MCG components due to non-optimized database queries. With this update the non-optimized database queries are fixed which reduces the compute resources and time taken for queries. (BZ#2015939)

Red Hat recommends that all users of OpenShift Container Storage apply this update to fix these issues.

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

5. Bugs fixed (https://bugzilla.redhat.com/):

1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted
2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input

6. References:

https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-8037
https://access.redhat.com/security/cve/CVE-2020-12762
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-16135
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-26301
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2021-3200
https://access.redhat.com/security/cve/CVE-2021-3426
https://access.redhat.com/security/cve/CVE-2021-3445
https://access.redhat.com/security/cve/CVE-2021-3572
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3778
https://access.redhat.com/security/cve/CVE-2021-3796
https://access.redhat.com/security/cve/CVE-2021-3800
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-20266
https://access.redhat.com/security/cve/CVE-2021-22876
https://access.redhat.com/security/cve/CVE-2021-22898
https://access.redhat.com/security/cve/CVE-2021-22925
https://access.redhat.com/security/cve/CVE-2021-23840
https://access.redhat.com/security/cve/CVE-2021-23841
https://access.redhat.com/security/cve/CVE-2021-27645
https://access.redhat.com/security/cve/CVE-2021-28153
https://access.redhat.com/security/cve/CVE-2021-28957
https://access.redhat.com/security/cve/CVE-2021-33560
https://access.redhat.com/security/cve/CVE-2021-33574
https://access.redhat.com/security/cve/CVE-2021-35942
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-42574
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/updates/classification/#moderate

7. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-- RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Summary:

The Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):

2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution
2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)
2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster
2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration
2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)




{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202005-0397",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "30"
      },
      {
        "model": "json-c",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "json c",
        "version": "0.15-20200726"
      },
      {
        "model": "ubuntu linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "canonical",
        "version": "12.04"
      },
      {
        "model": "ubuntu linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "canonical",
        "version": "14.04"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "31"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "ubuntu linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "canonical",
        "version": "20.04"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "8.0"
      },
      {
        "model": "ubuntu linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "canonical",
        "version": "19.10"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "32"
      },
      {
        "model": "ubuntu linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "canonical",
        "version": "18.04"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "9.0"
      },
      {
        "model": "ubuntu linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "canonical",
        "version": "16.04"
      },
      {
        "model": "json-c",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "json c",
        "version": "0.14  to"
      },
      {
        "model": "json-c",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "json c",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "165135"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165099"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2020-12762",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2020-12762",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "LOCAL",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "id": "CVE-2020-12762",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Local",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2020-12762",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-12762",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-12762",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202005-391",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-12762",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "json-c through 0.14 has an integer overflow and out-of-bounds write via a large JSON file, as demonstrated by printbuf_memappend. (DoS) It may be in a state. Summary:\n\nAn update is now available for OpenShift Logging 5.1. Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n\n5. =========================================================================\nUbuntu Security Notice USN-4360-4\nMay 28, 2020\n\njson-c vulnerability\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n- Ubuntu 19.10\n- Ubuntu 18.04 LTS\n- Ubuntu 16.04 LTS\n- Ubuntu 14.04 ESM\n- Ubuntu 12.04 ESM\n\nSummary:\n\njson-c could be made to execute arbitrary code if it received\na specially crafted JSON file. \n\nSoftware Description:\n- json-c: JSON manipulation library\n\nDetails:\n\nUSN-4360-1 fixed a vulnerability in json-c. The security fix introduced a\nmemory leak that was reverted in USN-4360-2 and USN-4360-3. This update provides\nthe correct fix update for CVE-2020-12762. \n\nOriginal advisory details:\n\n It was discovered that json-c incorrectly handled certain JSON files. 
\n An attacker could possibly use this issue to execute arbitrary code. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n  libjson-c4                      0.13.1+dfsg-7ubuntu0.3\n\nUbuntu 19.10:\n  libjson-c4                      0.13.1+dfsg-4ubuntu0.3\n\nUbuntu 18.04 LTS:\n  libjson-c3                      0.12.1-1.3ubuntu0.3\n\nUbuntu 16.04 LTS:\n  libjson-c2                      0.11-4ubuntu2.6\n  libjson0                        0.11-4ubuntu2.6\n\nUbuntu 14.04 ESM:\n  libjson-c2                      0.11-3ubuntu1.2+esm3\n  libjson0                        0.11-3ubuntu1.2+esm3\n\nUbuntu 12.04 ESM:\n  libjson0                        0.9-1ubuntu1.4\n\nIn general, a standard system update will make all the necessary changes. Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1997017 - unprivileged client fails to get guest agent data\n1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed\n2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount\n2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import\n2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if  vmio import is  removed\n2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion\n2007336 - 4.8.3 containers\n2007776 - Failed to Migrate Windows VM with CDROM  (readonly)\n2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13\n2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted\n2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to 
vlan-filtering issues\n2026881 - [4.8.3] vlan-filtering is getting applied on veth ports\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: Red Hat OpenShift Container Storage 4.8.5 Security and Bug Fix Update\nAdvisory ID:       RHSA-2021:4845-01\nProduct:           Red Hat OpenShift Container Storage\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:4845\nIssue date:        2021-11-29\nCVE Names:         CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n                   CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n                   CVE-2019-19603 CVE-2019-20838 CVE-2020-8037\n                   CVE-2020-12762 CVE-2020-13435 CVE-2020-14155\n                   CVE-2020-16135 CVE-2020-24370 CVE-2020-26301\n                   CVE-2020-28493 CVE-2021-3200 CVE-2021-3426\n                   CVE-2021-3445 CVE-2021-3572 CVE-2021-3580\n                   CVE-2021-3778 CVE-2021-3796 CVE-2021-3800\n                   CVE-2021-20095 CVE-2021-20231 CVE-2021-20232\n                   CVE-2021-20266 CVE-2021-22876 CVE-2021-22898\n                   CVE-2021-22925 CVE-2021-23840 CVE-2021-23841\n                   CVE-2021-27645 CVE-2021-28153 CVE-2021-28957\n                   CVE-2021-33560 CVE-2021-33574 CVE-2021-35942\n                   CVE-2021-36084 CVE-2021-36085 CVE-2021-36086\n                   CVE-2021-36087 CVE-2021-42574 CVE-2021-42771\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat OpenShift Container Storage 4.8.5 on\nRed Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. 
Description:\n\nRed Hat OpenShift Container Storage is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. \nRed Hat OpenShift Container Storage is highly scalable, production-grade\npersistent storage for stateful applications running in the Red Hat\nOpenShift Container Platform. In addition to persistent storage, Red Hat\nOpenShift Container Storage provides a multicloud data management service\nwith an S3 compatible API. \n\nSecurity Fix(es):\n\n* nodejs-ssh2: Command injection by calling vulnerable method with\nuntrusted input (CVE-2020-26301)\n\nFor more details about the security issue(s), including the impact, a\nCVSS score, acknowledgments, and other related information, refer to\nthe CVE page(s) listed in the References section. \n\nBug Fix(es):\n\n* Previously, when the namespace store target was deleted, no alert was\nsent to the namespace bucket because of an issue in calculating the\nnamespace bucket health. With this update, the issue in calculating the\nnamespace bucket health is fixed and alerts are triggered as expected. \n(BZ#1993873)\n\n* Previously, the Multicloud Object Gateway (MCG) components performed\nslowly and there was a lot of pressure on the MCG components due to\nnon-optimized database queries. With this update the non-optimized\ndatabase queries are fixed which reduces the compute resources and time\ntaken for queries. (BZ#2015939)\n\nRed Hat recommends that all users of OpenShift Container Storage apply this\nupdate to fix these issues. \n\n3. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore\u0027s target bucket is deleted\n2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-8037\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2020-26301\nhttps://access.redhat.com/security/cve/CVE-2020-28493\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3426\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3572\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3778\nhttps://access.redhat.com/security/cve/CVE-2021-3796\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps://access.redhat.com/security/cve/CVE-2021-20095\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-20266\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\n
https://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-27645\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-28957\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-33574\nhttps://access.redhat.com/security/cve/CVE-2021-35942\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-42574\nhttps://access.redhat.com/security/cve/CVE-2021-42771\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYaTmwtzjgjWX9erEAQiaNhAAlr3+bFLFjRQ2l7VN2PTQ0i7orLBDvxOm\nET3lUXgy7WOJl+AD7SgB9ILTdj1vrS1IplbhISNREDCeT9PdOZm1jExlJFVCWFuX\nQRXz4qpAga+42/5qgDhRcYwW4gcLRzKBmEx0R+pRYU71r/Uiz8wv12mo4kfkxICT\nprZitHSzkh+ER1BHXbVp6cZxWN7s6BD2D+e/tr2/Hh6IvFkIpfrR2aolasbkebQd\nHxP6gJDNihvlIAcdjft0xJzdqkAJ+Y/KtuFxHhJbWRG1wfMNV3mf8ebv9qDyojTU\n4js1ai82zVqJwZWvZ6ryJltuQBjdPYKGt/ZgzuzzN4CULk7GWt6JGZ7BtswICt9N\nTiYDfKaD5gADA7f/PTwk4TgjMuxQWFi08bZiJ/ajp2KxzMqoOQhVaVUz5XoeCEaS\nwGgDxGP0r+2TISbZ+Fc4yPARZRPeUbuNeAPG67isliR+gMofbfuunSNNdN9IzfsT\nXp2RyIIoPWf5PzM704VN/B0kv7gkij06bcZ2wBqwmDMJH8aG6ksXe7gjGfFeGoxY\nBXHI2oZoprsh0TlVRTffRHRc0/0PwYGAUG/lI919gXS5bUhZoK81+MlxNg7uzxtu\nvbhW2EhwWM/5wqbuyS0P1w/mpS+2mi+QBr/NfxM3+mAx7vFxJKKhCST0dfQtjbqn\nUnaUyPeShL0=/IPR\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-12762"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "157858"
      },
      {
        "db": "PACKETSTORM",
        "id": "165135"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165099"
      }
    ],
    "trust": 2.34
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-12762",
        "trust": 4.0
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "165286",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166789",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "157858",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165135",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165096",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165099",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "165631",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165209",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164967",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166051",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166489",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165862",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165002",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166308",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "158084",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "157714",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165758",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "165129",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "164876",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3778",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1724",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0245",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0493",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1071",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1724.3",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2608",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3935",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4254",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4095",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3905",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4368",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0716",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0379",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1677",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1837",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.2678",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4172",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.1899",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0394",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4059",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4229",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.4019",
        "trust": 0.6
      },
      {
        "db": "NSFOCUS",
        "id": "47604",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-12762",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165288",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "157858"
      },
      {
        "db": "PACKETSTORM",
        "id": "165135"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165099"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "id": "VAR-202005-0397",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-29T21:37:52.267000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Prevent out of boundary write on malicious input #592",
        "trust": 0.8,
        "url": "https://github.com/json-c/json-c/pull/592"
      },
      {
        "title": "json-c Enter the fix for the verification error vulnerability",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=118666"
      },
      {
        "title": "Ubuntu Security Notice: json-c vulnerability",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4360-1"
      },
      {
        "title": "Ubuntu Security Notice: json-c vulnerability",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-4360-4"
      },
      {
        "title": "Debian CVElist Bug Report Logs: json-c: CVE-2020-12762",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=136719ded61e273212f821541d12e175"
      },
      {
        "title": "Debian Security Advisories: DSA-4741-1 json-c -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=20b6b384fb69b76b5f17fc7ea1278139"
      },
      {
        "title": "Red Hat: Moderate: libfastjson security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20236431 - Security Advisory"
      },
      {
        "title": "Amazon Linux AMI: ALAS-2020-1381",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2020-1381"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2020-1442",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2020-1442"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2023-2079",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2079"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2020-12762 log"
      },
      {
        "title": "Red Hat: Moderate: Release of OpenShift Serverless 1.20.0",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220434 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift distributed tracing 2.1.0 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220318 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Release of containers for OSP 16.2 director operator tech preview",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220842 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221081 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat OpenShift GitOps security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220580 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.2.11 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220856 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.5.4 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221396 - Security Advisory"
      },
      {
        "title": "Siemens Security Advisories: Siemens Security Advisory",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
      },
      {
        "title": "clamav-win32",
        "trust": 0.1,
        "url": "https://github.com/clamwin/clamav-win32 "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-787",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-190",
        "trust": 1.0
      },
      {
        "problemtype": "Integer overflow or wraparound (CWE-190) [NVD evaluation]",
        "trust": 0.8
      },
      {
        "problemtype": "Out-of-bounds write (CWE-787) [NVD evaluation]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 1.8,
        "url": "https://usn.ubuntu.com/4360-1/"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/json-c/json-c/pull/592"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/rsyslog/libfastjson/issues/161"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00032.html"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00034.html"
      },
      {
        "trust": 1.7,
        "url": "https://usn.ubuntu.com/4360-4/"
      },
      {
        "trust": 1.7,
        "url": "https://security.gentoo.org/glsa/202006-13"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2020/07/msg00031.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.debian.org/security/2020/dsa-4741"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20210521-0001/"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2023/06/msg00023.html"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/cqqrrgbqcawnccj2hn3w5sscz4qgmxqi/"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/cbr36ixybhitazfb5pfbjted22wo5onb/"
      },
      {
        "trust": 1.1,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/w226tscjbeoxdufvknwnh7etg7ar6mcs/"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu99475301/index.html"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.6,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.6,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/w226tscjbeoxdufvknwnh7etg7ar6mcs/"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/cbr36ixybhitazfb5pfbjted22wo5onb/"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/cqqrrgbqcawnccj2hn3w5sscz4qgmxqi/"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/json-c-memory-corruption-32277"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0245"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1724.3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3905"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1071"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4019"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165862/red-hat-security-advisory-2022-0434-05.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165631/red-hat-security-advisory-2022-0202-04.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0716"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1724/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.1899/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165135/red-hat-security-advisory-2021-4914-06.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165129/red-hat-security-advisory-2021-4902-06.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165209/red-hat-security-advisory-2021-5038-04.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0379"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166489/red-hat-security-advisory-2022-1081-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165096/red-hat-security-advisory-2021-4845-05.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0394"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0493"
      },
      {
        "trust": 0.6,
        "url": "http://www.nsfocus.net/vulndb/47604"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3935"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165286/red-hat-security-advisory-2021-5128-06.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3778"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/158084/gentoo-linux-security-advisory-202006-13.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4229"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/157858/ubuntu-security-notice-usn-4360-4.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165002/red-hat-security-advisory-2021-4032-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165099/red-hat-security-advisory-2021-4848-07.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4059"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166051/red-hat-security-advisory-2022-0580-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164876/red-hat-security-advisory-2021-4382-02.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2678/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166789/red-hat-security-advisory-2022-1396-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-mq-is-affected-by-a-vulnerability-in-json-c-cve-2020-12762/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4254"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/157714/ubuntu-security-notice-usn-4360-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/165758/red-hat-security-advisory-2022-0318-06.html"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.2608/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4095"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.4172"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1837"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166308/red-hat-security-advisory-2022-0842-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4368"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164967/red-hat-security-advisory-2021-4627-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1677"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-27645"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-33574"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-35942"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-3572"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-20266"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-3426"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3778"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3796"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-14145"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-20673"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35522"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35524"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25014"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25012"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35521"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37136"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-44228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36331"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36330"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-36332"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37137"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21409"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3481"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25009"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25010"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35523"
      },
      {
        "trust": 0.2,
        "url": "https://issues.jboss.org/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20317"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43267"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33938"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33930"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33928"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22947"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3733"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33929"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/787.html"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/190.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/clamwin/clamav-win32"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:5128"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:5129"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25315"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25236"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25235"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23308"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4154"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4122"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22822"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22823"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0392"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0261"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-0920"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22826"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3999"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22817"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0413"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0847"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44716"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1396"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22824"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-45960"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36221"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22825"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0532"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-46143"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2014-3577"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0330"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0516"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22816"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21684"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0361"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0359"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0318"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25709"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44717"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/json-c/0.11-4ubuntu2.6"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/json-c/0.13.1+dfsg-7ubuntu0.3"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4360-1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/json-c/0.12.1-1.3ubuntu0.3"
      },
      {
        "trust": 0.1,
        "url": "https://usn.ubuntu.com/4360-4"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/json-c/0.13.1+dfsg-4ubuntu0.3"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36385"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-34558"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-0512"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29923"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4914"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3656"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28950"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26301"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26301"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28957"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8037"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8037"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3757"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4848"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3948"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3620"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "157858"
      },
      {
        "db": "PACKETSTORM",
        "id": "165135"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165099"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "db": "PACKETSTORM",
        "id": "157858"
      },
      {
        "db": "PACKETSTORM",
        "id": "165135"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165099"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-05-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "date": "2020-06-08T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "date": "2021-12-15T15:20:33",
        "db": "PACKETSTORM",
        "id": "165286"
      },
      {
        "date": "2021-12-15T15:22:36",
        "db": "PACKETSTORM",
        "id": "165288"
      },
      {
        "date": "2022-04-20T15:12:33",
        "db": "PACKETSTORM",
        "id": "166789"
      },
      {
        "date": "2020-05-28T16:22:37",
        "db": "PACKETSTORM",
        "id": "157858"
      },
      {
        "date": "2021-12-03T16:41:45",
        "db": "PACKETSTORM",
        "id": "165135"
      },
      {
        "date": "2021-11-29T18:12:32",
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "date": "2021-11-30T14:44:48",
        "db": "PACKETSTORM",
        "id": "165099"
      },
      {
        "date": "2020-05-09T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      },
      {
        "date": "2020-05-09T18:15:11.283000",
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-11-07T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-12762"
      },
      {
        "date": "2022-09-20T05:36:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      },
      {
        "date": "2023-06-25T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      },
      {
        "date": "2024-11-21T05:00:13.950000",
        "db": "NVD",
        "id": "CVE-2020-12762"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "local",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "json-c\u00a0 Out-of-bounds write vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-005140"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "input validation error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202005-391"
      }
    ],
    "trust": 0.6
  }
}

var-202102-1488
Vulnerability from variot

The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However it fails to correctly handle any errors that may occur while parsing the issuer field (which might occur if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer deref and a crash leading to a potential denial of service attack. The function X509_issuer_and_serial_hash() is never directly called by OpenSSL itself so applications are only vulnerable if they use this function directly and they use it on certificates that may have been obtained from untrusted sources. OpenSSL versions 1.1.1i and below are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are affected by this issue. However OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). Please keep an eye on CNNVD or manufacturer announcements. Clusters and applications are all visible and managed from a single console—with security policy built in.
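The affected version ranges above can be checked mechanically on a host. This is a minimal sketch, assuming the `openssl` command-line tool reports the same library version that applications actually link against (which may not hold when several OpenSSL builds coexist on one machine):

```shell
# Flag OpenSSL builds in the ranges affected by CVE-2021-23841:
# 1.1.1 through 1.1.1i, and 1.0.2 through 1.0.2x (1.0.2 is out of
# public support; only premium-support users receive 1.0.2y).
v=$(openssl version | awk '{print $2}')
case "$v" in
  1.1.1|1.1.1[a-i]) echo "$v: affected, upgrade to OpenSSL 1.1.1j" ;;
  1.0.2|1.0.2[a-x]) echo "$v: affected, 1.0.2 is out of public support" ;;
  *)                echo "$v: outside the affected ranges" ;;
esac
```

Note that distribution packages often backport the fix without bumping the upstream version string, so treat this as a first-pass check, not a verdict.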

Security fixes:

  • nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name (CVE-2021-23017)

  • redis: Lua scripts can overflow the heap-based Lua stack (CVE-2021-32626)

  • redis: Integer overflow issue with Streams (CVE-2021-32627)

  • redis: Integer overflow bug in the ziplist data structure (CVE-2021-32628)

  • redis: Integer overflow issue with intsets (CVE-2021-32687)

  • redis: Integer overflow issue with strings (CVE-2021-41099)

  • redis: Out of bounds read in lua debugger protocol parser (CVE-2021-32672)

  • redis: Denial of service via Redis Standard Protocol (RESP) request (CVE-2021-32675)

  • helm: information disclosure vulnerability (CVE-2021-32690)

Bug fixes:

  • KUBE-API: Support move agent to different cluster in the same namespace (BZ# 1977358)

  • Add columns to the Agent CRD list (BZ# 1977398)

  • ClusterDeployment controller watches all Secrets from all namespaces (BZ# 1986081)

  • RHACM 2.3.3 images (BZ# 1999365)

  • Workaround for Network Manager not supporting nmconnections priority (BZ# 2001294)

  • Create cluster page empty in Safari browser (BZ# 2002280)

  • Compliance state was not updated after fixing the issue that initially prevented the policy from updating the managed object (BZ# 2002667)

  • Overview page displays VMware based managed cluster as other (BZ# 2004188)

Bugs fixed (https://bugzilla.redhat.com/):

1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
1977358 - [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace
1977398 - [4.8.0] [master] Add columns to the Agent CRD list
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1986081 - [4.8.0] ClusterDeployment controller watches all Secrets from all namespaces
1999365 - RHACM 2.3.3 images
2001294 - [4.8.0] Workaround for Network Manager not supporting nmconnections priority
2002280 - create cluster page empty in Safary Browser
2002667 - Compliance state doesn't get updated after fixing the issue causing initially the policy not being able to update the managed object
2004188 - Overview page displays VMware based managed cluster as other
2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets
2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request
2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser
2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure
2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams
2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack
2011020 - CVE-2021-41099 redis: Integer overflow issue with strings

Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64

Description:

OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.

Security Fix(es):

  • openssl: integer overflow in CipherUpdate (CVE-2021-23840)

  • openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted. Bugs fixed (https://bugzilla.redhat.com/):

1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
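The restart requirement above can be verified rather than assumed. As a hedged sketch (assumes a Linux host where /proc/&lt;pid&gt;/maps is readable; inspecting other users' processes requires root), the following lists processes that still map a copy of libssl or libcrypto that was replaced on disk, which the kernel marks as deleted:

```shell
# Print PIDs of processes still running against a replaced (deleted)
# OpenSSL shared object; the corresponding services need a restart
# before the update actually takes effect.
grep -lE 'lib(ssl|crypto).*\(deleted\)' /proc/[0-9]*/maps 2>/dev/null \
  | cut -d/ -f3 \
  | sort -un
```

An empty result means every running process already uses the updated library; otherwise restarting the listed services (or rebooting) completes the update.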

Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

ppc64: openssl-1.0.2k-22.el7_9.ppc64.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-devel-1.0.2k-22.el7_9.ppc.rpm openssl-devel-1.0.2k-22.el7_9.ppc64.rpm openssl-libs-1.0.2k-22.el7_9.ppc.rpm openssl-libs-1.0.2k-22.el7_9.ppc64.rpm

ppc64le: openssl-1.0.2k-22.el7_9.ppc64le.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-devel-1.0.2k-22.el7_9.ppc64le.rpm openssl-libs-1.0.2k-22.el7_9.ppc64le.rpm

s390x: openssl-1.0.2k-22.el7_9.s390x.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-devel-1.0.2k-22.el7_9.s390.rpm openssl-devel-1.0.2k-22.el7_9.s390x.rpm openssl-libs-1.0.2k-22.el7_9.s390.rpm openssl-libs-1.0.2k-22.el7_9.s390x.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64: openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-perl-1.0.2k-22.el7_9.ppc64.rpm openssl-static-1.0.2k-22.el7_9.ppc.rpm openssl-static-1.0.2k-22.el7_9.ppc64.rpm

ppc64le: openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-perl-1.0.2k-22.el7_9.ppc64le.rpm openssl-static-1.0.2k-22.el7_9.ppc64le.rpm

s390x: openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-perl-1.0.2k-22.el7_9.s390x.rpm openssl-static-1.0.2k-22.el7_9.s390.rpm openssl-static-1.0.2k-22.el7_9.s390x.rpm

x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: openssl-1.0.2k-22.el7_9.src.rpm

x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

APPLE-SA-2021-05-25-5 Safari 14.1.1

Safari 14.1.1 addresses the following issues.

WebKit Available for: macOS Catalina and macOS Mojave Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab, ASU, working with Trend Micro Zero Day Initiative CVE-2021-30734: Jack Dates of RET2 Systems, Inc. (@ret2systems) working with Trend Micro Zero Day Initiative

WebKit Available for: macOS Catalina and macOS Mojave Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A cross-origin issue with iframe elements was addressed with improved tracking of security origins. CVE-2021-30744: Dan Hite of jsontop

WebKit Available for: macOS Catalina and macOS Mojave Impact: A malicious website may be able to access restricted ports on arbitrary servers Description: A logic issue was addressed with improved restrictions. CVE-2021-30720: David Schütz (@xdavidhu)

WebKit Available for: macOS Catalina and macOS Mojave Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved restrictions. CVE-2021-30682: an anonymous researcher and 1lastBr3ath

WebKit Available for: macOS Catalina and macOS Mojave Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-21779: Marcin Towalski of Cisco Talos

WebKit Available for: macOS Catalina and macOS Mojave Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A logic issue was addressed with improved state management. CVE-2021-30689: an anonymous researcher

WebKit Available for: macOS Catalina and macOS Mojave Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: An integer overflow was addressed with improved input validation. CVE-2021-30663: an anonymous researcher

WebRTC Available for: macOS Catalina and macOS Mojave Impact: A remote attacker may be able to cause a denial of service Description: A null pointer dereference was addressed with improved input validation. CVE-2021-23841: Tavis Ormandy of Google CVE-2021-30698: Tavis Ormandy of Google

Additional recognition

WebKit We would like to acknowledge Chris Salls (@salls) of Makai Security for their assistance.

Installation note:

This update may be obtained from the Mac App Store.

Gentoo Linux Security Advisory                           GLSA 202103-03


                                        https://security.gentoo.org/

Severity: Normal
Title: OpenSSL: Multiple vulnerabilities
Date: March 31, 2021
Bugs: #769785, #777681
ID: 202103-03


Synopsis

Multiple vulnerabilities have been found in OpenSSL, the worst of which could allow remote attackers to cause a Denial of Service condition.

Affected packages

 -------------------------------------------------------------------
  Package              /     Vulnerable     /            Unaffected
 -------------------------------------------------------------------
  1  dev-libs/openssl             < 1.1.1k                >= 1.1.1k

Description

Multiple vulnerabilities have been discovered in OpenSSL. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All OpenSSL users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=dev-libs/openssl-1.1.1k"

References

[ 1 ] CVE-2021-23840
      https://nvd.nist.gov/vuln/detail/CVE-2021-23840
[ 2 ] CVE-2021-23841
      https://nvd.nist.gov/vuln/detail/CVE-2021-23841
[ 3 ] CVE-2021-3449
      https://nvd.nist.gov/vuln/detail/CVE-2021-3449
[ 4 ] CVE-2021-3450
      https://nvd.nist.gov/vuln/detail/CVE-2021-3450

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202103-03

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2021 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5 . Bugs fixed (https://bugzilla.redhat.com/):

1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment

JIRA issues fixed (https://issues.jboss.org/):

LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: ACS 3.67 security and enhancement update
Advisory ID:       RHSA-2021:4902-01
Product:           RHACS
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:4902
Issue date:        2021-12-01
CVE Names:         CVE-2018-20673 CVE-2019-5827 CVE-2019-13750
                   CVE-2019-13751 CVE-2019-17594 CVE-2019-17595
                   CVE-2019-18218 CVE-2019-19603 CVE-2019-20838
                   CVE-2020-12762 CVE-2020-13435 CVE-2020-14155
                   CVE-2020-16135 CVE-2020-24370 CVE-2020-27304
                   CVE-2021-3200 CVE-2021-3445 CVE-2021-3580
                   CVE-2021-3749 CVE-2021-3800 CVE-2021-3801
                   CVE-2021-20231 CVE-2021-20232 CVE-2021-20266
                   CVE-2021-22876 CVE-2021-22898 CVE-2021-22925
                   CVE-2021-23343 CVE-2021-23840 CVE-2021-23841
                   CVE-2021-27645 CVE-2021-28153 CVE-2021-29923
                   CVE-2021-32690 CVE-2021-33560 CVE-2021-33574
                   CVE-2021-35942 CVE-2021-36084 CVE-2021-36085
                   CVE-2021-36086 CVE-2021-36087 CVE-2021-39293
=====================================================================

  1. Summary:

Updated images are now available for Red Hat Advanced Cluster Security for Kubernetes (RHACS).

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description:

The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:

OpenShift Dedicated support

RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.

  1. Use OpenShift OAuth server as an identity provider: If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.

  2. Enhancements for CI outputs: Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.

  3. Runtime Class policy criteria: Users can now use the Runtime Class policy criteria to define which container runtime configuration may be used to run a pod’s containers.
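For context, the Runtime Class criterion inspects the standard Kubernetes `runtimeClassName` pod field; the names below (`sandboxed-example`, `gvisor`, the image reference) are hypothetical placeholders, not values taken from RHACS:

```yaml
# Hypothetical pod spec: the Runtime Class policy criterion evaluates
# the runtimeClassName field set here.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example          # hypothetical name
spec:
  runtimeClassName: gvisor         # hypothetical RuntimeClass; must exist in the cluster
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
```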

Security Fix(es):

  • civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API (CVE-2020-27304)

  • nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)

  • nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)

  • golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet (CVE-2021-29923)

  • helm: information disclosure vulnerability (CVE-2021-32690)

  • golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)

  • nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fixes The release of RHACS 3.67 includes the following bug fixes:

  1. Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles.

  2. Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of apk package in the image rather than the apk-tools package. This issue has been fixed.

System changes The release of RHACS 3.67 includes the following system changes:

  1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
  2. The Port exposure method policy criteria now include route as an exposure method.
  3. The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.
  4. The OpenShift Compliance Operator integration now supports using TailoredProfiles.
  5. The RHACS Jenkins plugin now provides additional security information.
  6. When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.
  7. The default uid:gid pair for the Scanner image is now 65534:65534.
  8. RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.
  9. If microdnf is part of an image or shows up in process execution, RHACS reports it as a security violation for the Red Hat Package Manager in Image or the Red Hat Package Manager Execution security policies.
  10. In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.
  11. You can now format the output of the following roxctl CLI commands in table, csv, or JSON format: image scan, image check & deployment check
  12. You can now use a regular expression for the deployment name while specifying policy exclusions.

Solution:

To take advantage of these new features, fixes and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.

Bugs fixed (https://bugzilla.redhat.com/):

1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)
2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API

JIRA issues fixed (https://issues.jboss.org/):

RHACS-65 - Release RHACS 3.67.0

References:

https://access.redhat.com/security/cve/CVE-2018-20673
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-12762
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-16135
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-27304
https://access.redhat.com/security/cve/CVE-2021-3200
https://access.redhat.com/security/cve/CVE-2021-3445
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3749
https://access.redhat.com/security/cve/CVE-2021-3800
https://access.redhat.com/security/cve/CVE-2021-3801
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-20266
https://access.redhat.com/security/cve/CVE-2021-22876
https://access.redhat.com/security/cve/CVE-2021-22898
https://access.redhat.com/security/cve/CVE-2021-22925
https://access.redhat.com/security/cve/CVE-2021-23343
https://access.redhat.com/security/cve/CVE-2021-23840
https://access.redhat.com/security/cve/CVE-2021-23841
https://access.redhat.com/security/cve/CVE-2021-27645
https://access.redhat.com/security/cve/CVE-2021-28153
https://access.redhat.com/security/cve/CVE-2021-29923
https://access.redhat.com/security/cve/CVE-2021-32690
https://access.redhat.com/security/cve/CVE-2021-33560
https://access.redhat.com/security/cve/CVE-2021-33574
https://access.redhat.com/security/cve/CVE-2021-35942
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-39293
https://access.redhat.com/security/updates/classification/#moderate

Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr Kjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w tKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e lq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV x4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2 e8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK qnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz vguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt G4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT PTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/ pJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN T0pPNmsPGZY= =ux5P -----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:

Red Hat OpenShift Container Storage is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Container Storage is highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provides a multicloud data management service with an S3 compatible API.

Bug Fix(es):

  • Previously, when the namespace store target was deleted, no alert was sent to the namespace bucket because of an issue in calculating the namespace bucket health. With this update, the issue in calculating the namespace bucket health is fixed and alerts are triggered as expected. (BZ#1993873)

  • Previously, the Multicloud Object Gateway (MCG) components performed slowly and there was a lot of pressure on the MCG components due to non-optimized database queries. With this update the non-optimized database queries are fixed which reduces the compute resources and time taken for queries. Bugs fixed (https://bugzilla.redhat.com/):

1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted 2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input

Bugs fixed (https://bugzilla.redhat.com/):

1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option

Description:

This release adds the new Apache HTTP Server 2.4.37 Service Pack 10 packages that are part of the JBoss Core Services offering.

This release serves as a replacement for Red Hat JBoss Core Services Apache HTTP Server 2.4.37 Service Pack 9 and includes bug fixes and enhancements. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1488",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.9.0.0.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1j"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.3.1.2"
      },
      {
        "model": "mysql server",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.23"
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.12.1"
      },
      {
        "model": "essbase",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.2"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "mysql enterprise monitor",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.23"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "21.0.0.2"
      },
      {
        "model": "jd edwards world security",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "a9.4"
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.11.0"
      },
      {
        "model": "tenable.sc",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.13.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.57"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "snapcenter",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.13.0"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.5.0.0.0"
      },
      {
        "model": "safari",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.1.1"
      },
      {
        "model": "oncommand insight",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.11.1"
      },
      {
        "model": "ipados",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.6"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      },
      {
        "model": "oncommand workflow automation",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2"
      },
      {
        "model": "mysql server",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.15"
      },
      {
        "model": "nessus network monitor",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.12.0"
      },
      {
        "model": "zfs storage appliance kit",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.8"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.2.1.4.0"
      },
      {
        "model": "mysql server",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.7.33"
      },
      {
        "model": "enterprise manager for storage management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "13.4.0.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "business intelligence",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.2.1.3.0"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2y"
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1"
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.15.0"
      },
      {
        "model": "enterprise manager ops center",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.4.0.0"
      },
      {
        "model": "graalvm",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.3.5"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.4"
      },
      {
        "model": "tenable.sc",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "tenable",
        "version": "5.17.0"
      },
      {
        "model": "iphone os",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "14.6"
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "11.1"
      },
      {
        "model": "hitachi device manager",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "rv3000",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "hitachi tuning manager",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "hitachi ops center common services",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      },
      {
        "model": "tenable.sc",
        "scope": null,
        "trust": 0.8,
        "vendor": "tenable",
        "version": null
      },
      {
        "model": "openssl",
        "scope": null,
        "trust": 0.8,
        "vendor": "openssl",
        "version": null
      },
      {
        "model": "hitachi ops center analyzer viewpoint",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u7acb",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164562"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2021-23841",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-23841",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.8,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 4.3,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.6,
            "id": "VHN-382524",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 5.9,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.2,
            "id": "CVE-2021-23841",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 5.9,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2021-23841",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-23841",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-23841",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "VULHUB",
            "id": "VHN-382524",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However it fails to correctly handle any errors that may occur while parsing the issuer field (which might occur if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer deref and a crash leading to a potential denial of service attack. The function X509_issuer_and_serial_hash() is never directly called by OpenSSL itself so applications are only vulnerable if they use this function directly and they use it on certificates that may have been obtained from untrusted sources. OpenSSL versions 1.1.1i and below are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are affected by this issue. However OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). Please keep an eye on CNNVD or manufacturer announcements. Clusters and applications are all visible and\nmanaged from a single console\u2014with\nsecurity policy built in. 
\n\nSecurity fixes: \n\n* nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a\npointer to a root domain name (CVE-2021-23017)\n\n* redis: Lua scripts can overflow the heap-based Lua stack (CVE-2021-32626)\n\n* redis: Integer overflow issue with Streams (CVE-2021-32627)\n\n* redis: Integer overflow bug in the ziplist data structure\n(CVE-2021-32628)\n\n* redis: Integer overflow issue with intsets (CVE-2021-32687)\n\n* redis: Integer overflow issue with strings (CVE-2021-41099)\n\n* redis: Out of bounds read in lua debugger protocol parser\n(CVE-2021-32672)\n\n* redis: Denial of service via Redis Standard Protocol (RESP) request\n(CVE-2021-32675)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\nBug fixes:\n\n* KUBE-API: Support move agent to different cluster in the same namespace\n(BZ# 1977358)\n\n* Add columns to the Agent CRD list (BZ# 1977398)\n\n* ClusterDeployment controller watches all Secrets from all namespaces (BZ#\n1986081)\n\n* RHACM 2.3.3 images (BZ# 1999365)\n\n* Workaround for Network Manager not supporting nmconnections priority (BZ#\n2001294)\n\n* create cluster page empty in Safary Browser (BZ# 2002280)\n\n* Compliance state doesn\u0027t get updated after fixing the issue causing\ninitially the policy not being able to update the managed object (BZ#\n2002667)\n\n* Overview page displays VMware based managed cluster as other (BZ#\n2004188)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name\n1977358 - [4.8.0] KUBE-API: Support move agent to different cluster in the same namespace\n1977398 - [4.8.0] [master] Add columns to the Agent CRD list\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1986081 - [4.8.0] ClusterDeployment controller watches all Secrets from all namespaces\n1999365 - RHACM 2.3.3 images\n2001294 - [4.8.0] Workaround for Network Manager not supporting nmconnections priority\n2002280 - create cluster page empty in Safary Browser\n2002667 - Compliance state doesn\u0027t get updated after fixing the issue causing initially the policy not being able to update the managed object\n2004188 - Overview page displays VMware based managed cluster as other\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n\n5. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. 
Description:\n\nOpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and\nTransport Layer Security (TLS) protocols, as well as a full-strength\ngeneral-purpose cryptography library. \n\nSecurity Fix(es):\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the OpenSSL library\nmust be restarted, or the system rebooted. Bugs fixed (https://bugzilla.redhat.com/):\n\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 
7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nppc64:\nopenssl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390x.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-perl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-static-1.0.2k-22.el7_9.s390.rpm\nopenssl-static-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-05-25-5 Safari 14.1.1\n\nSafari 14.1.1 addresses the following issues. \n\nWebKit\nAvailable for: macOS Catalina and macOS Mojave\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab,\nASU. working with Trend Micro Zero Day Initiative\nCVE-2021-30734: Jack Dates of RET2 Systems, Inc. 
(@ret2systems)\nworking with Trend Micro Zero Day Initiative\n\nWebKit\nAvailable for: macOS Catalina and macOS Mojave\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A cross-origin issue with iframe elements was addressed\nwith improved tracking of security origins. \nCVE-2021-30744: Dan Hite of jsontop\n\nWebKit\nAvailable for: macOS Catalina and macOS Mojave\nImpact: A malicious website may be able to access restricted ports on\narbitrary servers\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30720: David Sch\u00fctz (@xdavidhu)\n\nWebKit\nAvailable for: macOS Catalina and macOS Mojave\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30682: an anonymous researcher and 1lastBr3ath\n\nWebKit\nAvailable for: macOS Catalina and macOS Mojave\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-21779: Marcin Towalski of Cisco Talos\n\nWebKit\nAvailable for: macOS Catalina and macOS Mojave\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30689: an anonymous researcher\n\nWebKit\nAvailable for: macOS Catalina and macOS Mojave\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: An integer overflow was addressed with improved input\nvalidation. \nCVE-2021-30663: an anonymous researcher\n\nWebRTC\nAvailable for: macOS Catalina and macOS Mojave\nImpact: A remote attacker may be able to cause a denial of service\nDescription: A null pointer dereference was addressed with improved\ninput validation. 
\nCVE-2021-23841: Tavis Ormandy of Google\nCVE-2021-30698: Tavis Ormandy of Google\n\nAdditional recognition\n\nWebKit\nWe would like to acknowledge Chris Salls (@salls) of Makai Security\nfor their assistance. \n\nInstallation note:\n\nThis update may be obtained from the Mac App Store. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202103-03\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                            https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n  Severity: Normal\n     Title: OpenSSL: Multiple vulnerabilities\n      Date: March 31, 2021\n      Bugs: #769785, #777681\n        ID: 202103-03\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been found in OpenSSL, the worst of which\ncould allow remote attackers to cause a Denial of Service condition. \n\nAffected packages\n=================\n\n     -------------------------------------------------------------------\n      Package              /     Vulnerable     /            Unaffected\n     -------------------------------------------------------------------\n   1  dev-libs/openssl             \u003c 1.1.1k                  \u003e= 1.1.1k\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in OpenSSL. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll OpenSSL users should upgrade to the latest version:\n\n   # emerge --sync\n   # emerge --ask --oneshot --verbose \"\u003e=dev-libs/openssl-1.1.1k\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-23840\n       https://nvd.nist.gov/vuln/detail/CVE-2021-23840\n[ 2 ] CVE-2021-23841\n       https://nvd.nist.gov/vuln/detail/CVE-2021-23841\n[ 3 ] CVE-2021-3449\n       https://nvd.nist.gov/vuln/detail/CVE-2021-3449\n[ 4 ] CVE-2021-3450\n       https://nvd.nist.gov/vuln/detail/CVE-2021-3450\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n  https://security.gentoo.org/glsa/202103-03\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2021 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. 
JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should  be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert `FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: ACS 3.67 security and enhancement update\nAdvisory ID:       RHSA-2021:4902-01\nProduct:           RHACS\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:4902\nIssue date:        2021-12-01\nCVE Names:         CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 \n                   CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 \n                   CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 \n                   CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 \n                   CVE-2020-16135 CVE-2020-24370 CVE-2020-27304 \n                   CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 \n                   CVE-2021-3749 CVE-2021-3800 CVE-2021-3801 \n                   CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 \n                   CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 \n                   CVE-2021-23343 CVE-2021-23840 CVE-2021-23841 \n                   CVE-2021-27645 CVE-2021-28153 CVE-2021-29923 \n                   CVE-2021-32690 CVE-2021-33560 CVE-2021-33574 \n                   CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 \n                   CVE-2021-36086 CVE-2021-36087 CVE-2021-39293 \n=====================================================================\n\n1. Summary:\n\nUpdated images are now available for Red Hat Advanced Cluster Security for\nKubernetes (RHACS). \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. 
Description:\n\nThe release of RHACS 3.67 provides the following new features, bug fixes,\nsecurity patches and system changes:\n\nOpenShift Dedicated support\n\nRHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on\nAmazon Web Services and Google Cloud Platform. \n\n1. Use OpenShift OAuth server as an identity provider\nIf you are using RHACS with OpenShift, you can now configure the built-in\nOpenShift OAuth server as an identity provider for RHACS. \n\n2. Enhancements for CI outputs\nRed Hat has improved the usability of RHACS CI integrations. CI outputs now\nshow additional detailed information about the vulnerabilities and the\nsecurity policies responsible for broken builds. \n\n3. Runtime Class policy criteria\nUsers can now use RHACS to define the container runtime configuration that\nmay be used to run a pod\u2019s containers using the Runtime Class policy\ncriteria. \n\nSecurity Fix(es):\n\n* civetweb: directory traversal when using the built-in example HTTP\nform-based file upload mechanism via the mg_handle_form_request API\n(CVE-2020-27304)\n\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n\n* nodejs-prismjs: ReDoS vulnerability (CVE-2021-3801)\n\n* golang: net: incorrect parsing of extraneous zero characters at the\nbeginning of an IP address octet (CVE-2021-29923)\n\n* helm: information disclosure vulnerability (CVE-2021-32690)\n\n* golang: archive/zip: malformed archive may cause panic or memory\nexhaustion (incomplete fix of CVE-2021-33196) (CVE-2021-39293)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fixes\nThe release of RHACS 3.67 includes the following bug fixes:\n\n1. 
Previously, when using RHACS with the Compliance Operator integration,\nRHACS did not respect or populate Compliance Operator TailoredProfiles. \n\n2. Previously, the Alpine Linux package manager (APK) in Image policy\nlooked for the presence of apk package in the image rather than the\napk-tools package. This issue has been fixed. \n\nSystem changes\nThe release of RHACS 3.67 includes the following system changes:\n\n1. Scanner now identifies vulnerabilities in Ubuntu 21.10 images. \n2. The Port exposure method policy criteria now include route as an\nexposure method. \n3. The OpenShift: Kubeadmin Secret Accessed security policy now allows the\nOpenShift Compliance Operator to check for the existence of the Kubeadmin\nsecret without creating a violation. \n4. The OpenShift Compliance Operator integration now supports using\nTailoredProfiles. \n5. The RHACS Jenkins plugin now provides additional security information. \n6. When you enable the environment variable ROX_NETWORK_ACCESS_LOG for\nCentral, the logs contain the Request URI and X-Forwarded-For header\nvalues. \n7. The default uid:gid pair for the Scanner image is now 65534:65534. \n8. RHACS adds a new default Scope Manager role that includes minimum\npermissions to create and modify access scopes. \n9. If microdnf is part of an image or shows up in process execution, RHACS\nreports it as a security violation for the Red Hat Package Manager in Image\nor the Red Hat Package Manager Execution security policies. \n10. In addition to manually uploading vulnerability definitions in offline\nmode, you can now upload definitions in online mode. \n11. You can now format the output of the following roxctl CLI commands in\ntable, csv, or JSON format: image scan, image check \u0026 deployment check\n12. You can now use a regular expression for the deployment name while\nspecifying policy exclusions\n\n3. 
Solution:\n\nTo take advantage of these new features, fixes and changes, please upgrade\nRed Hat Advanced Cluster Security for Kubernetes to version 3.67. \n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1978144 - CVE-2021-32690 helm: information disclosure vulnerability\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nRHACS-65 - Release RHACS 3.67.0\n\n6. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20673\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2020-27304\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps://access.redhat.com/security/cve/CVE-2021-3801\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-20266\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-23343\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-27645\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-29923\nhttps://access.redhat.com/security/cve/CVE-2021-32690\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-33574\nhttps://acce
ss.redhat.com/security/cve/CVE-2021-35942\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-39293\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYafeGdzjgjWX9erEAQgZ8Q/9H5ov4ZfKZszdJu0WvRMetEt6DMU2RTZr\nKjv4h4FnmsMDYYDocnkFvsRjcpdGxtoUShAqD6+FrTNXjPtA/v1tsQTJzhg4o50w\ntKa9T4aHfrYXjGvWgQXJJEGmGaYMYePUOv77x6pLfMB+FmgfOtb8kzOdNzAtqX3e\nlq8b2DrQuPSRiWkUgFM2hmS7OtUsqTIShqWu67HJdOY74qDN4DGp7GnG6inCrUjV\nx4/4X5Fb7JrAYiy57C5eZwYW61HmrG7YHk9SZTRYgRW0rfgLncVsny4lX1871Ch2\ne8ttu0EJFM1EJyuCJwJd1Q+rhua6S1VSY+etLUuaYme5DtvozLXQTLUK31qAq/hK\nqnLYQjaSieea9j1dV6YNHjnvV0XGczyZYwzmys/CNVUxwvSHr1AJGmQ3zDeOt7Qz\nvguWmPzyiob3RtHjfUlUpPYeI6HVug801YK6FAoB9F2BW2uHVgbtKOwG5pl5urJt\nG4taizPtH8uJj5hem5nHnSE1sVGTiStb4+oj2LQonRkgLQ2h7tsX8Z8yWM/3TwUT\nPTBX9AIHwt8aCx7XxTeEIs0H9B1T9jYfy06o9H2547un9sBoT0Sm7fqKuJKic8N/\npJ2kXBiVJ9B4G+JjWe8rh1oC1yz5Q5/5HZ19VYBjHhYEhX4s9s2YsF1L1uMoT3NN\nT0pPNmsPGZY=\n=ux5P\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat OpenShift Container Storage is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. \nRed Hat OpenShift Container Storage is highly scalable, production-grade\npersistent storage for stateful applications running in the Red Hat\nOpenShift Container Platform. 
In addition to persistent storage, Red Hat\nOpenShift Container Storage provides a multicloud data management service\nwith an S3 compatible API. \n\nBug Fix(es):\n\n* Previously, when the namespace store target was deleted, no alert was\nsent to the namespace bucket because of an issue in calculating the\nnamespace bucket health. With this update, the issue in calculating the\nnamespace bucket health is fixed and alerts are triggered as expected. \n(BZ#1993873)\n\n* Previously, the Multicloud Object Gateway (MCG) components performed\nslowly and there was a lot of pressure on the MCG components due to\nnon-optimized database queries. With this update the non-optimized\ndatabase queries are fixed which reduces the compute resources and time\ntaken for queries. Bugs fixed (https://bugzilla.redhat.com/):\n\n1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore\u0027s target bucket is deleted\n2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5. Description:\n\nThis release adds the new Apache HTTP Server 2.4.37 Service Pack 10\npackages that are part of the JBoss Core Services offering. \n\nThis release serves as a replacement for Red Hat JBoss Core Services Apache\nHTTP Server 2.4.37 Service Pack 9 and includes bug fixes and enhancements. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "PACKETSTORM",
        "id": "164562"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "162824"
      },
      {
        "db": "PACKETSTORM",
        "id": "162041"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      }
    ],
    "trust": 2.52
  },
  "exploit_availability": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "reference": "https://www.scap.org.cn/vuln/vhn-382524",
        "trust": 0.1,
        "type": "unknown"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      }
    ]
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-23841",
        "trust": 3.6
      },
      {
        "db": "TENABLE",
        "id": "TNS-2021-03",
        "trust": 1.1
      },
      {
        "db": "TENABLE",
        "id": "TNS-2021-09",
        "trust": 1.1
      },
      {
        "db": "PULSESECURE",
        "id": "SA44846",
        "trust": 1.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.1
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-21-336-06",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94508446",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90348129",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "165096",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "162824",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "164927",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165002",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "165129",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "162041",
        "trust": 0.2
      },
      {
        "db": "PACKETSTORM",
        "id": "162151",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164583",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161525",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "165099",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162823",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164928",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164889",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "162826",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164890",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "161459",
        "trust": 0.1
      },
      {
        "db": "VULHUB",
        "id": "VHN-382524",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164562",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164489",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "164967",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "PACKETSTORM",
        "id": "164562"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "162824"
      },
      {
        "db": "PACKETSTORM",
        "id": "162041"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "id": "VAR-202102-1488",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      }
    ],
    "trust": 0.30766129
  },
  "last_update_date": "2024-11-29T21:27:51.722000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "hitachi-sec-2023-126",
        "trust": 0.8,
        "url": "https://www.debian.org/security/2021/dsa-4855"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-476",
        "trust": 1.1
      },
      {
        "problemtype": "Integer overflow or wraparound (CWE-190) [NVD evaluation ]",
        "trust": 0.8
      },
      {
        "problemtype": "CWE-190",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 1.2,
        "url": "https://security.gentoo.org/glsa/202103-03"
      },
      {
        "trust": 1.1,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.1,
        "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44846"
      },
      {
        "trust": 1.1,
        "url": "https://security.netapp.com/advisory/ntap-20210219-0009/"
      },
      {
        "trust": 1.1,
        "url": "https://security.netapp.com/advisory/ntap-20210513-0002/"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht212528"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht212529"
      },
      {
        "trust": 1.1,
        "url": "https://support.apple.com/kb/ht212534"
      },
      {
        "trust": 1.1,
        "url": "https://www.openssl.org/news/secadv/20210216.txt"
      },
      {
        "trust": 1.1,
        "url": "https://www.tenable.com/security/tns-2021-03"
      },
      {
        "trust": 1.1,
        "url": "https://www.tenable.com/security/tns-2021-09"
      },
      {
        "trust": 1.1,
        "url": "https://www.debian.org/security/2021/dsa-4855"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2021/may/67"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2021/may/70"
      },
      {
        "trust": 1.1,
        "url": "http://seclists.org/fulldisclosure/2021/may/68"
      },
      {
        "trust": 1.1,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.1,
        "url": "https://www.oracle.com/security-alerts/cpuapr2021.html"
      },
      {
        "trust": 1.1,
        "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
      },
      {
        "trust": 1.1,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=122a19ab48091c657f7cb1fb3af9fc07bd557bbf"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20240621-0006/"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu94508446/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90348129/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/"
      },
      {
        "trust": 0.8,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-21-336-06"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-33574"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-20266"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-27645"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-35942"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.3,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3426"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3572"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-20673"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3778"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3796"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-32690"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=122a19ab48091c657f7cb1fb3af9fc07bd557bbf"
      },
      {
        "trust": 0.1,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21670"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32626"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21670"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41099"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25741"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3656"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3653"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22543"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22922"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21671"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22924"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32675"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-4658"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41099"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3653"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21671"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23017"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25741"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-32628"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37576"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3798"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30698"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30744"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/kb/ht201222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30663"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21779"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30689"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30749"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30720"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30682"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht212534."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30734"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23133"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3573"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35521"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35522"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26141"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27777"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26147"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14615"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36386"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29650"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24587"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26144"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36331"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20197"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3487"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0427"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36312"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31829"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31440"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26145"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3564"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-10001"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35448"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3489"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28971"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26146"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26139"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3679"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35524"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24588"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36158"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24504"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36330"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3348"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24503"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29646"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-0129"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3635"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26143"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14145"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35523"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3659"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33200"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29660"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26140"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3600"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20239"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3732"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28950"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4627"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27304"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39293"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29923"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3749"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4902"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27304"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3801"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4845"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26301"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26301"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28957"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8037"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8037"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#low"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4032"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26691"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13950"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17567"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35452"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26690"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:4614"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-30641"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30641"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17567"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13950"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-35452"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "PACKETSTORM",
        "id": "164562"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "162824"
      },
      {
        "db": "PACKETSTORM",
        "id": "162041"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "db": "PACKETSTORM",
        "id": "164562"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "162824"
      },
      {
        "db": "PACKETSTORM",
        "id": "162041"
      },
      {
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-02-16T00:00:00",
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "date": "2021-05-14T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "date": "2021-10-20T15:45:47",
        "db": "PACKETSTORM",
        "id": "164562"
      },
      {
        "date": "2021-10-13T14:47:32",
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "date": "2021-05-26T17:48:26",
        "db": "PACKETSTORM",
        "id": "162824"
      },
      {
        "date": "2021-03-31T14:36:01",
        "db": "PACKETSTORM",
        "id": "162041"
      },
      {
        "date": "2021-11-15T17:25:56",
        "db": "PACKETSTORM",
        "id": "164967"
      },
      {
        "date": "2021-12-02T16:06:16",
        "db": "PACKETSTORM",
        "id": "165129"
      },
      {
        "date": "2021-11-29T18:12:32",
        "db": "PACKETSTORM",
        "id": "165096"
      },
      {
        "date": "2021-11-17T15:25:40",
        "db": "PACKETSTORM",
        "id": "165002"
      },
      {
        "date": "2021-11-11T14:53:11",
        "db": "PACKETSTORM",
        "id": "164927"
      },
      {
        "date": "2021-02-16T17:15:13.377000",
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-01-09T00:00:00",
        "db": "VULHUB",
        "id": "VHN-382524"
      },
      {
        "date": "2023-07-20T06:25:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      },
      {
        "date": "2024-11-21T05:51:55.460000",
        "db": "NVD",
        "id": "CVE-2021-23841"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "162041"
      },
      {
        "db": "PACKETSTORM",
        "id": "165129"
      }
    ],
    "trust": 0.2
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "NULL pointer dereference vulnerability in OpenSSL",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001396"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "overflow",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164562"
      },
      {
        "db": "PACKETSTORM",
        "id": "164489"
      },
      {
        "db": "PACKETSTORM",
        "id": "164927"
      }
    ],
    "trust": 0.3
  }
}

var-202201-0429
Vulnerability from variot

follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor. Bugs fixed (https://bugzilla.redhat.com/):

2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2032128 - Observability - dashboard name contains / would cause error when generating dashboard cm 2033051 - ACM application placement fails after renaming the application name 2039197 - disable the obs metric collect should not impact the managed cluster upgrade 2039820 - Observability - cluster list should only contain OCP311 cluster on OCP311 dashboard 2042223 - the value of name label changed from clusterclaim name to cluster name 2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management 2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2048500 - VMWare Cluster creation does not accept ecdsa-sha2-nistp521 ssh keys 2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function 2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account 2053211 - clusterSelector matchLabels spec are cleared when changing app name/namespace during creating an app in UI 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2053279 - Application cluster status is not updated in UI after restoring 2056610 - OpenStack cluster creation is using deprecated floating IP config for 4.7+ 2057249 - RHACM 2.4.3 images 2059039 - The value of Vendor reported by cluster metrics was Other even if the vendor label in managedcluster was Openshift 2059954 - Subscriptions stop reconciling after channel secrets are recreated 2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates 2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server 2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path 2074156 - Placementrule is not reconciling on a new fresh environment 2074543 - The cluster claimed from clusterpool can not auto imported
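Both follow-redirects flaws in the list above (CVE-2022-0155 and CVE-2022-0536) are the same class of bug: credential-bearing headers such as Cookie and Authorization being re-sent to a different host after an HTTP redirect. A minimal Python sketch of the mitigation pattern the fixed releases adopt; the function and constant names here are illustrative, not follow-redirects' actual API:

```python
from urllib.parse import urlsplit

# Headers that carry credentials and must not cross a host boundary.
SENSITIVE_HEADERS = {"authorization", "cookie", "proxy-authorization"}

def headers_for_redirect(original_url, redirect_url, headers):
    """Return the subset of headers that is safe to send to redirect_url.

    Credential-bearing headers are kept only when the redirect stays on
    the same host and does not downgrade from https to http.
    """
    orig, dest = urlsplit(original_url), urlsplit(redirect_url)
    same_host = orig.hostname == dest.hostname
    downgraded = orig.scheme == "https" and dest.scheme == "http"
    if same_host and not downgraded:
        return dict(headers)
    return {name: value for name, value in headers.items()
            if name.lower() not in SENSITIVE_HEADERS}
```

Mainstream HTTP clients apply roughly this policy; the patched follow-redirects releases likewise clear the Authorization and Cookie headers once the redirect target's host no longer matches the original request.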

  1. Summary:

Red Hat Advanced Cluster Management for Kubernetes 2.3.6 General Availability release images, which provide security updates and bug fixes. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/

Security updates:

  • Nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)

  • Nanoid: Information disclosure via valueOf() function (CVE-2021-23566)

  • Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)

  • Follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)

Bug fixes:

  • Inform ACM policy is not checking properly the node fields (BZ# 2015588)

  • ImagePullPolicy is "Always" for multicluster-operators-subscription-rhel8 image (BZ# 2021128)

  • Traceback blocks reconciliation of helm repository hosted on AWS S3 storage (BZ# 2021576)

  • RHACM 2.3.6 images (BZ# 2029507)

  • Console UI enabled SNO UI Options not displayed during cluster creating (BZ# 2030002)

  • Grc pod restarts for each new GET request to the Governance Policy Page (BZ# 2037351)

  • Clustersets do not appear in UI (BZ# 2049810)

3. Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

2015588 - Inform ACM policy is not checking properly the node fields 2021128 - imagePullPolicy is "Always" for multicluster-operators-subscription-rhel8 image 2021576 - traceback blocks reconciliation of helm repository hosted on AWS S3 storage 2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability 2029507 - RHACM 2.3.6 images 2030002 - Console UI enabled SNO UI Options not displayed during cluster creating 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2037351 - grc pod restarts for each new GET request to the Governance Policy Page 2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor 2049810 - Clustersets do not appear in UI 2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
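The prototype pollution entry above (CVE-2021-3918 in nodejs-json-schema) stems from a recursive merge that lets attacker-supplied keys such as `__proto__` reach `Object.prototype`. Python dicts have no prototype chain, so the sketch below only illustrates the defensive key filtering this class of fix applies; all names are illustrative:

```python
# Keys that would pollute the prototype chain in a JavaScript runtime.
BLOCKED_KEYS = {"__proto__", "constructor", "prototype"}

def safe_merge(target, source):
    """Recursively merge source into target, skipping keys that would
    trigger prototype pollution in JavaScript (the class of bug behind
    CVE-2021-3918)."""
    for key, value in source.items():
        if key in BLOCKED_KEYS:
            continue  # never copy pollution vectors
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            safe_merge(target[key], value)
        else:
            target[key] = value
    return target
```

The unpatched merge simply copied every key, so a crafted schema containing `"__proto__": {"isAdmin": true}` would add `isAdmin` to every object in the process.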

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.3] bug fix and security update
Advisory ID: RHSA-2022:8502-01
Product: Red Hat Virtualization
Advisory URL: https://access.redhat.com/errata/RHSA-2022:8502
Issue date: 2022-11-16
CVE Names: CVE-2022-0155 CVE-2022-2805
====================================================================
1. Summary:

Updated ovirt-engine packages that fix several bugs and add various enhancements are now available.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

2. Relevant releases/architectures:

RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch

3. Description:

The ovirt-engine package provides the Red Hat Virtualization Manager, a centralized management platform that allows system administrators to view and manage virtual machines. The Manager provides a comprehensive range of features including search capabilities, resource management, live migrations, and virtual infrastructure provisioning.

Bug Fix(es):

  • Ghost OVFs are written when using floating SD to migrate VMs between 2 RHV environments. (BZ#1705338)

  • RHV engine is reporting a delete disk with wipe as completing successfully when it actually fails from a timeout. (BZ#1836318)

  • [DR] Failover / Failback HA VM Fails to be started due to 'VM XXX is being imported' (BZ#1968433)

  • Virtual Machine with lease fails to run on DR failover (BZ#1974535)

  • Disk is missing after importing VM from Storage Domain that was detached from another DC. (BZ#1983567)

  • Unable to switch RHV host into maintenance mode as there are image transfer in progress (BZ#2123141)

  • not able to import disk in 4.5.2 (BZ#2134549)

Enhancement(s):

  • [RFE] Show last events for user VMs (BZ#1886211)

4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/2974891

5. Bugs fixed (https://bugzilla.redhat.com/):

1705338 - Ghost OVFs are written when using floating SD to migrate VMs between 2 RHV environments. 1836318 - RHV engine is reporting a delete disk with wipe as completing successfully when it actually fails from a timeout. 1886211 - [RFE] Show last events for user VMs 1968433 - [DR] Failover / Failback HA VM Fails to be started due to 'VM XXX is being imported' 1974535 - Virtual Machine with lease fails to run on DR failover 1983567 - Disk is missing after importing VM from Storage Domain that was detached from another DC. 2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor 2079545 - CVE-2022-2805 ovirt-engine: RHVM admin password is logged unfiltered when using otopi-style 2118672 - Use rpm instead of auto in package_facts ansible module to prevent mistakes of determining the correct package manager inside package_facts module 2123141 - Unable to switch RHV host into maintenance mode as there are image transfer in progress 2127836 - Create template dialog is not closed when clicking in OK and the template is not created 2134549 - not able to import disk in 4.5.2 2137207 - The RemoveDisk job finishes before the disk was removed from the DB
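CVE-2022-2805 in the list above is a secret-in-logs bug: the RHVM admin password was written unfiltered to otopi-style logs. The generic countermeasure is a logging filter that redacts password-like values before a record is emitted; a minimal Python sketch, where the regex is a hypothetical example and not otopi's actual filtering:

```python
import logging
import re

class SecretFilter(logging.Filter):
    """Redact password-like tokens from log messages.

    Assumes the secret appears in the pre-formatted message text;
    illustrative of the class of fix for CVE-2022-2805, not the
    actual ovirt-engine patch.
    """
    PATTERN = re.compile(r"(password\s*[=:]\s*)\S+", re.IGNORECASE)

    def filter(self, record):
        record.msg = self.PATTERN.sub(r"\1****", str(record.msg))
        return True  # keep the (now redacted) record
```

Attached to a handler or logger via `addFilter()`, the filter rewrites each record in place, so downstream handlers only ever see the masked value.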

6. Package List:

RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:

Source:
ovirt-engine-4.5.3.2-1.el8ev.src.rpm
ovirt-engine-dwh-4.5.7-1.el8ev.src.rpm
ovirt-engine-ui-extensions-1.3.6-1.el8ev.src.rpm
ovirt-web-ui-1.9.2-1.el8ev.src.rpm

noarch:
ovirt-engine-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-backend-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-dbscripts-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-dwh-4.5.7-1.el8ev.noarch.rpm
ovirt-engine-dwh-grafana-integration-setup-4.5.7-1.el8ev.noarch.rpm
ovirt-engine-dwh-setup-4.5.7-1.el8ev.noarch.rpm
ovirt-engine-health-check-bundler-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-restapi-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-base-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-cinderlib-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-imageio-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-ovirt-engine-common-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-setup-plugin-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-tools-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-tools-backup-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-ui-extensions-1.3.6-1.el8ev.noarch.rpm
ovirt-engine-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-webadmin-portal-4.5.3.2-1.el8ev.noarch.rpm
ovirt-engine-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm
ovirt-web-ui-1.9.2-1.el8ev.noarch.rpm
python3-ovirt-engine-lib-4.5.3.2-1.el8ev.noarch.rpm
rhvm-4.5.3.2-1.el8ev.noarch.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

7. References:

https://access.redhat.com/security/cve/CVE-2022-0155
https://access.redhat.com/security/cve/CVE-2022-2805
https://access.redhat.com/security/updates/classification/#moderate

8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBY3UyLtzjgjWX9erEAQjacQ//emo9BwMrctxmlrqBwa5vAlrr2Kt3ZVCY
hAHTbaUk+sXw9JxGeCZ/aD8/c6ij5oCprdMs4sOGmOfTHEkmj+GbPWfdEluoJvr0
PM001KBuucWC6YDaW/R3V20oZrqdRAlPX7yvTzxuNNlpnpmGx/UkAwB2GSechs91
kXp+E74e1RgOgbFRtzZcgfwCb0Df2Swi2vXdnPDfri5fRVztgwcrIcljLoTBkMy7
8M719eYwsuu1987MqSnIvBOHEj2oWN2IQJTaeNPoz3MqgvYKwqEdiozchJpWvXqi
WddEaLT8S+1WhDf4VCIkdtIZrww/Ya2BxoFoEroCr7jTSDy9c9aFcnjn4wqnhO9s
yqKfxpTWz9mpgTdHHT4FC06L9AUsxa/UaLKydO3tZhc+IjPH0O63SDBi/pZ5WVAH
oCmYtRJA2OYlQABpHXR2x7Pj2Jv7JRNWHjGnabxWVoY6E09vdIrPliz0taPI59s7
YvNtXhkWPIa3w5kyibIxTVLqjR4gr2zrpPa2Oc6QGvEP9zyu59bAxoXKSQj0SYM8
BFykrVd3ahlPGFqOl6UBdvPJpXpJtNXK3lJBCGu2glFSwPXX26ij2fLUW3b7DnUC
+xMPlL9m45KHx/Y7s4WnDvlvSNRjhy/Ttddgm/JwYOLxlzTWd1Qez/vfyDuIK7rk
QvQket8bo7Q=
=xS+k
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0429",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "follow-redirects",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "follow redirects",
        "version": "1.14.7"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "follow-redirects",
        "scope": null,
        "trust": 0.8,
        "vendor": "follow redirects",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "166309"
      },
      {
        "db": "PACKETSTORM",
        "id": "166812"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166204"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "166970"
      },
      {
        "db": "PACKETSTORM",
        "id": "169919"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2022-0155",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2022-0155",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.5,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-0155",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "security@huntr.dev",
            "availabilityImpact": "HIGH",
            "baseScore": 8.0,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.1,
            "id": "CVE-2022-0155",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H",
            "version": "3.0"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.5,
            "baseSeverity": "Medium",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-0155",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-0155",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "security@huntr.dev",
            "id": "CVE-2022-0155",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-0155",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202201-685",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-0155",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor. Bugs fixed (https://bugzilla.redhat.com/):\n\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2032128 - Observability - dashboard name contains `/` would cause error when generating dashboard cm\n2033051 - ACM application placement fails after renaming the application name\n2039197 - disable the obs metric collect should not impact the managed cluster upgrade\n2039820 - Observability - cluster list should only contain OCP311 cluster on OCP311 dashboard\n2042223 - the value of name label changed from clusterclaim name to cluster name\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048500 - VMWare Cluster creation does not accept ecdsa-sha2-nistp521 ssh keys\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature  authenticated user can obtain the privileges of the System account\n2053211 - clusterSelector matchLabels spec are cleared when changing app name/namespace during creating an app in UI\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053279 - Application cluster status is not updated in UI after restoring\n2056610 - OpenStack cluster creation is using deprecated floating IP config for 4.7+\n2057249 - RHACM 2.4.3 images\n2059039 - The value of Vendor reported by cluster metrics was Other even if the vendor label in managedcluster was Openshift\n2059954 - Subscriptions stop reconciling after channel secrets are 
recreated\n2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path\n2074156 - Placementrule is not reconciling on a new fresh environment\n2074543 - The cluster claimed from clusterpool can not auto imported\n\n5. Summary:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.6 General\nAvailability\nrelease images, which provide security updates and bug fixes. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity updates:\n\n* Nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* Nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* Follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\nBug fixes:\n\n* Inform ACM policy is not checking properly the node fields (BZ# 2015588)\n\n* ImagePullPolicy is \"Always\" for multicluster-operators-subscription-rhel8\nimage (BZ# 2021128)\n\n* Traceback blocks reconciliation of helm repository hosted on AWS S3\nstorage (BZ# 2021576)\n\n* RHACM 2.3.6 images (BZ# 2029507)\n\n* Console UI enabled SNO UI Options not displayed during cluster creating\n(BZ# 2030002)\n\n* Grc pod restarts for each new GET request to the Governance Policy Page\n(BZ# 2037351)\n\n* Clustersets do not appear in UI (BZ# 2049810)\n\n3. 
Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n2015588 - Inform ACM policy is not checking properly the node fields\n2021128 - imagePullPolicy is \"Always\" for multicluster-operators-subscription-rhel8 image\n2021576 - traceback blocks reconciliation of helm repository hosted on AWS S3 storage\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2029507 - RHACM 2.3.6 images\n2030002 - Console UI enabled SNO UI Options not displayed during cluster creating\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2037351 - grc pod restarts for each new GET request to the Governance Policy Page\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2049810 - Clustersets do not appear in UI\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.3] bug fix and security update\nAdvisory ID:       RHSA-2022:8502-01\nProduct:           Red Hat Virtualization\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:8502\nIssue date:        2022-11-16\nCVE Names:         CVE-2022-0155 CVE-2022-2805\n====================================================================\n1. Summary:\n\nUpdated ovirt-engine packages that fix several bugs and add various\nenhancements are now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. 
Relevant releases/architectures:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch\n\n3. Description:\n\nThe ovirt-engine package provides the Red Hat Virtualization Manager, a\ncentralized management platform that allows system administrators to view\nand manage virtual machines. The Manager provides a comprehensive range of\nfeatures including search capabilities, resource management, live\nmigrations, and virtual infrastructure provisioning. \n\nBug Fix(es):\n\n* Ghost OVFs are written when using floating SD to migrate VMs between 2\nRHV environments. (BZ#1705338)\n\n* RHV engine is reporting a delete disk with wipe as completing\nsuccessfully when it actually fails from a timeout. (BZ#1836318)\n\n* [DR] Failover / Failback HA VM Fails to be started due to \u0027VM XXX is\nbeing imported\u0027 (BZ#1968433)\n\n* Virtual Machine with lease fails to run on DR failover (BZ#1974535)\n\n* Disk is missing after importing VM from Storage Domain that was detached\nfrom another DC. (BZ#1983567)\n\n* Unable to switch RHV host into maintenance mode as there are image\ntransfer in progress (BZ#2123141)\n\n* not able to import disk in 4.5.2 (BZ#2134549)\n\nEnhancement(s):\n\n* [RFE] Show last events for user VMs (BZ#1886211)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1705338 - Ghost OVFs are written when using floating SD to migrate VMs between 2 RHV environments. \n1836318 - RHV engine is reporting a delete disk with wipe as completing successfully when it actually fails from a timeout. 
\n1886211 - [RFE] Show last events for user VMs\n1968433 - [DR] Failover / Failback HA VM Fails to be started due to \u0027VM XXX is being imported\u0027\n1974535 - Virtual Machine with lease fails to run on DR failover\n1983567 - Disk is missing after importing VM from Storage Domain that was detached from another DC. \n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2079545 - CVE-2022-2805 ovirt-engine: RHVM admin password is logged unfiltered when using otopi-style\n2118672 - Use rpm instead of auto in package_facts ansible module to prevent mistakes of determining the correct package manager inside package_facts module\n2123141 - Unable to switch RHV host into maintenance mode as there are image transfer in progress\n2127836 - Create template dialog is not closed when clicking in OK and the template is not created\n2134549 - not able to import disk in 4.5.2\n2137207 - The RemoveDisk job finishes before the disk was removed from the DB\n\n6. 
Package List:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:\n\nSource:\novirt-engine-4.5.3.2-1.el8ev.src.rpm\novirt-engine-dwh-4.5.7-1.el8ev.src.rpm\novirt-engine-ui-extensions-1.3.6-1.el8ev.src.rpm\novirt-web-ui-1.9.2-1.el8ev.src.rpm\n\nnoarch:\novirt-engine-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-backend-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-dbscripts-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-dwh-4.5.7-1.el8ev.noarch.rpm\novirt-engine-dwh-grafana-integration-setup-4.5.7-1.el8ev.noarch.rpm\novirt-engine-dwh-setup-4.5.7-1.el8ev.noarch.rpm\novirt-engine-health-check-bundler-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-restapi-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-base-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-cinderlib-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-imageio-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-common-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-tools-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-tools-backup-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-ui-extensions-1.3.6-1.el8ev.noarch.rpm\novirt-engine-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-webadmin-portal-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm\novirt-web-ui-1.9.2-1.el8ev.noarch.rpm\npython3-ovirt-engine-lib-4.5.3.2-1.el8ev.noarch.rpm\nrhvm-4.5.3.2-1.el8ev.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-0155\nhttps://access.redhat.com/security/cve/CVE-2022-2805\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY3UyLtzjgjWX9erEAQjacQ//emo9BwMrctxmlrqBwa5vAlrr2Kt3ZVCY\nhAHTbaUk+sXw9JxGeCZ/aD8/c6ij5oCprdMs4sOGmOfTHEkmj+GbPWfdEluoJvr0\nPM001KBuucWC6YDaW/R3V20oZrqdRAlPX7yvTzxuNNlpnpmGx/UkAwB2GSechs91\nkXp+E74e1RgOgbFRtzZcgfwCb0Df2Swi2vXdnPDfri5fRVztgwcrIcljLoTBkMy7\n8M719eYwsuu1987MqSnIvBOHEj2oWN2IQJTaeNPoz3MqgvYKwqEdiozchJpWvXqi\nWddEaLT8S+1WhDf4VCIkdtIZrww/Ya2BxoFoEroCr7jTSDy9c9aFcnjn4wqnhO9s\nyqKfxpTWz9mpgTdHHT4FC06L9AUsxa/UaLKydO3tZhc+IjPH0O63SDBi/pZ5WVAH\noCmYtRJA2OYlQABpHXR2x7Pj2Jv7JRNWHjGnabxWVoY6E09vdIrPliz0taPI59s7\nYvNtXhkWPIa3w5kyibIxTVLqjR4gr2zrpPa2Oc6QGvEP9zyu59bAxoXKSQj0SYM8\nBFykrVd3ahlPGFqOl6UBdvPJpXpJtNXK3lJBCGu2glFSwPXX26ij2fLUW3b7DnUC\n+xMPlL9m45KHx/Y7s4WnDvlvSNRjhy/Ttddgm/JwYOLxlzTWd1Qez/vfyDuIK7rk\nQvQket8bo7Q=xS+k\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "db": "PACKETSTORM",
        "id": "166309"
      },
      {
        "db": "PACKETSTORM",
        "id": "166812"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166204"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "166970"
      },
      {
        "db": "PACKETSTORM",
        "id": "169919"
      }
    ],
    "trust": 2.34
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-0155",
        "trust": 4.0
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166812",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166516",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166204",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166946",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166970",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169919",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5020",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1071",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5790",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5990",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3482",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071510",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022032840",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0155",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166309",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "PACKETSTORM",
        "id": "166309"
      },
      {
        "db": "PACKETSTORM",
        "id": "166812"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166204"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "166970"
      },
      {
        "db": "PACKETSTORM",
        "id": "169919"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "id": "VAR-202201-0429",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-23T19:43:12.977000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Drop\u00a0Cookie\u00a0header\u00a0across\u00a0domains. Siemens Siemens\u00a0Security\u00a0Advisory",
        "trust": 0.8,
        "url": "https://github.com/follow-redirects/follow-redirects/commit/8b347cbcef7c7b72a6e9be20f5710c17d6163c22"
      },
      {
        "title": "Follow Redirects Security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=178984"
      },
      {
        "title": "Red Hat: Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.3] bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228502 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.10 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221715 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.4 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221681 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.3.6 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220595 - Security Advisory"
      },
      {
        "title": "IBM: Security Bulletin: IBM Security QRadar Analyst Workflow app for IBM QRadar SIEM is vulnerable to using components with known vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=e84bc00c9f55b86e956036a09317820b"
      },
      {
        "title": "IBM: Security Bulletin: IBM Security QRadar Analyst Workflow app for IBM QRadar SIEM is vulnerable to using components with known vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=2f42526bdbba457e2271ed17ea2e3e9a"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.8 security and container updates",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221083 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.3 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221476 - Security Advisory"
      },
      {
        "title": "IBM: Security Bulletin: IBM QRadar Assistant app for IBM QRadar SIEM includes components with multiple known vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=0c5e20c044e4005143b2303b28407553"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.2.11 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220856 - Security Advisory"
      },
      {
        "title": "IBM: Security Bulletin: Netcool Operations Insight v1.6.6 contains fixes for multiple security vulnerabilities.",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=68c6989b84f14aaac220c13b754c7702"
      },
      {
        "title": "ioBroker.switchbot-ble",
        "trust": 0.1,
        "url": "https://github.com/mrbungle64/ioBroker.switchbot-ble "
      },
      {
        "title": "node-red-contrib-ecovacs-deebot",
        "trust": 0.1,
        "url": "https://github.com/mrbungle64/node-red-contrib-ecovacs-deebot "
      },
      {
        "title": "ioBroker.ecovacs-deebot",
        "trust": 0.1,
        "url": "https://github.com/mrbungle64/ioBroker.ecovacs-deebot "
      },
      {
        "title": "ecovacs-deebot.js",
        "trust": 0.1,
        "url": "https://github.com/mrbungle64/ecovacs-deebot.js "
      },
      {
        "title": "ioBroker.e3dc-rscp",
        "trust": 0.1,
        "url": "https://github.com/git-kick/ioBroker.e3dc-rscp "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-359",
        "trust": 1.0
      },
      {
        "problemtype": "Disclosure of Personal Information to Unauthorized Actors (CWE-359) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155"
      },
      {
        "trust": 1.7,
        "url": "https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/follow-redirects/follow-redirects/commit/8b347cbcef7c7b72a6e9be20f5710c17d6163c22"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2022-0155"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071510"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166970/red-hat-security-advisory-2022-1715-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/node-js-follow-redirects-information-disclosure-via-cookie-header-38829"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/ibm-security-qradar-siem-information-disclosure-39657"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1071"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169919/red-hat-security-advisory-2022-8502-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166812/red-hat-security-advisory-2022-1476-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166516/red-hat-security-advisory-2022-1083-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5020"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5790"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3482"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5990"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022032840"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166946/red-hat-security-advisory-2022-1681-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166204/red-hat-security-advisory-2022-0595-02.html"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-0536"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-22942"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-0330"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-0920"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-23566"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-43565"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0185"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4122"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4155"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4019"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4192"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3984"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4193"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3872"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0413"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25236"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22827"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0392"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22824"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3999"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23308"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0516"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-45960"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-46143"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0361"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0261"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22826"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22825"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0359"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0435"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0492"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22822"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0144"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0318"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-22823"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24450"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25315"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23218"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0847"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25235"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21803"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24785"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24723"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1154"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25636"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-25636"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4028"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4115"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24723"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4115"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21803"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0613"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0613"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/359.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/mrbungle64/iobroker.switchbot-ble"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.1,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-qradar-analyst-workflow-app-for-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-2/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3200"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23434"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27645"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33574"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3564"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-35942"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25710"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3572"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23434"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3800"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3445"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0856"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25214"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0465"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22876"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3752"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25709"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28153"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3573"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25214"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3426"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39241"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0811"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1476"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0811"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1083"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4034"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4034"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42739"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-25704"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3872"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4192"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-20612"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42739"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25704"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42574"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0185"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4193"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4122"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36322"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-20612"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-20617"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4019"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-20617"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36322"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24773"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1365"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24772"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1365"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24772"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23555"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23555"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24773"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4083"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0711"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0711"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1715"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2805"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:8502"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2805"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "PACKETSTORM",
        "id": "166309"
      },
      {
        "db": "PACKETSTORM",
        "id": "166812"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166204"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "166970"
      },
      {
        "db": "PACKETSTORM",
        "id": "169919"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "db": "PACKETSTORM",
        "id": "166309"
      },
      {
        "db": "PACKETSTORM",
        "id": "166812"
      },
      {
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "db": "PACKETSTORM",
        "id": "166204"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "166970"
      },
      {
        "db": "PACKETSTORM",
        "id": "169919"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-01-10T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "date": "2023-02-10T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "date": "2022-03-15T15:44:21",
        "db": "PACKETSTORM",
        "id": "166309"
      },
      {
        "date": "2022-04-21T15:12:25",
        "db": "PACKETSTORM",
        "id": "166812"
      },
      {
        "date": "2022-03-29T15:53:19",
        "db": "PACKETSTORM",
        "id": "166516"
      },
      {
        "date": "2022-03-04T16:17:56",
        "db": "PACKETSTORM",
        "id": "166204"
      },
      {
        "date": "2022-05-04T05:42:06",
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "date": "2022-05-05T17:33:41",
        "db": "PACKETSTORM",
        "id": "166970"
      },
      {
        "date": "2022-11-17T13:22:54",
        "db": "PACKETSTORM",
        "id": "169919"
      },
      {
        "date": "2022-01-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      },
      {
        "date": "2022-01-10T20:15:08.177000",
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-10-28T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0155"
      },
      {
        "date": "2023-02-10T07:20:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      },
      {
        "date": "2022-11-18T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      },
      {
        "date": "2024-11-21T06:38:01.143000",
        "db": "NVD",
        "id": "CVE-2022-0155"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "follow-redirects\u00a0 Personal Information Disclosure Vulnerability to Unauthorized Actors in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003215"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-685"
      }
    ],
    "trust": 0.6
  }
}

var-202312-0208
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). Affected software does not correctly validate the response received by an UMC server. An attacker can use this to crash the affected software by providing and configuring a malicious UMC server or by manipulating the traffic from a legitimate UMC server (i.e. leveraging CVE-2023-48427).
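The entry below classifies this as CWE-754 (improper check for unusual or exceptional conditions): the client trusts whatever the UMC server returns. A minimal, hypothetical sketch of the defensive pattern — not SINEC INS code; the field names and size bound are assumptions for illustration:

```python
import json

# Hypothetical sketch: validate a server response's size, encoding, and shape
# before using it, so a malicious or manipulated response is rejected cleanly
# instead of crashing the client.
REQUIRED_FIELDS = {"status", "session_id"}  # assumed field names
MAX_RESPONSE_BYTES = 64 * 1024              # assumed upper bound

class InvalidResponse(Exception):
    pass

def parse_umc_response(raw: bytes) -> dict:
    """Reject oversized, non-JSON, or structurally wrong responses."""
    if len(raw) > MAX_RESPONSE_BYTES:
        raise InvalidResponse("response too large")
    try:
        obj = json.loads(raw)
    except (ValueError, UnicodeDecodeError):
        raise InvalidResponse("response is not valid JSON")
    if not isinstance(obj, dict) or not REQUIRED_FIELDS.issubset(obj):
        raise InvalidResponse("response missing required fields")
    return obj
```

The key point is failing closed: every unexpected condition maps to one controlled exception the caller can handle, rather than propagating into a crash.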

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0208",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  },
  "cve": "CVE-2023-48431",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.6,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2023-48431",
            "impactScore": 4.0,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "HIGH",
            "baseScore": 6.8,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.2,
            "id": "CVE-2023-48431",
            "impactScore": 4.0,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:N/I:N/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2023-48431",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2023-48431",
            "trust": 1.0,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected software does not correctly validate the response received by an UMC server. An attacker can use this to crash the affected software by providing and configuring a malicious UMC server or by manipulating the traffic from a legitimate UMC server (i.e. leveraging CVE-2023-48427).",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ],
    "trust": 1.0
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "SIEMENS",
        "id": "SSA-077170",
        "trust": 1.0
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48431",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  },
  "id": "VAR-202312-0208",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:59:56.343000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-754",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.0,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-12-12T12:15:15.777000",
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-12-14T19:37:00.257000",
        "db": "NVD",
        "id": "CVE-2023-48431"
      }
    ]
  }
}

var-202312-0209
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The Web UI of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the server. The server will automatically restart.
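The root cause here is a missing length check on request parameters (CWE-754/CWE-394 in the entry below). A minimal sketch of the mitigation pattern, with an assumed limit — the product's actual bound and handler code are not published:

```python
MAX_PARAM_LEN = 256  # assumed limit for illustration only

def validate_params(params: dict) -> None:
    """Reject requests whose parameter values exceed a fixed length bound,
    before they reach any parsing or processing code."""
    for name, value in params.items():
        if len(str(value)) > MAX_PARAM_LEN:
            raise ValueError(
                f"parameter {name!r} exceeds {MAX_PARAM_LEN} characters"
            )
```

Enforcing the bound at the request boundary turns an oversized parameter into a rejected request instead of a server crash and restart.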

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0209",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  },
  "cve": "CVE-2023-48429",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "LOW",
            "baseScore": 2.7,
            "baseSeverity": "LOW",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 1.2,
            "id": "CVE-2023-48429",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2023-48429",
            "trust": 1.0,
            "value": "LOW"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The Web UI of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the server. The server will automatically restart.",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ],
    "trust": 1.0
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "SIEMENS",
        "id": "SSA-077170",
        "trust": 1.0
      },
      {
        "db": "NVD",
        "id": "CVE-2023-48429",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  },
  "id": "VAR-202312-0209",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:52:11.300000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-394",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-754",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.0,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-12-12T12:15:15.083000",
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-12-14T19:37:51.017000",
        "db": "NVD",
        "id": "CVE-2023-48429"
      }
    ]
  }
}

var-202201-1080
Vulnerability from variot

There is a carry propagation bug in the MIPS32 and MIPS64 squaring procedure. Many EC algorithms are affected, including some of the TLS 1.3 default curves. Impact was not analyzed in detail, because the pre-requisites for attack are considered unlikely and include reusing private keys. Analysis suggests that attacks against RSA and DSA as a result of this defect would be very difficult to perform and are not believed likely. Attacks against DH are considered just feasible (although very difficult) because most of the work necessary to deduce information about a private key may be performed offline. The amount of resources required for such an attack would be significant. However, for an attack on TLS to be meaningful, the server would have to share the DH private key among multiple clients, which is no longer an option since CVE-2016-0701. This issue affects OpenSSL versions 1.0.2, 1.1.1 and 3.0.0. It was addressed in the releases of 1.1.1m and 3.0.1 on the 15th of December 2021. For the 1.0.2 release it is addressed in git commit 6fc1aaaf3 that is available to premium support customers only. It will be made available in 1.0.2zc when it is released. The issue only affects OpenSSL on MIPS platforms. Fixed in OpenSSL 3.0.1 (Affected 3.0.0). Fixed in OpenSSL 1.1.1m (Affected 1.1.1-1.1.1l). Fixed in OpenSSL 1.0.2zc-dev (Affected 1.0.2-1.0.2zb).

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202210-02


                                       https://security.gentoo.org/

 Severity: Normal
    Title: OpenSSL: Multiple Vulnerabilities
     Date: October 16, 2022
     Bugs: #741570, #809980, #832339, #835343, #842489, #856592
       ID: 202210-02


Synopsis

Multiple vulnerabilities have been discovered in OpenSSL, the worst of which could result in denial of service.

Background

OpenSSL is an Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general purpose cryptography library.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

  1  dev-libs/openssl            \u003c 1.1.1q                  \u003e= 1.1.1q

Description

Multiple vulnerabilities have been discovered in OpenSSL. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All OpenSSL users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=dev-libs/openssl-1.1.1q"

References

[ 1 ] CVE-2020-1968
      https://nvd.nist.gov/vuln/detail/CVE-2020-1968
[ 2 ] CVE-2021-3711
      https://nvd.nist.gov/vuln/detail/CVE-2021-3711
[ 3 ] CVE-2021-3712
      https://nvd.nist.gov/vuln/detail/CVE-2021-3712
[ 4 ] CVE-2021-4160
      https://nvd.nist.gov/vuln/detail/CVE-2021-4160
[ 5 ] CVE-2022-0778
      https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 6 ] CVE-2022-1292
      https://nvd.nist.gov/vuln/detail/CVE-2022-1292
[ 7 ] CVE-2022-1473
      https://nvd.nist.gov/vuln/detail/CVE-2022-1473
[ 8 ] CVE-2022-2097
      https://nvd.nist.gov/vuln/detail/CVE-2022-2097

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202210-02

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5 .

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512


Debian Security Advisory DSA-5103-1                   security@debian.org
https://www.debian.org/security/                     Salvatore Bonaccorso
March 15, 2022                        https://www.debian.org/security/faq


Package        : openssl
CVE ID         : CVE-2021-4160 CVE-2022-0778
Debian Bug     : 989604

Tavis Ormandy discovered that the BN_mod_sqrt() function of OpenSSL could be tricked into an infinite loop. This could result in denial of service via malformed certificates.

For the oldstable distribution (buster), this problem has been fixed in version 1.1.1d-0+deb10u8.

For the stable distribution (bullseye), this problem has been fixed in version 1.1.1k-1+deb11u2.

For the detailed security status of openssl please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssl

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

-----BEGIN PGP SIGNATURE-----

iQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmIwxQtfFIAAAAAALgAo
aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2
NDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND
z0R2qw//c0GbzcbXlLfibf7Nki5CMJUdWqx1si8O2uQ1vKxgC07rCAx1Lrw0TtIl
Tq1vYRtSbvy8P4Qn3E6/lbSYTnM7JbkriZ1HS3Mw4VFlOBA8lWMif4KotrcMAoYE
IOQlhhTCkKZM8cL4YKDwN7XSy5LSdt/sw5rIi1ZpgVTEXQeKIDPa5WK6YyIGNG6k
h83TPYZp+8e3Fuoubb8RY5CUfFomdMHRazHcrCkjY+yvFTFdKbUza9RjUs44xu2Z
ZUTfIddR8D8mWfKOyvAVMw0A7/zjFW1IX0vC0RhHwjrulLgJbqWvcYQgEJy/wOKd
tWjVwGya7+Fxn6GFL0rHZP/OFq9mDwxyBDfDg/hD+TSnbxtyHIxUH4QoWdPPgJxP
ahln2TNfsnQsCopdn9dJ/XOrkC35R7Jp11kmX8MCTP6k8ob4mdQIACcRND/jcPgT
tOBoUBCrha98Qvdh6UAGegTxqOBaNhG52fpNjEegq/q7kxlugdOtbY1nZXvuHHI5
C9Gd6e4JqpRlMDuT7rC8qchXJM8VnhWdVdz95gkeQCA21+AGJ+CEvTpSRPY6qCrM
rUvS3HVrBFNLWNlsA68or3y8CfxjFbpXnSxflCmoBtmAp6z9TXm59Fu7N6Qqkpom
yV0hQAqqeFa9u3NZKoNrj/FGWYXZ+zMt+jifRLokuB0IhFUOJ70=
=SB84
-----END PGP SIGNATURE-----

. If that applies then:

OpenSSL 1.0.2 users should apply git commit 6fc1aaaf3 (premium support customers only)
OpenSSL 1.1.1 users should upgrade to 1.1.1m
OpenSSL 3.0.0 users should upgrade to 3.0.1
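The affected ranges quoted in this entry (1.0.2-1.0.2zb, 1.1.1-1.1.1l, 3.0.0) use OpenSSL's letter-suffix versioning, where an empty suffix precedes "a" and "z" is followed by "za", "zb", and so on. A small sketch of checking a version string against those ranges — a helper written for this entry, not part of any OpenSSL tooling:

```python
import re

def _parse(v):
    """Turn an OpenSSL version string like '1.1.1l' into a sortable key.
    Suffix ordering: '' < 'a' < ... < 'z' < 'za' < 'zb', so sort by
    (length, string) for the suffix part."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)([a-z]*)", v)
    if not m:
        raise ValueError(f"unrecognised OpenSSL version: {v}")
    major, minor, patch, suffix = m.groups()
    return (int(major), int(minor), int(patch), len(suffix), suffix)

# Inclusive (low, high) ranges, per the advisory text above.
AFFECTED = [
    ("1.0.2", "1.0.2zb"),
    ("1.1.1", "1.1.1l"),
    ("3.0.0", "3.0.0"),
]

def is_affected(version: str) -> bool:
    key = _parse(version)
    return any(_parse(lo) <= key <= _parse(hi) for lo, hi in AFFECTED)
```

For example, `is_affected("1.1.1l")` is true while `is_affected("1.1.1m")` is false, matching the fixed releases listed above.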

This issue was found on the 10th of December 2021 and subsequently fixed by Bernd Edlinger.

Note

OpenSSL 1.0.2 is out of support and no longer receiving public updates.

References

URL for this Security Advisory: https://www.openssl.org/news/secadv/20220128.txt

Note: the online version of the advisory may be updated with additional details over time.

For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202201-1080",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "jd edwards enterpriseone tools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.2.6.3"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "openssl",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2zb"
      },
      {
        "model": "health sciences inform publisher",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "6.3.1.1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "enterprise manager ops center",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "12.4.0.0"
      },
      {
        "model": "jd edwards world security",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "a9.4"
      },
      {
        "model": "openssl",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1m"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "health sciences inform publisher",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "6.2.1.1"
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.0.2"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "openssl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "1.1.1"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "9.0"
      },
      {
        "model": "openssl",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "openssl",
        "version": "3.0.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens reported these vulnerabilities to CISA.",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2021-4160",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.3,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-4160",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.9,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.2,
            "id": "CVE-2021-4160",
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-4160",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202201-2650",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-4160",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "There is a carry propagation bug in the MIPS32 and MIPS64 squaring procedure. Many EC algorithms are affected, including some of the TLS 1.3 default curves. Impact was not analyzed in detail, because the pre-requisites for attack are considered unlikely and include reusing private keys. Analysis suggests that attacks against RSA and DSA as a result of this defect would be very difficult to perform and are not believed likely. Attacks against DH are considered just feasible (although very difficult) because most of the work necessary to deduce information about a private key may be performed offline. The amount of resources required for such an attack would be significant. However, for an attack on TLS to be meaningful, the server would have to share the DH private key among multiple clients, which is no longer an option since CVE-2016-0701. This issue affects OpenSSL versions 1.0.2, 1.1.1 and 3.0.0. It was addressed in the releases of 1.1.1m and 3.0.1 on the 15th of December 2021. For the 1.0.2 release it is addressed in git commit 6fc1aaaf3 that is available to premium support customers only. It will be made available in 1.0.2zc when it is released. The issue only affects OpenSSL on MIPS platforms. Fixed in OpenSSL 3.0.1 (Affected 3.0.0). Fixed in OpenSSL 1.1.1m (Affected 1.1.1-1.1.1l). Fixed in OpenSSL 1.0.2zc-dev (Affected 1.0.2-1.0.2zb). 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202210-02\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n    Title: OpenSSL: Multiple Vulnerabilities\n     Date: October 16, 2022\n     Bugs: #741570, #809980, #832339, #835343, #842489, #856592\n       ID: 202210-02\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in OpenSSL, the worst of\nwhich could result in denial of service. \n\nBackground\n==========\n\nOpenSSL is an Open Source toolkit implementing the Secure Sockets Layer\n(SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general\npurpose cryptography library. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  dev-libs/openssl           \u003c 1.1.1q                    \u003e= 1.1.1q\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in OpenSSL. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll OpenSSL users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=dev-libs/openssl-1.1.1q\"\n\nReferences\n==========\n\n[ 1 ] CVE-2020-1968\n      https://nvd.nist.gov/vuln/detail/CVE-2020-1968\n[ 2 ] CVE-2021-3711\n      https://nvd.nist.gov/vuln/detail/CVE-2021-3711\n[ 3 ] CVE-2021-3712\n      https://nvd.nist.gov/vuln/detail/CVE-2021-3712\n[ 4 ] CVE-2021-4160\n      https://nvd.nist.gov/vuln/detail/CVE-2021-4160\n[ 5 ] CVE-2022-0778\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 6 ] CVE-2022-1292\n      https://nvd.nist.gov/vuln/detail/CVE-2022-1292\n[ 7 ] CVE-2022-1473\n      https://nvd.nist.gov/vuln/detail/CVE-2022-1473\n[ 8 ] CVE-2022-2097\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2097\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-02\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5103-1                   security@debian.org\nhttps://www.debian.org/security/                     Salvatore Bonaccorso\nMarch 15, 2022                        https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : openssl\nCVE ID         : CVE-2021-4160 CVE-2022-0778\nDebian Bug     : 989604\n\nTavis Ormandy discovered that the BN_mod_sqrt() function of OpenSSL\ncould be tricked into an infinite loop. This could result in denial of\nservice via malformed certificates. \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1.1.1d-0+deb10u8. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1.1.1k-1+deb11u2. \n\nFor the detailed security status of openssl please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/openssl\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP 
SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmIwxQtfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0R2qw//c0GbzcbXlLfibf7Nki5CMJUdWqx1si8O2uQ1vKxgC07rCAx1Lrw0TtIl\nTq1vYRtSbvy8P4Qn3E6/lbSYTnM7JbkriZ1HS3Mw4VFlOBA8lWMif4KotrcMAoYE\nIOQlhhTCkKZM8cL4YKDwN7XSy5LSdt/sw5rIi1ZpgVTEXQeKIDPa5WK6YyIGNG6k\nh83TPYZp+8e3Fuoubb8RY5CUfFomdMHRazHcrCkjY+yvFTFdKbUza9RjUs44xu2Z\nZUTfIddR8D8mWfKOyvAVMw0A7/zjFW1IX0vC0RhHwjrulLgJbqWvcYQgEJy/wOKd\ntWjVwGya7+Fxn6GFL0rHZP/OFq9mDwxyBDfDg/hD+TSnbxtyHIxUH4QoWdPPgJxP\nahln2TNfsnQsCopdn9dJ/XOrkC35R7Jp11kmX8MCTP6k8ob4mdQIACcRND/jcPgT\ntOBoUBCrha98Qvdh6UAGegTxqOBaNhG52fpNjEegq/q7kxlugdOtbY1nZXvuHHI5\nC9Gd6e4JqpRlMDuT7rC8qchXJM8VnhWdVdz95gkeQCA21+AGJ+CEvTpSRPY6qCrM\nrUvS3HVrBFNLWNlsA68or3y8CfxjFbpXnSxflCmoBtmAp6z9TXm59Fu7N6Qqkpom\nyV0hQAqqeFa9u3NZKoNrj/FGWYXZ+zMt+jifRLokuB0IhFUOJ70=\n=SB84\n-----END PGP SIGNATURE-----\n. If that applies then:\n\nOpenSSL 1.0.2 users should apply git commit 6fc1aaaf3 (premium support\ncustomers only)\nOpenSSL 1.1.1 users should upgrade to 1.1.1m\nOpenSSL 3.0.0 users should upgrade to 3.0.1\n\nThis issue was found on the 10th of December 2021 and subsequently fixed\nby Bernd Edlinger. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20220128.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. \n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-4160"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "db": "PACKETSTORM",
        "id": "168714"
      },
      {
        "db": "PACKETSTORM",
        "id": "169298"
      },
      {
        "db": "PACKETSTORM",
        "id": "169638"
      }
    ],
    "trust": 1.26
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-4160",
        "trust": 2.0
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168714",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2022062021",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022012811",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022060710",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031611",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022042517",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022051735",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2512",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2191",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2417",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-4160",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169298",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169638",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "db": "PACKETSTORM",
        "id": "168714"
      },
      {
        "db": "PACKETSTORM",
        "id": "169298"
      },
      {
        "db": "PACKETSTORM",
        "id": "169638"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "id": "VAR-202201-1080",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-23T19:57:37.228000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "OpenSSL Fixes for encryption problem vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=180884"
      },
      {
        "title": "Debian Security Advisories: DSA-5103-1 openssl -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=4ecbdda56426ff105b6a2939daf5c4e7"
      },
      {
        "title": "Red Hat: CVE-2021-4160",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2021-4160"
      },
      {
        "title": "IBM: Security Bulletin: IBM Sterling Control Center vulnerable to multiple issues to due IBM Cognos Analystics (CVE-2022-4160, CVE-2021-3733)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=9d831a6a306a903e583b6a76777d1085"
      },
      {
        "title": "IBM: Security Bulletin: Vulnerabilities in OpenSSL affect IBM Spectrum Protect Plus SQL, File Indexing, and Windows Host agents",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=316fcbda8419e3988baf55ecd43960a6"
      },
      {
        "title": "IBM: Security Bulletin: IBM Cognos Analytics has addressed multiple vulnerabilities (CVE-2022-34339, CVE-2021-3712, CVE-2021-3711, CVE-2021-4160, CVE-2021-29425, CVE-2021-3733, CVE-2021-3737, CVE-2022-0391, CVE-2021-43138, CVE-2022-24758)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=cbece86f0c3bef5a678f2bb3dbbb854b"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/actions-marketplace-validations/neuvector_scan-action "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/neuvector/scan-action "
      },
      {
        "title": "nodejs-helloworld",
        "trust": 0.1,
        "url": "https://github.com/andrewd-sysdig/nodejs-helloworld "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-noinfo",
        "trust": 1.0
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.3,
        "url": "https://www.oracle.com/security-alerts/cpuapr2022.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.openssl.org/news/secadv/20220128.txt"
      },
      {
        "trust": 1.8,
        "url": "https://www.debian.org/security/2022/dsa-5103"
      },
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202210-02"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.0,
        "url": "https://security.netapp.com/advisory/ntap-20240621-0006/"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=6fc1aaaf303185aa5e483e06bdfae16daa9193a7"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=e9e726506cd2a3fd9c0f12daf8cc1fe934c7dddb"
      },
      {
        "trust": 1.0,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=3bf7b73ea7123045b8f972badc67ed6878e6c37f"
      },
      {
        "trust": 0.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4160"
      },
      {
        "trust": 0.7,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=6fc1aaaf303185aa5e483e06bdfae16daa9193a7"
      },
      {
        "trust": 0.7,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=3bf7b73ea7123045b8f972badc67ed6878e6c37f"
      },
      {
        "trust": 0.7,
        "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=e9e726506cd2a3fd9c0f12daf8cc1fe934c7dddb"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022051735"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2417"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-4160"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022060710"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/openssl-weak-encryption-via-mips-bn-mod-exp-37400"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2191"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022012811"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022042517"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031611"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022062021"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168714/gentoo-linux-security-advisory-202210-02.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2512"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/actions-marketplace-validations/neuvector_scan-action"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1968"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1473"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/openssl"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/news/secadv/20220315.txt"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/support/contracts.html"
      },
      {
        "trust": 0.1,
        "url": "https://www.openssl.org/policies/secpolicy.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-0701"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "db": "PACKETSTORM",
        "id": "168714"
      },
      {
        "db": "PACKETSTORM",
        "id": "169298"
      },
      {
        "db": "PACKETSTORM",
        "id": "169638"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "db": "PACKETSTORM",
        "id": "168714"
      },
      {
        "db": "PACKETSTORM",
        "id": "169298"
      },
      {
        "db": "PACKETSTORM",
        "id": "169638"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-01-28T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "date": "2022-10-17T13:44:06",
        "db": "PACKETSTORM",
        "id": "168714"
      },
      {
        "date": "2022-03-28T19:12:00",
        "db": "PACKETSTORM",
        "id": "169298"
      },
      {
        "date": "2022-01-28T12:12:12",
        "db": "PACKETSTORM",
        "id": "169638"
      },
      {
        "date": "2022-01-28T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      },
      {
        "date": "2022-01-28T22:15:15.133000",
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-11-09T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-4160"
      },
      {
        "date": "2022-10-18T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      },
      {
        "date": "2024-11-21T06:37:02.273000",
        "db": "NVD",
        "id": "CVE-2021-4160"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "OpenSSL Input validation error vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "input validation error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-2650"
      }
    ],
    "trust": 0.6
  }
}
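
The record above reports a CVSS v3.1 base score of 5.9 for CVE-2021-4160, with exploitabilityScore 2.2 and impactScore 3.6. Those numbers follow mechanically from the vector string CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N. The following is a minimal sketch of the v3.1 base-score arithmetic, covering only Scope:Unchanged vectors like this one; the metric weights are taken from the CVSS 3.1 specification, and `baseScoreUnchanged` is a hypothetical helper name, not part of any library:

```typescript
// CVSS v3.1 metric weights (Scope:Unchanged only), per the CVSS 3.1 spec.
const weights: Record<string, Record<string, number>> = {
  AV: { N: 0.85, A: 0.62, L: 0.55, P: 0.2 },
  AC: { L: 0.77, H: 0.44 },
  PR: { N: 0.85, L: 0.62, H: 0.27 }, // these PR values assume Scope:Unchanged
  UI: { N: 0.85, R: 0.62 },
  C: { H: 0.56, L: 0.22, N: 0 },
  I: { H: 0.56, L: 0.22, N: 0 },
  A: { H: 0.56, L: 0.22, N: 0 },
};

function baseScoreUnchanged(vector: string): number {
  // Parse "AV:N/AC:H/..." into { AV: "N", AC: "H", ... }.
  const m = Object.fromEntries(
    vector.replace(/^CVSS:3\.1\//, "").split("/").map((p) => p.split(":"))
  );
  // ISCbase = 1 - (1-C)(1-I)(1-A); Impact = 6.42 * ISCbase for S:U.
  const iscBase =
    1 - (1 - weights.C[m.C]) * (1 - weights.I[m.I]) * (1 - weights.A[m.A]);
  const impact = 6.42 * iscBase;
  const exploitability =
    8.22 * weights.AV[m.AV] * weights.AC[m.AC] * weights.PR[m.PR] * weights.UI[m.UI];
  if (impact <= 0) return 0;
  // Spec's "Roundup": smallest one-decimal number >= the input.
  return Math.ceil(Math.min(impact + exploitability, 10) * 10) / 10;
}

console.log(baseScoreUnchanged("CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N")); // 5.9
```

For this vector, impact works out to about 3.60 and exploitability to about 2.22, matching the sub-scores recorded above. Scope:Changed vectors use different PR weights and a different impact formula, which this sketch deliberately omits.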

var-202102-1466
Vulnerability from variot

Lodash versions prior to 4.17.21 are vulnerable to command injection via the template function. As a result, information may be obtained, information may be tampered with, and service operation may be interrupted (DoS). There is a security vulnerability in Lodash; keep an eye on CNNVD or vendor announcements for updates. Description:

The ovirt-engine package provides the manager for virtualization environments. This manager enables admins to define hosts and networks, as well as to add storage, create VMs and manage user permissions.

Bug Fix(es):

  • This release adds the queue attribute to the virtio-scsi driver in the virtual machine configuration. This improvement enables multi-queue performance with the virtio-scsi driver. (BZ#911394)

  • With this release, source-load-balancing has been added as a new sub-option for xmit_hash_policy. It can be configured for bond modes balance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying xmit_hash_policy=vlan+srcmac. (BZ#1683987)

  • The default DataCenter/Cluster will be set to compatibility level 4.6 on new installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)

  • With this release, support has been added for copying disks between regular Storage Domains and Managed Block Storage Domains. It is now possible to migrate disks between Managed Block Storage Domains and regular Storage Domains. (BZ#1906074)

  • Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was set by default to false and was supposed to be used in cluster compatibility levels below 4.4, but the value was set for the general version. With this release, each cluster level has its own value, defaulting to false for 4.4 and above. This reduces unnecessary overhead in removing timeouts of the file system freeze command. (BZ#1932284)

  • With this release, running virtual machines is supported for up to 16TB of RAM on x86_64 architectures. (BZ#1944723)

  • This release adds the gathering of oVirt/RHV related certificates to allow easier debugging of issues for faster customer help and issue resolution. Information from certificates is now included as part of the sosreport. Note that no corresponding private key information is gathered, due to security considerations. (BZ#1845877)

Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/2974891

Bugs fixed (https://bugzilla.redhat.com/):

1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine 1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors 1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain 1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine 1717411 - improve engine logging when migration fail 1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs 1775145 - Incorrect message from hot-plugging memory 1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. 1845877 - [RFE] Collect information about RHV PKI 1875363 - engine-setup failing on FIPS enabled rhel8 machine 1906074 - [RFE] Support disks copy between regular and managed block storage domains 1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration 1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning 1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. 1919984 - engine-setup failse to deploy the grafana service in an external DWH server 1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal 1926018 - Failed to run VM after FIPS mode is enabled 1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing 'rsyslog-gnutls' package. 
1928158 - Rename 'CA Certificate' link in welcome page to 'Engine CA certificate' 1928188 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX" 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1929211 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX" 1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error "missing groups or modules: virt:8.4" 1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful 1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured 1932284 - Engine handled FS freeze is not fast enough for Windows systems 1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed 1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2 1943267 - Snapshot creation is failing for VM having vGPU. 1944723 - [RFE] Support virtual machines with 16TB memory 1948577 - [welcome page] remove "Infrastructure Migration" section (obsoleted) 1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule 1949547 - rhv-log-collector-analyzer report contains 'b characters 1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6 1950466 - Host installation failed 1954401 - HP VMs pinning is wiped after edit->ok and pinned to first physical CPUs. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
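
The bug list above includes 1928937, CVE-2021-23337 (nodejs-lodash: command injection via template). The flaw is that lodash's template compiler spliced the caller-supplied `variable` option into generated `Function` source without sanitising it. Below is a simplified stand-in that mimics that pattern; this is illustrative code, not lodash's actual implementation, and `naiveTemplate` is a hypothetical name:

```typescript
// Simplified mimic of a template compiler that, like lodash before 4.17.21
// (CVE-2021-23337), splices the caller-supplied `variable` option into
// generated source unescaped. NOT lodash's real code.
function naiveTemplate(text: string, options: { variable?: string } = {}) {
  const variable = options.variable ?? "obj";
  // `variable` lands in the compiled function body verbatim.
  const source = `with (${variable} || {}) { return "${text}"; }`;
  return new Function("obj", source);
}

// Benign use: the compiled template returns the literal text.
const greet = naiveTemplate("hello");
console.log(greet({})); // hello

// A malicious `variable` closes the `with` head early and injects a
// statement. The generated source becomes:
//   with (obj){} obj.pwn(); with(obj || {}) { return "x"; }
let injected = false;
const evil = naiveTemplate("x", { variable: "obj){} obj.pwn(); with(obj" });
evil({ pwn: () => { injected = true; } });
console.log(injected); // true
```

With the real library, the equivalent entry point is `_.template(text, { variable })`; lodash 4.17.21 closed the hole by rejecting unsafe `variable` values before they reach the compiled source.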

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.8.2 bug fix and security update Advisory ID: RHSA-2021:2438-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2021:2438 Issue date: 2021-07-27 CVE Names: CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 CVE-2021-33909 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.8.2 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.2. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2021:2437

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Security Fix(es):

  • SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) (CVE-2016-2183)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)

  • etcd: DoS in wal/wal.go (CVE-2020-15112)

  • etcd: directories created via os.MkdirAll are not checked for permissions (CVE-2020-15113)

  • etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS (CVE-2020-15114)

  • etcd: no authentication is performed against endpoints provided in the --endpoints flag (CVE-2020-15136)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

  • nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

  • golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

  • golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)

  • containernetworking-cni: Arbitrary path injection via type field in CNI configuration (CVE-2021-20206)

  • containers/storage: DoS via malicious image (CVE-2021-20291)

  • prometheus: open redirect under the /new endpoint (CVE-2021-29622)

  • golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)

  • go.elastic.co/apm: leaks sensitive HTTP headers during panic (CVE-2021-22133)

Space precludes listing in detail the following additional CVE fixes: CVE-2021-27292, CVE-2021-28092, CVE-2021-29059, CVE-2021-23382, CVE-2021-26539, CVE-2021-26540, CVE-2021-23337, CVE-2021-23362, and CVE-2021-23368.
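Of the fixes above, SWEET32 (CVE-2016-2183) is one an administrator can probe for directly: it only matters while a TLS stack still enables 64-bit-block (3DES) cipher suites, whose names carry the "CBC3" marker. A minimal local sketch, assuming the OpenSSL command-line tool is installed (not taken from the advisory itself):

```shell
# Check whether the local OpenSSL build still enables 64-bit-block (3DES)
# cipher suites, the class of suite the SWEET32 attack targets.
# 3DES suite names contain "CBC3" (e.g. DES-CBC3-SHA).
suites=$(openssl ciphers 'ALL' 2>/dev/null | tr ':' '\n' | grep -i 'CBC3' || true)
if [ -z "$suites" ]; then
  echo "no 3DES suites enabled locally"
else
  printf '3DES suites still enabled:\n%s\n' "$suites"
fi
```

Against a live endpoint the same idea applies with `openssl s_client -connect host:443 -cipher '3DES'` (host name hypothetical); a successful handshake indicates the server still accepts a SWEET32-affected suite.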

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64

The image digest is sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x

The image digest is sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le

The image digest is sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f
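The `sha256:` strings above are content digests: the bytes a registry serves can be re-hashed and compared against them, which is the check a client performs when an image is pulled by digest. A minimal local sketch of that comparison using `sha256sum`, with a file standing in for an image blob (file name and payload are hypothetical, not from the advisory; the expected value is the well-known sha256 of empty input):

```shell
# Hash a local artifact and compare it with an expected sha256 digest --
# the same verification a registry client applies to a digest-pinned pull.
expected="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
: > /tmp/blob.bin   # empty stand-in for an image blob
actual=$(sha256sum /tmp/blob.bin | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "digest verified"
else
  echo "digest mismatch: got $actual"
fi
```

The digests listed in this advisory can be used the same way in a pull spec, e.g. `oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:<digest>`, which fails if the served content no longer matches the digest.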

All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor

  3. Solution:

For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html

  4. Bugs fixed (https://bugzilla.redhat.com/):

1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) 1725981 - oc explain does not work well with full resource.group names 1747270 - [osp] Machine with name "-worker"couldn't join the cluster 1772993 - rbd block devices attached to a host are visible in unprivileged container pods 1786273 - [4.6] KAS pod logs show "error building openapi models ... has invalid property: anyOf" for CRDs 1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts 1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header 1812212 - ArgoCD example application cannot be downloaded from github 1817954 - [ovirt] Workers nodes are not numbered sequentially 1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole 1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1825417 - The containerruntimecontroller doesn't roll back to CR-1 if we delete CR-2 1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades 1835264 - Intree provisioner doesn't respect PVC.spec.dataSource sometimes 1839101 - Some sidebar links in developer perspective don't follow same project 1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes 1846875 - Network setup test high failure rate 1848151 - Console continues to poll the ClusterVersion resource when the user doesn't have authority 1850060 - After upgrading to 3.11.219 timeouts are appearing. 
1852637 - Kubelet sets incorrect image names in node status images section 1852743 - Node list CPU column only show usage 1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values 1857008 - [Edge] [BareMetal] Not provided STATE value for machines 1857477 - Bad helptext for storagecluster creation 1859382 - check-endpoints panics on graceful shutdown 1862084 - Inconsistency of time formats in the OpenShift web-console 1864116 - Cloud credential operator scrolls warnings about unsupported platform 1866222 - Should output all options when runing operator-sdk init --help 1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard 1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert 1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions 1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host 1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions 1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go 1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS 1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag 1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method 1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics 1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly 1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore 1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack 1873649 - proxy.config.openshift.io should validate user inputs 1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials 1874931 - Accessibility - 
Keyboard shortcut to exit YAML editor not easily discoverable 1876918 - scheduler test leaves taint behind 1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1 1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable 1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API 1879077 - Nodes tainted after configuring additional host iface 1879140 - console auth errors not understandable by customers 1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens 1879184 - CVO must detect or log resource hotloops 1879495 - [4.6] namespace \“openshift-user-workload-monitoring\” does not exist” 1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string 1879944 - [OCP 4.8] Slow PV creation with vsphere 1880757 - AWS: master not removed from LB/target group when machine deleted 1880758 - Component descriptions in cloud console have bad description (Managed by Terraform) 1881210 - nodePort for router-default metrics with NodePortService does not exist 1881481 - CVO hotloops on some service manifests 1881484 - CVO hotloops on deployment manifests 1881514 - CVO hotloops on imagestreams from cluster-samples-operator 1881520 - CVO hotloops on (some) clusterrolebindings 1881522 - CVO hotloops on clusterserviceversions packageserver 1881662 - Error getting volume limit for plugin kubernetes.io/ in kubelet logs 1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io 1881938 - migrator deployment doesn't tolerate masters 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1883587 - No option for user to select volumeMode 1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine 1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster 1884800 
- Failed to set up mount unit: Invalid argument 1885186 - Removing ssh keys MC does not remove the key from authorized_keys 1885349 - [IPI Baremetal] Proxy Information Not passed to metal3 1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses 1886572 - auth: error contacting auth provider when extra ingress (not default) goes down 1887849 - When creating new storage class failure_domain is missing. 1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs 1889689 - AggregatedAPIErrors alert may never fire 1890678 - Cypress: Fix 'structure' accesibility violations 1890828 - Intermittent prune job failures causing operator degradation 1891124 - CP Conformance: CRD spec and status failures 1891301 - Deleting bmh by "oc delete bmh' get stuck 1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass 1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created 1892642 - oauth-server password metrics do not appear in UI after initial OCP installation 1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version 1893850 - Add an alert for requests rejected by the apiserver 1893999 - can't login ocp cluster with oc 4.7 client without the username 1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion 1895053 - Allow builds to optionally mount in cluster trust stores 1896226 - recycler-pod template should not be in kubelet static manifests directory 1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name 1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI 
install 1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout 1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1899057 - fix spurious br-ex MAC address error log 1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay 1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect 1900454 - Enable host-based disk encryption on Azure platform 1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ 1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed 1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API 1901648 - "do you need to set up custom dns" tooltip inaccurate 1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list. 1902076 - image registry operator should monitor status of its routes 1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs 1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given 1903228 - Pod stuck in Terminating, runc init process frozen 1903383 - Latest RHCOS 47.83. 
builds failing to install: mount /root.squashfs failed 1903553 - systemd container renders node NotReady after deleting it 1903700 - metal3 Deployment doesn't have unique Pod selector 1904006 - The --dir option doest not work for command oc image extract 1904505 - Excessive Memory Use in Builds 1904507 - vsphere-problem-detector: implement missing metrics 1904558 - Random init-p error when trying to start pod 1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests 1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list 1905159 - Installation on previous unused dasd fails after formatting 1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory 1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails 1905577 - Control plane machines not adopted when provisioning network is disabled 1905627 - Warn users when using an unsupported browser such as IE 1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP 1905849 - Default volumesnapshotclass should be created when creating default storageclass 1906056 - Bundles skipped via the skips field cannot be pinned 1906102 - CBO produces standard metrics 1906147 - ironic-rhcos-downloader should not use --insecure 1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart 1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region 1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage 1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value 1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything 1907614 - Update kubernetes deps to 1.20 1908068 - Enable DownwardAPIHugePages feature gate 1908169 - The example of Import URL is "Fedora cloud image list" for all 
templates. 1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container 1908343 - Input labels in Manage columns modal should be clickable 1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures 1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule 1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes 1908765 - [SCALE] enable OVN lflow data path groups 1908774 - [SCALE] enable OVN DB memory trimming on compaction 1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it 1909091 - Pod/node/ip/template isn't showing when vm is running 1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error 1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing 1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade 1910067 - UPI: openstacksdk fails on "server group list" 1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing 1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status 1910378 - socket timeouts for webservice communication between pods 1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling 1910500 - Could not list CSI provisioner on web when create storage class on GCP platform 1911211 - Should show the cert-recovery-controller version correctly 1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames 1912571 - libvirt: Support setting dnsmasq options through the install config 1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade 1913112 - BMC details should be 
optional for unmanaged hosts 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913341 - GCP: strange cluster behavior in CI run 1913399 - switch to v1beta1 for the priority and fairness APIs 1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint 1913532 - After a 4.6 to 4.7 upgrade, a node went unready 1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory" 1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs 1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root 1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20 1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names 1915693 - Not able to install gpu-operator on cpumanager enabled node. 1915971 - Role and Role Binding breadcrumbs do not work as expected 1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page 1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall 1916392 - scrape priority and fairness endpoints for must-gather 1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form 1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready" 1916553 - Default template's description is empty on details tab 1916593 - Destroy cluster sometimes stuck in a loop 1916872 - need ability to reconcile exgw annotations on pod add 1916890 - [OCP 4.7] api or api-int not available during installation 1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs. 
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state 1917328 - It should default to current namespace when create vm from template action on details page 1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'" 1917485 - [oVirt] ovirt machine/machineset object has missing some field validations 1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube. 1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3 1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library 1918101 - [vsphere]Delete Provisioning machine took about 12 minutes 1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass 1918442 - Service Reject ACL does not work on dualstack 1918723 - installer fails to write boot record on 4k scsi lun on s390x 1918729 - Add hide/reveal button for the token field in the KMS configuration page 1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve 1918785 - Pod request and limit calculations in console are incorrect 1918910 - Scale from zero annotations should not requeue if instance type missing 1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test" 1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0 1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone 1919168 - oc adm catalog mirror doesn't work for the air-gapped cluster 1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize 1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster 1919356 - Add missing profile annotation in cluster-update-keys manifests 1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration 1919398 - Permissive 
Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic 1919406 - OperatorHub filter heading "Provider Type" should be "Source" 1919737 - hostname lookup delays when master node down 1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade 1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests 1920300 - cri-o does not support configuration of stream idle time 1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console 1920532 - Problem in trying to connect through the service to a member that is the same as the caller. 1920677 - Various missingKey errors in the devconsole namespace 1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources 1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster 1920903 - oc adm top reporting unknown status for Windows node 1920905 - Remove DNS lookup workaround from cluster-api-provider 1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard 1921184 - kuryr-cni binds to wrong interface on machine with two interfaces 1921227 - Fix issues related to consuming new extensions in Console static plugins 1921264 - Bundle unpack jobs can hang indefinitely 1921267 - ResourceListDropdown not internationalized 1921321 - SR-IOV obliviously reboot the node 1921335 - ThanosSidecarUnhealthy 1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation 1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel] 1921763 - operator registry has high memory usage in 4.7... 
cleanup row closes 1921778 - Push to stage now failing with semver issues on old releases 1921780 - Search page not fully internationalized 1921781 - DefaultList component not internationalized 1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes 1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often 1921892 - MAO: controller runtime manager closes event recorder 1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated 1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label 1921953 - ClusterServiceVersion property inference does not infer package and version 1922063 - "Virtual Machine" should be "Templates" in template wizard 1922065 - Rootdisk size is default to 15GiB in customize wizard 1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch 1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted 1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt 1922646 - Panic in authentication-operator invoking webhook authorization 1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists" 1922764 - authentication operator is degraded due to number of kube-apiservers 1922992 - some button text on YAML sidebar are not translated 1922997 - [Migration]The SDN migration rollback failed. 1923038 - [OSP] Cloud Info is loaded twice 1923157 - Ingress traffic performance drop due to NodePort services 1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set. 
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2 1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors 1923984 - Incorrect anti-affinity for UWM prometheus 1924020 - panic: runtime error: index out of range [0] with length 0 1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true 1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too 1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable 1924171 - ovn-kube must handle single-stack to dual-stack migration 1924358 - metal UPI setup fails, no worker nodes 1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument 1924536 - 'More about Insights' link points to support link 1924585 - "Edit Annotation" are not correctly translated in Chinese 1924586 - Control Plane status and Operators status are not fully internationalized 1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased 1924663 - Insights operator should collect related pod logs when operator is degraded 1924701 - Cluster destroy fails when using byo with Kuryr 1924728 - Difficult to identify deployment issue if the destination disk is too small 1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086) 1924747 - InventoryItem doesn't internationalize resource kind 1924788 - Not clear error message when there are no NADs available for the user 1924816 - Misleading error messages in ironic-conductor log 1924869 - selinux avc deny after installing OCP 4.7 1924916 - PVC reported as Uploading when it is actually cloning 1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces 1924953 - 
newly added 'excessive etcd leader changes' test case failing in serial job 1924968 - Monitoring list page filter options are not translated 1924983 - some components in utils directory not localized 1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name' 1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn 1925083 - Some texts are not marked for translation on idp creation page. 1925087 - Add i18n support for the Secret page 1925148 - Shouldn't create the redundant imagestream when use oc new-app --name=testapp2 -i with exist imagestream 1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard 1925216 - openshift installer fails immediately failed to fetch Install Config 1925236 - OpenShift Route targets every port of a multi-port service 1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service 1925261 - Items marked as mandatory in KMS Provider form are not enforced 1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot 1925343 - [ci] e2e-metal tests are not using reserved instances 1925493 - Enable snapshot e2e tests 1925586 - cluster-etcd-operator is leaking transports 1925614 - Error: InstallPlan.operators.coreos.com not found 1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers 1926029 - [RFE] Either disable save or give warning when no disks support snapshot 1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists. 
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400) 1926082 - Insights operator should not go degraded during upgrade 1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized 1926115 - Texts in “Insights” popover on overview page are not marked for i18n 1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7 1926126 - some kebab/action menu translation issues 1926131 - Add HPA page is not fully internationalized 1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it 1926154 - Create new pool with arbiter - wrong replica 1926278 - [oVirt] consume K8S 1.20 packages 1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning 1926285 - ignore pod not found status messages 1926289 - Accessibility: Modal content hidden from screen readers 1926310 - CannotRetrieveUpdates alerts on Critical severity 1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus. 1926336 - Service details can overflow boxes at some screen widths 1926346 - move to go 1.15 and registry.ci.openshift.org 1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM 1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints 1926484 - API server exits non-zero on 2 SIGTERM signals 1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag 1926579 - Setting .spec.policy is deprecated and will be removed eventually. 
Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log 1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results 1926776 - "Template support" modal appears when select the RHEL6 common template 1926835 - [e2e][automation] prow gating use unsupported CDI version 1926843 - pipeline with finally tasks status is improper 1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade 1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the resources section. 1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin 1926931 - Inconsistent ovs-flow rule on one of the app node for egress node 1926943 - vsphere-problem-detector: Alerts in CI jobs 1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs 1927013 - Tables don't render properly at smaller screen widths 1927017 - CCO does not relinquish leadership when restarting for proxy CA change 1927042 - Empty static pod files on UPI deployments are confusing 1927047 - multiple external gateway pods will not work in ingress with IP fragmentation 1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64 1927075 - [e2e][automation] Fix pvc string in pvc.view 1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page 1927244 - UPI installation with Kuryr timing out on bootstrap stage 1927263 - kubelet service takes around 43 secs to start container when started from stopped state 1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver 1927310 - Performance: Console makes unnecessary requests for en-US messages on load 1927340 - Race condition in OperatorCondition reconcilation 1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS 
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn’t support “csi.storage.k8s.io/fsTyps” parameter
1932135 - When “iopsPerGB” parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When “iopsPerGB” parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can’t find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard Default /Kubernetes / Compute Resources / Namespace (Workloads)
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can’t finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen isntt stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE]fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Delete HCO is stucking if there is still VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - cluster-kube-apiserver-operator check-endpoints observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a wach instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashbard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Components in bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - Components in ovirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add kata-osbuilder-generate.service as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after using leaving and returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed in osp-1.4
1941995 - fix backwards incompatible trigger api changes in osp1.4
1942086 - Administrator -> Home - i18n misses
1942117 - Administrator -> Workloads - i18n misses
1942125 - Administrator -> Serverless - i18n misses
1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)
1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail
1942271 - Insights operator doesn't gather pod information from openshift-cluster-version
1942375 - CRI-O failing with error "reserving ctr name"
1942395 - The status is always "Updating" on dc detail page after deployment has failed.
1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied
1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate
1942536 - Corrupted image preventing containers from starting
1942548 - Administrator -> Networking - i18n misses
1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic
1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus
1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID
1942614 - Administrator -> Storage - i18n misses
1942641 - Administrator -> Builds - i18n misses
1942673 - Administrator -> Pipelines - i18n misses
1942694 - Resource names with a colon do not display property in the browser window title
1942715 - Administrator -> User Management - i18n misses
1942716 - Quay Container Security operator has Medium <-> Low colors reversed
1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]
1942736 - Administrator -> Administration - i18n misses
1942749 - Install Operator form should use info icon for popovers
1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls
1942839 - Windows VMs fail to start on air-gapped environments
1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1942858 - [RFE]Confusing detach volume UX
1942883 - AWS EBS CSI driver does not support partitions
1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy
1942935 - must-gather improvements
1943145 - vsphere: client/bootstrap CSR double create
1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked
1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest
1943238 - The conditions table does not occupy 100% of the width.
1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane
1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB.
1943315 - avoid workload disruption for ICSP changes
1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes
1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest
1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins"
1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container"
1943543 - DeploymentConfig Rollback doesn't reset params correctly
1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement
1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds
1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage
1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn
1943649 - don't use hello-openshift for network-check-target
1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress
1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB
1943845 - Router pods should have startup probes configured
1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors
1944160 - CNO: nbctl daemon should log reconnection info
1944180 - OVN-Kube Master does not release election lock on shutdown
1944246 - Ironic fails to inspect and move node to "manageable' but get bmh remains in "inspecting"
1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region
1944509 - Translatable texts without context in ssh expose component
1944581 - oc project not works with cluster proxy
1944587 - VPA could not take actions based on the recommendation when min-replicas=1
1944590 - The field name "VolumeSnapshotContent" is wrong on VolumeSnapshotContent detail page
1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI
1944631 - openshif authenticator should not accept non-hashed tokens
1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field become to "All projects" and disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component
1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
1944763 - field level help instances do not use common util component
1944853 - Update to nodejs >=14.15.4 for ARM
1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts
1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation
1945027 - Button 'Copy SSH Command' does not work
1945085 - Bring back API data in etcd test
1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled
1945103 - 'User credentials' shows even the VM is not running
1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled
1945146 - Remove pipeline Tech preview badge for pipelines GA operator
1945236 - Bootstrap ignition shim doesn't follow proxy settings
1945261 - Operator dependency not consistently chosen from default channel
1945312 - project deletion does not reset UI project context
1945326 - console-operator: does not check route health periodically
1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules
1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly
1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1945548 - catalog resource update failed if spec.secrets set to ""
1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x
1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION
1945630 - Pod log filename no longer in -.log format
1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin
1945646 - gcp-routes.sh running as initrc_t unnecessarily
1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret
1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry
1945687 - Dockerfile needs updating to new container CI registry
1945700 - Syncing boot mode after changing device should be restricted to Supermicro
1945816 - " Ingresses " should be kept in English for Chinese
1945818 - Chinese translation issues: Operator should be the same with English Operators
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: "
1946506 - [on-prem] mDNS plugin no longer needed
1946513 - honor use specified system reserved with auto node sizing
1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready
1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster
1946607 - etcd readinessProbe is not reflective of actual readiness
1946705 - Fix issues with "search" capability in the Topology Quick Add component
1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation
1946788 - Serial tests are broken because of router
1946790 - Marketplace operator flakes Available=False OperatorStarting during updates
1946838 - Copied CSVs show up as adopted components
1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value
1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics
1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken
1946922 - Ingress details page doesn't show referenced secret name and link
1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False
1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot
1947066 - machine-config-operator pod crashes when noProxy is *
1947067 - [Installer] Pick up upstream fix for installer console output
1947078 - Incorrect skipped status for conditional tasks in the pipeline run
1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception
1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1947164 - Print "Successfully pushed" even if the build push fails.
1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)
1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name's
1947360 - [vSphere csi driver operator] operator pod runs as “BestEffort” qosClass
1947371 - [vSphere csi driver operator] operator doesn't create “csidriver” instance
1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout
1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)
1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot
1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)
1947663 - disk details are not synced in web-console
1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin
1947684 - MCO on SNO sometimes has rendered configs and sometimes does not
1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals.
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display
1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods
1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?
1947771 - [kube-descheduler]descheduler operator pod should not run as “BestEffort” qosClass
1947774 - CSI driver operators use "Always" imagePullPolicy in some containers
1947775 - [vSphere csi driver operator] doesn’t use the downstream images from payload.
1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade
1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display
1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display
1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display
1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display
1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin
1947828 - download it link should save pod log in -.log format
1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed
1947917 - Egress Firewall does not reliably apply firewall rules
1947946 - Operator upgrades can delete existing CSV before completion
1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown
1948012 - service-ca constantly reporting type "Upgradeable" status Unknown
1948019 - [4.8] Large number of requests to the infrastructure cinder volume service
1948022 - Some on-prem namespaces missing from must-gather
1948040 - cluster-etcd-operator: etcd is using deprecated logger
1948082 - Monitoring should not set Available=False with no reason on updates
1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O.
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields
1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later
1948359 - [aws] shared tag was not removed from user provided IAM role
1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always"
1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing
1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows
1948431 - TechPreviewNoUpgrade does not enable CSI migration
1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node
1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge
1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]
1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes
1948513 - get-resources.sh doesn't honor the no_proxy settings
1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute
1948546 - VM of worker is in error state when a network has port_security_enabled=False
1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand
1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test
1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable"
1948582 - Need ability to specify local gateway mode in CNO config
1948585 - Need a CI jobs to test local gateway mode with bare metal
1948592 - [Cluster Network Operator] Missing Egress Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows bootstrap.ign was not found
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere- images to vsphere- images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use oc adm top is shortcut when asking for imagestreams
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command ccoctl aws create-identity-provider with --output-dir parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (UtilizationCard) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert
PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage 1954830 - verify-client-go job is failing for release-4.7 branch 1954865 - Add necessary priority class to pod-identity-webhook deployment 1954866 - Add necessary priority class to downloads 1954870 - Add necessary priority class to network components 1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack. 1954891 - Add necessary priority class to pruner 1954892 - Add necessary priority class to ingress-canary 1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources 1954937 - [API-1009] oc get apirequestcount shows blank for column REQUESTSINCURRENTHOUR 1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services 1954972 - TechPreviewNoUpgrade featureset can be undone 1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs 1954994 - should update to 2.26.0 for prometheus resources label 1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist 1955089 - Support [sig-cli] oc observe works as expected test for IPv6 1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display 1955102 - Add vsphere_node_hw_version_total metric to the collected metrics 1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM 1955196 - linuxptp-daemon crash on 4.8 1955226 - operator updates apirequestcount CRD over and over 1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing 1955256 - stop collecting API that no longer exists 1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts 1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google" 1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator 1955445 - Drop crio image metrics with high 
cardinality 1955457 - Drop container_memory_failures_total metric because of high cardinality 1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter 1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0 1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used 1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation 1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range 1955554 - MAO does not react to events triggered from Validating Webhook Configurations 1955589 - thanos-querier should have a PodDisruptionBudget in HA topology 1955595 - Add DevPreviewLongLifecycle Descheduler profile 1955596 - Pods stuck in creation phase on realtime kernel SNO 1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing 1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error'] 1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta 1955749 - OCP branded templates need to be translated 1955761 - packageserver clusteroperator does not set reason or message for Available condition 1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces 1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation 1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables 1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable 1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated 1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct 1955879 - Customer tags cannot be seen in S3 level 
when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio 1955969 - Workers cannot be deployed attached to multiple networks. 1956079 - Installer gather doesn't collect any networking information 1956208 - Installer should validate root volume type 1956220 - Set htt proxy system properties as expected by kubernetes-client 1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet 1956334 - Event Listener Details page does not show Triggers section 1956353 - test: analyze job consistently fails 1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate 1956405 - Bump k8s dependencies in cluster resource override admission operator 1956411 - Apply custom tags to AWS EBS volumes 1956480 - [4.8] Bootimage bump tracker 1956606 - probes FlowSchema manifest not included in any cluster profile 1956607 - Multiple manifests lack cluster profile annotations 1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup 1956610 - manage-helm-repos manifest lacks cluster profile annotations 1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string 1956650 - The container disk URL is empty for Windows guest tools 1956768 - aws-ebs-csi-driver-controller-metrics TargetDown 1956826 - buildArgs does not work when the value is taken from a secret 1956895 - Fix chatty kubelet log message 1956898 - fix log files being overwritten on container state loss 1956920 - can't open terminal for pods that have more than one container running 1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false 1956978 - Installer gather doesn't include pod names in filename 1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW 1957041 - Update CI e2echart with more node info 
1957127 - Delegated authentication: reduce the number of watch requests 1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image 1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes 1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient 1957179 - Incorrect VERSION in node_exporter 1957190 - CI jobs failing due too many watch requests (prometheus-operator) 1957198 - Misspelled console-operator condition 1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap 1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2 1957261 - update godoc for new build status image change trigger fields 1957295 - Apply priority classes conventions as test to openshift/origin repo 1957315 - kuryr-controller doesn't indicate being out of quota 1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly 1957374 - mcddrainerr doesn't list specific pod 1957386 - Config serve and validate command should be under alpha 1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions 1957502 - Infrequent panic in kube-apiserver in aws-serial job 1957561 - lack of pseudolocalization for some text on Cluster Setting page 1957584 - Routes are not getting created when using hostname without FQDN standard 1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone 1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes 1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's 1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out 1957748 - Ptp operator pod should have CPU and memory requests set but 
not limits 1957756 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement" 1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent 1957775 - CVO creating cloud-controller-manager too early causing upgrade failures 1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error 1957822 - Update apiserver tlsSecurityProfile description to include Custom profile 1957832 - CMO end-to-end tests work only on AWS 1957856 - 'resource name may not be empty' is shown in CI testing 1957869 - baremetal IPI power_interface for irmc is inconsistent 1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects 1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer 1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install 1957895 - Cypress helper projectDropdown.shouldContain is not an assertion 1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads 1957926 - "Add Capacity" should allow to add n3 (or n4) local devices at once 1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state 1957967 - Possible test flake in listPage Cypress view 1957972 - Leftover templates from mdns 1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7 1957982 - Deployment Actions clickable for view-only projects 1957991 - ClusterOperatorDegraded can fire during installation 1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator 1958080 - Missing i18n for login, error and selectprovider pages 1958094 - Audit log files are corrupted sometimes 1958097 - don't show "old, insecure token format" if the token does not actually exist 1958114 - Ignore staged vendor 
files in pre-commit script 1958126 - [OVN]Egressip doesn't take effect 1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs 1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names 1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs 1958285 - Deployment considered unhealthy despite being available and at latest generation 1958296 - OLM must explicitly alert on deprecated APIs in use 1958329 - pick 97428: add more context to log after a request times out 1958367 - Build metrics do not aggregate totals by build strategy 1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton 1958405 - etcd: current health checks and reporting are not adequate to ensure availability 1958406 - Twistlock flags mode of /var/run/crio/crio.sock 1958420 - openshift-install 4.7.10 fails with segmentation error 1958424 - aws: support more auth options in manual mode 1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View 1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse 1958643 - All pods creation stuck due to SR-IOV webhook timeout 1958679 - Compression on pool can't be disabled via UI 1958753 - VMI nic tab is not loadable 1958759 - Pulling Insights report is missing retry logic 1958811 - VM creation fails on API version mismatch 1958812 - Cluster upgrade halts as machine-config-daemon fails to parse rpm-ostree status during cluster upgrades 1958861 - [CCO] pod-identity-webhook certificate request failed 1958868 - ssh copy is missing when vm is running 1958884 - Confusing error message when volume AZ not found 1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs 1958958 - [SCALE] segfault with ovnkube adding to 
address set 1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes 1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod 1959058 - ovn-kubernetes has lock contention on the LSP cache 1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change 1959177 - Descheduler dev manifests are missing permissions 1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload 1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates 1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring 1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check 1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system 1959406 - Difficult to debug performance on ovn-k without pprof enabled 1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results 1959479 - machines doesn't support dual-stack loadbalancers on Azure 1959513 - Cluster-kube-apiserver does not use library-go for audit pkg 1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used 1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console 1959564 - Test verify /run filesystem contents failing 1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot 1959650 - Gather SDI-related MachineConfigs 1959658 - showing a lot "constructing many client instances from the same exec auth config" 1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config 1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO 1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode 1959711 - Egressnetworkpolicy doesn't work when configure 
the EgressIP 1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6 1959916 - Console not works well against a proxy in front of openshift clusters 1959920 - UEFISecureBoot set not on the right master node 1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: [] 1960035 - iptables is missing from ose-keepalived-ipfailover image 1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page 1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions 1960129 - [e2e][automation] add smoke tests about VM pages and actions 1960134 - some origin images are not public 1960171 - Enable SNO checks for image-registry 1960176 - CCO should recreate a user for the component when it was removed from the cloud providers 1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled 1960255 - fixed obfuscation permissions 1960257 - breaking changes in pr template 1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost 1960323 - Address issues raised by coverity security scan 1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop 1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop 1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop 1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access 1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana 1960546 - Add virt_platform metric to the collected metrics 1960554 - Remove rbacv1beta1 handling code 1960612 - Node disk info in overview/details does not account for second drive where 
/var is located 1960619 - Image registry integration tests use old-style OAuth tokens 1960683 - GlobalConfigPage is constantly requesting resources 1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces 1960716 - Missing details for debugging 1960732 - Outdated manifests directory in CSI driver operator repositories 1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master 1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest" 1960767 - /metrics endpoint of the Grafana UI is accessible without authentication 1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource 1961064 - Documentation link to network policies is outdated 1961067 - Improve log gathering logic 1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs 1961091 - Gather MachineHealthCheck definitions 1961120 - CSI driver operators fail when upgrading a cluster 1961173 - recreate existing static pod manifests instead of updating 1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing 1961314 - Race condition in operator-registry pull retry unit tests 1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not 1961336 - Devfile sample for BuildConfig is not defined 1961356 - Update single quotes to double quotes in string 1961363 - Minor string update for " No Storage classes found in cluster, adding source is disabled." 
1961393 - DetailsPage does not work with group~version~kind 1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page 1961466 - Some dropdown placeholder text on route creation page is not translated 1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true 1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers") 1961536 - clusterdeployment without pull secret is crashing assisted service pod 1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop 1961545 - Fixing Documentation Generation 1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names" 1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig 1961561 - The encryption controllers send lots of request to an API server 1961582 - Build failure on s390x 1961644 - NodeAuthenticator tests are failing in IPv6 1961656 - driver-toolkit missing some release metadata 1961675 - Kebab menu of taskrun contains Edit options which should not be present 1961701 - Enhance gathering of events 1961717 - Update runtime dependencies to Wallaby builds for bugfixes 1961829 - Quick starts prereqs not shown when description is long 1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy 1961878 - Add Sprint 199 translations 1961897 - Remove history listener before console UI is unmounted 1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes 1962062 - Monitoring dashboards should support default values of "All" 1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a 
performanceprofile 1962095 - Replace gather-job image without FQDN 1962153 - VolumeSnapshot routes are ambiguous, too generic 1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime 1962219 - NTO relies on unreliable leader-for-life implementation. 1962256 - use RHEL8 as the vm-example 1962261 - Monitoring components requesting more memory than they use 1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster 1962347 - Cluster does not exist logs after successful installation 1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits 1962415 - duplicate zone information for in-tree PV after enabling migration 1962429 - Cannot create windows vm because kubemacpool.io denied the request 1962525 - [Migration] SDN migration stuck on MCO on RHV cluster 1962569 - NetworkPolicy details page should also show Egress rules 1962592 - Worker nodes restarting during OS installation 1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." on unsupported platform 1962630 - NTO: Ship the current upstream TuneD 1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root 1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace 1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint 1962740 - Add documentation to Egress Router 1962850 - [4.8] Bootimage bump tracker 1962882 - Version pod does not set priorityClassName 1962905 - Ramdisk ISO source defaulting to "http" breaks deployment on a good amount of BMCs 1963068 - ironic container should not specify the entrypoint 1963079 - KCM/KS: ability to enforce localhost communication with the API server. 
1963154 - Current BMAC reconcile flow skips Ironic's deprovision step 1963159 - Add Sprint 200 translations 1963204 - Update to 8.4 IPA images 1963205 - Installer is using old redirector 1963208 - Translation typos/inconsistencies for Sprint 200 files 1963209 - Some strings in public.json have errors 1963211 - Fix grammar issue in kubevirt-plugin.json string 1963213 - Memsource download script running into API error 1963219 - ImageStreamTags not internationalized 1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment 1963267 - Warning: Invalid DOM property classname. Did you mean className? console warnings in volumes table 1963502 - create template from is not descriptive 1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too 1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault 1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies 1963871 - cluster-etcd-operator:[build] upgrade to go 1.16 1963896 - The VM disks table does not show easy links to PVCs 1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere 1963932 - Installation failures in bootstrap in OpenStack release jobs 1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail 1964059 - rebase openshift/sdn to kube 1.21.1 1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration 1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal" 1964243 - The oc compliance fetch-raw doesn’t work for disconnected cluster 1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters" 1964319 - Network policy 
"deny all" interpreted as "allow all" in description page 1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured 1964472 - Make project and namespace requirements more visible rather than giving me an error after submission 1964486 - Bulk adding of CIDR IPS to whitelist is not working 1964492 - Pick 102171: Implement support for watch initialization in P&F 1964625 - NETID duplicate check is only required in NetworkPolicy Mode 1964748 - Sync upstream 1.7.2 downstream 1964756 - PVC status is always in 'Bound' status when it is actually cloning 1964847 - Sanity check test suite missing from the repo 1964888 - opoenshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. connection error: desc = "transport is closing" 1964936 - error log for "oc adm catalog mirror" is not correct 1964979 - Add mapping from ACI to infraenv to handle creation order issues 1964997 - Helm Library charts are showing and can be installed from Catalog 1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots 1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation 1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData: 1965330 - oc image extract fails due to security capabilities on files 1965334 - opm index add fails during image extraction 1965367 - Typo in in etcd-metric-serving-ca resource name 1965370 - "Route" is not translated in Korean or Chinese 1965391 - When storage class is already present wizard do not jumps to "Stoarge and nodes" 1965422 - runc is missing Provides oci-runtime in rpm spec 1965522 - [v2v] Multiple typos on VM Import screen 1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists 1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated" 1965921 - [oVirt] High performance VMs 
shouldn't be created with Existing policy 1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request 1966077 - hidden descriptor is visible in the Operator instance details page1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11 1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality 1966138 - (release-4.8) Update K8s & OpenShift API versions 1966156 - Issue with Internal Registry CA on the service pod 1966174 - No storage class is installed, OCS and CNV installations fail 1966268 - Workaround for Network Manager not supporting nmconnections priority 1966401 - Revamp Ceph Table in Install Wizard flow 1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert 1966416 - (release-4.8) Do not exceed the data size limit 1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log 1966487 - IP address in Pods list table are showing node IP other than pod IP 1966520 - Add button from ocs add capacity should not be enabled if there are no PV's 1966523 - (release-4.8) Gather MachineAutoScaler definitions 1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed 1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug 1966602 - don't require manually setting IPv6DualStack feature gate in 4.8 1966620 - The bundle.Dockerfile in the repo is obsolete 1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install 1966654 - Alertmanager PDB is not created, but Prometheus UWM is 1966672 - Add Sprint 201 translations 1966675 - Admin console string updates 1966677 - Change comma to semicolon 1966683 - Translation bugs from Sprint 201 files 1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly 1966697 - Garbage collector logs every interval - move to debug 
level 1966717 - include full timestamps in the logs 1966759 - Enable downstream plugin for Operator SDK 1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version 1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1 1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkub[e" 1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings "ipv6.dhcp-duid=ll" missing from dual stack install 1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image 1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored 1967197 - 404 errors loading some i18n namespaces 1967207 - Getting started card: console customization resources link shows other resources 1967208 - Getting started card should use semver library for parsing the version instead of string manipulation 1967234 - Console is continuously polling for ConsoleLink acm-link 1967275 - Awkward wrapping in getting started dashboard card 1967276 - Help menu tooltip overlays dropdown 1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check 1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit 1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests 1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small 1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967591 - The ManagementCPUsOverride admission plugin should 
not mutate containers with the limit 1967595 - Fixes the remaining lint issues 1967614 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict 1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn't work if ovirt-config.yaml doesn't exist and user should fill the FQDN URL 1967625 - Add OpenShift Dockerfile for cloud-provider-aws 1967631 - [4.8.0] Cluster install failed due to timeout while "Waiting for control plane" 1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube" 1967639 - Console whitescreens if user preferences fail to load 1967662 - machine-api-operator should not use deprecated "platform" field in infrastructures.config.openshift.io 1967667 - Add Sprint 202 Round 1 translations 1967713 - Insights widget shows invalid link to the OCM 1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming 1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than "NoExecute" 1967803 - should update to 7.5.5 for grafana resources version label 1967832 - Add more tests for periodic.go 1967833 - Add tasks pool to tasks_processing 1967842 - Production logs are spammed on "OCS requirements validation status Insufficient hosts to deploy OCS. 
A minimum of 3 hosts is required to deploy OCS" 1967843 - Fix null reference to messagesToSearch in gather_logs.go 1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring 1967933 - Network-Tools debug scripts not working as expected 1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied" 1968019 - drain timeout and pool degrading period is too short 1968067 - [master] Agent validation not including reason for being insufficient 1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed 1968175 - [4.8.0] Agent validation not including reason for being insufficient 1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration 1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn't be required 1968435 - [4.8.0] Unclear message in case of missing clusterImageSet 1968436 - Listeners timeout updated to remain using default value 1968449 - [4.8.0] Wrong Install-config override documentation 1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters 1968452 - [4.8.0] [doc] "Mirror Registry Configuration" doc section needs clarification of functionality and limitations 1968454 - [4.8.0] backend events generated with wrong namespace for agent 1968455 - [4.8.0] Assisted Service operator's controllers are starting before the base service is ready 1968515 - oc should set user-agent when talking with registry 1968531 - Sync upstream 1.8.0 downstream 1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn't clean up properly 1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted 1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox 1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil 1968701 - Bare metal IPI installation is failed due to worker inspection 
failure 1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed 1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning 1969284 - Console Query Browser: Can't reset zoom to fixed time range after dragging to zoom 1969315 - [4.8.0] BMAC doesn't check if ISO Url changed before queuing BMH for reconcile 1969352 - [4.8.0] Creating BareMetalHost without the "inspect.metal3.io" does not automatically add it 1969363 - [4.8.0] Infra env should show the time that ISO was generated. 1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it 1969386 - Filesystem's Utilization doesn't show in VM overview tab 1969397 - OVN bug causing subports to stay DOWN fails installations 1969470 - [4.8.0] Misleading error in case of install-config override bad input 1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step 1969525 - Replace golint with revive 1969535 - Topology edit icon does not link correctly when branch name contains slash 1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it 1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long 1969561 - Test "an end user can use OLM can subscribe to the operator" generates deprecation alert 1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire 1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io 1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1 1969626 - Port forward stream cleanup can cause kubelet to panic 1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out 1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check 1969712 - [4.8.0] Assisted service reports a malformed iso when we 
fail to download the base iso 1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups 1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml 1969784 - WebTerminal widget should send resize events 1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails 1969891 - Fix rotated pipelinerun status icon issue in safari 1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse 1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available 1969951 - Cluster local doesn't work for knative services created from dev console 1969969 - ironic-rhcos-downloader container uses and old base image 1970062 - ccoctl does not work with STS authentication 1970068 - ovnkube-master logs "Failed to find node ips for gateway" error 1970126 - [4.8.0] Disable "metrics-events" when deploying using the operator 1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change 1970262 - [4.8.0] Remove Agent CRD Status fields not needed 1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs 1970269 - [4.8.0] missing role in agent CRD 1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs 1970381 - Monitoring dashboards: Custom time range inputs should retain their values 1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed 1970401 - [4.8.0] AgentLabelSelector is required yet not supported 1970415 - SR-IOV Docs needs documentation for disabling port security on a network 1970470 - Add pipeline annotation to Secrets which are created for a private repo 1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod 1970624 - 4.7->4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io 1970828 - "500 Internal Error" for all openshift-monitoring routes 1970975 
- 4.7 -> 4.8 upgrades on AWS take longer than expected 1971068 - Removing invalid AWS instances from the CF templates 1971080 - 4.7->4.8 CI: KubePodNotReady due to MCD's 5m sleep between drain attempts 1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 ! 1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces 1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing "Validated" condition about VIP not matching machine network 1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn't work - clusteroperator/kube-apiserver is not upgradeable 1971589 - [4.8.0] Telemetry-client won't report metrics in case the cluster was installed using the assisted operator 1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service 1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery 1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409) 1971739 - Keep /boot RW when kdump is enabled 1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly 1972128 - ironic-static-ip-manager container still uses 4.7 base image 1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are 1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster 1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted 1972262 - [4.8.0] "baremetalhost.metal3.io/detached" uses boolean value where string is expected 1972426 - Adopt failure can trigger deprovisioning 1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage 1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration 1972530 - 
[4.8.0] no indication for missing debugInfo in AgentClusterInstall 1972565 - performance issues due to lost node, pods taking too long to relaunch 1972662 - DPDK KNI modules need some additional tools 1972676 - Requirements for authenticating kernel modules with X.509 1972687 - Using bound SA tokens causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings 1972690 - [4.8.0] infra-env condition message isn't informative in case of missing pull secret 1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration 1972768 - kube-apiserver setup fail while installing SNO due to port being used 1972864 - New `local-with-fallback` service annotation does not preserve source IP 1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8 1973117 - No storage class is installed, OCS and CNV installations fail 1973233 - remove kubevirt images and references 1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. 1973428 - Placeholder bug for OCP 4.8.0 image release 1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped 1973672 - fix ovn-kubernetes NetworkPolicy 4.7->4.8 upgrade issue 1973995 - [Feature:IPv6DualStack] tests are failing in dualstack 1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings 1974447 - Requirements for nvidia GPU driver container for driver toolkit 1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. 1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel 1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion 1974746 - [4.8.0] File system usage not being logged appropriately 1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster 1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string 1974850 - [4.8] coreos-installer failing Execshield 1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift 1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing 1975155 - Kubernetes service IP cannot be accessed for rhel worker 1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types 1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData 1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified 1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve 1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn 1975672 - [4.8.0] Production logs are spammed on "Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient" 1975789 - worker nodes rebooted when we simulate a case where the api-server is down 1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s] 1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn't work - ingresscontroller "default" is degraded 1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted 1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts 1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO 1977233 - [4.8] Unable to authenticate against IDP after upgrade to 
4.8-rc.1 1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO 1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller 1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes 1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses 1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8 1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod 1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used 1980788 - NTO-shipped stalld can segfault 1981633 - enhance service-ca injection 1982250 - Performance Addon Operator fails to install after catalog source becomes ready 1982252 - olm Operator is in CrashLoopBackOff state with error "couldn't cleanup cross-namespace ownerreferences"

  1. References:

https://access.redhat.com/security/cve/CVE-2016-2183 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-15106 https://access.redhat.com/security/cve/CVE-2020-15112 https://access.redhat.com/security/cve/CVE-2020-15113 https://access.redhat.com/security/cve/CVE-2020-15114 https://access.redhat.com/security/cve/CVE-2020-15136 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-26541 https://access.redhat.com/security/cve/CVE-2020-28469 https://access.redhat.com/security/cve/CVE-2020-28500 https://access.redhat.com/security/cve/CVE-2020-28852 https://access.redhat.com/security/cve/CVE-2021-3114 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3636 https://access.redhat.com/security/cve/CVE-2021-20206 https://access.redhat.com/security/cve/CVE-2021-20271 https://access.redhat.com/security/cve/CVE-2021-20291 https://access.redhat.com/security/cve/CVE-2021-21419 https://access.redhat.com/security/cve/CVE-2021-21623 https://access.redhat.com/security/cve/CVE-2021-21639 https://access.redhat.com/security/cve/CVE-2021-21640 https://access.redhat.com/security/cve/CVE-2021-21648 https://access.redhat.com/security/cve/CVE-2021-22133 https://access.redhat.com/security/cve/CVE-2021-23337 https://access.redhat.com/security/cve/CVE-2021-23362 https://access.redhat.com/security/cve/CVE-2021-23368 https://access.redhat.com/security/cve/CVE-2021-23382 https://access.redhat.com/security/cve/CVE-2021-25735 https://access.redhat.com/security/cve/CVE-2021-25737 https://access.redhat.com/security/cve/CVE-2021-26539 
https://access.redhat.com/security/cve/CVE-2021-26540 https://access.redhat.com/security/cve/CVE-2021-27292 https://access.redhat.com/security/cve/CVE-2021-28092 https://access.redhat.com/security/cve/CVE-2021-29059 https://access.redhat.com/security/cve/CVE-2021-29622 https://access.redhat.com/security/cve/CVE-2021-32399 https://access.redhat.com/security/cve/CVE-2021-33034 https://access.redhat.com/security/cve/CVE-2021-33194 https://access.redhat.com/security/cve/CVE-2021-33909 https://access.redhat.com/security/updates/classification/#moderate

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

Bugs:

  • RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)

  • cluster became offline after apiserver health check (BZ# 1942589)

  • Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):

1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913444 - RFE Make the source code for the endpoint-metrics-operator public 1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull 1927520 - RHACM 2.3.0 images 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection 1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate 1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call 1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS 1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service 1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service 1942589 - cluster became offline after apiserver health check 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character 1944827 - CVE-2021-28918 
nodejs-netmask: improper input validation of octal input data 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service 1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option 1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command 1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets 1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions 1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id 1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters

  1. VDSM manages and monitors the host's storage, memory and networks as well as virtual machine creation, other host administration tasks, statistics gathering, and log collection.

Bug Fix(es):

  • An update in libvirt has changed the way block threshold events are submitted. As a result, the VDSM was confused by the libvirt event, and tried to look up a drive, logging a warning about a missing drive. In this release, the VDSM has been adapted to handle the new libvirt behavior, and does not log warnings about missing drives. (BZ#1948177)

  • Previously, when a virtual machine was powered off on the source host of a live migration and the migration finished successfully at the same time, the two events interfered with each other, and sometimes prevented migration cleanup resulting in additional migrations from the host being blocked. In this release, additional migrations are not blocked. (BZ#1959436)

  • Previously, when failing to execute a snapshot and re-executing it later, the second try would fail due to using the previous execution data. In this release, this data will be used only when needed, in recovery mode. (BZ#1984209)

  • The engine deletes the volume and causes data corruption. 1998017 - Keep cinderlib dependencies optional for 4.4.8

Bug Fix(es):

  • Documentation is referencing deprecated API for Service Export - Submariner (BZ#1936528)

  • Importing of cluster fails due to error/typo in generated command (BZ#1936642)

  • RHACM 2.2.2 images (BZ#1938215)

  • 2.2 clusterlifecycle fails to allow provision fips: true clusters on aws, vsphere (BZ#1941778)

  • Summary:

The Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API
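The Kubernetes API mentioned above is driven by MTC's custom resources. As a hedged sketch only (the plan name, storage name, cluster references, and application namespace below are illustrative placeholders, not values from this advisory), a minimal MigPlan selecting a single namespace for migration might look like:

```yaml
# Illustrative MigPlan custom resource for MTC.
# All names and namespaces here are hypothetical examples.
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: example-migplan            # placeholder plan name
  namespace: openshift-migration   # namespace where MTC runs
spec:
  srcMigClusterRef:                # cluster to migrate from
    name: source-cluster           # placeholder MigCluster name
    namespace: openshift-migration
  destMigClusterRef:               # cluster to migrate to
    name: host                     # placeholder MigCluster name
    namespace: openshift-migration
  migStorageRef:                   # replication repository
    name: example-storage          # placeholder MigStorage name
    namespace: openshift-migration
  namespaces:
    - example-app                  # placeholder application namespace
```

Applying such a resource with `oc apply -f migplan.yaml` (and then creating a MigMigration that references it) is the API-driven equivalent of running a plan from the MTC web console.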



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1466",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.0"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.2.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "primavera unifier",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12"
      },
      {
        "model": "jd edwards enterpriseone tools",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.2.6.1"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "2.5.2.1"
      },
      {
        "model": "communications services gatekeeper",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.11.0"
      },
      {
        "model": "active iq unified manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.2.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      },
      {
        "model": "system manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": "9.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "communications cloud native core binding support function",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.9.0"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.3.0"
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.3.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.11"
      },
      {
        "model": "communications design studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.4.2.0.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.12"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.4"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.7"
      },
      {
        "model": "cloud manager",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.11"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "primavera unifier",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.7"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.0.0.0"
      },
      {
        "model": "lodash",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "retail customer management and segmentation foundation",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ],
    "trust": 1.3
  },
  "cve": "CVE-2021-23337",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "SINGLE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.0,
            "id": "CVE-2021-23337",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:L/Au:S/C:P/I:P/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "SINGLE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 6.5,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.0,
            "id": "VHN-381798",
            "impactScore": 6.4,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/AU:S/C:P/I:P/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 7.2,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.2,
            "id": "CVE-2021-23337",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 7.2,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2021-23337",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "High",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-23337",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "report@snyk.io",
            "id": "CVE-2021-23337",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-23337",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202102-1137",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-381798",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-23337",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function. Lodash Contains a command injection vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. There is a security vulnerability in Lodash. Please keep an eye on CNNVD or vendor announcements. Description:\n\nThe ovirt-engine package provides the manager for virtualization\nenvironments. \nThis manager enables admins to define hosts and networks, as well as to add\nstorage, create VMs and manage user permissions. \n\nBug Fix(es):\n\n* This release adds the queue attribute to the virtio-scsi driver in the\nvirtual machine configuration. This improvement enables multi-queue\nperformance with the virtio-scsi driver. (BZ#911394)\n\n* With this release, source-load-balancing has been added as a new\nsub-option for xmit_hash_policy. It can be configured for bond modes\nbalance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying\nxmit_hash_policy=vlan+srcmac. (BZ#1683987)\n\n* The default DataCenter/Cluster will be set to compatibility level 4.6 on\nnew installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)\n\n* With this release, support has been added for copying disks between\nregular Storage Domains and Managed Block Storage Domains. \nIt is now possible to migrate disks between Managed Block Storage Domains\nand regular Storage Domains. (BZ#1906074)\n\n* Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was\nset by default to false and was supposed to be uses in cluster\ncompatibility levels below 4.4. The value was set to general version. \nWith this release, each cluster level has it\u0027s own value, defaulting to\nfalse for 4.4 and above. This will reduce unnecessary overhead in removing\ntime outs of the file system freeze command. 
(BZ#1932284)\n\n* With this release, running virtual machines is supported for up to 16TB\nof RAM on x86_64 architectures. (BZ#1944723)\n\n* This release adds the gathering of oVirt/RHV related certificates to\nallow easier debugging of issues for faster customer help and issue\nresolution. \nInformation from certificates is now included as part of the sosreport. \nNote that no corresponding private key information is gathered, due to\nsecurity considerations. (BZ#1845877)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine\n1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors\n1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain\n1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine\n1717411 - improve engine logging when migration fail\n1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs\n1775145 - Incorrect message from hot-plugging memory\n1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. \n1845877 - [RFE] Collect information about RHV PKI\n1875363 - engine-setup failing on FIPS enabled rhel8 machine\n1906074 - [RFE] Support disks copy between regular and managed block storage domains\n1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration\n1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning\n1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. 
\n1919984 - engine-setup failse to deploy the grafana service in an external DWH server\n1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal\n1926018 - Failed to run VM after FIPS mode is enabled\n1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing \u0027rsyslog-gnutls\u0027 package. \n1928158 - Rename \u0027CA Certificate\u0027 link in welcome page to \u0027Engine CA certificate\u0027\n1928188 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1929211 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error \"missing groups or modules: virt:8.4\"\n1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful\n1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured\n1932284 - Engine handled FS freeze is not fast enough for Windows systems\n1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed\n1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2\n1943267 - Snapshot creation is failing for VM having vGPU. \n1944723 - [RFE] Support virtual machines with 16TB memory\n1948577 - [welcome page] remove \"Infrastructure Migration\" section (obsoleted)\n1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule\n1949547 - rhv-log-collector-analyzer report contains \u0027b characters\n1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6\n1950466 - Host installation failed\n1954401 - HP VMs pinning is wiped after edit-\u003eok and pinned to first physical CPUs.  
Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update\nAdvisory ID:       RHSA-2021:2438-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:2438\nIssue date:        2021-07-27\nCVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 \n                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 \n                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 \n                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 \n                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 \n                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 \n                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 \n                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 \n                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 \n                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 \n                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 \n                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 \n                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 \n                   CVE-2021-33909 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.8.2 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.8. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.2. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2437\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nSecurity Fix(es):\n\n* SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n(CVE-2016-2183)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)\n\n* etcd: DoS in wal/wal.go (CVE-2020-15112)\n\n* etcd: directories created via os.MkdirAll are not checked for permissions\n(CVE-2020-15113)\n\n* etcd: gateway can include itself as an endpoint resulting in resource\nexhaustion and leads to DoS (CVE-2020-15114)\n\n* etcd: no authentication is performed against endpoints provided in the\n- --endpoints flag (CVE-2020-15136)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* 
nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* golang: crypto/elliptic: incorrect operations on the P-224 curve\n(CVE-2021-3114)\n\n* containernetworking-cni: Arbitrary path injection via type field in CNI\nconfiguration (CVE-2021-20206)\n\n* containers/storage: DoS via malicious image (CVE-2021-20291)\n\n* prometheus: open redirect under the /new endpoint (CVE-2021-29622)\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\n* go.elastic.co/apm: leaks sensitive HTTP headers during panic\n(CVE-2021-22133)\n\nSpace precludes listing in detail the following additional CVEs fixes:\n(CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382),\n(CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and\n(CVE-2021-23368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nAdditional Changes:\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-x86_64\n\nThe image digest is\nsha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-s390x\n\nThe image digest is\nsha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le\n\nThe image digest is\nsha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n1725981 - oc explain does not work well with full resource.group names\n1747270 - [osp] Machine with name \"\u003ccluster-id\u003e-worker\"couldn\u0027t join the cluster\n1772993 - rbd block devices attached to a host are visible in unprivileged container pods\n1786273 - [4.6] KAS pod logs show \"error building openapi models ... has invalid property: anyOf\" for CRDs\n1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts\n1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header\n1812212 - ArgoCD example application cannot be downloaded from github\n1817954 - [ovirt] Workers nodes are not numbered sequentially\n1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole\n1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with \"Unable to connect to the server\"\n1825417 - The containerruntimecontroller doesn\u0027t roll back to CR-1 if we delete CR-2\n1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades\n1835264 - Intree provisioner doesn\u0027t respect PVC.spec.dataSource sometimes\n1839101 - Some sidebar links in developer perspective don\u0027t follow same project\n1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes\n1846875 - Network setup test high failure rate\n1848151 - Console continues to poll the ClusterVersion resource when the user doesn\u0027t have authority\n1850060 - After upgrading to 3.11.219 timeouts are appearing. 
\n1852637 - Kubelet sets incorrect image names in node status images section\n1852743 - Node list CPU column only show usage\n1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values\n1857008 - [Edge] [BareMetal] Not provided STATE value for machines\n1857477 - Bad helptext for storagecluster creation\n1859382 - check-endpoints panics on graceful shutdown\n1862084 - Inconsistency of time formats in the OpenShift web-console\n1864116 - Cloud credential operator scrolls warnings about unsupported platform\n1866222 - Should output all options when runing `operator-sdk init --help`\n1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard\n1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert\n1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions\n1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host\n1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions\n1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go\n1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS\n1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag\n1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method\n1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics\n1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly\n1872659 - ClusterAutoscaler doesn\u0027t scale down when a node is not needed anymore\n1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack\n1873649 - proxy.config.openshift.io should validate user inputs\n1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store 
credentials\n1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable\n1876918 - scheduler test leaves taint behind\n1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1\n1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable\n1878685 - Ingress resource with \"Passthrough\"  annotation does not get applied when using the newer \"networking.k8s.io/v1\" API\n1879077 - Nodes tainted after configuring additional host iface\n1879140 - console auth errors not understandable by customers\n1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens\n1879184 - CVO must detect or log resource hotloops\n1879495 - [4.6] namespace \\\u201copenshift-user-workload-monitoring\\\u201d does not exist\u201d\n1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string\n1879944 - [OCP 4.8] Slow PV creation with vsphere\n1880757 - AWS: master not removed from LB/target group when machine deleted\n1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)\n1881210 - nodePort for router-default metrics with NodePortService does not exist\n1881481 - CVO hotloops on some service manifests\n1881484 - CVO hotloops on deployment manifests\n1881514 - CVO hotloops on imagestreams from cluster-samples-operator\n1881520 - CVO hotloops on (some) clusterrolebindings\n1881522 - CVO hotloops on clusterserviceversions packageserver\n1881662 - Error getting volume limit for plugin kubernetes.io/\u003cname\u003e in kubelet logs\n1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io\n1881938 - migrator deployment doesn\u0027t tolerate masters\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1883587 - No option for user to select volumeMode\n1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete 
machine\n1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster\n1884800 - Failed to set up mount unit: Invalid argument\n1885186 - Removing ssh keys MC does not remove the key from authorized_keys\n1885349 - [IPI Baremetal] Proxy Information Not passed to metal3\n1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses\n1886572 - auth: error contacting auth provider when extra ingress (not default)  goes down\n1887849 - When creating new storage class failure_domain is missing. \n1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs\n1889689 - AggregatedAPIErrors alert may never fire\n1890678 - Cypress:  Fix \u0027structure\u0027 accesibility violations\n1890828 - Intermittent prune job failures causing operator degradation\n1891124 - CP Conformance: CRD spec and status failures\n1891301 - Deleting bmh  by \"oc delete bmh\u0027 get stuck\n1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass\n1891766 - [LSO] Min-Max filter\u0027s from OCS wizard accepts Negative values and that cause PV not getting created\n1892642 - oauth-server password metrics do not appear in UI after initial OCP installation\n1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version\n1893850 - Add an alert for requests rejected by the apiserver\n1893999 - can\u0027t login ocp cluster with oc 4.7 client without the username\n1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion\n1895053 - Allow builds to optionally mount in cluster trust stores\n1896226 - recycler-pod template should not be in kubelet static manifests directory\n1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1896751 - [RHV IPI] Worker nodes stuck in the Provisioning 
Stage if the machineset has a long name\n1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install\n1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout\n1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1899057 - fix spurious br-ex MAC address error log\n1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay\n1899587 - [External] RGW usage metrics shown on Object Service Dashboard  is incorrect\n1900454 - Enable host-based disk encryption on Azure platform\n1900819 - Scaled ingress replicas following sharded pattern don\u0027t balance evenly across multi-AZ\n1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed\n1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API\n1901648 - \"do you need to set up custom dns\" tooltip inaccurate\n1902003 - Jobs Completions column is not sorting when there are \"0 of 1\" and \"1 of 1\" in the list. \n1902076 - image registry operator should monitor status of its routes\n1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs\n1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given\n1903228 - Pod stuck in Terminating, runc init process frozen\n1903383 - Latest RHCOS 47.83. 
builds failing to install: mount /root.squashfs failed
1903553 - systemd container renders node NotReady after deleting it
1903700 - metal3 Deployment doesn't have unique Pod selector
1904006 - The --dir option doest not work for command `oc image extract`
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the `skips` field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - `oc adm catalog mirror` doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use `oc new-app --name=testapp2 -i ` with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in "Insights" popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the `resources` section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn't support "csi.storage.k8s.io/fsTyps" parameter
1932135 - When "iopsPerGB" parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When "iopsPerGB" parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can't find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: "\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard `Default /Kubernetes / Compute Resources / Namespace (Workloads)`
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can't finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904
- Wrong output YAML when syncing groups without --confirm\n1936983 - Topology view - vm details screen isntt stop loading\n1937005 - when kuryr quotas are unlimited, we should not sent alerts\n1937018 - FilterToolbar component does not handle \u0027null\u0027 value for \u0027rowFilters\u0027 prop\n1937020 - Release new from image stream chooses incorrect ID based on status\n1937077 - Blank White page on Topology\n1937102 - Pod Containers Page Not Translated\n1937122 - CAPBM changes to support flexible reboot modes\n1937145 - [Local storage] PV provisioned by localvolumeset stays in \"Released\" status after the pod/pvc deleted\n1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes\n1937244 - [Local Storage] The model name of aws EBS doesn\u0027t be extracted well\n1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes\n1937452 - cluster-network-operator CI linting fails in master branch\n1937459 - Wrong Subnet retrieved for Service without Selector\n1937460 - [CI] Network quota pre-flight checks are failing the installation\n1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster\n1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation\n1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint\n1937535 - Not all image pulls within OpenShift builds retry\n1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes\n1937627 - Bump DEFAULT_DOC_URL for 4.8\n1937628 - Bump upgrade channels for 4.8\n1937658 - Description for storage class encryption during storagecluster creation needs to be updated\n1937666 - Mouseover on headline\n1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage\n1937693 - ironic image \"/\" cluttered with files\n1937694 - [oVirt] split 
ovirt providerIDReconciler logic into NodeController and ProviderIDController\n1937717 - If browser default font size is 20, the layout of template screen breaks\n1937722 - OCP 4.8 vuln due to BZ 1936445\n1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator\n1937941 - [RFE]fix wording for favorite templates\n1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations\n1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1938321 - Cannot view PackageManifest objects in YAML on \u0027Home \u003e Search\u0027 page nor \u0027CatalogSource details \u003e Operators tab\u0027\n1938465 - thanos-querier should set a CPU request on the thanos-query container\n1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container\n1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them\n1938468 - kube-scheduler-operator has a container without a CPU request\n1938492 - Marketplace extract container does not request CPU or memory\n1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not\n1938636 - Can\u0027t set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller\n1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph\n1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%\n1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances\n1938949 - [VPA] Updater failed to trigger evictions due to \"vpa-admission-controller\" not found\n1939054 - machine healthcheck kills aws spot instance before generated\n1939060 - CNO: nodes and masters are upgrading simultaneously\n1939069 - Add source to vm template silently failed when no storage class is defined in the cluster\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1939168 - 
Builds failing for OCP 3.11 since PR#25 was merged\n1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz\n1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez\n1939232 - CI tests using openshift/hello-world broken by Ruby Version Update\n1939270 - fix co upgradeableFalse status and reason\n1939294 - OLM may not delete pods with grace period zero (force delete)\n1939412 - missed labels for thanos-ruler pods\n1939485 - CVE-2021-20291 containers/storage: DoS via malicious image\n1939547 - Include container=\"POD\" in resource queries\n1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0\n1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated\n1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs\n1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent\n1939661 - support new AWS region ap-northeast-3\n1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution\n1939731 - Image registry operator reports unavailable during normal serial run\n1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters\n1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase\n1939752 - ovnkube-master sbdb container does not set requests on cpu or memory\n1939753 - Delete HCO is stucking if there is still VM in the cluster\n1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page\n1939853 - [DOC] Creating manifests API should not allow folder in the \"file_name\"\n1939865 - GCP PD CSI driver does not have CSIDriver instance\n1939869 - [e2e][automation] Add annotations to datavolume for HPP\n1939873 - Unlimited number of characters accepted for base domain name\n1939943 - 
`cluster-kube-apiserver-operator check-endpoints` observed a panic: runtime error: invalid memory address or nil pointer dereference\n1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration\n1940057 - Openshift builds should use a wach instead of polling when checking for pod status\n1940142 - 4.6-\u003e4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying\n1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network\n1940206 - Selector and VolumeTableRows not i18ned\n1940207 - 4.7-\u003e4.6 rollbacks stuck on prometheusrules admission webhook \"no route to host\"\n1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)\n1940318 - No data under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027\n1940322 - Split of dashbard  is wrong, many Network parts\n1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn\u0027t have flavors needed for compute machines\n1940361 - [e2e][automation] Fix vm action tests with storageclass HPP\n1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters\n1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys\n1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages\n1940499 - hybrid-overlay not logging properly before exiting due to an error\n1940518 - Components in bare metal components lack resource requests\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned\n1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info\n1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list\n1940876 - 
Components in ovirt components lack resource requests\n1940889 - Installation failures in OpenStack release jobs\n1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io\n1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP\n1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster\n1940950 - vsphere: client/bootstrap CSR double create\n1940972 - vsphere: [4.6] CSR approval delayed for unknown reason\n1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16. \n1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy\n1941342 - Add `kata-osbuilder-generate.service` as part of the default presets\n1941456 - Multiple pods stuck in ContainerCreating status with the message \"failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user\" being seen in the journal log\n1941526 - controller-manager-operator: Observed a panic: nil pointer dereference\n1941592 - HAProxyDown not Firing\n1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp\n1941625 - Developer -\u003e Topology - i18n misses\n1941635 - Developer -\u003e Monitoring - i18n misses\n1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid\n1941645 - Developer -\u003e Builds - i18n misses\n1941655 - Developer -\u003e Pipelines - i18n misses\n1941667 - Developer -\u003e Project - i18n misses\n1941669 - Developer -\u003e ConfigMaps - i18n misses\n1941759 - Errored pre-flight checks should not prevent install\n1941798 - Some details pages don\u0027t have internationalized ResourceKind labels\n1941801 - Many filter toolbar dropdowns haven\u0027t been internationalized\n1941815 - From the web console the terminal can no longer connect after using leaving and 
returning to the terminal view\n1941859 - [assisted operator] assisted pod deploy first time in error state\n1941901 - Toleration merge logic does not account for multiple entries with the same key\n1941915 - No validation against template name in boot source customization\n1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description\n1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8\n1941990 - Pipeline metrics endpoint changed in osp-1.4\n1941995 - fix backwards incompatible trigger api changes in osp1.4\n1942086 - Administrator -\u003e Home - i18n misses\n1942117 - Administrator -\u003e Workloads - i18n misses\n1942125 - Administrator -\u003e Serverless - i18n misses\n1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)\n1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail\n1942271 - Insights operator doesn\u0027t gather pod information from openshift-cluster-version\n1942375 - CRI-O failing with error \"reserving ctr name\"\n1942395 - The status is always \"Updating\" on dc detail page after deployment has failed. 
\n1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied\n1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate\n1942536 - Corrupted image preventing containers from starting\n1942548 - Administrator -\u003e Networking - i18n misses\n1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic\n1942555 - Network policies in ovn-kubernetes don\u0027t support external traffic from router when the endpoint publishing strategy is HostNetwork\n1942557 - Query is reporting \"no datapoint\" when label cluster=\"\" is set but work when the label is removed or when running directly in Prometheus\n1942608 - crictl cannot list the images with an error: error locating item named \"manifest\" for image with ID\n1942614 - Administrator -\u003e Storage - i18n misses\n1942641 - Administrator -\u003e Builds - i18n misses\n1942673 - Administrator -\u003e Pipelines - i18n misses\n1942694 - Resource names with a colon do not display property in the browser window title\n1942715 - Administrator -\u003e User Management - i18n misses\n1942716 - Quay Container Security operator has Medium \u003c-\u003e Low colors reversed\n1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]\n1942736 - Administrator -\u003e Administration - i18n misses\n1942749 - Install Operator form should use info icon for popovers\n1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls\n1942839 - Windows VMs fail to start on air-gapped environments\n1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set\n1942858 - [RFE]Confusing detach volume UX\n1942883 - AWS EBS CSI driver does not support partitions\n1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy\n1942935 - must-gather improvements\n1943145 - vsphere: 
client/bootstrap CSR double create\n1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked\n1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest\n1943238 - The conditions table does not occupy 100% of the width. \n1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane\n1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB. \n1943315 - avoid workload disruption for ICSP changes\n1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes\n1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest\n1943356 - Dynamic plugins surfaced in the UI should be referred to as \"Console plugins\"\n1943539 - crio-wipe is failing to start \"Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container\"\n1943543 - DeploymentConfig Rollback doesn\u0027t reset params correctly\n1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement\n1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds\n1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage\n1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn\n1943649 - don\u0027t use hello-openshift for network-check-target\n1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress\n1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions\n1943804 - API server on AWS takes disruption between 70s and 110s after pod 
begins termination via external LB\n1943845 - Router pods should have startup probes configured\n1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors\n1944160 - CNO: nbctl daemon should log reconnection info\n1944180 - OVN-Kube Master does not release election lock on shutdown\n1944246 - Ironic fails to inspect and move node to \"manageable\u0027 but get bmh remains in \"inspecting\"\n1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region\n1944509 - Translatable texts without context in ssh expose component\n1944581 - oc project not works with cluster proxy\n1944587 - VPA could not take actions based on the recommendation when min-replicas=1\n1944590 - The field name \"VolumeSnapshotContent\" is wrong on VolumeSnapshotContent detail page\n1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1944631 - openshif authenticator should not accept non-hashed tokens\n1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with \".. 
still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock\"\n1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures\n1944674 - Project field become to \"All projects\" and disabled in \"Review and create virtual machine\" step in devconsole\n1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods\n1944761 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present\n1944763 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944853 - Update to nodejs \u003e=14.15.4 for ARM\n1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts\n1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation\n1945027 - Button \u0027Copy SSH Command\u0027 does not work\n1945085 - Bring back API data in etcd test\n1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled\n1945103 - \u0027User credentials\u0027 shows even the VM is not running\n1945104 - In k8s 1.21 bump \u0027[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume\u0027 tests are disabled\n1945146 - Remove pipeline Tech preview badge for pipelines GA operator\n1945236 - Bootstrap ignition shim doesn\u0027t follow proxy settings\n1945261 - Operator dependency not consistently chosen from default channel\n1945312 - project deletion does not reset UI project context\n1945326 - console-operator: does not check route health periodically\n1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules\n1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly\n1945443 - operator-lifecycle-manager-packageserver flaps 
Available=False with no reason or message\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1945548 - catalog resource update failed if spec.secrets set to \"\"\n1945584 - Elasticsearch  operator fails to install on 4.8 cluster on ppc64le/s390x\n1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION\n1945630 - Pod log filename no longer in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin\n1945646 - gcp-routes.sh running as initrc_t unnecessarily\n1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret\n1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry\n1945687 - Dockerfile needs updating to new container CI registry\n1945700 - Syncing boot mode after changing device should be restricted to Supermicro\n1945816 - \" Ingresses \" should be kept in English for Chinese\n1945818 - Chinese translation issues: Operator should be the same with English `Operators`\n1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out\n1945910 - [aws] support byo iam roles for instances\n1945948 - SNO: pods can\u0027t reach ingress when the ingress uses a different IPv6. \n1946079 - Virtual master is not getting an IP address\n1946097 - [oVirt] oVirt credentials secret contains unnecessary \"ovirt_cafile\"\n1946119 - panic parsing install-config\n1946243 - No relevant error when pg limit is reached in block pools page\n1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image\n1946320 - Incorrect error message in Deployment Attach Storage Page\n1946449 - [e2e][automation] Fix cloud-init tests as UI changed\n1946458 - Edit Application action overwrites Deployment envFrom values on save\n1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI. 
\n1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default\n1946497 - local-storage-diskmaker pod logs \"DeviceSymlinkExists\" and \"not symlinking, could not get lock: \u003cnil\u003e\"\n1946506 - [on-prem] mDNS plugin no longer needed\n1946513 - honor use specified system reserved with auto node sizing\n1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready\n1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster\n1946607 - etcd readinessProbe is not reflective of actual readiness\n1946705 - Fix issues with \"search\" capability in the Topology Quick Add component\n1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation\n1946788 - Serial tests are broken because of router\n1946790 - Marketplace operator flakes Available=False OperatorStarting during updates\n1946838 - Copied CSVs show up as adopted components\n1946839 - [Azure] While mirroring images to private registry throwing error: invalid character \u0027\u003c\u0027 looking for beginning of value\n1946865 - no \"namespace:kube_pod_container_resource_requests_cpu_cores:sum\" and \"namespace:kube_pod_container_resource_requests_memory_bytes:sum\" metrics\n1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken\n1946922 - Ingress details page doesn\u0027t show referenced secret name and link\n1946929 - the default dns operator\u0027s Progressing status is always True and cluster operator dns Progressing status is False\n1947036 - \"failed to create Matchbox client or connect\" on e2e-metal jobs or metal clusters via cluster-bot\n1947066 - machine-config-operator pod crashes when noProxy is *\n1947067 - [Installer] Pick up upstream fix for installer console output\n1947078 - Incorrect skipped status for conditional tasks in the pipeline run\n1947080 - SNO IPv6 with 
\u0027temporary 60-day domain\u0027 option fails with IPv4 exception\n1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1947164 - Print \"Successfully pushed\" even if the build push fails. \n1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. \n1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)\n1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name\u0027s\n1947360 - [vSphere csi driver operator] operator pod runs as \u201cBestEffort\u201d qosClass\n1947371 - [vSphere csi driver operator] operator doesn\u0027t create \u201ccsidriver\u201d instance\n1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout\n1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot\n1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947663 - disk details are not synced in web-console\n1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin\n1947684 - MCO on SNO sometimes has rendered configs and sometimes does not\n1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals. 
\n1947719 - 8 APIRemovedInNextReleaseInUse info alerts display\n1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods\n1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?\n1947771 - [kube-descheduler]descheduler operator pod should not run as \u201cBestEffort\u201d qosClass\n1947774 - CSI driver operators use \"Always\" imagePullPolicy in some containers\n1947775 - [vSphere csi driver operator] doesn\u2019t use the downstream images from payload. \n1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade\n1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display\n1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert\n1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display\n1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this 
component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display\n1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display\n1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin\n1947828 - `download it` link should save pod log in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel  is changed\n1947917 - Egress Firewall does not reliably apply firewall rules\n1947946 - Operator upgrades can delete existing CSV before completion\n1948011 - openshift-controller-manager constantly reporting type \"Upgradeable\" status Unknown\n1948012 - service-ca constantly reporting type \"Upgradeable\" status Unknown\n1948019 - [4.8] Large number of requests to the infrastructure cinder volume service\n1948022 - Some on-prem namespaces missing from must-gather\n1948040 - cluster-etcd-operator: etcd is using deprecated logger\n1948082 - Monitoring should not set Available=False with no reason on updates\n1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O. 
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields
1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later
1948359 - [aws] shared tag was not removed from user provided IAM role
1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always"
1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing
1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows
1948431 - TechPreviewNoUpgrade does not enable CSI migration
1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node
1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge
1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]
1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes
1948513 - get-resources.sh doesn't honor the no_proxy settings
1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute
1948546 - VM of worker is in error state when a network has port_security_enabled=False
1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand
1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test
1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable"
1948582 - Need ability to specify local gateway mode in CNO config
1948585 - Need a CI jobs to test local gateway mode with bare metal
1948592 - [Cluster Network Operator] Missing Egress Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows `bootstrap.ign was not found`
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere-* images to vsphere-* images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use `oc adm top is` shortcut when asking for `imagestreams`
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command `ccoctl aws create-identity-provider` with `--output-dir` parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (`UtilizationCard`) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] `oc get apirequestcount` shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high cardinality
1955457 - Drop container_memory_failures_total metric because of high cardinality
1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter
1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0
1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used
1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range
1955554 - MAO does not react to events triggered from Validating Webhook Configurations
1955589 - thanos-querier should have a PodDisruptionBudget in HA topology
1955595 - Add DevPreviewLongLifecycle Descheduler profile
1955596 - Pods stuck in creation phase on realtime kernel SNO
1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing
1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error']
1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta
1955749 - OCP branded templates need to be translated
1955761 - packageserver clusteroperator does not set reason or message for Available condition
1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces
1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables
1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable
1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio
1955969 - Workers cannot be deployed attached to multiple networks.
1956079 - Installer gather doesn't collect any networking information
1956208 - Installer should validate root volume type
1956220 - Set htt proxy system properties as expected by kubernetes-client
1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet
1956334 - Event Listener Details page does not show Triggers section
1956353 - test: analyze job consistently fails
1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate
1956405 - Bump k8s dependencies in cluster resource override admission operator
1956411 - Apply custom tags to AWS EBS volumes
1956480 - [4.8] Bootimage bump tracker
1956606 - probes FlowSchema manifest not included in any cluster profile
1956607 - Multiple manifests lack cluster profile annotations
1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup
1956610 - manage-helm-repos manifest lacks cluster profile annotations
1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string
1956650 - The container disk URL is empty for Windows guest tools
1956768 - aws-ebs-csi-driver-controller-metrics TargetDown
1956826 - buildArgs does not work when the value is taken from a secret
1956895 - Fix chatty kubelet log message
1956898 - fix log files being overwritten on container state loss
1956920 - can't open terminal for pods that have more than one container running
1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false
1956978 - Installer gather doesn't include pod names in filename
1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW
1957041 - Update CI e2echart with more node info
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow to add n*3 (or n*4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN]Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pods creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse `rpm-ostree status` during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines doesn't support dual-stack loadbalancers on Azure
1959513
- Cluster-kube-apiserver does not use library-go for audit pkg\n1959519 - Operand details page only renders one status donut no matter how many \u0027podStatuses\u0027 descriptors are used\n1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console\n1959564 - Test verify /run filesystem contents failing\n1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot\n1959650 - Gather SDI-related MachineConfigs\n1959658 - showing a lot \"constructing many client instances from the same exec auth config\"\n1959696 - Deprecate \u0027ConsoleConfigRoute\u0027 struct in console-operator config\n1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO\n1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode\n1959711 - Egressnetworkpolicy  doesn\u0027t work when configure the EgressIP\n1959786 - [dualstack]EgressIP doesn\u0027t work on dualstack cluster for IPv6\n1959916 - Console not works well against a proxy in front of openshift clusters\n1959920 - UEFISecureBoot set not on the right master node\n1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []\n1960035 - iptables is missing from ose-keepalived-ipfailover image\n1960059 - Remove \"Grafana UI\" link from Console Monitoring \u003e Dashboards page\n1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions\n1960129 - [e2e][automation] add smoke tests about VM pages and actions\n1960134 - some origin images are not public\n1960171 - Enable SNO checks for image-registry\n1960176 - CCO should recreate a user for the component when it was removed from the cloud providers\n1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled\n1960255 - fixed obfuscation permissions\n1960257 - breaking changes in pr template\n1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on 
shutdown, policy Cluster has significant performance cost\n1960323 - Address issues raised by coverity security scan\n1960324 - manifests: extra \"spec.version\" in console quickstarts makes CVO hotloop\n1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960339 - manifests: unset \"preemptionPolicy\" makes CVO hotloop\n1960531 - Items under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027 keep added for every access\n1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana\n1960546 - Add virt_platform metric to the collected metrics\n1960554 - Remove rbacv1beta1 handling code\n1960612 - Node disk info in overview/details does not account for second drive where /var is located\n1960619 - Image registry integration tests use old-style OAuth tokens\n1960683 - GlobalConfigPage is constantly requesting resources\n1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces\n1960716 - Missing details for debugging\n1960732 - Outdated manifests directory in CSI driver operator repositories\n1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master\n1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be \"the newest\"\n1960767 - /metrics endpoint of the Grafana UI is accessible without authentication\n1960780 - CI: failed to create PDB \"service-test\" the server could not find the requested resource\n1961064 - Documentation link to network policies is outdated\n1961067 - Improve log gathering logic\n1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs\n1961091 - Gather MachineHealthCheck definitions\n1961120 - CSI driver operators fail when 
upgrading a cluster\n1961173 - recreate existing static pod manifests instead of updating\n1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing\n1961314 - Race condition in operator-registry pull retry unit tests\n1961320 - CatalogSource does not emit any metrics to indicate if it\u0027s ready or not\n1961336 - Devfile sample for BuildConfig is not defined\n1961356 - Update single quotes to double quotes in string\n1961363 - Minor string update for \" No Storage classes found in cluster, adding source is disabled.\"\n1961393 - DetailsPage does not work with group~version~kind\n1961452 - Remove \"Alertmanager UI\" link from Console Monitoring \u003e Alerting page\n1961466 - Some dropdown placeholder text on route creation page is not translated\n1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true\n1961506 - NodePorts do not work on RHEL 7.9 workers (was \"4.7 -\u003e 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers\")\n1961536 - clusterdeployment without pull secret is crashing assisted service pod\n1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961545 - Fixing Documentation Generation\n1961550 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961561 - The encryption controllers send lots of request to an API server\n1961582 - Build failure on s390x\n1961644 - NodeAuthenticator tests are failing in IPv6\n1961656 - driver-toolkit missing some release metadata\n1961675 - Kebab menu of taskrun contains Edit options which should not be present\n1961701 - Enhance gathering of events\n1961717 - Update runtime dependencies to Wallaby builds for bugfixes\n1961829 - 
Quick starts prereqs not shown when description is long\n1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy\n1961878 - Add Sprint 199 translations\n1961897 - Remove history listener before console UI is unmounted\n1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes\n1962062 - Monitoring dashboards should support default values of \"All\"\n1962074 - SNO:the pod get stuck in CreateContainerError and prompt \"failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable\" after adding a performanceprofile\n1962095 - Replace gather-job image without FQDN\n1962153 - VolumeSnapshot routes are ambiguous, too generic\n1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime\n1962219 - NTO relies on unreliable leader-for-life implementation. \n1962256 - use RHEL8 as the vm-example\n1962261 - Monitoring components requesting more memory than they use\n1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster\n1962347 - Cluster does not exist logs after successful installation\n1962392 - After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1962415 - duplicate zone information for in-tree PV after enabling migration\n1962429 - Cannot create windows vm because kubemacpool.io denied the request\n1962525 - [Migration] SDN migration stuck on MCO on RHV cluster\n1962569 - NetworkPolicy details page should also show Egress rules\n1962592 - Worker nodes restarting during OS installation\n1962602 - Cloud credential operator scrolls info \"unable to provide upcoming...\" on unsupported platform\n1962630 - NTO: Ship the current upstream TuneD\n1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root\n1962698 - Console-operator can not create resource console-public 
configmap in the openshift-config-managed namespace\n1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint\n1962740 - Add documentation to Egress Router\n1962850 - [4.8] Bootimage bump tracker\n1962882 - Version pod does not set priorityClassName\n1962905 - Ramdisk ISO source defaulting to \"http\" breaks deployment on a good amount of BMCs\n1963068 - ironic container should not specify the entrypoint\n1963079 - KCM/KS: ability to enforce localhost communication with the API server. \n1963154 - Current BMAC reconcile flow skips Ironic\u0027s deprovision step\n1963159 - Add Sprint 200 translations\n1963204 - Update to 8.4 IPA images\n1963205 - Installer is using old redirector\n1963208 - Translation typos/inconsistencies for Sprint 200 files\n1963209 - Some strings in public.json have errors\n1963211 - Fix grammar issue in kubevirt-plugin.json string\n1963213 - Memsource download script running into API error\n1963219 - ImageStreamTags not internationalized\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table\n1963502 - create template from is not descriptive\n1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too\n1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault\n1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 
\n1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies\n1963871 - cluster-etcd-operator:[build] upgrade to go 1.16\n1963896 - The VM disks table does not show easy links to PVCs\n1963912 - \"[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}\" failures on vsphere\n1963932 - Installation failures in bootstrap in OpenStack release jobs\n1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail\n1964059 - rebase openshift/sdn to kube 1.21.1\n1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to \"Unknown provider baremetal\"\n1964243 - The `oc compliance fetch-raw` doesn\u2019t work for disconnected cluster\n1964270 - Failed to install \u0027cluster-kube-descheduler-operator\u0027 with error: \"clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\\\": must be no more than 63 characters\"\n1964319 - Network policy \"deny all\" interpreted as \"allow all\" in description page\n1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured\n1964472 - Make project and namespace requirements more visible rather than giving me an error after submission\n1964486 - Bulk adding of CIDR IPS to whitelist is not working\n1964492 - Pick 102171: Implement support for watch initialization in P\u0026F\n1964625 - NETID duplicate check is only required in NetworkPolicy Mode\n1964748 - Sync upstream 1.7.2 downstream\n1964756 - PVC status is always in \u0027Bound\u0027 status when it is actually cloning\n1964847 - Sanity check test suite missing from the repo\n1964888 - opoenshift-apiserver imagestreamimports depend on \u003e34s timeout support, WAS: transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n1964936 - error log for \"oc adm catalog mirror\" is not correct\n1964979 - Add mapping from ACI to infraenv to handle creation order issues\n1964997 - Helm Library charts are showing and can be installed from Catalog\n1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots\n1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation\n1965283 - 4.7-\u003e4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:\n1965330 - oc image extract fails due to security capabilities on files\n1965334 - opm index add fails during image extraction\n1965367 - Typo in in etcd-metric-serving-ca resource name\n1965370 - \"Route\" is not translated in Korean or Chinese\n1965391 - When storage class is already present wizard do not jumps to \"Stoarge and nodes\"\n1965422 - runc is missing Provides oci-runtime in rpm spec\n1965522 - [v2v] Multiple typos on VM Import screen\n1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists\n1965909 - Replace \"Enable Taint Nodes\" by \"Mark nodes as dedicated\"\n1965921 - [oVirt] High performance VMs shouldn\u0027t be created with Existing policy\n1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request\n1966077 - `hidden` descriptor is visible in the Operator instance details page`\n1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11\n1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality\n1966138 - (release-4.8) Update K8s \u0026 OpenShift API versions\n1966156 - Issue with Internal Registry CA on the service pod\n1966174 - No storage class is installed, OCS and CNV installations fail\n1966268 - Workaround for Network Manager not supporting nmconnections priority\n1966401 - Revamp Ceph Table in 
Install Wizard flow\n1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert\n1966416 - (release-4.8) Do not exceed the data size limit\n1966459 - \u0027policy/v1beta1 PodDisruptionBudget\u0027 and \u0027batch/v1beta1 CronJob\u0027 appear in image-registry-operator log\n1966487 - IP address in Pods list table are showing node IP other than pod IP\n1966520 - Add button from ocs add capacity should not be enabled if there are no PV\u0027s\n1966523 - (release-4.8) Gather MachineAutoScaler definitions\n1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed\n1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug\n1966602 - don\u0027t require manually setting IPv6DualStack feature gate in 4.8\n1966620 - The bundle.Dockerfile in the repo is obsolete\n1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1966654 - Alertmanager PDB is not created, but Prometheus UWM is\n1966672 - Add Sprint 201 translations\n1966675 - Admin console string updates\n1966677 - Change comma to semicolon\n1966683 - Translation bugs from Sprint 201 files\n1966684 - Verify \"Creating snapshot for claim \u003c1\u003e{pvcName}\u003c/1\u003e\" displays correctly\n1966697 - Garbage collector logs every interval - move to debug level\n1966717 - include full timestamps in the logs\n1966759 - Enable downstream plugin for Operator SDK\n1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966813 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1\n1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkub[e\"\n1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings \"ipv6.dhcp-duid=ll\" missing from dual stack 
install\n1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image\n1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored\n1967197 - 404 errors loading some i18n namespaces\n1967207 - Getting started card: console customization resources link shows other resources\n1967208 - Getting started card should use semver library for parsing the version instead of string manipulation\n1967234 - Console is continuously polling for ConsoleLink acm-link\n1967275 - Awkward wrapping in getting started dashboard card\n1967276 - Help menu tooltip overlays dropdown\n1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check\n1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit\n1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests\n1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small\n1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit\n1967595 - Fixes the remaining lint issues\n1967614 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn\u0027t work if ovirt-config.yaml doesn\u0027t exist and user should fill the FQDN URL\n1967625 - Add OpenShift Dockerfile for cloud-provider-aws\n1967631 - [4.8.0] Cluster install failed due to timeout while \"Waiting for control plane\"\n1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting 
for bootkube\"\n1967639 - Console whitescreens if user preferences fail to load\n1967662 - machine-api-operator should not use deprecated \"platform\" field in infrastructures.config.openshift.io\n1967667 - Add Sprint 202 Round 1 translations\n1967713 - Insights widget shows invalid link to the OCM\n1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming\n1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than \"NoExecute\"\n1967803 - should update to 7.5.5 for grafana resources version label\n1967832 - Add more tests for periodic.go\n1967833 - Add tasks pool to tasks_processing\n1967842 - Production logs are spammed on \"OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS\"\n1967843 - Fix null reference to messagesToSearch in gather_logs.go\n1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring\n1967933 - Network-Tools debug scripts not working as expected\n1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: \"mkdir: cannot create directory \u0027/var/lib/pgsql/data/userdata\u0027: Permission denied\"\n1968019 - drain timeout and pool degrading period is too short\n1968067 - [master] Agent validation not including reason for being insufficient\n1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed\n1968175 - [4.8.0] Agent validation not including reason for being insufficient\n1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration\n1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn\u0027t be required\n1968435 - [4.8.0] Unclear message in case of missing clusterImageSet\n1968436 - Listeners timeout updated to remain using default value\n1968449 - [4.8.0] Wrong Install-config override documentation\n1968451 - [4.8.0] Garbage collector not cleaning up directories of removed 
clusters\n1968452 - [4.8.0] [doc] \"Mirror Registry Configuration\" doc section needs clarification of functionality and limitations\n1968454 - [4.8.0] backend events generated with wrong namespace for agent\n1968455 - [4.8.0] Assisted Service operator\u0027s controllers are starting before the base service is ready\n1968515 - oc should set user-agent when talking with registry\n1968531 - Sync upstream 1.8.0 downstream\n1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn\u0027t clean up properly\n1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted\n1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox\n1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil\n1968701 - Bare metal IPI installation is failed due to worker inspection failure\n1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed\n1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning\n1969284 - Console Query Browser: Can\u0027t reset zoom to fixed time range after dragging to zoom\n1969315 - [4.8.0] BMAC doesn\u0027t check if ISO Url changed before queuing BMH for reconcile\n1969352 - [4.8.0] Creating BareMetalHost without the \"inspect.metal3.io\" does not automatically add it\n1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
\n1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it\n1969386 - Filesystem\u0027s Utilization doesn\u0027t show in VM overview tab\n1969397 - OVN bug causing subports to stay DOWN fails installations\n1969470 - [4.8.0] Misleading error in case of install-config override bad input\n1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step\n1969525 - Replace golint with revive\n1969535 - Topology edit icon does not link correctly when branch name contains slash\n1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it\n1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long\n1969561 - Test \"an end user can use OLM can subscribe to the operator\" generates deprecation alert\n1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire\n1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io\n1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1\n1969626 - Portfoward stream cleanup can cause kubelet to panic\n1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out\n1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check\n1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso\n1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups\n1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml\n1969784 - WebTerminal widget should send resize events\n1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails\n1969891 - Fix rotated pipelinerun status icon issue in safari\n1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse\n1969903 - Provisioning a large number 
of hosts results in an unexpected delay in hosts becoming available\n1969951 - Cluster local doesn\u0027t work for knative services created from dev console\n1969969 - ironic-rhcos-downloader container uses and old base image\n1970062 - ccoctl does not work with STS authentication\n1970068 - ovnkube-master logs \"Failed to find node ips for gateway\" error\n1970126 - [4.8.0] Disable \"metrics-events\" when deploying using the operator\n1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change\n1970262 - [4.8.0] Remove Agent CRD Status fields not needed\n1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs\n1970269 - [4.8.0] missing role in agent CRD\n1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs\n1970381 - Monitoring dashboards: Custom time range inputs should retain their values\n1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed\n1970401 - [4.8.0] AgentLabelSelector is required yet not supported\n1970415 - SR-IOV Docs needs documentation for disabling port security on a network\n1970470 - Add pipeline annotation to Secrets which are created for a private repo\n1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod\n1970624 - 4.7-\u003e4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io\n1970828 - \"500 Internal Error\" for all openshift-monitoring routes\n1970975 - 4.7 -\u003e 4.8 upgrades on AWS take longer than expected\n1971068 - Removing invalid AWS instances from the CF templates\n1971080 - 4.7-\u003e4.8 CI: KubePodNotReady due to MCD\u0027s 5m sleep between drain attempts\n1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !\n1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces\n1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing 
\"Validated\" condition about VIP not matching machine network\n1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn\u0027t work - clusteroperator/kube-apiserver is not upgradeable\n1971589 - [4.8.0] Telemetry-client won\u0027t report metrics in case the cluster was installed using the assisted operator\n1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service\n1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery\n1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)\n1971739 - Keep /boot RW when kdump is enabled\n1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly\n1972128 - ironic-static-ip-manager container still uses 4.7 base image\n1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are\n1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster\n1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1972262 - [4.8.0] \"baremetalhost.metal3.io/detached\" uses boolean value where string is expected\n1972426 - Adopt failure can trigger deprovisioning\n1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage\n1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration\n1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall\n1972565 - performance issues due to lost node, pods taking too long to relaunch\n1972662 - DPDK KNI modules need some additional tools\n1972676 - Requirements for authenticating kernel modules with X.509\n1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings\n1972690 - [4.8.0] infra-env condition message isn\u0027t informative in case of 
missing pull secret\n1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration\n1972768 - kube-apiserver setup fail while installing SNO due to port being used\n1972864 - New `local-with-fallback` service annotation does not preserve source IP\n1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8\n1973117 - No storage class is installed, OCS and CNV installations fail\n1973233 - remove kubevirt images and references\n1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. \n1973428 - Placeholder bug for OCP 4.8.0 image release\n1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped\n1973672 - fix ovn-kubernetes NetworkPolicy 4.7-\u003e4.8 upgrade issue\n1973995 - [Feature:IPv6DualStack] tests are failing in dualstack\n1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings\n1974447 - Requirements for nvidia GPU driver container for driver toolkit\n1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. \n1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel\n1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion\n1974746 - [4.8.0] File system usage not being logged appropriately\n1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
\n1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster\n1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string\n1974850 - [4.8] coreos-installer failing Execshield\n1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift\n1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing\n1975155 - Kubernetes service IP cannot be accessed for rhel worker\n1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types\n1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData\n1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified\n1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve\n1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn\n1975672 - [4.8.0] Production logs are spammed on \"Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient\"\n1975789 - worker nodes rebooted when we simulate a case where the api-server is down\n1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]\n1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn\u0027t work -  ingresscontroller \"default\" is degraded\n1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts\n1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO\n1977233 - [4.8] Unable to 
authenticate against IDP after upgrade to 4.8-rc.1\n1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO\n1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller\n1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes\n1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses\n1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8\n1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod\n1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used\n1980788 - NTO-shipped stalld can segfault\n1981633 - enhance service-ca injection\n1982250 - Performance Addon Operator fails to install after catalog source becomes ready\n1982252 - olm Operator is in CrashLoopBackOff state with error \"couldn\u0027t cleanup cross-namespace ownerreferences\"\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-2183\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-15106\nhttps://access.redhat.com/security/cve/CVE-2020-15112\nhttps://access.redhat.com/security/cve/CVE-2020-15113\nhttps://access.redhat.com/security/cve/CVE-2020-15114\nhttps://access.redhat.com/security/cve/CVE-2020-15136\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-26541\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2021-3114\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3636\nhttps://access.redhat.com/security/cve/CVE-2021-20206\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20291\nhttps://access.redhat.com/security/cve/CVE-2021-21419\nhttps://access.redhat.com/security/cve/CVE-2021-21623\nhttps://access.redhat.com/security/cve/CVE-2021-21639\nhttps://access.redhat.com/security/cve/CVE-2021-21640\nhttps://access.redhat.com/security/cve/CVE-2021-21648\nhttps://access.redhat.com/security/cve/CVE-2021-22133\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-25735\nhttps://access.redhat.com/security/cve/CVE-2021-25737\nhttps://access.r
edhat.com/security/cve/CVE-2021-26539\nhttps://access.redhat.com/security/cve/CVE-2021-26540\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-29059\nhttps://access.redhat.com/security/cve/CVE-2021-29622\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33194\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ\nd4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26\nZ7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs\nImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h\ngep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8\nTWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX\nX//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb\n1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA\nmtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/\n0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6\ndpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx\n+bLvJlpGiMg=\n=NJcM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 
nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - 
CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. VDSM manages and monitors the host\u0027s storage, memory and\nnetworks as well as virtual machine creation, other host administration\ntasks, statistics gathering, and log collection. \n\nBug Fix(es):\n\n* An update in libvirt has changed the way block threshold events are\nsubmitted. \nAs a result, the VDSM was confused by the libvirt event, and tried to look\nup a drive, logging a warning about a missing drive. \nIn this release, the VDSM has been adapted to handle the new libvirt\nbehavior, and does not log warnings about missing drives. (BZ#1948177)\n\n* Previously, when a virtual machine was powered off on the source host of\na live migration and the migration finished successfully at the same time,\nthe two events  interfered with each other, and sometimes prevented\nmigration cleanup resulting in additional migrations from the host being\nblocked. \nIn this release, additional migrations are not blocked. (BZ#1959436)\n\n* Previously, when failing to execute a snapshot and re-executing it later,\nthe second try would fail due to using the previous execution data. In this\nrelease, this data will be used only when needed, in recovery mode. \n(BZ#1984209)\n\n4. Then engine deletes the volume and causes data corruption. \n1998017 - Keep cinbderlib dependencies optional for 4.4.8\n\n6. 
\n\nBug Fix(es):\n\n* Documentation is referencing deprecated API for Service Export -\nSubmariner (BZ#1936528)\n\n* Importing of cluster fails due to error/typo in generated command\n(BZ#1936642)\n\n* RHACM 2.2.2 images (BZ#1938215)\n\n* 2.2 clusterlifecycle fails to allow provision `fips: true` clusters on\naws, vsphere (BZ#1941778)\n\n3. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-23337",
        "trust": 4.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.4
      },
      {
        "db": "PACKETSTORM",
        "id": "162901",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162151",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "163690",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164090",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1225",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1871",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5790",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3036",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2232",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2182",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2555",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2657",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4568",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2555",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5150",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072040",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021062703",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021051230",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022012753",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022011901",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022052615",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021090922",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-381798",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163276",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "163747",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168352",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "id": "VAR-202102-1466",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      }
    ],
    "trust": 0.30766129
  },
  "last_update_date": "2024-11-23T20:59:37.424000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "NTAP-20210312-0006",
        "trust": 0.8,
        "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
      },
      {
        "title": "IBM: Security Bulletin: IBM App Connect Enterprise Certified Container may be vulnerable to a command injection vulnerability (CVE-2021-23337)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=a6ab32faf6383cb0cedc0fcc02621330"
      },
      {
        "title": "Debian CVElist Bug Report Logs: CVE-2021-23337 CVE-2020-28500",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=705b23b69122ed473c796891371a9f52"
      },
      {
        "title": "IBM: Security Bulletin: A security vulnerability in Node.js lodash module affects IBM Cloud Pak for Multicloud Management Managed Service",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=be717afa91143ef04a4f0fde16d094de"
      },
      {
        "title": "IBM: Security Bulletin: IBM Watson OpenScale on Cloud Pak for Data is impacted by Vulnerabilities in Node.js",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3a6796f7c08575af6f64adb2d3b31adb"
      },
      {
        "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429"
      },
      {
        "title": "blank",
        "trust": 0.1,
        "url": "https://github.com/cduplantis/blank "
      },
      {
        "title": "Example.EWA.TypeScript.WebApplication",
        "trust": 0.1,
        "url": "https://github.com/Refinitiv-API-Samples/Example.EWA.TypeScript.WebApplication "
      },
      {
        "title": "loginServer",
        "trust": 0.1,
        "url": "https://github.com/DID-Create-Board/loginServer "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-94",
        "trust": 1.1
      },
      {
        "problemtype": "Command injection (CWE-77) [NVD evaluation ]",
        "trust": 0.8
      },
      {
        "problemtype": "CWE-77",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.3,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 1.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23337"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/lodash/lodash/blob/ddfd9b11a0126db2302cb70ec9973b66baec0975/lodash.js%23l14851"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgfujionwebjars-1074932"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjars-1074930"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbower-1074928"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithublodash-1074931"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1074929"
      },
      {
        "trust": 1.7,
        "url": "https://snyk.io/vuln/snyk-js-lodash-1040724"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-discovery-for-ibm-cloud-pak-for-data-affected-by-vulnerability-in-node-js-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2657"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1225"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162901/red-hat-security-advisory-2021-2179-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-insights-is-affected-by-multiple-vulnerabilities-5/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-integration-bus-ibm-app-connect-enterprise-v11-are-affected-by-vulnerabilities-in-node-js-cve-2021-23337/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-potential-vulnerability-with-node-js-lodash-module-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022012753"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164090/red-hat-security-advisory-2021-3459-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6494365"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1871"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6493751"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022011901"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3036"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021090922"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-pak-for-multicloud-management-managed-service-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6486333"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6524656"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162151/red-hat-security-advisory-2021-1168-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072040"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021062703"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021051230"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-cloud-pak-for-integration-is-vulnerable-to-node-js-lodash-vulnerability-cve-2021-23337/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-openscale-on-cloud-pak-for-data-is-impacted-by-vulnerabilities-in-node-js/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2232"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163690/red-hat-security-advisory-2021-2438-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5150"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2182"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5790"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-app-connect-enterprise-certified-container-may-be-vulnerable-to-a-command-injection-vulnerability-cve-2021-23337/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33034"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-28092"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23368"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27292"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23382"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21321"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2179"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25737"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26540"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26539"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29059"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2438"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2437"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29622"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29418"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23364"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21309"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25217"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3016"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3377"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21272"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29477"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29478"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23839"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3459"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:1168"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29154"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0686"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32208"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6429"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0512"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "date": "2021-04-05T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "date": "2021-06-24T17:54:53",
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "date": "2021-06-01T15:17:45",
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "date": "2021-07-28T14:53:49",
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "date": "2021-08-06T14:02:37",
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "date": "2021-09-09T13:33:33",
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "date": "2021-04-13T15:38:30",
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "date": "2022-09-13T15:42:14",
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      },
      {
        "date": "2021-02-15T13:15:12.560000",
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULHUB",
        "id": "VHN-381798"
      },
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-23337"
      },
      {
        "date": "2022-09-20T06:02:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      },
      {
        "date": "2022-11-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      },
      {
        "date": "2024-11-21T05:51:31.643000",
        "db": "NVD",
        "id": "CVE-2021-23337"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash\u00a0 Command injection vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-001309"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "code injection",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1137"
      }
    ],
    "trust": 0.6
  }
}

var-202102-1492
Vulnerability from variot

Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. An unspecified vulnerability exists in Lodash that may result in a denial-of-service (DoS) condition. lodash is an open-source JavaScript utility library. Watch CNNVD or vendor announcements for updates. Description:

The ovirt-engine package provides the manager for virtualization environments. This manager enables admins to define hosts and networks, as well as to add storage, create VMs and manage user permissions.

Bug Fix(es):

  • This release adds the queue attribute to the virtio-scsi driver in the virtual machine configuration. This improvement enables multi-queue performance with the virtio-scsi driver. (BZ#911394)

  • With this release, source-load-balancing has been added as a new sub-option for xmit_hash_policy. It can be configured for bond modes balance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying xmit_hash_policy=vlan+srcmac. (BZ#1683987)

  • The default DataCenter/Cluster will be set to compatibility level 4.6 on new installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)

  • With this release, support has been added for copying disks between regular Storage Domains and Managed Block Storage Domains. It is now possible to migrate disks between Managed Block Storage Domains and regular Storage Domains. (BZ#1906074)

  • Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was set by default to false and was supposed to be used in cluster compatibility levels below 4.4. The value was set for the general version. With this release, each cluster level has its own value, defaulting to false for 4.4 and above. This reduces unnecessary overhead by removing timeouts of the file system freeze command. (BZ#1932284)

  • With this release, running virtual machines is supported for up to 16TB of RAM on x86_64 architectures. (BZ#1944723)

  • This release adds the gathering of oVirt/RHV related certificates to allow easier debugging of issues for faster customer help and issue resolution. Information from certificates is now included as part of the sosreport. Note that no corresponding private key information is gathered, due to security considerations. (BZ#1845877)
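The xmit_hash_policy note above (vlan+srcmac for bond modes 2, 4 and 5) can be sketched as an ifcfg-style bonding option; this is a configuration fragment, and the interface naming is an assumption, not from the advisory:

```
# Assumed bond device; applies the new source-load-balancing hash policy
BONDING_OPTS="mode=balance-xor xmit_hash_policy=vlan+srcmac"
```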

  • Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/2974891

  1. Bugs fixed (https://bugzilla.redhat.com/):

1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine 1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors 1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain 1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine 1717411 - improve engine logging when migration fail 1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs 1775145 - Incorrect message from hot-plugging memory 1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. 1845877 - [RFE] Collect information about RHV PKI 1875363 - engine-setup failing on FIPS enabled rhel8 machine 1906074 - [RFE] Support disks copy between regular and managed block storage domains 1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration 1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning 1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. 1919984 - engine-setup failse to deploy the grafana service in an external DWH server 1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal 1926018 - Failed to run VM after FIPS mode is enabled 1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing 'rsyslog-gnutls' package. 
1928158 - Rename 'CA Certificate' link in welcome page to 'Engine CA certificate' 1928188 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX" 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1929211 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX" 1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error "missing groups or modules: virt:8.4" 1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful 1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured 1932284 - Engine handled FS freeze is not fast enough for Windows systems 1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed 1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2 1943267 - Snapshot creation is failing for VM having vGPU. 1944723 - [RFE] Support virtual machines with 16TB memory 1948577 - [welcome page] remove "Infrastructure Migration" section (obsoleted) 1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule 1949547 - rhv-log-collector-analyzer report contains 'b characters 1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6 1950466 - Host installation failed 1954401 - HP VMs pinning is wiped after edit->ok and pinned to first physical CPUs. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
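The bug list above includes the record's core flaw (1928954, CVE-2020-28500): ReDoS in lodash's toNumber, trim and trimEnd before 4.17.21. Besides upgrading, a minimal mitigation sketch is to bound untrusted input length before calling those helpers; the length bound and helper name here are assumptions, not from the advisory:

```javascript
// Sketch: reject oversized untrusted strings before handing them to
// lodash's trim/trimEnd/toNumber (ReDoS-prone before 4.17.21).
const MAX_LEN = 1024; // assumed application-specific bound

function safeTrimEnd(input) {
  if (typeof input !== "string" || input.length > MAX_LEN) {
    throw new RangeError("input rejected: not a short string");
  }
  // Native trimEnd runs in linear time, avoiding the vulnerable regex path.
  return input.trimEnd();
}

console.log(safeTrimEnd("abc   ")); // prints "abc"
```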

  1. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

===================================================================== Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.8.2 bug fix and security update Advisory ID: RHSA-2021:2438-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2021:2438 Issue date: 2021-07-27 CVE Names: CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 CVE-2021-33909 =====================================================================

  1. Summary:

Red Hat OpenShift Container Platform release 4.8.2 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.8.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.2. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2021:2437

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Security Fix(es):

  • SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) (CVE-2016-2183)

  • gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)

  • nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

  • etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)

  • etcd: DoS in wal/wal.go (CVE-2020-15112)

  • etcd: directories created via os.MkdirAll are not checked for permissions (CVE-2020-15113)

  • etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS (CVE-2020-15114)

  • etcd: no authentication is performed against endpoints provided in the --endpoints flag (CVE-2020-15136)

  • jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

  • nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

  • nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

  • golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

  • golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)

  • containernetworking-cni: Arbitrary path injection via type field in CNI configuration (CVE-2021-20206)

  • containers/storage: DoS via malicious image (CVE-2021-20291)

  • prometheus: open redirect under the /new endpoint (CVE-2021-29622)

  • golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)

  • go.elastic.co/apm: leaks sensitive HTTP headers during panic (CVE-2021-22133)

Space precludes listing in detail the following additional CVEs fixes: (CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382), (CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and (CVE-2021-23368)
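Among the additional fixes, CVE-2021-23337 is code injection through lodash's template `variable` option when that option is attacker-controlled. A hedged guard sketch — the helper name and regex are assumptions, not part of the upstream fix — is to accept only plain identifiers:

```javascript
// Sketch: only allow plain identifiers as a template's `variable` option;
// anything else could break out of the generated function body.
function assertSafeVariableName(name) {
  if (!/^[A-Za-z_$][A-Za-z0-9_$]*$/.test(name)) {
    throw new Error("unsafe template variable name: " + name);
  }
  return name;
}

console.log(assertSafeVariableName("obj")); // prints "obj"
```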

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64

The image digest is sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x

The image digest is sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le

The image digest is sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f

All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor

  1. Solution:

For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) 1725981 - oc explain does not work well with full resource.group names 1747270 - [osp] Machine with name "-worker"couldn't join the cluster 1772993 - rbd block devices attached to a host are visible in unprivileged container pods 1786273 - [4.6] KAS pod logs show "error building openapi models ... has invalid property: anyOf" for CRDs 1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts 1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header 1812212 - ArgoCD example application cannot be downloaded from github 1817954 - [ovirt] Workers nodes are not numbered sequentially 1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole 1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server" 1825417 - The containerruntimecontroller doesn't roll back to CR-1 if we delete CR-2 1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades 1835264 - Intree provisioner doesn't respect PVC.spec.dataSource sometimes 1839101 - Some sidebar links in developer perspective don't follow same project 1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes 1846875 - Network setup test high failure rate 1848151 - Console continues to poll the ClusterVersion resource when the user doesn't have authority 1850060 - After upgrading to 3.11.219 timeouts are appearing. 
1852637 - Kubelet sets incorrect image names in node status images section 1852743 - Node list CPU column only show usage 1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values 1857008 - [Edge] [BareMetal] Not provided STATE value for machines 1857477 - Bad helptext for storagecluster creation 1859382 - check-endpoints panics on graceful shutdown 1862084 - Inconsistency of time formats in the OpenShift web-console 1864116 - Cloud credential operator scrolls warnings about unsupported platform 1866222 - Should output all options when runing operator-sdk init --help 1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard 1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert 1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions 1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host 1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions 1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go 1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS 1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag 1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method 1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics 1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly 1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore 1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack 1873649 - proxy.config.openshift.io should validate user inputs 1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials 1874931 - Accessibility - 
Keyboard shortcut to exit YAML editor not easily discoverable 1876918 - scheduler test leaves taint behind 1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1 1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable 1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API 1879077 - Nodes tainted after configuring additional host iface 1879140 - console auth errors not understandable by customers 1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens 1879184 - CVO must detect or log resource hotloops 1879495 - [4.6] namespace \“openshift-user-workload-monitoring\” does not exist” 1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string 1879944 - [OCP 4.8] Slow PV creation with vsphere 1880757 - AWS: master not removed from LB/target group when machine deleted 1880758 - Component descriptions in cloud console have bad description (Managed by Terraform) 1881210 - nodePort for router-default metrics with NodePortService does not exist 1881481 - CVO hotloops on some service manifests 1881484 - CVO hotloops on deployment manifests 1881514 - CVO hotloops on imagestreams from cluster-samples-operator 1881520 - CVO hotloops on (some) clusterrolebindings 1881522 - CVO hotloops on clusterserviceversions packageserver 1881662 - Error getting volume limit for plugin kubernetes.io/ in kubelet logs 1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io 1881938 - migrator deployment doesn't tolerate masters 1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability 1883587 - No option for user to select volumeMode 1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine 1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster 1884800 
- Failed to set up mount unit: Invalid argument 1885186 - Removing ssh keys MC does not remove the key from authorized_keys 1885349 - [IPI Baremetal] Proxy Information Not passed to metal3 1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses 1886572 - auth: error contacting auth provider when extra ingress (not default) goes down 1887849 - When creating new storage class failure_domain is missing. 1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs 1889689 - AggregatedAPIErrors alert may never fire 1890678 - Cypress: Fix 'structure' accesibility violations 1890828 - Intermittent prune job failures causing operator degradation 1891124 - CP Conformance: CRD spec and status failures 1891301 - Deleting bmh by "oc delete bmh' get stuck 1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass 1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created 1892642 - oauth-server password metrics do not appear in UI after initial OCP installation 1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version 1893850 - Add an alert for requests rejected by the apiserver 1893999 - can't login ocp cluster with oc 4.7 client without the username 1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion 1895053 - Allow builds to optionally mount in cluster trust stores 1896226 - recycler-pod template should not be in kubelet static manifests directory 1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types 1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name 1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI 
install 1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout 1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing 1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability 1899057 - fix spurious br-ex MAC address error log 1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay 1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect 1900454 - Enable host-based disk encryption on Azure platform 1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ 1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed 1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API 1901648 - "do you need to set up custom dns" tooltip inaccurate 1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list. 1902076 - image registry operator should monitor status of its routes 1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs 1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given 1903228 - Pod stuck in Terminating, runc init process frozen 1903383 - Latest RHCOS 47.83. 
builds failing to install: mount /root.squashfs failed 1903553 - systemd container renders node NotReady after deleting it 1903700 - metal3 Deployment doesn't have unique Pod selector 1904006 - The --dir option doest not work for command oc image extract 1904505 - Excessive Memory Use in Builds 1904507 - vsphere-problem-detector: implement missing metrics 1904558 - Random init-p error when trying to start pod 1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests 1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list 1905159 - Installation on previous unused dasd fails after formatting 1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory 1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails 1905577 - Control plane machines not adopted when provisioning network is disabled 1905627 - Warn users when using an unsupported browser such as IE 1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP 1905849 - Default volumesnapshotclass should be created when creating default storageclass 1906056 - Bundles skipped via the skips field cannot be pinned 1906102 - CBO produces standard metrics 1906147 - ironic-rhcos-downloader should not use --insecure 1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart 1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region 1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage 1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value 1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything 1907614 - Update kubernetes deps to 1.20 1908068 - Enable DownwardAPIHugePages feature gate 1908169 - The example of Import URL is "Fedora cloud image list" for all 
templates. 1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container 1908343 - Input labels in Manage columns modal should be clickable 1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures 1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule 1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes 1908765 - [SCALE] enable OVN lflow data path groups 1908774 - [SCALE] enable OVN DB memory trimming on compaction 1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it 1909091 - Pod/node/ip/template isn't showing when vm is running 1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error 1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing 1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade 1910067 - UPI: openstacksdk fails on "server group list" 1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing 1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status 1910378 - socket timeouts for webservice communication between pods 1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling 1910500 - Could not list CSI provisioner on web when create storage class on GCP platform 1911211 - Should show the cert-recovery-controller version correctly 1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames 1912571 - libvirt: Support setting dnsmasq options through the install config 1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade 1913112 - BMC details should be 
optional for unmanaged hosts 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913341 - GCP: strange cluster behavior in CI run 1913399 - switch to v1beta1 for the priority and fairness APIs 1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint 1913532 - After a 4.6 to 4.7 upgrade, a node went unready 1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory" 1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs 1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root 1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20 1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names 1915693 - Not able to install gpu-operator on cpumanager enabled node. 1915971 - Role and Role Binding breadcrumbs do not work as expected 1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page 1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall 1916392 - scrape priority and fairness endpoints for must-gather 1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form 1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready" 1916553 - Default template's description is empty on details tab 1916593 - Destroy cluster sometimes stuck in a loop 1916872 - need ability to reconcile exgw annotations on pod add 1916890 - [OCP 4.7] api or api-int not available during installation 1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs. 
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - oc adm catalog mirror doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use oc new-app --name=testapp2 -i with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in “Insights” popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the resources section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn’t support “csi.storage.k8s.io/fsTyps” parameter
1932135 - When “iopsPerGB” parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When “iopsPerGB” parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can’t find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard Default /Kubernetes / Compute Resources / Namespace (Workloads)
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can’t finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen isntt stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE]fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Delete HCO is stucking if there is still VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - cluster-kube-apiserver-operator check-endpoints observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a wach instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashbard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Components in bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - Components in ovirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add kata-osbuilder-generate.service as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after using leaving and returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed
in osp-1.4 1941995 - fix backwards incompatible trigger api changes in osp1.4 1942086 - Administrator -> Home - i18n misses 1942117 - Administrator -> Workloads - i18n misses 1942125 - Administrator -> Serverless - i18n misses 1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup) 1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail 1942271 - Insights operator doesn't gather pod information from openshift-cluster-version 1942375 - CRI-O failing with error "reserving ctr name" 1942395 - The status is always "Updating" on dc detail page after deployment has failed. 1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied 1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate 1942536 - Corrupted image preventing containers from starting 1942548 - Administrator -> Networking - i18n misses 1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic 1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork 1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus 1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID 1942614 - Administrator -> Storage - i18n misses 1942641 - Administrator -> Builds - i18n misses 1942673 - Administrator -> Pipelines - i18n misses 1942694 - Resource names with a colon do not display property in the browser window title 1942715 - Administrator -> User Management - i18n misses 1942716 - Quay Container Security operator has Medium <-> Low colors reversed 1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8] 1942736 - 
Administrator -> Administration - i18n misses 1942749 - Install Operator form should use info icon for popovers 1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls 1942839 - Windows VMs fail to start on air-gapped environments 1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set 1942858 - [RFE]Confusing detach volume UX 1942883 - AWS EBS CSI driver does not support partitions 1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy 1942935 - must-gather improvements 1943145 - vsphere: client/bootstrap CSR double create 1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2) 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked 1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest 1943238 - The conditions table does not occupy 100% of the width. 1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane 1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB. 
1943315 - avoid workload disruption for ICSP changes 1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes 1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest 1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins" 1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container" 1943543 - DeploymentConfig Rollback doesn't reset params correctly 1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement 1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds 1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage 1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn 1943649 - don't use hello-openshift for network-check-target 1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress 1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions 1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB 1943845 - Router pods should have startup probes configured 1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors 1944160 - CNO: nbctl daemon should log reconnection info 1944180 - OVN-Kube Master does not release election lock on shutdown 1944246 - Ironic fails to inspect and move node to "manageable' but get bmh remains in "inspecting" 1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region 1944509 - Translatable texts without context in ssh expose component 1944581 - oc project not works with cluster proxy 1944587 - VPA could not take actions based on the recommendation when min-replicas=1 1944590 - The field name "VolumeSnapshotContent" is wrong on 
VolumeSnapshotContent detail page 1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI 1944631 - openshif authenticator should not accept non-hashed tokens 1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock" 1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures 1944674 - Project field become to "All projects" and disabled in "Review and create virtual machine" step in devconsole 1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods 1944761 - field level help instances do not use common util component 1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present 1944763 - field level help instances do not use common util component 1944853 - Update to nodejs >=14.15.4 for ARM 1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts 1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation 1945027 - Button 'Copy SSH Command' does not work 1945085 - Bring back API data in etcd test 1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled 1945103 - 'User credentials' shows even the VM is not running 1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled 1945146 - Remove pipeline Tech preview badge for pipelines GA operator 1945236 - Bootstrap ignition shim doesn't follow proxy settings 1945261 - Operator dependency not consistently chosen from default channel 1945312 - project deletion does not reset UI project context 1945326 - console-operator: does not check route health periodically 1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules 1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP 
[Suite:openshift/conformance/serial] 1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly 1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1945548 - catalog resource update failed if spec.secrets set to "" 1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x 1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION 1945630 - Pod log filename no longer in -.log format 1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin 1945646 - gcp-routes.sh running as initrc_t unnecessarily 1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret 1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry 1945687 - Dockerfile needs updating to new container CI registry 1945700 - Syncing boot mode after changing device should be restricted to Supermicro 1945816 - " Ingresses " should be kept in English for Chinese 1945818 - Chinese translation issues: Operator should be the same with English Operators 1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out 1945910 - [aws] support byo iam roles for instances 1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6. 
1946079 - Virtual master is not getting an IP address 1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile" 1946119 - panic parsing install-config 1946243 - No relevant error when pg limit is reached in block pools page 1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image 1946320 - Incorrect error message in Deployment Attach Storage Page 1946449 - [e2e][automation] Fix cloud-init tests as UI changed 1946458 - Edit Application action overwrites Deployment envFrom values on save 1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI. 1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default 1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: " 1946506 - [on-prem] mDNS plugin no longer needed 1946513 - honor use specified system reserved with auto node sizing 1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready 1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster 1946607 - etcd readinessProbe is not reflective of actual readiness 1946705 - Fix issues with "search" capability in the Topology Quick Add component 1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation 1946788 - Serial tests are broken because of router 1946790 - Marketplace operator flakes Available=False OperatorStarting during updates 1946838 - Copied CSVs show up as adopted components 1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value 1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics 1946893 - the error messages are inconsistent in DNS status conditions if the 
default service IP is taken 1946922 - Ingress details page doesn't show referenced secret name and link 1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False 1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot 1947066 - machine-config-operator pod crashes when noProxy is * 1947067 - [Installer] Pick up upstream fix for installer console output 1947078 - Incorrect skipped status for conditional tasks in the pipeline run 1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception 1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install 1947164 - Print "Successfully pushed" even if the build push fails. 1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. 1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48) 1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name's 1947360 - [vSphere csi driver operator] operator pod runs as “BestEffort” qosClass 1947371 - [vSphere csi driver operator] operator doesn't create “csidriver” instance 1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout 1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8) 1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot 1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8) 1947663 - disk details are not synced in web-console 1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin 1947684 - MCO on SNO sometimes has rendered configs and sometimes does not 1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals. 
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display 1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods 1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade 1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc? 1947771 - [kube-descheduler]descheduler operator pod should not run as “BestEffort” qosClass 1947774 - CSI driver operators use "Always" imagePullPolicy in some containers 1947775 - [vSphere csi driver operator] doesn’t use the downstream images from payload. 1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade 1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade 1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display 1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert 1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display 1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs 
that trigger APIRemovedInNextReleaseInUse alert 1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert 1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display 1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display 1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin 1947828 - download it link should save pod log in -.log format 1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed 1947917 - Egress Firewall does not reliably apply firewall rules 1947946 - Operator upgrades can delete existing CSV before completion 1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown 1948012 - service-ca constantly reporting type "Upgradeable" status Unknown 1948019 - [4.8] Large number of requests to the infrastructure cinder volume service 1948022 - Some on-prem namespaces missing from must-gather 1948040 - cluster-etcd-operator: etcd is using deprecated logger 1948082 - Monitoring should not set Available=False with no reason on updates 1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O. 
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields 1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later 1948359 - [aws] shared tag was not removed from user provided IAM role 1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always" 1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing 1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows 1948431 - TechPreviewNoUpgrade does not enable CSI migration 1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node 1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge 1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial] 1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes 1948513 - get-resources.sh doesn't honor the no_proxy settings 1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute 1948546 - VM of worker is in error state when a network has port_security_enabled=False 1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand 1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test 1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable" 1948582 - Need ability to specify local gateway mode in CNO config 1948585 - Need a CI jobs to test local gateway mode with bare metal 1948592 - [Cluster Network Operator] Missing Egress Router Controller 1948606 - DNS e2e test fails "[sig-arch] Only known images used by 
tests" because it does not use a known image 1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly] 1948626 - TestRouteAdmissionPolicy e2e test is failing often 1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI 1948634 - upgrades: allow upgrades without version change 1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found 1948701 - unneeded CCO alert already covered by CVO 1948703 - p&f: probes should not get 429s 1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows bootstrap.ign was not found 1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile 1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile 1948711 - thanos querier and prometheus-adapter should have 2 replicas 1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile 1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile 1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector 1948719 - Machine API components should use 1.21 dependencies 1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile 1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com 1948782 - Stale references to the single-node-production-edge cluster profile 1948787 - secret.StringData shouldn't be used for reads 1948788 - 
Clicking an empty metrics graph (when there is no data) should still open metrics viewer 1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page 1948919 - Need minor update in message on channel modal 1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region 1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query 1948936 - [e2e][automation][prow] Prow script point to deleted resource 1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer 1948953 - Uninitialized cloud provider error when provisioning a cinder volume 1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages 1948966 - Add the ability to run a gather done by IO via a Kubernetes Job 1948981 - Align dependencies and libraries with latest ironic code 1948998 - style fixes by GoLand and golangci-lint 1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way. 1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV 1949022 - Openshift 4 has a zombie problem 1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil 1949041 - vsphere: wrong image names in bundle 1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack) 1949050 - Bump k8s to latest 1.21 1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig 1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service 1949075 - Extend openshift/api for Add card customization 1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues 1949096 - Restore private git clone tests 1949099 - network-check-target code cleanup 1949105 - NetworkPolicy ... 
should enforce ingress policy allowing any port traffic to a server on a specific protocol 1949145 - Move openshift-user-critical priority class to CCO 1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used 1949180 - Pipelines plugin model kinds aren't picked up by parser 1949202 - sriov-network-operator not available from operatorhub on ppc64le 1949218 - ccoctl not included in container image 1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs 1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors 1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate 1949306 - need a way to see top API accessors 1949313 - Rename vmware-vsphere- images to vsphere- images before 4.8 ships 1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring 1949347 - apiserver-watcher support for dual-stack 1949357 - manila-csi-controller pod not running due to secret lack(in another ns) 1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16" 1949364 - Mention scheduling profiles in scheduler operator repository 1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error 1949384 - Edit Default Pull Secret modal - i18n misses 1949387 - Fix the typo in auto node sizing script 1949404 - label selector on pvc creation page - i18n misses 1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page 1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses 1949413 - Automatic boot order setting is done incorrectly when using by-path style device names 1949418 - Controller factory workers should always restart on panic() 1949419 - 
oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)" 1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin 1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it 1949480 - Listeners timeout are constantly being updated 1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages 1949509 - Kuryr should manage API LB instead of CNO 1949514 - URL is not visible for routes at narrow screen widths 1949554 - Metrics of vSphere CSI driver sidecars are not collected 1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals" 1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing 1949591 - Alert does not catch removed api usage during end-to-end tests. 
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse 1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du' 1949626 - machine-api fails to create AWS client in new regions 1949661 - Kubelet Workloads Management changes for OCPNODE-529 1949664 - Spurious keepalived liveness probe failures 1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot 1949677 - multus is the first pod on a new node and the last to go ready 1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace 1949721 - Pick 99237: Use the audit ID of a request for better correlation 1949741 - Bump golang version of cluster-machine-approver 1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64 1949810 - OKD 4.7 unable to access Project Topology View 1949818 - Add e2e test to perform MCO operation Single Node OpenShift 1949820 - Unable to use oc adm top is shortcut when asking for imagestreams 1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand 1949866 - The ccoctl fails to create authentication file when running the command ccoctl aws create-identity-provider with --output-dir parameter 1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work 1949882 - service-idler build error 1949898 - Backport RP#848 to OCP 4.8 1949907 - Gather summary of PodNetworkConnectivityChecks 1949923 - some defined rootVolumes zones not used on installation 1949928 - Samples Operator updates break CI tests 1949935 - Fix incorrect access review check on start pipeline kebab action 1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas 1949967 - Update Kube dependencies in MCO to 1.21 1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable 1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] 
The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal] 1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name 1949991 - openshift-marketplace pods are crashlooping 1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image 1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy 1950047 - CSV deployment template custom annotations are not propagated to deployments 1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791 1950113 - in-cluster operators need an API for additional AWS tags 1950133 - MCO creates empty conditions on the kubeletconfig object 1950159 - Downstream ovn-kubernetes repo should have no linter errors 1950175 - Update Jenkins and agent base image to Go 1.16 1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked 1950210 - VPA CRDs use deprecated API version 1950219 - KnativeServing is not shown in list on global config page 1950232 - [Descheduler] - The minKubeVersion should be 1.21 1950236 - Update OKD imagestreams to prefer centos7 images 1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command 1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers 1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network 1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs 1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing 1950409 - Descheduler operator code and docs still reference v1beta1 1950417 - The Marketplace Operator is building with EOL k8s versions 1950430 - CVO serves metrics over HTTP, despite a lack of consumers 
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (UtilizationCard) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] oc get apirequestcount shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high cardinality
1955457 - Drop container_memory_failures_total metric because of high cardinality
1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter
1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0
1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used
1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range
1955554 - MAO does not react to events triggered from Validating Webhook Configurations
1955589 - thanos-querier should have a PodDisruptionBudget in HA topology
1955595 - Add DevPreviewLongLifecycle Descheduler profile
1955596 - Pods stuck in creation phase on realtime kernel SNO
1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing
1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error']
1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta
1955749 - OCP branded templates need to be translated
1955761 - packageserver clusteroperator does not set reason or message for Available condition
1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces
1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables
1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable
1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio
1955969 - Workers cannot be deployed attached to multiple networks.
1956079 - Installer gather doesn't collect any networking information
1956208 - Installer should validate root volume type
1956220 - Set htt proxy system properties as expected by kubernetes-client
1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet
1956334 - Event Listener Details page does not show Triggers section
1956353 - test: analyze job consistently fails
1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate
1956405 - Bump k8s dependencies in cluster resource override admission operator
1956411 - Apply custom tags to AWS EBS volumes
1956480 - [4.8] Bootimage bump tracker
1956606 - probes FlowSchema manifest not included in any cluster profile
1956607 - Multiple manifests lack cluster profile annotations
1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup
1956610 - manage-helm-repos manifest lacks cluster profile annotations
1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string
1956650 - The container disk URL is empty for Windows guest tools
1956768 - aws-ebs-csi-driver-controller-metrics TargetDown
1956826 - buildArgs does not work when the value is taken from a secret
1956895 - Fix chatty kubelet log message
1956898 - fix log files being overwritten on container state loss
1956920 - can't open terminal for pods that have more than one container running
1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false
1956978 - Installer gather doesn't include pod names in filename
1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW
1957041 - Update CI e2echart with more node info
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow to add n3 (or n4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN]Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pods creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse rpm-ostree status during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines doesn't support dual-stack loadbalancers on Azure
1959513 - Cluster-kube-apiserver does not use library-go for audit pkg
1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used
1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console
1959564 - Test verify /run filesystem contents failing
1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot
1959650 - Gather SDI-related MachineConfigs
1959658 - showing a lot "constructing many client instances from the same exec auth config"
1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config
1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO
1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode
1959711 - Egressnetworkpolicy doesn't work when configure the EgressIP
1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6
1959916 - Console not works well against a proxy in front of openshift clusters
1959920 - UEFISecureBoot set not on the right master node
1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []
1960035 - iptables is missing from ose-keepalived-ipfailover image
1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page
1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions
1960129 - [e2e][automation] add smoke tests about VM pages and actions
1960134 - some origin images are not public
1960171 - Enable SNO checks for image-registry
1960176 - CCO should recreate a user for the component when it was removed from the cloud providers
1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled
1960255 - fixed obfuscation permissions
1960257 - breaking changes in pr template
1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost
1960323 - Address issues raised by coverity security scan
1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop
1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop
1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access
1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana
1960546 - Add virt_platform metric to the collected metrics
1960554 - Remove rbacv1beta1 handling code
1960612 - Node disk info in overview/details does not account for second drive where /var is located
1960619 - Image registry integration tests use old-style OAuth tokens
1960683 - GlobalConfigPage is constantly requesting resources
1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces
1960716 - Missing details for debugging
1960732 - Outdated manifests directory in CSI driver operator repositories
1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master
1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest"
1960767 - /metrics endpoint of the Grafana UI is accessible without authentication
1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource
1961064 - Documentation link to network policies is outdated
1961067 - Improve log gathering logic
1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs
1961091 - Gather MachineHealthCheck definitions
1961120 - CSI driver operators fail when upgrading a cluster
1961173 - recreate existing static pod manifests instead of updating
1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing
1961314 - Race condition in operator-registry pull retry unit tests
1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not
1961336 - Devfile sample for BuildConfig is not defined
1961356 - Update single quotes to double quotes in string
1961363 - Minor string update for " No Storage classes found in cluster, adding source is disabled."
1961393 - DetailsPage does not work with group~version~kind
1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page
1961466 - Some dropdown placeholder text on route creation page is not translated
1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true
1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers")
1961536 - clusterdeployment without pull secret is crashing assisted service pod
1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop
1961545 - Fixing Documentation Generation
1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names"
1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig
1961561 - The encryption controllers send lots of request to an API server
1961582 - Build failure on s390x
1961644 - NodeAuthenticator tests are failing in IPv6
1961656 - driver-toolkit missing some release metadata
1961675 - Kebab menu of taskrun contains Edit options which should not be present
1961701 - Enhance gathering of events
1961717 - Update runtime dependencies to Wallaby builds for bugfixes
1961829 - Quick starts prereqs not shown when description is long
1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy
1961878 - Add Sprint 199 translations
1961897 - Remove history listener before console UI is unmounted
1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes
1962062 - Monitoring dashboards should support default values of "All"
1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a performanceprofile
1962095 - Replace gather-job image without FQDN
1962153 - VolumeSnapshot routes are ambiguous, too generic
1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime
1962219 - NTO relies on unreliable leader-for-life implementation.
1962256 - use RHEL8 as the vm-example
1962261 - Monitoring components requesting more memory than they use
1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster
1962347 - Cluster does not exist logs after successful installation
1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits
1962415 - duplicate zone information for in-tree PV after enabling migration
1962429 - Cannot create windows vm because kubemacpool.io denied the request
1962525 - [Migration] SDN migration stuck on MCO on RHV cluster
1962569 - NetworkPolicy details page should also show Egress rules
1962592 - Worker nodes restarting during OS installation
1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." on unsupported platform
1962630 - NTO: Ship the current upstream TuneD
1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root
1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace
1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint
1962740 - Add documentation to Egress Router
1962850 - [4.8] Bootimage bump tracker
1962882 - Version pod does not set priorityClassName
1962905 - Ramdisk ISO source defaulting to "http" breaks deployment on a good amount of BMCs
1963068 - ironic container should not specify the entrypoint
1963079 - KCM/KS: ability to enforce localhost communication with the API server.
1963154 - Current BMAC reconcile flow skips Ironic's deprovision step
1963159 - Add Sprint 200 translations
1963204 - Update to 8.4 IPA images
1963205 - Installer is using old redirector
1963208 - Translation typos/inconsistencies for Sprint 200 files
1963209 - Some strings in public.json have errors
1963211 - Fix grammar issue in kubevirt-plugin.json string
1963213 - Memsource download script running into API error
1963219 - ImageStreamTags not internationalized
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
1963267 - Warning: Invalid DOM property classname. Did you mean className? console warnings in volumes table
1963502 - create template from is not descriptive
1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too
1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault
1963848 - Use OS-shipped stalld vs. the NTO-shipped one.
1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies
1963871 - cluster-etcd-operator:[build] upgrade to go 1.16
1963896 - The VM disks table does not show easy links to PVCs
1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere
1963932 - Installation failures in bootstrap in OpenStack release jobs
1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail
1964059 - rebase openshift/sdn to kube 1.21.1
1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration
1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal"
1964243 - The oc compliance fetch-raw doesn't work for disconnected cluster
1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters"
1964319 - Network policy
"deny all" interpreted as "allow all" in description page 1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured 1964472 - Make project and namespace requirements more visible rather than giving me an error after submission 1964486 - Bulk adding of CIDR IPS to whitelist is not working 1964492 - Pick 102171: Implement support for watch initialization in P&F 1964625 - NETID duplicate check is only required in NetworkPolicy Mode 1964748 - Sync upstream 1.7.2 downstream 1964756 - PVC status is always in 'Bound' status when it is actually cloning 1964847 - Sanity check test suite missing from the repo 1964888 - opoenshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. connection error: desc = "transport is closing" 1964936 - error log for "oc adm catalog mirror" is not correct 1964979 - Add mapping from ACI to infraenv to handle creation order issues 1964997 - Helm Library charts are showing and can be installed from Catalog 1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots 1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation 1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData: 1965330 - oc image extract fails due to security capabilities on files 1965334 - opm index add fails during image extraction 1965367 - Typo in in etcd-metric-serving-ca resource name 1965370 - "Route" is not translated in Korean or Chinese 1965391 - When storage class is already present wizard do not jumps to "Stoarge and nodes" 1965422 - runc is missing Provides oci-runtime in rpm spec 1965522 - [v2v] Multiple typos on VM Import screen 1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists 1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated" 1965921 - [oVirt] High performance VMs 
shouldn't be created with Existing policy 1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request 1966077 - hidden descriptor is visible in the Operator instance details page 1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11 1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality 1966138 - (release-4.8) Update K8s & OpenShift API versions 1966156 - Issue with Internal Registry CA on the service pod 1966174 - No storage class is installed, OCS and CNV installations fail 1966268 - Workaround for Network Manager not supporting nmconnections priority 1966401 - Revamp Ceph Table in Install Wizard flow 1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert 1966416 - (release-4.8) Do not exceed the data size limit 1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log 1966487 - IP address in Pods list table are showing node IP other than pod IP 1966520 - Add button from ocs add capacity should not be enabled if there are no PV's 1966523 - (release-4.8) Gather MachineAutoScaler definitions 1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed 1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug 1966602 - don't require manually setting IPv6DualStack feature gate in 4.8 1966620 - The bundle.Dockerfile in the repo is obsolete 1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install 1966654 - Alertmanager PDB is not created, but Prometheus UWM is 1966672 - Add Sprint 201 translations 1966675 - Admin console string updates 1966677 - Change comma to semicolon 1966683 - Translation bugs from Sprint 201 files 1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly 1966697 - Garbage collector logs every interval - move to debug 
level 1966717 - include full timestamps in the logs 1966759 - Enable downstream plugin for Operator SDK 1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version 1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff 1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1 1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkub[e" 1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings "ipv6.dhcp-duid=ll" missing from dual stack install 1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image 1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored 1967197 - 404 errors loading some i18n namespaces 1967207 - Getting started card: console customization resources link shows other resources 1967208 - Getting started card should use semver library for parsing the version instead of string manipulation 1967234 - Console is continuously polling for ConsoleLink acm-link 1967275 - Awkward wrapping in getting started dashboard card 1967276 - Help menu tooltip overlays dropdown 1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check 1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit 1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests 1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small 1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion 1967591 - The ManagementCPUsOverride admission plugin should 
not mutate containers with the limit 1967595 - Fixes the remaining lint issues 1967614 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict 1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn't work if ovirt-config.yaml doesn't exist and user should fill the FQDN URL 1967625 - Add OpenShift Dockerfile for cloud-provider-aws 1967631 - [4.8.0] Cluster install failed due to timeout while "Waiting for control plane" 1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube" 1967639 - Console whitescreens if user preferences fail to load 1967662 - machine-api-operator should not use deprecated "platform" field in infrastructures.config.openshift.io 1967667 - Add Sprint 202 Round 1 translations 1967713 - Insights widget shows invalid link to the OCM 1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming 1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than "NoExecute" 1967803 - should update to 7.5.5 for grafana resources version label 1967832 - Add more tests for periodic.go 1967833 - Add tasks pool to tasks_processing 1967842 - Production logs are spammed on "OCS requirements validation status Insufficient hosts to deploy OCS. 
A minimum of 3 hosts is required to deploy OCS" 1967843 - Fix null reference to messagesToSearch in gather_logs.go 1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring 1967933 - Network-Tools debug scripts not working as expected 1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied" 1968019 - drain timeout and pool degrading period is too short 1968067 - [master] Agent validation not including reason for being insufficient 1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed 1968175 - [4.8.0] Agent validation not including reason for being insufficient 1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration 1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn't be required 1968435 - [4.8.0] Unclear message in case of missing clusterImageSet 1968436 - Listeners timeout updated to remain using default value 1968449 - [4.8.0] Wrong Install-config override documentation 1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters 1968452 - [4.8.0] [doc] "Mirror Registry Configuration" doc section needs clarification of functionality and limitations 1968454 - [4.8.0] backend events generated with wrong namespace for agent 1968455 - [4.8.0] Assisted Service operator's controllers are starting before the base service is ready 1968515 - oc should set user-agent when talking with registry 1968531 - Sync upstream 1.8.0 downstream 1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn't clean up properly 1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted 1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox 1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil 1968701 - Bare metal IPI installation is failed due to worker inspection 
failure 1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed 1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning 1969284 - Console Query Browser: Can't reset zoom to fixed time range after dragging to zoom 1969315 - [4.8.0] BMAC doesn't check if ISO Url changed before queuing BMH for reconcile 1969352 - [4.8.0] Creating BareMetalHost without the "inspect.metal3.io" does not automatically add it 1969363 - [4.8.0] Infra env should show the time that ISO was generated. 1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it 1969386 - Filesystem's Utilization doesn't show in VM overview tab 1969397 - OVN bug causing subports to stay DOWN fails installations 1969470 - [4.8.0] Misleading error in case of install-config override bad input 1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step 1969525 - Replace golint with revive 1969535 - Topology edit icon does not link correctly when branch name contains slash 1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it 1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long 1969561 - Test "an end user can use OLM can subscribe to the operator" generates deprecation alert 1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire 1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io 1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1 1969626 - Portfoward stream cleanup can cause kubelet to panic 1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out 1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check 1969712 - [4.8.0] Assisted service reports a malformed iso when we 
fail to download the base iso 1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups 1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml 1969784 - WebTerminal widget should send resize events 1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails 1969891 - Fix rotated pipelinerun status icon issue in safari 1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse 1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available 1969951 - Cluster local doesn't work for knative services created from dev console 1969969 - ironic-rhcos-downloader container uses and old base image 1970062 - ccoctl does not work with STS authentication 1970068 - ovnkube-master logs "Failed to find node ips for gateway" error 1970126 - [4.8.0] Disable "metrics-events" when deploying using the operator 1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change 1970262 - [4.8.0] Remove Agent CRD Status fields not needed 1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs 1970269 - [4.8.0] missing role in agent CRD 1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs 1970381 - Monitoring dashboards: Custom time range inputs should retain their values 1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed 1970401 - [4.8.0] AgentLabelSelector is required yet not supported 1970415 - SR-IOV Docs needs documentation for disabling port security on a network 1970470 - Add pipeline annotation to Secrets which are created for a private repo 1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod 1970624 - 4.7->4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io 1970828 - "500 Internal Error" for all openshift-monitoring routes 1970975 
- 4.7 -> 4.8 upgrades on AWS take longer than expected 1971068 - Removing invalid AWS instances from the CF templates 1971080 - 4.7->4.8 CI: KubePodNotReady due to MCD's 5m sleep between drain attempts 1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 ! 1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces 1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing "Validated" condition about VIP not matching machine network 1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn't work - clusteroperator/kube-apiserver is not upgradeable 1971589 - [4.8.0] Telemetry-client won't report metrics in case the cluster was installed using the assisted operator 1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service 1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery 1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409) 1971739 - Keep /boot RW when kdump is enabled 1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly 1972128 - ironic-static-ip-manager container still uses 4.7 base image 1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are 1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster 1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted 1972262 - [4.8.0] "baremetalhost.metal3.io/detached" uses boolean value where string is expected 1972426 - Adopt failure can trigger deprovisioning 1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage 1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration 1972530 - 
[4.8.0] no indication for missing debugInfo in AgentClusterInstall 1972565 - performance issues due to lost node, pods taking too long to relaunch 1972662 - DPDK KNI modules need some additional tools 1972676 - Requirements for authenticating kernel modules with X.509 1972687 - Using bound SA tokens causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings 1972690 - [4.8.0] infra-env condition message isn't informative in case of missing pull secret 1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration 1972768 - kube-apiserver setup fail while installing SNO due to port being used 1972864 - New `local-with-fallback` service annotation does not preserve source IP 1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8 1973117 - No storage class is installed, OCS and CNV installations fail 1973233 - remove kubevirt images and references 1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. 1973428 - Placeholder bug for OCP 4.8.0 image release 1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped 1973672 - fix ovn-kubernetes NetworkPolicy 4.7->4.8 upgrade issue 1973995 - [Feature:IPv6DualStack] tests are failing in dualstack 1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings 1974447 - Requirements for nvidia GPU driver container for driver toolkit 1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. 1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel 1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion 1974746 - [4.8.0] File system usage not being logged appropriately 1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster 1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string 1974850 - [4.8] coreos-installer failing Execshield 1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift 1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing 1975155 - Kubernetes service IP cannot be accessed for rhel worker 1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types 1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData 1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified 1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve 1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn 1975672 - [4.8.0] Production logs are spammed on "Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient" 1975789 - worker nodes rebooted when we simulate a case where the api-server is down 1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s] 1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn't work - ingresscontroller "default" is degraded 1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted 1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] 1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts 1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO 1977233 - [4.8] Unable to authenticate against IDP after upgrade to 
4.8-rc.1 1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO 1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller 1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes 1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses 1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8 1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod 1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used 1980788 - NTO-shipped stalld can segfault 1981633 - enhance service-ca injection 1982250 - Performance Addon Operator fails to install after catalog source becomes ready 1982252 - olm Operator is in CrashLoopBackOff state with error "couldn't cleanup cross-namespace ownerreferences"

References:

https://access.redhat.com/security/cve/CVE-2016-2183
https://access.redhat.com/security/cve/CVE-2020-7774
https://access.redhat.com/security/cve/CVE-2020-15106
https://access.redhat.com/security/cve/CVE-2020-15112
https://access.redhat.com/security/cve/CVE-2020-15113
https://access.redhat.com/security/cve/CVE-2020-15114
https://access.redhat.com/security/cve/CVE-2020-15136
https://access.redhat.com/security/cve/CVE-2020-26160
https://access.redhat.com/security/cve/CVE-2020-26541
https://access.redhat.com/security/cve/CVE-2020-28469
https://access.redhat.com/security/cve/CVE-2020-28500
https://access.redhat.com/security/cve/CVE-2020-28852
https://access.redhat.com/security/cve/CVE-2021-3114
https://access.redhat.com/security/cve/CVE-2021-3121
https://access.redhat.com/security/cve/CVE-2021-3516
https://access.redhat.com/security/cve/CVE-2021-3517
https://access.redhat.com/security/cve/CVE-2021-3518
https://access.redhat.com/security/cve/CVE-2021-3520
https://access.redhat.com/security/cve/CVE-2021-3537
https://access.redhat.com/security/cve/CVE-2021-3541
https://access.redhat.com/security/cve/CVE-2021-3636
https://access.redhat.com/security/cve/CVE-2021-20206
https://access.redhat.com/security/cve/CVE-2021-20271
https://access.redhat.com/security/cve/CVE-2021-20291
https://access.redhat.com/security/cve/CVE-2021-21419
https://access.redhat.com/security/cve/CVE-2021-21623
https://access.redhat.com/security/cve/CVE-2021-21639
https://access.redhat.com/security/cve/CVE-2021-21640
https://access.redhat.com/security/cve/CVE-2021-21648
https://access.redhat.com/security/cve/CVE-2021-22133
https://access.redhat.com/security/cve/CVE-2021-23337
https://access.redhat.com/security/cve/CVE-2021-23362
https://access.redhat.com/security/cve/CVE-2021-23368
https://access.redhat.com/security/cve/CVE-2021-23382
https://access.redhat.com/security/cve/CVE-2021-25735
https://access.redhat.com/security/cve/CVE-2021-25737
https://access.redhat.com/security/cve/CVE-2021-26539
https://access.redhat.com/security/cve/CVE-2021-26540
https://access.redhat.com/security/cve/CVE-2021-27292
https://access.redhat.com/security/cve/CVE-2021-28092
https://access.redhat.com/security/cve/CVE-2021-29059
https://access.redhat.com/security/cve/CVE-2021-29622
https://access.redhat.com/security/cve/CVE-2021-32399
https://access.redhat.com/security/cve/CVE-2021-33034
https://access.redhat.com/security/cve/CVE-2021-33194
https://access.redhat.com/security/cve/CVE-2021-33909
https://access.redhat.com/security/updates/classification/#moderate

Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2021 Red Hat, Inc.

Description:

Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.

Bugs:

  • RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)

  • cluster became offline after apiserver health check (BZ# 1942589)

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
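The prerequisite above ("all previously released errata have been applied") can be checked with standard package-manager tooling. The following is a minimal sketch assuming a dnf-based RHEL host; `dnf updateinfo list security` and the `--security` upgrade flag are standard dnf features.

```shell
#!/bin/sh
# Sketch: list security errata not yet applied to this host, so previously
# released advisories can be installed before this update.
# Assumes a dnf-based system; exits gracefully where dnf is unavailable.
if command -v dnf >/dev/null 2>&1; then
    dnf -q updateinfo list security   # read-only: shows pending advisories
    # dnf -y upgrade --security       # uncomment to apply security errata
else
    echo "dnf not available; run this on the target RHEL host"
fi
```

Running the read-only `updateinfo` query first lets an administrator review which advisories are outstanding before committing to the upgrade.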

1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913444 - RFE Make the source code for the endpoint-metrics-operator public 1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull 1927520 - RHACM 2.3.0 images 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection 1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate 1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call 1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS 1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service 1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service 1942589 - cluster became offline after apiserver health check 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character 1944827 - CVE-2021-28918 
nodejs-netmask: improper input validation of octal input data 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service 1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option 1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command 1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets 1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions 1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id 1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters

  1. VDSM manages and monitors the host's storage, memory and networks as well as virtual machine creation, other host administration tasks, statistics gathering, and log collection.

Bug Fix(es):

  • An update in libvirt has changed the way block threshold events are submitted. As a result, the VDSM was confused by the libvirt event, and tried to look up a drive, logging a warning about a missing drive. In this release, the VDSM has been adapted to handle the new libvirt behavior, and does not log warnings about missing drives. (BZ#1948177)

  • Previously, when a virtual machine was powered off on the source host of a live migration and the migration finished successfully at the same time, the two events interfered with each other and sometimes prevented migration cleanup, resulting in additional migrations from the host being blocked. In this release, additional migrations are not blocked. (BZ#1959436)

  • Previously, when failing to execute a snapshot and re-executing it later, the second try would fail due to using the previous execution data. In this release, this data will be used only when needed, in recovery mode. (BZ#1984209)

  • Then engine deletes the volume and causes data corruption.

1998017 - Keep cinderlib dependencies optional for 4.4.8

Bug Fix(es):

  • Documentation is referencing deprecated API for Service Export - Submariner (BZ#1936528)

  • Importing of cluster fails due to error/typo in generated command (BZ#1936642)

  • RHACM 2.2.2 images (BZ#1938215)

  • 2.2 clusterlifecycle fails to allow provision fips: true clusters on aws, vsphere (BZ#1941778)

  • Summary:

The Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Show details on source website
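The record below covers CVE-2020-28500, a Regular Expression Denial of Service (ReDoS) in the toNumber, trim and trimEnd functions of lodash versions prior to 4.17.21. As a minimal sketch of the vulnerability class only — this is an illustration in Python, not lodash's actual code — a regex-based trim can exhibit quadratic backtracking on adversarial input, while a character-scan trim stays linear:

```python
import re

# Illustrative sketch of the ReDoS class behind CVE-2020-28500 (assumption:
# this mirrors the general quadratic-backtracking pattern, not lodash itself).
SLOW_TRIM_END = re.compile(r"\s+$")

def slow_trim_end(s: str) -> str:
    # On input like " " * n + "x", the engine attempts a match at every space
    # position, matches forward, then fails at the trailing "x" each time,
    # giving O(n^2) work overall.
    return SLOW_TRIM_END.sub("", s)

def safe_trim_end(s: str) -> str:
    # Linear-time alternative: scan back from the end without a regex.
    i = len(s)
    while i > 0 and s[i - 1].isspace():
        i -= 1
    return s[:i]
```

Both functions agree on ordinary input; only the regex version degrades on pathological strings such as a long run of whitespace followed by a single non-whitespace character. Upgrading to lodash 4.17.21 or later removes the affected patterns.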


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1492",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.0"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.2.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "primavera unifier",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12"
      },
      {
        "model": "jd edwards enterpriseone tools",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "9.2.6.1"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "2.5.2.1"
      },
      {
        "model": "communications services gatekeeper",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "1.11.0"
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.2.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.58"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.11"
      },
      {
        "model": "enterprise communications broker",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.3.0"
      },
      {
        "model": "financial services crime and compliance management studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.8.3.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.12"
      },
      {
        "model": "communications session border controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.4"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.12.0"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12.7"
      },
      {
        "model": "primavera gateway",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12.11"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "peoplesoft enterprise peopletools",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.59"
      },
      {
        "model": "communications design studio",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "7.4.2"
      },
      {
        "model": "primavera unifier",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "17.7"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.12"
      },
      {
        "model": "banking credit facilities process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "health sciences data management workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "3.0.0.0"
      },
      {
        "model": "lodash",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "primavera gateway",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "18.8.0"
      },
      {
        "model": "primavera unifier",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "20.12"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "retail customer management and segmentation foundation",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "19.0"
      },
      {
        "model": "banking extensibility workbench",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "banking corporate lending process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.3.0"
      },
      {
        "model": "banking trade finance process management",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.5.0"
      },
      {
        "model": "banking supply chain finance",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "14.2.0"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": "4.17.21"
      },
      {
        "model": "lodash",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "lodash",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ],
    "trust": 1.3
  },
  "cve": "CVE-2020-28500",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-28500",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "VHN-373964",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "LOW",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-28500",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "Low",
            "baseScore": 5.3,
            "baseSeverity": "Medium",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2020-28500",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-28500",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "report@snyk.io",
            "id": "CVE-2020-28500",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-28500",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202102-1168",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULHUB",
            "id": "VHN-373964",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2020-28500",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Lodash Exists in unspecified vulnerabilities.Service operation interruption (DoS) It may be in a state. lodash is an open source JavaScript utility library. There is a security vulnerability in Lodash. Please keep an eye on CNNVD or manufacturer announcements. Description:\n\nThe ovirt-engine package provides the manager for virtualization\nenvironments. \nThis manager enables admins to define hosts and networks, as well as to add\nstorage, create VMs and manage user permissions. \n\nBug Fix(es):\n\n* This release adds the queue attribute to the virtio-scsi driver in the\nvirtual machine configuration. This improvement enables multi-queue\nperformance with the virtio-scsi driver. (BZ#911394)\n\n* With this release, source-load-balancing has been added as a new\nsub-option for xmit_hash_policy. It can be configured for bond modes\nbalance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying\nxmit_hash_policy=vlan+srcmac. (BZ#1683987)\n\n* The default DataCenter/Cluster will be set to compatibility level 4.6 on\nnew installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)\n\n* With this release, support has been added for copying disks between\nregular Storage Domains and Managed Block Storage Domains. \nIt is now possible to migrate disks between Managed Block Storage Domains\nand regular Storage Domains. (BZ#1906074)\n\n* Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was\nset by default to false and was supposed to be uses in cluster\ncompatibility levels below 4.4. The value was set to general version. \nWith this release, each cluster level has it\u0027s own value, defaulting to\nfalse for 4.4 and above. This will reduce unnecessary overhead in removing\ntime outs of the file system freeze command. 
(BZ#1932284)\n\n* With this release, running virtual machines is supported for up to 16TB\nof RAM on x86_64 architectures. (BZ#1944723)\n\n* This release adds the gathering of oVirt/RHV related certificates to\nallow easier debugging of issues for faster customer help and issue\nresolution. \nInformation from certificates is now included as part of the sosreport. \nNote that no corresponding private key information is gathered, due to\nsecurity considerations. (BZ#1845877)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine\n1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors\n1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain\n1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine\n1717411 - improve engine logging when migration fail\n1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs\n1775145 - Incorrect message from hot-plugging memory\n1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. \n1845877 - [RFE] Collect information about RHV PKI\n1875363 - engine-setup failing on FIPS enabled rhel8 machine\n1906074 - [RFE] Support disks copy between regular and managed block storage domains\n1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration\n1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning\n1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. 
\n1919984 - engine-setup failse to deploy the grafana service in an external DWH server\n1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal\n1926018 - Failed to run VM after FIPS mode is enabled\n1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing \u0027rsyslog-gnutls\u0027 package. \n1928158 - Rename \u0027CA Certificate\u0027 link in welcome page to \u0027Engine CA certificate\u0027\n1928188 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1929211 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error \"missing groups or modules: virt:8.4\"\n1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful\n1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured\n1932284 - Engine handled FS freeze is not fast enough for Windows systems\n1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed\n1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2\n1943267 - Snapshot creation is failing for VM having vGPU. \n1944723 - [RFE] Support virtual machines with 16TB memory\n1948577 - [welcome page] remove \"Infrastructure Migration\" section (obsoleted)\n1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule\n1949547 - rhv-log-collector-analyzer report contains \u0027b characters\n1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6\n1950466 - Host installation failed\n1954401 - HP VMs pinning is wiped after edit-\u003eok and pinned to first physical CPUs.  
Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update\nAdvisory ID:       RHSA-2021:2438-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2021:2438\nIssue date:        2021-07-27\nCVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 \n                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 \n                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 \n                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 \n                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 \n                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 \n                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 \n                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 \n                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 \n                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 \n                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 \n                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 \n                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 \n                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 \n                   CVE-2021-33909 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.8.2 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.8. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.2. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2437\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nSecurity Fix(es):\n\n* SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n(CVE-2016-2183)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)\n\n* etcd: DoS in wal/wal.go (CVE-2020-15112)\n\n* etcd: directories created via os.MkdirAll are not checked for permissions\n(CVE-2020-15113)\n\n* etcd: gateway can include itself as an endpoint resulting in resource\nexhaustion and leads to DoS (CVE-2020-15114)\n\n* etcd: no authentication is performed against endpoints provided in the\n- --endpoints flag (CVE-2020-15136)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* 
nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* golang: crypto/elliptic: incorrect operations on the P-224 curve\n(CVE-2021-3114)\n\n* containernetworking-cni: Arbitrary path injection via type field in CNI\nconfiguration (CVE-2021-20206)\n\n* containers/storage: DoS via malicious image (CVE-2021-20291)\n\n* prometheus: open redirect under the /new endpoint (CVE-2021-29622)\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\n* go.elastic.co/apm: leaks sensitive HTTP headers during panic\n(CVE-2021-22133)\n\nSpace precludes listing in detail the following additional CVEs fixes:\n(CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382),\n(CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and\n(CVE-2021-23368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nAdditional Changes:\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-x86_64\n\nThe image digest is\nssha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc\n\n(For s390x architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-s390x\n\nThe image digest is\nsha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5\n\n(For ppc64le architecture)\n\n  $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le\n\nThe image digest is\nsha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):

1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)
1725981 - oc explain does not work well with full resource.group names
1747270 - [osp] Machine with name "<cluster-id>-worker" couldn't join the cluster
1772993 - rbd block devices attached to a host are visible in unprivileged container pods
1786273 - [4.6] KAS pod logs show "error building openapi models ... has invalid property: anyOf" for CRDs
1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts
1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header
1812212 - ArgoCD example application cannot be downloaded from github
1817954 - [ovirt] Workers nodes are not numbered sequentially
1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole
1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1825417 - The containerruntimecontroller doesn't roll back to CR-1 if we delete CR-2
1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades
1835264 - Intree provisioner doesn't respect PVC.spec.dataSource sometimes
1839101 - Some sidebar links in developer perspective don't follow same project
1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes
1846875 - Network setup test high failure rate
1848151 - Console continues to poll the ClusterVersion resource when the user doesn't have authority
1850060 - After upgrading to 3.11.219 timeouts are appearing.
1852637 - Kubelet sets incorrect image names in node status images section
1852743 - Node list CPU column only show usage
1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values
1857008 - [Edge] [BareMetal] Not provided STATE value for machines
1857477 - Bad helptext for storagecluster creation
1859382 - check-endpoints panics on graceful shutdown
1862084 - Inconsistency of time formats in the OpenShift web-console
1864116 - Cloud credential operator scrolls warnings about unsupported platform
1866222 - Should output all options when runing `operator-sdk init --help`
1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard
1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert
1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions
1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host
1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions
1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go
1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS
1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag
1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method
1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics
1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly
1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore
1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack
1873649 - proxy.config.openshift.io should validate user inputs
1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials
1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable
1876918 - scheduler test leaves taint behind
1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1
1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable
1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API
1879077 - Nodes tainted after configuring additional host iface
1879140 - console auth errors not understandable by customers
1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens
1879184 - CVO must detect or log resource hotloops
1879495 - [4.6] namespace "openshift-user-workload-monitoring" does not exist
1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string
1879944 - [OCP 4.8] Slow PV creation with vsphere
1880757 - AWS: master not removed from LB/target group when machine deleted
1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)
1881210 - nodePort for router-default metrics with NodePortService does not exist
1881481 - CVO hotloops on some service manifests
1881484 - CVO hotloops on deployment manifests
1881514 - CVO hotloops on imagestreams from cluster-samples-operator
1881520 - CVO hotloops on (some) clusterrolebindings
1881522 - CVO hotloops on clusterserviceversions packageserver
1881662 - Error getting volume limit for plugin kubernetes.io/<name> in kubelet logs
1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io
1881938 - migrator deployment doesn't tolerate masters
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883587 - No option for user to select volumeMode
1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine
1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster
1884800 - Failed to set up mount unit: Invalid argument
1885186 - Removing ssh keys MC does not remove the key from authorized_keys
1885349 - [IPI Baremetal] Proxy Information Not passed to metal3
1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses
1886572 - auth: error contacting auth provider when extra ingress (not default) goes down
1887849 - When creating new storage class failure_domain is missing.
1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs
1889689 - AggregatedAPIErrors alert may never fire
1890678 - Cypress: Fix 'structure' accesibility violations
1890828 - Intermittent prune job failures causing operator degradation
1891124 - CP Conformance: CRD spec and status failures
1891301 - Deleting bmh by "oc delete bmh" get stuck
1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass
1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created
1892642 - oauth-server password metrics do not appear in UI after initial OCP installation
1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version
1893850 - Add an alert for requests rejected by the apiserver
1893999 - can't login ocp cluster with oc 4.7 client without the username
1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion
1895053 - Allow builds to optionally mount in cluster trust stores
1896226 - recycler-pod template should not be in kubelet static manifests directory
1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name
1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install
1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout
1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1899057 - fix spurious br-ex MAC address error log
1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay
1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect
1900454 - Enable host-based disk encryption on Azure platform
1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ
1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed
1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API
1901648 - "do you need to set up custom dns" tooltip inaccurate
1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list.
1902076 - image registry operator should monitor status of its routes
1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs
1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given
1903228 - Pod stuck in Terminating, runc init process frozen
1903383 - Latest RHCOS 47.83. builds failing to install: mount /root.squashfs failed
1903553 - systemd container renders node NotReady after deleting it
1903700 - metal3 Deployment doesn't have unique Pod selector
1904006 - The --dir option doest not work for command `oc image extract`
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the `skips` field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - `oc adm catalog mirror` doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use `oc new-app --name=testapp2 -i ` with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in "Insights" popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the `resources` section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn't support "csi.storage.k8s.io/fsTyps" parameter
1932135 - When "iopsPerGB" parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When "iopsPerGB" parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can't find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI]
Unable to change upgrade channel once upgrades were discovered\n1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the \"allowedIframeHostnames\" option can lead to bypass hostname whitelist for iframe element\n1932324 - CRIO fails to create a Pod in sandbox stage -  starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \\\"\\n\"\n1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation\n1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new \"canary\" route\n1932453 - Update Japanese timestamp format\n1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue\n1932487 - [OKD] origin-branding manifest is missing cluster profile annotations\n1932502 - Setting MTU for a bond interface using Kernel arguments is not working\n1932618 - Alerts during a test run should fail the test job, but were not\n1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be\n1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy\n1932673 - Virtual machine template provided by red hat should not be editable. 
The UI allows to edit and then reverse the change after it was made\n1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network\n1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM\n1932805 - e2e: test OAuth API connections in the tests by that name\n1932816 - No new local storage operator bundle image is built\n1932834 - enforce the use of hashed access/authorize tokens\n1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console\n1933102 - Canary daemonset uses default node selector\n1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]\n1933159 - multus DaemonSets should use maxUnavailable: 33%\n1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%\n1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%\n1933179 - network-check-target DaemonSet should use maxUnavailable: 10%\n1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%\n1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%\n1933263 - user manifest with nodeport services causes bootstrap to block\n1933269 - Cluster unstable replacing an unhealthy etcd member\n1933284 - Samples in CRD creation are ordered arbitarly\n1933414 - Machines are created with unexpected name for Ports\n1933599 - bump k8s.io/apiserver to 1.20.3\n1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like \":\"\n1933664 - Getting Forbidden for image in a container template when creating a sample app\n1933708 - Grafana is not displaying deployment config resources in dashboard `Default /Kubernetes / Compute Resources / Namespace (Workloads)`\n1933711 - EgressDNS: Keep short lived records at most 30s\n1933730 - 
[AI-UI-Wizard] Toggling \"Use extra disks for local storage\" checkbox highlights the \"Next\" button to move forward but grays out once clicked\n1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively\n1933772 - MCD Crash Loop Backoff\n1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior\n1933857 - Details page can throw an uncaught exception if kindObj prop is undefined\n1933880 - Kuryr-Controller crashes when it\u0027s missing the status object\n1934021 - High RAM usage on machine api termination node system oom\n1934071 - etcd consuming high amount of  memory and CPU after upgrade to 4.6.17\n1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade\n1934085 - Scheduling conformance tests failing in a single node cluster\n1934107 - cluster-authentication-operator builds URL incorrectly for IPv6\n1934112 - Add memory and uptime metadata to IO archive\n1934113 - mcd panic when there\u0027s not enough free disk space\n1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP\n1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh\n1934174 - rootfs too small when enabling NBDE\n1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3\n1934177 - knative-camel-operator  CreateContainerError \"container_linux.go:366: starting container process caused: chdir to cwd (\\\"/home/nonroot\\\") set in config.json failed: permission denied\"\n1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0\n1934229 - List page text filter has input lag\n1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions\n1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady\n1934516 - Setup different priority classes for prometheus-k8s and 
prometheus-user-workload pods\n1934556 - OCP-Metal images\n1934557 - RHCOS boot image bump for LUKS fixes\n1934643 - Need BFD failover capability on ECMP routes\n1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%\n1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)\n1934905 - CoreDNS\u0027s \"errors\" plugin is not enabled for custom upstream resolvers\n1935058 - Can\u2019t finish install sts clusters on aws government region\n1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login\n1935155 - IGMP/MLD packets being dropped\n1935157 - [e2e][automation] environment tests broken\n1935165 - OCP 4.6 Build fails when filename contains an umlaut\n1935176 - Missing an indication whether the deployed setup is SNO. \n1935269 - Topology operator group shows child Jobs. Not shown in details view\u0027s resources. \n1935419 - Failed to scale worker using virtualmedia on Dell R640\n1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting\n1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7\n1935541 - console operator panics in DefaultDeployment with nil cm\n1935582 - prometheus liveness probes cause issues while replaying WAL\n1935604 - high CPU usage fails ingress controller\n1935667 - pipelinerun status icon rendering issue\n1935706 - test: Detect when the master pool is still updating after upgrade\n1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]\n1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text\n1935909 - New CSV using ServiceAccount named \"default\" stuck in Pending during upgrade\n1936022 - DNS operator performs spurious updates in response to API\u0027s defaulting of daemonset\u0027s terminationGracePeriod and service\u0027s 
clusterIPs\n1936030 - Ingress operator performs spurious updates in response to API\u0027s defaulting of NodePort service\u0027s clusterIPs field\n1936223 - The IPI installer has a typo. It is missing the word \"the\" in \"the Engine\". \n1936336 - Updating multus-cni builder \u0026 base images to be consistent with ART 4.8 (closed)\n1936342 - kuryr-controller restarting after 3 days cluster running - pools without members\n1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623\n1936488 - [sig-instrumentation][Late] Alerts shouldn\u0027t report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error\n1936515 - sdn-controller is missing some health checks\n1936534 - When creating a worker with a used mac-address stuck on registering\n1936585 - configure alerts if the catalogsources are missing\n1936620 - OLM checkbox descriptor renders switch instead of checkbox\n1936721 - network-metrics-deamon not associated with a priorityClassName\n1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear\n1936785 - Configmap gatherer doesn\u0027t include namespace name (in the archive path) in case of a configmap with binary data\n1936788 - RBD RWX PVC creation with  Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection\n1936798 - Authentication log gatherer shouldn\u0027t scan all the pod logs in the openshift-authentication namespace\n1936801 - Support ServiceBinding 0.5.0+\n1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow\n1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies\n1936859 - ovirt 4.4 -\u003e 4.5 upgrade jobs are permafailing\n1936867 - Periodic vsphere IPI install is broken - missing pip\n1936871 - [Cinder CSI] Topology aware provisioning doesn\u0027t work when Nova and Cinder AZs are different\n1936904 
- Wrong output YAML when syncing groups without --confirm\n1936983 - Topology view - vm details screen isntt stop loading\n1937005 - when kuryr quotas are unlimited, we should not sent alerts\n1937018 - FilterToolbar component does not handle \u0027null\u0027 value for \u0027rowFilters\u0027 prop\n1937020 - Release new from image stream chooses incorrect ID based on status\n1937077 - Blank White page on Topology\n1937102 - Pod Containers Page Not Translated\n1937122 - CAPBM changes to support flexible reboot modes\n1937145 - [Local storage] PV provisioned by localvolumeset stays in \"Released\" status after the pod/pvc deleted\n1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes\n1937244 - [Local Storage] The model name of aws EBS doesn\u0027t be extracted well\n1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes\n1937452 - cluster-network-operator CI linting fails in master branch\n1937459 - Wrong Subnet retrieved for Service without Selector\n1937460 - [CI] Network quota pre-flight checks are failing the installation\n1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster\n1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation\n1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint\n1937535 - Not all image pulls within OpenShift builds retry\n1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes\n1937627 - Bump DEFAULT_DOC_URL for 4.8\n1937628 - Bump upgrade channels for 4.8\n1937658 - Description for storage class encryption during storagecluster creation needs to be updated\n1937666 - Mouseover on headline\n1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage\n1937693 - ironic image \"/\" cluttered with files\n1937694 - [oVirt] split 
ovirt providerIDReconciler logic into NodeController and ProviderIDController\n1937717 - If browser default font size is 20, the layout of template screen breaks\n1937722 - OCP 4.8 vuln due to BZ 1936445\n1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator\n1937941 - [RFE]fix wording for favorite templates\n1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations\n1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1938321 - Cannot view PackageManifest objects in YAML on \u0027Home \u003e Search\u0027 page nor \u0027CatalogSource details \u003e Operators tab\u0027\n1938465 - thanos-querier should set a CPU request on the thanos-query container\n1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container\n1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them\n1938468 - kube-scheduler-operator has a container without a CPU request\n1938492 - Marketplace extract container does not request CPU or memory\n1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not\n1938636 - Can\u0027t set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller\n1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph\n1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%\n1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances\n1938949 - [VPA] Updater failed to trigger evictions due to \"vpa-admission-controller\" not found\n1939054 - machine healthcheck kills aws spot instance before generated\n1939060 - CNO: nodes and masters are upgrading simultaneously\n1939069 - Add source to vm template silently failed when no storage class is defined in the cluster\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1939168 - 
Builds failing for OCP 3.11 since PR#25 was merged\n1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz\n1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez\n1939232 - CI tests using openshift/hello-world broken by Ruby Version Update\n1939270 - fix co upgradeableFalse status and reason\n1939294 - OLM may not delete pods with grace period zero (force delete)\n1939412 - missed labels for thanos-ruler pods\n1939485 - CVE-2021-20291 containers/storage: DoS via malicious image\n1939547 - Include container=\"POD\" in resource queries\n1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0\n1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated\n1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs\n1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent\n1939661 - support new AWS region ap-northeast-3\n1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution\n1939731 - Image registry operator reports unavailable during normal serial run\n1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters\n1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase\n1939752 - ovnkube-master sbdb container does not set requests on cpu or memory\n1939753 - Delete HCO is stucking if there is still VM in the cluster\n1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page\n1939853 - [DOC] Creating manifests API should not allow folder in the \"file_name\"\n1939865 - GCP PD CSI driver does not have CSIDriver instance\n1939869 - [e2e][automation] Add annotations to datavolume for HPP\n1939873 - Unlimited number of characters accepted for base domain name\n1939943 - 
`cluster-kube-apiserver-operator check-endpoints` observed a panic: runtime error: invalid memory address or nil pointer dereference\n1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration\n1940057 - Openshift builds should use a wach instead of polling when checking for pod status\n1940142 - 4.6-\u003e4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying\n1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network\n1940206 - Selector and VolumeTableRows not i18ned\n1940207 - 4.7-\u003e4.6 rollbacks stuck on prometheusrules admission webhook \"no route to host\"\n1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)\n1940318 - No data under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027\n1940322 - Split of dashbard  is wrong, many Network parts\n1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn\u0027t have flavors needed for compute machines\n1940361 - [e2e][automation] Fix vm action tests with storageclass HPP\n1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters\n1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys\n1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages\n1940499 - hybrid-overlay not logging properly before exiting due to an error\n1940518 - Components in bare metal components lack resource requests\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned\n1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info\n1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list\n1940876 - 
Components in ovirt components lack resource requests\n1940889 - Installation failures in OpenStack release jobs\n1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io\n1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP\n1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster\n1940950 - vsphere: client/bootstrap CSR double create\n1940972 - vsphere: [4.6] CSR approval delayed for unknown reason\n1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16. \n1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy\n1941342 - Add `kata-osbuilder-generate.service` as part of the default presets\n1941456 - Multiple pods stuck in ContainerCreating status with the message \"failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user\" being seen in the journal log\n1941526 - controller-manager-operator: Observed a panic: nil pointer dereference\n1941592 - HAProxyDown not Firing\n1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp\n1941625 - Developer -\u003e Topology - i18n misses\n1941635 - Developer -\u003e Monitoring - i18n misses\n1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid\n1941645 - Developer -\u003e Builds - i18n misses\n1941655 - Developer -\u003e Pipelines - i18n misses\n1941667 - Developer -\u003e Project - i18n misses\n1941669 - Developer -\u003e ConfigMaps - i18n misses\n1941759 - Errored pre-flight checks should not prevent install\n1941798 - Some details pages don\u0027t have internationalized ResourceKind labels\n1941801 - Many filter toolbar dropdowns haven\u0027t been internationalized\n1941815 - From the web console the terminal can no longer connect after using leaving and 
returning to the terminal view\n1941859 - [assisted operator] assisted pod deploy first time in error state\n1941901 - Toleration merge logic does not account for multiple entries with the same key\n1941915 - No validation against template name in boot source customization\n1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description\n1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8\n1941990 - Pipeline metrics endpoint changed in osp-1.4\n1941995 - fix backwards incompatible trigger api changes in osp1.4\n1942086 - Administrator -\u003e Home - i18n misses\n1942117 - Administrator -\u003e Workloads - i18n misses\n1942125 - Administrator -\u003e Serverless - i18n misses\n1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)\n1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail\n1942271 - Insights operator doesn\u0027t gather pod information from openshift-cluster-version\n1942375 - CRI-O failing with error \"reserving ctr name\"\n1942395 - The status is always \"Updating\" on dc detail page after deployment has failed. 
\n1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied\n1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate\n1942536 - Corrupted image preventing containers from starting\n1942548 - Administrator -\u003e Networking - i18n misses\n1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic\n1942555 - Network policies in ovn-kubernetes don\u0027t support external traffic from router when the endpoint publishing strategy is HostNetwork\n1942557 - Query is reporting \"no datapoint\" when label cluster=\"\" is set but work when the label is removed or when running directly in Prometheus\n1942608 - crictl cannot list the images with an error: error locating item named \"manifest\" for image with ID\n1942614 - Administrator -\u003e Storage - i18n misses\n1942641 - Administrator -\u003e Builds - i18n misses\n1942673 - Administrator -\u003e Pipelines - i18n misses\n1942694 - Resource names with a colon do not display property in the browser window title\n1942715 - Administrator -\u003e User Management - i18n misses\n1942716 - Quay Container Security operator has Medium \u003c-\u003e Low colors reversed\n1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]\n1942736 - Administrator -\u003e Administration - i18n misses\n1942749 - Install Operator form should use info icon for popovers\n1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls\n1942839 - Windows VMs fail to start on air-gapped environments\n1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set\n1942858 - [RFE]Confusing detach volume UX\n1942883 - AWS EBS CSI driver does not support partitions\n1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy\n1942935 - must-gather improvements\n1943145 - vsphere: 
client/bootstrap CSR double create\n1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked\n1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest\n1943238 - The conditions table does not occupy 100% of the width. \n1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane\n1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB. \n1943315 - avoid workload disruption for ICSP changes\n1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes\n1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest\n1943356 - Dynamic plugins surfaced in the UI should be referred to as \"Console plugins\"\n1943539 - crio-wipe is failing to start \"Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container\"\n1943543 - DeploymentConfig Rollback doesn\u0027t reset params correctly\n1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement\n1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds\n1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage\n1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn\n1943649 - don\u0027t use hello-openshift for network-check-target\n1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress\n1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions\n1943804 - API server on AWS takes disruption between 70s and 110s after pod 
begins termination via external LB\n1943845 - Router pods should have startup probes configured\n1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors\n1944160 - CNO: nbctl daemon should log reconnection info\n1944180 - OVN-Kube Master does not release election lock on shutdown\n1944246 - Ironic fails to inspect and move node to \"manageable\u0027 but get bmh remains in \"inspecting\"\n1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region\n1944509 - Translatable texts without context in ssh expose component\n1944581 - oc project not works with cluster proxy\n1944587 - VPA could not take actions based on the recommendation when min-replicas=1\n1944590 - The field name \"VolumeSnapshotContent\" is wrong on VolumeSnapshotContent detail page\n1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1944631 - openshif authenticator should not accept non-hashed tokens\n1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with \".. 
still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock\"\n1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures\n1944674 - Project field become to \"All projects\" and disabled in \"Review and create virtual machine\" step in devconsole\n1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods\n1944761 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present\n1944763 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944853 - Update to nodejs \u003e=14.15.4 for ARM\n1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts\n1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation\n1945027 - Button \u0027Copy SSH Command\u0027 does not work\n1945085 - Bring back API data in etcd test\n1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled\n1945103 - \u0027User credentials\u0027 shows even the VM is not running\n1945104 - In k8s 1.21 bump \u0027[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume\u0027 tests are disabled\n1945146 - Remove pipeline Tech preview badge for pipelines GA operator\n1945236 - Bootstrap ignition shim doesn\u0027t follow proxy settings\n1945261 - Operator dependency not consistently chosen from default channel\n1945312 - project deletion does not reset UI project context\n1945326 - console-operator: does not check route health periodically\n1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules\n1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly\n1945443 - operator-lifecycle-manager-packageserver flaps 
Available=False with no reason or message\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1945548 - catalog resource update failed if spec.secrets set to \"\"\n1945584 - Elasticsearch  operator fails to install on 4.8 cluster on ppc64le/s390x\n1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION\n1945630 - Pod log filename no longer in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin\n1945646 - gcp-routes.sh running as initrc_t unnecessarily\n1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret\n1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry\n1945687 - Dockerfile needs updating to new container CI registry\n1945700 - Syncing boot mode after changing device should be restricted to Supermicro\n1945816 - \" Ingresses \" should be kept in English for Chinese\n1945818 - Chinese translation issues: Operator should be the same with English `Operators`\n1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out\n1945910 - [aws] support byo iam roles for instances\n1945948 - SNO: pods can\u0027t reach ingress when the ingress uses a different IPv6. \n1946079 - Virtual master is not getting an IP address\n1946097 - [oVirt] oVirt credentials secret contains unnecessary \"ovirt_cafile\"\n1946119 - panic parsing install-config\n1946243 - No relevant error when pg limit is reached in block pools page\n1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image\n1946320 - Incorrect error message in Deployment Attach Storage Page\n1946449 - [e2e][automation] Fix cloud-init tests as UI changed\n1946458 - Edit Application action overwrites Deployment envFrom values on save\n1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI. 
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: <nil>"
1946506 - [on-prem] mDNS plugin no longer needed
1946513 - honor use specified system reserved with auto node sizing
1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready
1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster
1946607 - etcd readinessProbe is not reflective of actual readiness
1946705 - Fix issues with "search" capability in the Topology Quick Add component
1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation
1946788 - Serial tests are broken because of router
1946790 - Marketplace operator flakes Available=False OperatorStarting during updates
1946838 - Copied CSVs show up as adopted components
1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value
1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics
1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken
1946922 - Ingress details page doesn't show referenced secret name and link
1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False
1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot
1947066 - machine-config-operator pod crashes when noProxy is *
1947067 - [Installer] Pick up upstream fix for installer console output
1947078 - Incorrect skipped status for conditional tasks in the pipeline run
1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception
1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1947164 - Print "Successfully pushed" even if the build push fails.
1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)
1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name's
1947360 - [vSphere csi driver operator] operator pod runs as “BestEffort” qosClass
1947371 - [vSphere csi driver operator] operator doesn't create “csidriver” instance
1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout
1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)
1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot
1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)
1947663 - disk details are not synced in web-console
1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin
1947684 - MCO on SNO sometimes has rendered configs and sometimes does not
1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals.
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display
1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods
1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?
1947771 - [kube-descheduler]descheduler operator pod should not run as “BestEffort” qosClass
1947774 - CSI driver operators use "Always" imagePullPolicy in some containers
1947775 - [vSphere csi driver operator] doesn’t use the downstream images from payload.
1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade
1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display
1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display
1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display
1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display
1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin
1947828 - `download it` link should save pod log in <pod-name>-<container-name>.log format
1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed
1947917 - Egress Firewall does not reliably apply firewall rules
1947946 - Operator upgrades can delete existing CSV before completion
1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown
1948012 - service-ca constantly reporting type "Upgradeable" status Unknown
1948019 - [4.8] Large number of requests to the infrastructure cinder volume service
1948022 - Some on-prem namespaces missing from must-gather
1948040 - cluster-etcd-operator: etcd is using deprecated logger
1948082 - Monitoring should not set Available=False with no reason on updates
1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O.
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields
1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later
1948359 - [aws] shared tag was not removed from user provided IAM role
1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always"
1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing
1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows
1948431 - TechPreviewNoUpgrade does not enable CSI migration
1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node
1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge
1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]
1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes
1948513 - get-resources.sh doesn't honor the no_proxy settings
1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute
1948546 - VM of worker is in error state when a network has port_security_enabled=False
1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand
1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test
1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable"
1948582 - Need ability to specify local gateway mode in CNO config
1948585 - Need a CI jobs to test local gateway mode with bare metal
1948592 - [Cluster Network Operator] Missing Egress Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows `bootstrap.ign was not found`
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere-* images to vsphere-* images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use `oc adm top is` shortcut when asking for `imagestreams`
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command `ccoctl aws create-identity-provider` with `--output-dir` parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (`UtilizationCard`) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] `oc get apirequestcount` shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by
NM\n1955196 - linuxptp-daemon crash on 4.8\n1955226 - operator updates apirequestcount CRD over and over\n1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing\n1955256 - stop collecting API that no longer exists\n1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts\n1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains \"google\"\n1955414 - 4.8 -\u003e 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator\n1955445 - Drop crio image metrics with high cardinality\n1955457 - Drop container_memory_failures_total metric because of high cardinality\n1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter\n1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0\n1955478 - Drop high-cardinality metrics from kube-state-metrics which aren\u0027t used\n1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation\n1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range\n1955554 - MAO does not react to events triggered from Validating Webhook Configurations\n1955589 - thanos-querier should have a PodDisruptionBudget in HA topology\n1955595 - Add DevPreviewLongLifecycle Descheduler profile\n1955596 - Pods stuck in creation phase on realtime kernel SNO\n1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing\n1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status [\u0027installing\u0027, \u0027error\u0027]\n1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta\n1955749 - OCP branded templates need to be translated\n1955761 - packageserver clusteroperator does not set reason or message for Available condition\n1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces\n1955803 - OperatorHub 
- console accepts any value for \"Infrastructure features\" annotation\n1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables\n1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable\n1955862 - Local Storage Operator using LocalVolume CR fails to create PV\u0027s when backend storage failure is simulated\n1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct\n1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-\u003e Removed-\u003e Managed in configs.imageregistry with high ratio\n1955969 - Workers cannot be deployed attached to multiple networks. \n1956079 - Installer gather doesn\u0027t collect any networking information\n1956208 - Installer should validate root volume type\n1956220 - Set htt proxy system properties as expected by kubernetes-client\n1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet\n1956334 - Event Listener Details page does not show Triggers section\n1956353 - test: analyze job consistently fails\n1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate\n1956405 - Bump k8s dependencies in cluster resource override admission operator\n1956411 - Apply custom tags to AWS EBS volumes\n1956480 - [4.8] Bootimage bump tracker\n1956606 - probes FlowSchema manifest not included in any cluster profile\n1956607 - Multiple manifests lack cluster profile annotations\n1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup\n1956610 - manage-helm-repos manifest lacks cluster profile annotations\n1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string\n1956650 - The container disk URL is empty for Windows guest tools\n1956768 - 
aws-ebs-csi-driver-controller-metrics TargetDown\n1956826 - buildArgs does not work when the value is taken from a secret\n1956895 - Fix chatty kubelet log message\n1956898 - fix log files being overwritten on container state loss\n1956920 - can\u0027t open terminal for pods that have more than one container running\n1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false\n1956978 - Installer gather doesn\u0027t include pod names in filename\n1957039 - Physical VIP for pod -\u003e Svc -\u003e Host is incorrectly set to an IP of 169.254.169.2 for Local GW\n1957041 - Update CI e2echart with more node info\n1957127 - Delegated authentication: reduce the number of watch requests\n1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the \"tests\" image\n1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes\n1957149 - CI: \"Managed cluster should start all core operators\" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: \"volumesnapshotclass.yaml\" (string): missing dynamicClient\n1957179 - Incorrect VERSION in node_exporter\n1957190 - CI jobs failing due too many watch requests (prometheus-operator)\n1957198 - Misspelled console-operator condition\n1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap\n1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2\n1957261 - update godoc for new build status image change trigger fields\n1957295 - Apply priority classes conventions as test to openshift/origin repo\n1957315 - kuryr-controller doesn\u0027t indicate being out of quota\n1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly\n1957374 - mcddrainerr doesn\u0027t list specific pod\n1957386 - Config serve and validate command should be under alpha\n1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions\n1957502 - Infrequent 
panic in kube-apiserver in aws-serial job\n1957561 - lack of pseudolocalization for some text on Cluster Setting page\n1957584 - Routes are not getting created  when using hostname  without FQDN standard\n1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone\n1957645 - Event \"Updated PrometheusRule.monitoring.coreos.com/v1 because it changed\" is frequently looped with weird empty {} changes\n1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP\u0027s\n1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out\n1957748 - Ptp operator pod should have CPU and memory requests set but not limits\n1957756 - Device Replacemet UI, The status of the disk is \"replacement ready\" before I clicked on \"start replacement\"\n1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1957775 - CVO creating cloud-controller-manager too early causing upgrade failures\n1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error\n1957822 - Update apiserver tlsSecurityProfile description to include Custom profile\n1957832 - CMO end-to-end tests work only on AWS\n1957856 - \u0027resource name may not be empty\u0027 is shown in CI testing\n1957869 - baremetal IPI power_interface for irmc is inconsistent\n1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects\n1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer\n1957893 - ClusterDeployment / Agent conditions show \"ClusterAlreadyInstalling\" during each spoke install\n1957895 - Cypress helper projectDropdown.shouldContain is not an assertion\n1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator\u0027s version reads\n1957926 - \"Add Capacity\" should allow to add n*3 (or n*4) local devices at 
once\n1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state\n1957967 - Possible test flake in listPage Cypress view\n1957972 - Leftover templates from mdns\n1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7\n1957982 - Deployment Actions clickable for view-only projects\n1957991 - ClusterOperatorDegraded can fire during installation\n1958015 - \"config-reloader-cpu\" and \"config-reloader-memory\" flags have been deprecated for prometheus-operator\n1958080 - Missing i18n for login, error and selectprovider pages\n1958094 - Audit log files are corrupted sometimes\n1958097 - don\u0027t show \"old, insecure token format\" if the token does not actually exist\n1958114 - Ignore staged vendor files in pre-commit script\n1958126 - [OVN]Egressip doesn\u0027t take effect\n1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs\n1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names\n1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs\n1958285 - Deployment considered unhealthy despite being available and at latest generation\n1958296 - OLM must explicitly alert on deprecated APIs in use\n1958329 - pick 97428: add more context to log after a request times out\n1958367 - Build metrics do not aggregate totals by build strategy\n1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton\n1958405 - etcd: current health checks and reporting are not adequate to ensure availability\n1958406 - Twistlock flags mode of /var/run/crio/crio.sock\n1958420 - openshift-install 4.7.10 fails with segmentation error\n1958424 - aws: support more auth options in manual mode\n1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View\n1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse\n1958643 - All pods creation stuck due 
to SR-IOV webhook timeout\n1958679 - Compression on pool can\u0027t be disabled via UI\n1958753 - VMI nic tab is not loadable\n1958759 - Pulling Insights report is missing retry logic\n1958811 - VM creation fails on API version mismatch\n1958812 - Cluster upgrade halts as machine-config-daemon fails to parse `rpm-ostree status` during cluster upgrades\n1958861 - [CCO] pod-identity-webhook certificate request failed\n1958868 - ssh copy is missing when vm is running\n1958884 - Confusing error message when volume AZ not found\n1958913 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs\n1958958 - [SCALE] segfault with ovnkube adding to address set\n1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes\n1959041 - LSO Cluster UI,\"Troubleshoot\" link does not exist after scale down osd pod\n1959058 - ovn-kubernetes has lock contention on the LSP cache\n1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change\n1959177 - Descheduler dev manifests are missing permissions\n1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload\n1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates\n1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring\n1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check\n1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system\n1959406 - Difficult to debug performance on ovn-k without pprof enabled\n1959471 - Kube sysctl conformance tests are disabled, meaning we can\u0027t submit conformance results\n1959479 - machines doesn\u0027t support dual-stack loadbalancers on Azure\n1959513 
- Cluster-kube-apiserver does not use library-go for audit pkg\n1959519 - Operand details page only renders one status donut no matter how many \u0027podStatuses\u0027 descriptors are used\n1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console\n1959564 - Test verify /run filesystem contents failing\n1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot\n1959650 - Gather SDI-related MachineConfigs\n1959658 - showing a lot \"constructing many client instances from the same exec auth config\"\n1959696 - Deprecate \u0027ConsoleConfigRoute\u0027 struct in console-operator config\n1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO\n1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode\n1959711 - Egressnetworkpolicy  doesn\u0027t work when configure the EgressIP\n1959786 - [dualstack]EgressIP doesn\u0027t work on dualstack cluster for IPv6\n1959916 - Console not works well against a proxy in front of openshift clusters\n1959920 - UEFISecureBoot set not on the right master node\n1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []\n1960035 - iptables is missing from ose-keepalived-ipfailover image\n1960059 - Remove \"Grafana UI\" link from Console Monitoring \u003e Dashboards page\n1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions\n1960129 - [e2e][automation] add smoke tests about VM pages and actions\n1960134 - some origin images are not public\n1960171 - Enable SNO checks for image-registry\n1960176 - CCO should recreate a user for the component when it was removed from the cloud providers\n1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled\n1960255 - fixed obfuscation permissions\n1960257 - breaking changes in pr template\n1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on 
shutdown, policy Cluster has significant performance cost\n1960323 - Address issues raised by coverity security scan\n1960324 - manifests: extra \"spec.version\" in console quickstarts makes CVO hotloop\n1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960339 - manifests: unset \"preemptionPolicy\" makes CVO hotloop\n1960531 - Items under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027 keep added for every access\n1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana\n1960546 - Add virt_platform metric to the collected metrics\n1960554 - Remove rbacv1beta1 handling code\n1960612 - Node disk info in overview/details does not account for second drive where /var is located\n1960619 - Image registry integration tests use old-style OAuth tokens\n1960683 - GlobalConfigPage is constantly requesting resources\n1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces\n1960716 - Missing details for debugging\n1960732 - Outdated manifests directory in CSI driver operator repositories\n1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master\n1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be \"the newest\"\n1960767 - /metrics endpoint of the Grafana UI is accessible without authentication\n1960780 - CI: failed to create PDB \"service-test\" the server could not find the requested resource\n1961064 - Documentation link to network policies is outdated\n1961067 - Improve log gathering logic\n1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs\n1961091 - Gather MachineHealthCheck definitions\n1961120 - CSI driver operators fail when 
upgrading a cluster\n1961173 - recreate existing static pod manifests instead of updating\n1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing\n1961314 - Race condition in operator-registry pull retry unit tests\n1961320 - CatalogSource does not emit any metrics to indicate if it\u0027s ready or not\n1961336 - Devfile sample for BuildConfig is not defined\n1961356 - Update single quotes to double quotes in string\n1961363 - Minor string update for \" No Storage classes found in cluster, adding source is disabled.\"\n1961393 - DetailsPage does not work with group~version~kind\n1961452 - Remove \"Alertmanager UI\" link from Console Monitoring \u003e Alerting page\n1961466 - Some dropdown placeholder text on route creation page is not translated\n1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true\n1961506 - NodePorts do not work on RHEL 7.9 workers (was \"4.7 -\u003e 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers\")\n1961536 - clusterdeployment without pull secret is crashing assisted service pod\n1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961545 - Fixing Documentation Generation\n1961550 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961561 - The encryption controllers send lots of request to an API server\n1961582 - Build failure on s390x\n1961644 - NodeAuthenticator tests are failing in IPv6\n1961656 - driver-toolkit missing some release metadata\n1961675 - Kebab menu of taskrun contains Edit options which should not be present\n1961701 - Enhance gathering of events\n1961717 - Update runtime dependencies to Wallaby builds for bugfixes\n1961829 - 
Quick starts prereqs not shown when description is long\n1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy\n1961878 - Add Sprint 199 translations\n1961897 - Remove history listener before console UI is unmounted\n1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes\n1962062 - Monitoring dashboards should support default values of \"All\"\n1962074 - SNO:the pod get stuck in CreateContainerError and prompt \"failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable\" after adding a performanceprofile\n1962095 - Replace gather-job image without FQDN\n1962153 - VolumeSnapshot routes are ambiguous, too generic\n1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime\n1962219 - NTO relies on unreliable leader-for-life implementation. \n1962256 - use RHEL8 as the vm-example\n1962261 - Monitoring components requesting more memory than they use\n1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster\n1962347 - Cluster does not exist logs after successful installation\n1962392 - After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1962415 - duplicate zone information for in-tree PV after enabling migration\n1962429 - Cannot create windows vm because kubemacpool.io denied the request\n1962525 - [Migration] SDN migration stuck on MCO on RHV cluster\n1962569 - NetworkPolicy details page should also show Egress rules\n1962592 - Worker nodes restarting during OS installation\n1962602 - Cloud credential operator scrolls info \"unable to provide upcoming...\" on unsupported platform\n1962630 - NTO: Ship the current upstream TuneD\n1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root\n1962698 - Console-operator can not create resource console-public 
configmap in the openshift-config-managed namespace\n1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint\n1962740 - Add documentation to Egress Router\n1962850 - [4.8] Bootimage bump tracker\n1962882 - Version pod does not set priorityClassName\n1962905 - Ramdisk ISO source defaulting to \"http\" breaks deployment on a good amount of BMCs\n1963068 - ironic container should not specify the entrypoint\n1963079 - KCM/KS: ability to enforce localhost communication with the API server. \n1963154 - Current BMAC reconcile flow skips Ironic\u0027s deprovision step\n1963159 - Add Sprint 200 translations\n1963204 - Update to 8.4 IPA images\n1963205 - Installer is using old redirector\n1963208 - Translation typos/inconsistencies for Sprint 200 files\n1963209 - Some strings in public.json have errors\n1963211 - Fix grammar issue in kubevirt-plugin.json string\n1963213 - Memsource download script running into API error\n1963219 - ImageStreamTags not internationalized\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table\n1963502 - create template from is not descriptive\n1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too\n1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault\n1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 
\n1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies\n1963871 - cluster-etcd-operator:[build] upgrade to go 1.16\n1963896 - The VM disks table does not show easy links to PVCs\n1963912 - \"[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}\" failures on vsphere\n1963932 - Installation failures in bootstrap in OpenStack release jobs\n1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail\n1964059 - rebase openshift/sdn to kube 1.21.1\n1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to \"Unknown provider baremetal\"\n1964243 - The `oc compliance fetch-raw` doesn\u2019t work for disconnected cluster\n1964270 - Failed to install \u0027cluster-kube-descheduler-operator\u0027 with error: \"clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\\\": must be no more than 63 characters\"\n1964319 - Network policy \"deny all\" interpreted as \"allow all\" in description page\n1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured\n1964472 - Make project and namespace requirements more visible rather than giving me an error after submission\n1964486 - Bulk adding of CIDR IPS to whitelist is not working\n1964492 - Pick 102171: Implement support for watch initialization in P\u0026F\n1964625 - NETID duplicate check is only required in NetworkPolicy Mode\n1964748 - Sync upstream 1.7.2 downstream\n1964756 - PVC status is always in \u0027Bound\u0027 status when it is actually cloning\n1964847 - Sanity check test suite missing from the repo\n1964888 - opoenshift-apiserver imagestreamimports depend on \u003e34s timeout support, WAS: transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n1964936 - error log for \"oc adm catalog mirror\" is not correct\n1964979 - Add mapping from ACI to infraenv to handle creation order issues\n1964997 - Helm Library charts are showing and can be installed from Catalog\n1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots\n1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation\n1965283 - 4.7-\u003e4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:\n1965330 - oc image extract fails due to security capabilities on files\n1965334 - opm index add fails during image extraction\n1965367 - Typo in in etcd-metric-serving-ca resource name\n1965370 - \"Route\" is not translated in Korean or Chinese\n1965391 - When storage class is already present wizard do not jumps to \"Stoarge and nodes\"\n1965422 - runc is missing Provides oci-runtime in rpm spec\n1965522 - [v2v] Multiple typos on VM Import screen\n1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists\n1965909 - Replace \"Enable Taint Nodes\" by \"Mark nodes as dedicated\"\n1965921 - [oVirt] High performance VMs shouldn\u0027t be created with Existing policy\n1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request\n1966077 - `hidden` descriptor is visible in the Operator instance details page`\n1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11\n1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality\n1966138 - (release-4.8) Update K8s \u0026 OpenShift API versions\n1966156 - Issue with Internal Registry CA on the service pod\n1966174 - No storage class is installed, OCS and CNV installations fail\n1966268 - Workaround for Network Manager not supporting nmconnections priority\n1966401 - Revamp Ceph Table in 
Install Wizard flow\n1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert\n1966416 - (release-4.8) Do not exceed the data size limit\n1966459 - \u0027policy/v1beta1 PodDisruptionBudget\u0027 and \u0027batch/v1beta1 CronJob\u0027 appear in image-registry-operator log\n1966487 - IP address in Pods list table are showing node IP other than pod IP\n1966520 - Add button from ocs add capacity should not be enabled if there are no PV\u0027s\n1966523 - (release-4.8) Gather MachineAutoScaler definitions\n1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed\n1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug\n1966602 - don\u0027t require manually setting IPv6DualStack feature gate in 4.8\n1966620 - The bundle.Dockerfile in the repo is obsolete\n1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1966654 - Alertmanager PDB is not created, but Prometheus UWM is\n1966672 - Add Sprint 201 translations\n1966675 - Admin console string updates\n1966677 - Change comma to semicolon\n1966683 - Translation bugs from Sprint 201 files\n1966684 - Verify \"Creating snapshot for claim \u003c1\u003e{pvcName}\u003c/1\u003e\" displays correctly\n1966697 - Garbage collector logs every interval - move to debug level\n1966717 - include full timestamps in the logs\n1966759 - Enable downstream plugin for Operator SDK\n1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966813 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1\n1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkub[e\"\n1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings \"ipv6.dhcp-duid=ll\" missing from dual stack 
install\n1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image\n1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored\n1967197 - 404 errors loading some i18n namespaces\n1967207 - Getting started card: console customization resources link shows other resources\n1967208 - Getting started card should use semver library for parsing the version instead of string manipulation\n1967234 - Console is continuously polling for ConsoleLink acm-link\n1967275 - Awkward wrapping in getting started dashboard card\n1967276 - Help menu tooltip overlays dropdown\n1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check\n1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit\n1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests\n1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small\n1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit\n1967595 - Fixes the remaining lint issues\n1967614 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn\u0027t work if ovirt-config.yaml doesn\u0027t exist and user should fill the FQDN URL\n1967625 - Add OpenShift Dockerfile for cloud-provider-aws\n1967631 - [4.8.0] Cluster install failed due to timeout while \"Waiting for control plane\"\n1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting 
for bootkube\"\n1967639 - Console whitescreens if user preferences fail to load\n1967662 - machine-api-operator should not use deprecated \"platform\" field in infrastructures.config.openshift.io\n1967667 - Add Sprint 202 Round 1 translations\n1967713 - Insights widget shows invalid link to the OCM\n1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming\n1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than \"NoExecute\"\n1967803 - should update to 7.5.5 for grafana resources version label\n1967832 - Add more tests for periodic.go\n1967833 - Add tasks pool to tasks_processing\n1967842 - Production logs are spammed on \"OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS\"\n1967843 - Fix null reference to messagesToSearch in gather_logs.go\n1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring\n1967933 - Network-Tools debug scripts not working as expected\n1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: \"mkdir: cannot create directory \u0027/var/lib/pgsql/data/userdata\u0027: Permission denied\"\n1968019 - drain timeout and pool degrading period is too short\n1968067 - [master] Agent validation not including reason for being insufficient\n1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed\n1968175 - [4.8.0] Agent validation not including reason for being insufficient\n1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration\n1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn\u0027t be required\n1968435 - [4.8.0] Unclear message in case of missing clusterImageSet\n1968436 - Listeners timeout updated to remain using default value\n1968449 - [4.8.0] Wrong Install-config override documentation\n1968451 - [4.8.0] Garbage collector not cleaning up directories of removed 
clusters\n1968452 - [4.8.0] [doc] \"Mirror Registry Configuration\" doc section needs clarification of functionality and limitations\n1968454 - [4.8.0] backend events generated with wrong namespace for agent\n1968455 - [4.8.0] Assisted Service operator\u0027s controllers are starting before the base service is ready\n1968515 - oc should set user-agent when talking with registry\n1968531 - Sync upstream 1.8.0 downstream\n1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn\u0027t clean up properly\n1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted\n1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox\n1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil\n1968701 - Bare metal IPI installation is failed due to worker inspection failure\n1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed\n1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning\n1969284 - Console Query Browser: Can\u0027t reset zoom to fixed time range after dragging to zoom\n1969315 - [4.8.0] BMAC doesn\u0027t check if ISO Url changed before queuing BMH for reconcile\n1969352 - [4.8.0] Creating BareMetalHost without the \"inspect.metal3.io\" does not automatically add it\n1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
\n1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it\n1969386 - Filesystem\u0027s Utilization doesn\u0027t show in VM overview tab\n1969397 - OVN bug causing subports to stay DOWN fails installations\n1969470 - [4.8.0] Misleading error in case of install-config override bad input\n1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step\n1969525 - Replace golint with revive\n1969535 - Topology edit icon does not link correctly when branch name contains slash\n1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it\n1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long\n1969561 - Test \"an end user can use OLM can subscribe to the operator\" generates deprecation alert\n1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire\n1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io\n1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1\n1969626 - Portfoward stream cleanup can cause kubelet to panic\n1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out\n1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check\n1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso\n1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups\n1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml\n1969784 - WebTerminal widget should send resize events\n1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails\n1969891 - Fix rotated pipelinerun status icon issue in safari\n1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse\n1969903 - Provisioning a large number 
of hosts results in an unexpected delay in hosts becoming available\n1969951 - Cluster local doesn\u0027t work for knative services created from dev console\n1969969 - ironic-rhcos-downloader container uses and old base image\n1970062 - ccoctl does not work with STS authentication\n1970068 - ovnkube-master logs \"Failed to find node ips for gateway\" error\n1970126 - [4.8.0] Disable \"metrics-events\" when deploying using the operator\n1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change\n1970262 - [4.8.0] Remove Agent CRD Status fields not needed\n1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs\n1970269 - [4.8.0] missing role in agent CRD\n1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs\n1970381 - Monitoring dashboards: Custom time range inputs should retain their values\n1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed\n1970401 - [4.8.0] AgentLabelSelector is required yet not supported\n1970415 - SR-IOV Docs needs documentation for disabling port security on a network\n1970470 - Add pipeline annotation to Secrets which are created for a private repo\n1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod\n1970624 - 4.7-\u003e4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io\n1970828 - \"500 Internal Error\" for all openshift-monitoring routes\n1970975 - 4.7 -\u003e 4.8 upgrades on AWS take longer than expected\n1971068 - Removing invalid AWS instances from the CF templates\n1971080 - 4.7-\u003e4.8 CI: KubePodNotReady due to MCD\u0027s 5m sleep between drain attempts\n1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !\n1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces\n1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing 
\"Validated\" condition about VIP not matching machine network\n1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn\u0027t work - clusteroperator/kube-apiserver is not upgradeable\n1971589 - [4.8.0] Telemetry-client won\u0027t report metrics in case the cluster was installed using the assisted operator\n1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service\n1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery\n1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)\n1971739 - Keep /boot RW when kdump is enabled\n1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly\n1972128 - ironic-static-ip-manager container still uses 4.7 base image\n1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are\n1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster\n1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1972262 - [4.8.0] \"baremetalhost.metal3.io/detached\" uses boolean value where string is expected\n1972426 - Adopt failure can trigger deprovisioning\n1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage\n1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration\n1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall\n1972565 - performance issues due to lost node, pods taking too long to relaunch\n1972662 - DPDK KNI modules need some additional tools\n1972676 - Requirements for authenticating kernel modules with X.509\n1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings\n1972690 - [4.8.0] infra-env condition message isn\u0027t informative in case of 
missing pull secret\n1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration\n1972768 - kube-apiserver setup fail while installing SNO due to port being used\n1972864 - New `local-with-fallback` service annotation does not preserve source IP\n1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8\n1973117 - No storage class is installed, OCS and CNV installations fail\n1973233 - remove kubevirt images and references\n1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. \n1973428 - Placeholder bug for OCP 4.8.0 image release\n1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped\n1973672 - fix ovn-kubernetes NetworkPolicy 4.7-\u003e4.8 upgrade issue\n1973995 - [Feature:IPv6DualStack] tests are failing in dualstack\n1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings\n1974447 - Requirements for nvidia GPU driver container for driver toolkit\n1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. \n1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel\n1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion\n1974746 - [4.8.0] File system usage not being logged appropriately\n1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
\n1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster\n1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string\n1974850 - [4.8] coreos-installer failing Execshield\n1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift\n1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing\n1975155 - Kubernetes service IP cannot be accessed for rhel worker\n1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types\n1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData\n1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified\n1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve\n1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn\n1975672 - [4.8.0] Production logs are spammed on \"Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient\"\n1975789 - worker nodes rebooted when we simulate a case where the api-server is down\n1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]\n1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn\u0027t work -  ingresscontroller \"default\" is degraded\n1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts\n1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO\n1977233 - [4.8] Unable to 
authenticate against IDP after upgrade to 4.8-rc.1\n1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO\n1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller\n1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes\n1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses\n1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8\n1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod\n1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used\n1980788 - NTO-shipped stalld can segfault\n1981633 - enhance service-ca injection\n1982250 - Performance Addon Operator fails to install after catalog source becomes ready\n1982252 - olm Operator is in CrashLoopBackOff state with error \"couldn\u0027t cleanup cross-namespace ownerreferences\"\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-2183\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-15106\nhttps://access.redhat.com/security/cve/CVE-2020-15112\nhttps://access.redhat.com/security/cve/CVE-2020-15113\nhttps://access.redhat.com/security/cve/CVE-2020-15114\nhttps://access.redhat.com/security/cve/CVE-2020-15136\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-26541\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2021-3114\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3636\nhttps://access.redhat.com/security/cve/CVE-2021-20206\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20291\nhttps://access.redhat.com/security/cve/CVE-2021-21419\nhttps://access.redhat.com/security/cve/CVE-2021-21623\nhttps://access.redhat.com/security/cve/CVE-2021-21639\nhttps://access.redhat.com/security/cve/CVE-2021-21640\nhttps://access.redhat.com/security/cve/CVE-2021-21648\nhttps://access.redhat.com/security/cve/CVE-2021-22133\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-25735\nhttps://access.redhat.com/security/cve/CVE-2021-25737\nhttps://access.r
edhat.com/security/cve/CVE-2021-26539\nhttps://access.redhat.com/security/cve/CVE-2021-26540\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-29059\nhttps://access.redhat.com/security/cve/CVE-2021-29622\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33194\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 
nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - 
CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. VDSM manages and monitors the host\u0027s storage, memory and\nnetworks as well as virtual machine creation, other host administration\ntasks, statistics gathering, and log collection. \n\nBug Fix(es):\n\n* An update in libvirt has changed the way block threshold events are\nsubmitted. \nAs a result, the VDSM was confused by the libvirt event, and tried to look\nup a drive, logging a warning about a missing drive. \nIn this release, the VDSM has been adapted to handle the new libvirt\nbehavior, and does not log warnings about missing drives. (BZ#1948177)\n\n* Previously, when a virtual machine was powered off on the source host of\na live migration and the migration finished successfully at the same time,\nthe two events  interfered with each other, and sometimes prevented\nmigration cleanup resulting in additional migrations from the host being\nblocked. \nIn this release, additional migrations are not blocked. (BZ#1959436)\n\n* Previously, when failing to execute a snapshot and re-executing it later,\nthe second try would fail due to using the previous execution data. In this\nrelease, this data will be used only when needed, in recovery mode. \n(BZ#1984209)\n\n4. Then engine deletes the volume and causes data corruption. \n1998017 - Keep cinbderlib dependencies optional for 4.4.8\n\n6. 
\n\nBug Fix(es):\n\n* Documentation is referencing deprecated API for Service Export -\nSubmariner (BZ#1936528)\n\n* Importing of cluster fails due to error/typo in generated command\n(BZ#1936642)\n\n* RHACM 2.2.2 images (BZ#1938215)\n\n* 2.2 clusterlifecycle fails to allow provision `fips: true` clusters on\naws, vsphere (BZ#1941778)\n\n3. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-28500",
        "trust": 4.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "PACKETSTORM",
        "id": "163276",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162151",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "162901",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "163690",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "163747",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164090",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1225",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.1871",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5790",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.3036",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2232",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2182",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2555",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2021.2657",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4568",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2555",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022052615",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021090922",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021062702",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-373964",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168352",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "id": "VAR-202102-1492",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      }
    ],
    "trust": 0.30766129
  },
  "last_update_date": "2024-11-23T19:57:25.302000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "perf",
        "trust": 0.8,
        "url": "https://github.com/lodash/lodash/pull/5065"
      },
      {
        "title": "lodash Security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=142393"
      },
      {
        "title": "Debian CVElist Bug Report Logs: CVE-2021-23337 CVE-2020-28500",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=705b23b69122ed473c796891371a9f52"
      },
      {
        "title": "IBM: Security Bulletin: IBM Integration Bus \u0026 IBM App Connect Enterprise V11 are affected by vulnerabilities in Node.js (CVE-2020-28500)",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3d9a3b6c21f9e87c491e9c1a56004595"
      },
      {
        "title": "IBM: Security Bulletin: A security vulnerability in Node.js Lodash module affects IBM Cloud Automation Manager.",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ab2b9d02254c2d45625dc8b682d0c4eb"
      },
      {
        "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226429"
      },
      {
        "title": "tsp-vulnerable-app-nodejs-express",
        "trust": 0.1,
        "url": "https://github.com/the-scan-project/tsp-vulnerable-app-nodejs-express "
      },
      {
        "title": "sample-vulnerable-app-nodejs-express",
        "trust": 0.1,
        "url": "https://github.com/samoylenko/sample-vulnerable-app-nodejs-express "
      },
      {
        "title": "lm-test",
        "trust": 0.1,
        "url": "https://github.com/MishaKav/lm-test "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "NVD-CWE-Other",
        "trust": 1.0
      },
      {
        "problemtype": "others (CWE-Other) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.6,
        "url": "https://snyk.io/vuln/snyk-java-orgfujionwebjars-1074896"
      },
      {
        "trust": 2.0,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500"
      },
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.8,
        "url": "https://security.netapp.com/advisory/ntap-20210312-0006/"
      },
      {
        "trust": 1.8,
        "url": "https://github.com/lodash/lodash/blob/npm/trimend.js%23l8"
      },
      {
        "trust": 1.8,
        "url": "https://github.com/lodash/lodash/pull/5065"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjars-1074894"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbower-1074892"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithublodash-1074895"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1074893"
      },
      {
        "trust": 1.8,
        "url": "https://snyk.io/vuln/snyk-js-lodash-1018905"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com//security-alerts/cpujul2021.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpujan2022.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 1.8,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu99475301/index.html"
      },
      {
        "trust": 0.7,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-integration-bus-ibm-app-connect-enterprise-v11-are-affected-by-vulnerabilities-in-node-js-cve-2020-28500/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2020-28500"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-23337"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-discovery-for-ibm-cloud-pak-for-data-affected-by-vulnerability-in-node-js-3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2657"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1225"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162901/red-hat-security-advisory-2021-2179-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-insights-is-affected-by-multiple-vulnerabilities-5/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6486341"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163747/red-hat-security-advisory-2021-3016-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager-2/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164090/red-hat-security-advisory-2021-3459-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.1871"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.3036"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021090922"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163276/red-hat-security-advisory-2021-2543-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6524656"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6483681"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/162151/red-hat-security-advisory-2021-1168-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021062702"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2232"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/163690/red-hat-security-advisory-2021-2438-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-pak-for-multicloud-management-managed-service/"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-potential-vulnerability-with-node-js-lodash-module-2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2021.2555"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2182"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5790"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/lodash-denial-of-service-via-tonumber-trim-36225"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-cloud-pak-for-integration-is-vulnerable-to-node-js-lodash-vulnerability-cve-2020-28500/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4568"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23337"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3449"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3450"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-28852"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8286"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20305"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8927"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8285"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-29363"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3114"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13434"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2016-10228"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8231"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3326"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27219"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-8284"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27618"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/2974891"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33034"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-28092"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3520"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3537"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3121"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-33909"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3518"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-32399"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3516"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23368"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23362"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3517"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3541"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20271"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-27292"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23382"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21321"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23841"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-28851"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-23840"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/the-scan-project/tsp-vulnerable-app-nodejs-express"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26116"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2543"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-3842"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13776"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3177"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13949"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2179"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25737"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26540"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21419"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33194"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26539"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29059"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2438"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15112"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20206"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23362"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26160"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21640"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:2437"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21639"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15106"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-26541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29622"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21648"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20291"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15113"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22133"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2016-2183"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15114"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29418"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29482"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27358"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23369"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23364"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23343"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21309"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23383"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-28918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3560"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33033"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-1730"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20934"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25217"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3016"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3377"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-21272"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29477"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23346"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-29478"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23839"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19906"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33623"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-15903"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33910"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:3459"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2021:1168"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-29529"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27363"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-27365"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-0466"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-26708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-8559"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1897"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1927"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29154"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0686"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32206"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32208"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0639"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6429"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30631"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-16845"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0512"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "date": "2021-04-05T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "date": "2021-06-24T17:54:53",
        "db": "PACKETSTORM",
        "id": "163276"
      },
      {
        "date": "2021-06-01T15:17:45",
        "db": "PACKETSTORM",
        "id": "162901"
      },
      {
        "date": "2021-07-28T14:53:49",
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "date": "2021-08-06T14:02:37",
        "db": "PACKETSTORM",
        "id": "163747"
      },
      {
        "date": "2021-09-09T13:33:33",
        "db": "PACKETSTORM",
        "id": "164090"
      },
      {
        "date": "2021-04-13T15:38:30",
        "db": "PACKETSTORM",
        "id": "162151"
      },
      {
        "date": "2022-09-13T15:42:14",
        "db": "PACKETSTORM",
        "id": "168352"
      },
      {
        "date": "2021-02-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      },
      {
        "date": "2021-02-15T11:15:12.397000",
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULHUB",
        "id": "VHN-373964"
      },
      {
        "date": "2022-09-13T00:00:00",
        "db": "VULMON",
        "id": "CVE-2020-28500"
      },
      {
        "date": "2022-09-20T05:44:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      },
      {
        "date": "2022-11-11T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      },
      {
        "date": "2024-11-21T05:22:55.053000",
        "db": "NVD",
        "id": "CVE-2020-28500"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "163690"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Lodash\u00a0 Vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011490"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202102-1168"
      }
    ],
    "trust": 0.6
  }
}

var-202109-1795
Vulnerability from variot

When sending data to an MQTT server, libcurl <= 7.73.0 and 7.78.0 could in some circumstances erroneously keep a pointer to an already freed memory area and both use that again in a subsequent call to send data and also free it again. A use-after-free security issue has been found in the MQTT sending component of curl prior to 7.79.0. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

APPLE-SA-2022-03-14-4 macOS Monterey 12.3

macOS Monterey 12.3 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213183.

Accelerate Framework Available for: macOS Monterey Impact: Opening a maliciously crafted PDF file may lead to an unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2022-22633: an anonymous researcher

AMD Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22669: an anonymous researcher

AppKit Available for: macOS Monterey Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22665: Lockheed Martin Red Team

AppleGraphicsControl Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22631: an anonymous researcher

AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-22625: Mickey Jin (@patch1t) of Trend Micro

AppleScript Available for: macOS Monterey Impact: An application may be able to read restricted memory Description: This issue was addressed with improved checks. CVE-2022-22648: an anonymous researcher

AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-22626: Mickey Jin (@patch1t) of Trend Micro CVE-2022-22627: Qi Sun and Robert Ai of Trend Micro

AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved validation. CVE-2022-22597: Qi Sun and Robert Ai of Trend Micro

BOM Available for: macOS Monterey Impact: A maliciously crafted ZIP archive may bypass Gatekeeper checks Description: This issue was addressed with improved checks. CVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley (@jbradley89) of Jamf Software, Mickey Jin (@patch1t)

curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.79.1. CVE-2021-22946 CVE-2021-22947 CVE-2021-22945 CVE-2022-22623

FaceTime Available for: macOS Monterey Impact: A user may send audio and video in a FaceTime call without knowing that they have done so Description: This issue was addressed with improved checks. CVE-2022-22643: Sonali Luthar of the University of Virginia, Michael Liao of the University of Illinois at Urbana-Champaign, Rohan Pahwa of Rutgers University, and Bao Nguyen of the University of Florida

ImageIO Available for: macOS Monterey Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-22611: Xingyu Jin of Google

ImageIO Available for: macOS Monterey Impact: Processing a maliciously crafted image may lead to heap corruption Description: A memory consumption issue was addressed with improved memory handling. CVE-2022-22612: Xingyu Jin of Google

Intel Graphics Driver Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A type confusion issue was addressed with improved state handling. CVE-2022-22661: an anonymous researcher, Peterpan0927 of Alibaba Security Pandora Lab

IOGPUFamily Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22641: Mohamed Ghannam (@_simo36)

Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22613: Alex, an anonymous researcher

Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22614: an anonymous researcher CVE-2022-22615: an anonymous researcher

Kernel Available for: macOS Monterey Impact: A malicious application may be able to elevate privileges Description: A logic issue was addressed with improved state management. CVE-2022-22632: Keegan Saunders

Kernel Available for: macOS Monterey Impact: An attacker in a privileged position may be able to perform a denial of service attack Description: A null pointer dereference was addressed with improved validation. CVE-2022-22638: derrek (@derrekr6)

Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved validation. CVE-2022-22640: sqrtpwn

libarchive Available for: macOS Monterey Impact: Multiple issues in libarchive Description: Multiple memory corruption issues existed in libarchive. These issues were addressed with improved input validation. CVE-2021-36976

Login Window Available for: macOS Monterey Impact: A person with access to a Mac may be able to bypass Login Window Description: This issue was addressed with improved checks. CVE-2022-22647: an anonymous researcher

LoginWindow Available for: macOS Monterey Impact: A local attacker may be able to view the previous logged in user’s desktop from the fast user switching screen Description: An authentication issue was addressed with improved state management. CVE-2022-22656

GarageBand MIDI Available for: macOS Monterey Impact: Opening a maliciously crafted file may lead to unexpected application termination or arbitrary code execution Description: A memory initialization issue was addressed with improved memory handling. CVE-2022-22657: Brandon Perry of Atredis Partners

GarageBand MIDI Available for: macOS Monterey Impact: Opening a maliciously crafted file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-22664: Brandon Perry of Atredis Partners

NSSpellChecker Available for: macOS Monterey Impact: A malicious application may be able to access information about a user's contacts Description: A privacy issue existed in the handling of Contact cards. This was addressed with improved state management. CVE-2022-22644: an anonymous researcher

PackageKit Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A logic issue was addressed with improved state management. CVE-2022-22617: Mickey Jin (@patch1t)

Preferences Available for: macOS Monterey Impact: A malicious application may be able to read other applications' settings Description: The issue was addressed with additional permissions checks. CVE-2022-22609: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)

QuickTime Player Available for: macOS Monterey Impact: A plug-in may be able to inherit the application's permissions and access user data Description: This issue was addressed with improved checks. CVE-2022-22650: Wojciech Reguła (@_r3ggi) of SecuRing

Safari Downloads Available for: macOS Monterey Impact: A maliciously crafted ZIP archive may bypass Gatekeeper checks Description: This issue was addressed with improved checks. CVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley (@jbradley89) of Jamf Software, Mickey Jin (@patch1t)

Sandbox Available for: macOS Monterey Impact: A malicious application may be able to bypass certain Privacy preferences Description: The issue was addressed with improved permissions logic. CVE-2022-22600: Sudhakar Muthumani of Primefort Private Limited, Khiem Tran

Siri Available for: macOS Monterey Impact: A person with physical access to a device may be able to use Siri to obtain some location information from the lock screen Description: A permissions issue was addressed with improved validation. CVE-2022-22599: Andrew Goldberg of the University of Texas at Austin, McCombs School of Business (linkedin.com/andrew-goldberg/)

SMB Available for: macOS Monterey Impact: A remote attacker may be able to cause unexpected system termination or corrupt kernel memory Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22651: Felix Poulin-Belanger

SoftwareUpdate Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A logic issue was addressed with improved state management. CVE-2022-22639: Mickey Jin (@patch1t)

System Preferences Available for: macOS Monterey Impact: An app may be able to spoof system notifications and UI Description: This issue was addressed with a new entitlement. CVE-2022-22660: Guilherme Rambo of Best Buddy Apps (rambo.codes)

UIKit Available for: macOS Monterey Impact: A person with physical access to an iOS device may be able to see sensitive information via keyboard suggestions Description: This issue was addressed with improved checks. CVE-2022-22621: Joey Hewitt

Vim Available for: macOS Monterey Impact: Multiple issues in Vim Description: Multiple issues were addressed by updating Vim. CVE-2021-4136 CVE-2021-4166 CVE-2021-4173 CVE-2021-4187 CVE-2021-4192 CVE-2021-4193 CVE-2021-46059 CVE-2022-0128 CVE-2022-0156 CVE-2022-0158

VoiceOver Available for: macOS Monterey Impact: A user may be able to view restricted content from the lock screen Description: A lock screen issue was addressed with improved state management. CVE-2021-30918: an anonymous researcher

WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A cookie management issue was addressed with improved state management. WebKit Bugzilla: 232748 CVE-2022-22662: Prakash (@1lastBr3ath) of Threat Nix

WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 232812 CVE-2022-22610: Quan Yin of Bigo Technology Live Client Team

WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 233172 CVE-2022-22624: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab WebKit Bugzilla: 234147 CVE-2022-22628: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab

WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. WebKit Bugzilla: 234966 CVE-2022-22629: Jeonghoon Shin at Theori working with Trend Micro Zero Day Initiative

WebKit Available for: macOS Monterey Impact: A malicious website may cause unexpected cross-origin behavior Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 235294 CVE-2022-22637: Tom McKee of Google

Wi-Fi Available for: macOS Monterey Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved restrictions. CVE-2022-22668: MrPhil17

xar Available for: macOS Monterey Impact: A local user may be able to write arbitrary files Description: A validation issue existed in the handling of symlinks. This issue was addressed with improved validation of symlinks. CVE-2022-22582: Richard Warren of NCC Group

Additional recognition

AirDrop We would like to acknowledge Omar Espino (omespino.com), Ron Masas of BreakPoint.sh for their assistance.

Bluetooth We would like to acknowledge an anonymous researcher, chenyuwang (@mzzzz__) of Tencent Security Xuanwu Lab for their assistance.

Face Gallery We would like to acknowledge Tian Zhang (@KhaosT) for their assistance.

Intel Graphics Driver We would like to acknowledge Jack Dates of RET2 Systems, Inc., Yinyi Wu (@3ndy1) for their assistance.

Local Authentication We would like to acknowledge an anonymous researcher for their assistance.

Notes We would like to acknowledge Nathaniel Ekoniak of Ennate Technologies for their assistance.

Password Manager We would like to acknowledge Maximilian Golla (@m33x) of Max Planck Institute for Security and Privacy (MPI-SP) for their assistance.

Siri We would like to acknowledge an anonymous researcher for their assistance.

syslog We would like to acknowledge Yonghwi Jin (@jinmo123) of Theori for their assistance.

TCC We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.

UIKit We would like to acknowledge Tim Shadel of Day Logger, Inc. for their assistance.

WebKit We would like to acknowledge Abdullah Md Shaleh for their assistance.

WebKit Storage We would like to acknowledge Martin Bajanik of FingerprintJS for their assistance.

macOS Monterey 12.3 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.

This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmIv0O4ACgkQeC9qKD1p rhjGGRAAjqIyEzN+LAk+2uzHIMQNEwav9fqo/ZNoYAOzNgActK56PIC/PBM3SzHd LrGFKbBq/EMU4EqXT6ycB7/uZfaAZVCBDNo1qOoYNHXnKtGL2Z/96mV14qbSmRvC jfg1pC0G1jPTxJKvHhuQSZHDGj+BI458fwuTY48kjCnzlWf9dKr2kdjUjE38X9RM 0upKVKqY+oWdbn5jPwgZ408NOqzHrHDW1iIYd4v9UrKN3pfMGDzVZTr/offL6VFL osOVWv1IZvXrhPsrtd2KfG0hTHz71vShVZ7jGAsGEdC/mT79zwFbYuzBFy791xFa rizr/ZWGfWBSYy8O90d1l13lDlE739YPc/dt1mjcvP9FTnzMwBagy+6//zAVe0v/ KZOjmvtK5sRvrQH54E8qTYitdMpY2aZhfT6D8tcl+98TjxTDNXXj/gypdCXNWqyB L1PtFhTjQ0WnzUNB7sosM0zAjfZ1iPAZq0XHDQ6p6gEdVavNOHo/ekgibVm5f1pi kwBHkKyq55QbzipDWwXl6Owk/iaHPxgENYb78BpeUQSFei+IYDUsyLkPh3L95PHZ JSyKOtbBArlYOWcxlYHn+hDK8iotA1c/SHDefYOoNkp1uP853Ge09eWq+zMzUwEo GXXJYMi1Q8gmJ9wK/A3d/FKY4FBZxpByUUgjYhiMKTU5cSeihaI= =RiA+ -----END PGP SIGNATURE-----

==========================================================================
Ubuntu Security Notice USN-5079-3
September 21, 2021

curl vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 18.04 LTS

Summary:

USN-5079-1 introduced a regression in curl.

Software Description:
- curl: HTTP, HTTPS, and FTP client and client libraries

Details:

USN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a regression on Ubuntu 18.04 LTS. This update fixes the problem.

We apologize for the inconvenience. Original advisory details: It was discovered that curl incorrectly handled memory when sending data to an MQTT server. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. (CVE-2021-22945) Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. When receiving certain responses from servers, curl would continue without TLS even when the option to require a successful upgrade to TLS was specified. (CVE-2021-22946) Patrick Monnerat discovered that curl incorrectly handled responses received before STARTTLS. A remote attacker could possibly use this issue to inject responses and intercept communications. (CVE-2021-22947)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 18.04 LTS:
  curl              7.58.0-2ubuntu3.16
  libcurl3-gnutls   7.58.0-2ubuntu3.16
  libcurl3-nss      7.58.0-2ubuntu3.16
  libcurl4          7.58.0-2ubuntu3.16

In general, a standard system update will make all the necessary changes. These flaws may allow remote attackers to obtain sensitive information, leak authentication or cookie header data or facilitate a denial of service attack.

For the stable distribution (bullseye), these problems have been fixed in version 7.74.0-1.3+deb11u2.

We recommend that you upgrade your curl packages.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202212-01


                                       https://security.gentoo.org/

Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01


Synopsis

Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.

Background

A command line tool and library for transferring data with URLs.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------
  1  net-misc/curl            < 7.86.0                  >= 7.86.0

Description

Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All curl users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"

References

[ 1 ] CVE-2021-22922  https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923  https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925  https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926  https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945  https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946  https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947  https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576  https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774  https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775  https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776  https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779  https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780  https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781  https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782  https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115  https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205  https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206  https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207  https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208  https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221  https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252  https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260  https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915  https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916  https://nvd.nist.gov/vuln/detail/CVE-2022-42916

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202212-01

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202109-1795",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "universal forwarder",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "splunk",
        "version": "9.1.0"
      },
      {
        "model": "mysql server",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.0"
      },
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "universal forwarder",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "splunk",
        "version": "8.2.12"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0.1.1"
      },
      {
        "model": "clustered data ontap",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "macos",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.0.0"
      },
      {
        "model": "universal forwarder",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "splunk",
        "version": "9.0.6"
      },
      {
        "model": "libcurl",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "haxx",
        "version": "7.78.0"
      },
      {
        "model": "solidfire baseboard management controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "universal forwarder",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "splunk",
        "version": "9.0.0"
      },
      {
        "model": "libcurl",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "haxx",
        "version": "7.73.0"
      },
      {
        "model": "h500e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "mysql server",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "8.0.26"
      },
      {
        "model": "macos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "apple",
        "version": "12.3"
      },
      {
        "model": "mysql server",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.7.35"
      },
      {
        "model": "h300e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "33"
      },
      {
        "model": "cloud backup",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "mysql server",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "5.7.0"
      },
      {
        "model": "h700e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "universal forwarder",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "splunk",
        "version": "8.2.0"
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Ubuntu",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164171"
      },
      {
        "db": "PACKETSTORM",
        "id": "164220"
      }
    ],
    "trust": 0.2
  },
  "cve": "CVE-2021-22945",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2021-22945",
            "impactScore": 4.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 1.0,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:P",
            "version": "2.0"
          },
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "PARTIAL",
            "baseScore": 5.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "VHN-381419",
            "impactScore": 4.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:M/AU:N/C:P/I:N/A:P",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 9.1,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "id": "CVE-2021-22945",
            "impactScore": 5.2,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H",
            "version": "3.1"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-22945",
            "trust": 1.0,
            "value": "CRITICAL"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202104-975",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202109-998",
            "trust": 0.6,
            "value": "CRITICAL"
          },
          {
            "author": "VULHUB",
            "id": "VHN-381419",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "When sending data to an MQTT server, libcurl \u003c= 7.73.0 and 7.78.0 could in some circumstances erroneously keep a pointer to an already freed memory area and both use that again in a subsequent call to send data and also free it *again*. Pillow is a Python-based image processing library. \nThere is currently no information about this vulnerability, please feel free to follow CNNVD or manufacturer announcements. A use-after-free security issue has been found in the MQTT sending component of curl prior to 7.79.0. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-03-14-4 macOS Monterey 12.3\n\nmacOS Monterey 12.3 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213183. \n\nAccelerate Framework\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted PDF file may lead to an\nunexpected application termination or arbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2022-22633: an anonymous researcher\n\nAMD\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22669: an anonymous researcher\n\nAppKit\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2022-22665: Lockheed Martin Red Team\n\nAppleGraphicsControl\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. 
\nCVE-2022-22631: an anonymous researcher\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-22625: Mickey Jin (@patch1t) of Trend Micro\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: An application may be able to read restricted memory\nDescription: This issue was addressed with improved checks. \nCVE-2022-22648: an anonymous researcher\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2022-22626: Mickey Jin (@patch1t) of Trend Micro\nCVE-2022-22627: Qi Sun and Robert Ai of Trend Micro\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-22597: Qi Sun and Robert Ai of Trend Micro\n\nBOM\nAvailable for: macOS Monterey\nImpact: A maliciously crafted ZIP archive may bypass Gatekeeper\nchecks\nDescription: This issue was addressed with improved checks. \nCVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley\n(@jbradley89) of Jamf Software, Mickey Jin (@patch1t)\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.79.1. \nCVE-2021-22946\nCVE-2021-22947\nCVE-2021-22945\nCVE-2022-22623\n\nFaceTime\nAvailable for: macOS Monterey\nImpact: A user may send audio and video in a FaceTime call without\nknowing that they have done so\nDescription: This issue was addressed with improved checks. 
\nCVE-2022-22643: Sonali Luthar of the University of Virginia, Michael\nLiao of the University of Illinois at Urbana-Champaign, Rohan Pahwa\nof Rutgers University, and Bao Nguyen of the University of Florida\n\nImageIO\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-22611: Xingyu Jin of Google\n\nImageIO\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted image may lead to heap\ncorruption\nDescription: A memory consumption issue was addressed with improved\nmemory handling. \nCVE-2022-22612: Xingyu Jin of Google\n\nIntel Graphics Driver\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2022-22661: an anonymous researcher, Peterpan0927 of Alibaba\nSecurity Pandora Lab\n\nIOGPUFamily\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22641: Mohamed Ghannam (@_simo36)\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-22613: Alex, an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22614: an anonymous researcher\nCVE-2022-22615: an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to elevate privileges\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2022-22632: Keegan Saunders\n\nKernel\nAvailable for: macOS Monterey\nImpact: An attacker in a privileged position may be able to perform a\ndenial of service attack\nDescription: A null pointer dereference was addressed with improved\nvalidation. \nCVE-2022-22638: derrek (@derrekr6)\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-22640: sqrtpwn\n\nlibarchive\nAvailable for: macOS Monterey\nImpact: Multiple issues in libarchive\nDescription: Multiple memory corruption issues existed in libarchive. \nThese issues were addressed with improved input validation. \nCVE-2021-36976\n\nLogin Window\nAvailable for: macOS Monterey\nImpact: A person with access to a Mac may be able to bypass Login\nWindow\nDescription: This issue was addressed with improved checks. \nCVE-2022-22647: an anonymous researcher\n\nLoginWindow\nAvailable for: macOS Monterey\nImpact: A local attacker may be able to view the previous logged in\nuser\u2019s desktop from the fast user switching screen\nDescription: An authentication issue was addressed with improved\nstate management. \nCVE-2022-22656\n\nGarageBand MIDI\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted file may lead to unexpected\napplication termination or arbitrary code execution\nDescription: A memory initialization issue was addressed with\nimproved memory handling. \nCVE-2022-22657: Brandon Perry of Atredis Partners\n\nGarageBand MIDI\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted file may lead to unexpected\napplication termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2022-22664: Brandon Perry of Atredis Partners\n\nNSSpellChecker\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to access information\nabout a user\u0027s contacts\nDescription: A privacy issue existed in the handling of Contact\ncards. This was addressed with improved state management. \nCVE-2022-22644: an anonymous researcher\n\nPackageKit\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-22617: Mickey Jin (@patch1t)\n\nPreferences\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to read other\napplications\u0027 settings\nDescription: The issue was addressed with additional permissions\nchecks. \nCVE-2022-22609: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nQuickTime Player\nAvailable for: macOS Monterey\nImpact: A plug-in may be able to inherit the application\u0027s\npermissions and access user data\nDescription: This issue was addressed with improved checks. \nCVE-2022-22650: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nSafari Downloads\nAvailable for: macOS Monterey\nImpact: A maliciously crafted ZIP archive may bypass Gatekeeper\nchecks\nDescription: This issue was addressed with improved checks. \nCVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley\n(@jbradley89) of Jamf Software, Mickey Jin (@patch1t)\n\nSandbox\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to bypass certain Privacy\npreferences\nDescription: The issue was addressed with improved permissions logic. 
\nCVE-2022-22600: Sudhakar Muthumani of Primefort Private Limited,\nKhiem Tran\n\nSiri\nAvailable for: macOS Monterey\nImpact: A person with physical access to a device may be able to use\nSiri to obtain some location information from the lock screen\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2022-22599: Andrew Goldberg of the University of Texas at Austin,\nMcCombs School of Business (linkedin.com/andrew-goldberg/)\n\nSMB\nAvailable for: macOS Monterey\nImpact: A remote attacker may be able to cause unexpected system\ntermination or corrupt kernel memory\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-22651: Felix Poulin-Belanger\n\nSoftwareUpdate\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-22639: Mickey Jin (@patch1t)\n\nSystem Preferences\nAvailable for: macOS Monterey\nImpact: An app may be able to spoof system notifications and UI\nDescription: This issue was addressed with a new entitlement. \nCVE-2022-22660: Guilherme Rambo of Best Buddy Apps (rambo.codes)\n\nUIKit\nAvailable for: macOS Monterey\nImpact: A person with physical access to an iOS device may be able to\nsee sensitive information via keyboard suggestions\nDescription: This issue was addressed with improved checks. \nCVE-2022-22621: Joey Hewitt\n\nVim\nAvailable for: macOS Monterey\nImpact: Multiple issues in Vim\nDescription: Multiple issues were addressed by updating Vim. \nCVE-2021-4136\nCVE-2021-4166\nCVE-2021-4173\nCVE-2021-4187\nCVE-2021-4192\nCVE-2021-4193\nCVE-2021-46059\nCVE-2022-0128\nCVE-2022-0156\nCVE-2022-0158\n\nVoiceOver\nAvailable for: macOS Monterey\nImpact: A user may be able to view restricted content from the lock\nscreen\nDescription: A lock screen issue was addressed with improved state\nmanagement. 
\nCVE-2021-30918: an anonymous researcher\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A cookie management issue was addressed with improved\nstate management. \nWebKit Bugzilla: 232748\nCVE-2022-22662: Prakash (@1lastBr3ath) of Threat Nix\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nWebKit Bugzilla: 232812\nCVE-2022-22610: Quan Yin of Bigo Technology Live Client Team\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 233172\nCVE-2022-22624: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab\nWebKit Bugzilla: 234147\nCVE-2022-22628: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 234966\nCVE-2022-22629: Jeonghoon Shin at Theori working with Trend Micro\nZero Day Initiative\n\nWebKit\nAvailable for: macOS Monterey\nImpact: A malicious website may cause unexpected cross-origin\nbehavior\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 235294\nCVE-2022-22637: Tom McKee of Google\n\nWi-Fi\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2022-22668: MrPhil17\n\nxar\nAvailable for: macOS Monterey\nImpact: A local user may be able to write arbitrary files\nDescription: A validation issue existed in the handling of symlinks. 
\nThis issue was addressed with improved validation of symlinks. \nCVE-2022-22582: Richard Warren of NCC Group\n\nAdditional recognition\n\nAirDrop\nWe would like to acknowledge Omar Espino (omespino.com), Ron Masas of\nBreakPoint.sh for their assistance. \n\nBluetooth\nWe would like to acknowledge an anonymous researcher, chenyuwang\n(@mzzzz__) of Tencent Security Xuanwu Lab for their assistance. \n\nFace Gallery\nWe would like to acknowledge Tian Zhang (@KhaosT) for their\nassistance. \n\nIntel Graphics Driver\nWe would like to acknowledge Jack Dates of RET2 Systems, Inc., Yinyi\nWu (@3ndy1) for their assistance. \n\nLocal Authentication\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nNotes\nWe would like to acknowledge Nathaniel Ekoniak of Ennate Technologies\nfor their assistance. \n\nPassword Manager\nWe would like to acknowledge Maximilian Golla (@m33x) of Max Planck\nInstitute for Security and Privacy (MPI-SP) for their assistance. \n\nSiri\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nsyslog\nWe would like to acknowledge Yonghwi Jin (@jinmo123) of Theori for\ntheir assistance. \n\nTCC\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nUIKit\nWe would like to acknowledge Tim Shadel of Day Logger, Inc. for their\nassistance. \n\nWebKit\nWe would like to acknowledge Abdullah Md Shaleh for their assistance. \n\nWebKit Storage\nWe would like to acknowledge Martin Bajanik of FingerprintJS for\ntheir assistance. \n\nmacOS Monterey 12.3 may be obtained from the Mac App Store or Apple\u0027s\nSoftware Downloads web site: https://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmIv0O4ACgkQeC9qKD1p\nrhjGGRAAjqIyEzN+LAk+2uzHIMQNEwav9fqo/ZNoYAOzNgActK56PIC/PBM3SzHd\nLrGFKbBq/EMU4EqXT6ycB7/uZfaAZVCBDNo1qOoYNHXnKtGL2Z/96mV14qbSmRvC\njfg1pC0G1jPTxJKvHhuQSZHDGj+BI458fwuTY48kjCnzlWf9dKr2kdjUjE38X9RM\n0upKVKqY+oWdbn5jPwgZ408NOqzHrHDW1iIYd4v9UrKN3pfMGDzVZTr/offL6VFL\nosOVWv1IZvXrhPsrtd2KfG0hTHz71vShVZ7jGAsGEdC/mT79zwFbYuzBFy791xFa\nrizr/ZWGfWBSYy8O90d1l13lDlE739YPc/dt1mjcvP9FTnzMwBagy+6//zAVe0v/\nKZOjmvtK5sRvrQH54E8qTYitdMpY2aZhfT6D8tcl+98TjxTDNXXj/gypdCXNWqyB\nL1PtFhTjQ0WnzUNB7sosM0zAjfZ1iPAZq0XHDQ6p6gEdVavNOHo/ekgibVm5f1pi\nkwBHkKyq55QbzipDWwXl6Owk/iaHPxgENYb78BpeUQSFei+IYDUsyLkPh3L95PHZ\nJSyKOtbBArlYOWcxlYHn+hDK8iotA1c/SHDefYOoNkp1uP853Ge09eWq+zMzUwEo\nGXXJYMi1Q8gmJ9wK/A3d/FKY4FBZxpByUUgjYhiMKTU5cSeihaI=\n=RiA+\n-----END PGP SIGNATURE-----\n\n\n. ==========================================================================\nUbuntu Security Notice USN-5079-3\nSeptember 21, 2021\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 18.04 LTS\n\nSummary:\n\nUSN-5079-1 introduced a regression in curl. \n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nUSN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a\nregression on Ubuntu 18.04 LTS. This update fixes the problem. \n\nWe apologize for the inconvenience. A remote attacker could use this issue to cause curl to\n crash, resulting in a denial of service, or possibly execute arbitrary\n code. (CVE-2021-22945)\n  Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. 
\n When receiving certain responses from servers, curl would continue without\n TLS even when the option to require a successful upgrade to TLS was\n specified. (CVE-2021-22946)\n  Patrick Monnerat discovered that curl incorrectly handled responses\n received before STARTTLS. A remote attacker could possibly use this issue\n to inject responses and intercept communications. (CVE-2021-22947)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 18.04 LTS:\n  curl                            7.58.0-2ubuntu3.16\n  libcurl3-gnutls                 7.58.0-2ubuntu3.16\n  libcurl3-nss                    7.58.0-2ubuntu3.16\n  libcurl4                        7.58.0-2ubuntu3.16\n\nIn general, a standard system update will make all the necessary changes. These flaws may allow remote attackers to obtain sensitive\ninformation, leak authentication or cookie header data or facilitate a\ndenial of service attack. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 7.74.0-1.3+deb11u2. \n\nWe recommend that you upgrade your curl packages. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n    Title: curl: Multiple Vulnerabilities\n     Date: December 19, 2022\n     Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n       ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. 
\n\nAffected packages\n================\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-misc/curl              \u003c 7.86.0                    \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n      https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n      https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n      https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n      https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n      https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n      https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n      https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n      https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] 
CVE-2022-30115\n      https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] CVE-2022-32206\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n      https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-22945"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-22945"
      },
      {
        "db": "PACKETSTORM",
        "id": "166319"
      },
      {
        "db": "PACKETSTORM",
        "id": "164171"
      },
      {
        "db": "PACKETSTORM",
        "id": "164220"
      },
      {
        "db": "PACKETSTORM",
        "id": "169318"
      },
      {
        "db": "PACKETSTORM",
        "id": "170303"
      }
    ],
    "trust": 2.07
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-22945",
        "trust": 2.3
      },
      {
        "db": "HACKERONE",
        "id": "1269242",
        "trust": 1.7
      },
      {
        "db": "SIEMENS",
        "id": "SSA-389290",
        "trust": 1.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170303",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "166319",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "164171",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "164220",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169318",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2021041363",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3022",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.3146",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021091715",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022042569",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031433",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021092301",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021091514",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2021091601",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031104",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022062007",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-998",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-381419",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-22945",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-22945"
      },
      {
        "db": "PACKETSTORM",
        "id": "166319"
      },
      {
        "db": "PACKETSTORM",
        "id": "164171"
      },
      {
        "db": "PACKETSTORM",
        "id": "164220"
      },
      {
        "db": "PACKETSTORM",
        "id": "169318"
      },
      {
        "db": "PACKETSTORM",
        "id": "170303"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "id": "VAR-202109-1795",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381419"
      }
    ],
    "trust": 0.30766129
  },
  "last_update_date": "2024-08-14T13:11:48.112000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "Haxx libcurl Remediation of resource management error vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=164671"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-22945 log"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-22945"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-415",
        "trust": 1.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202212-01"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20211029-0003/"
      },
      {
        "trust": 1.7,
        "url": "https://support.apple.com/kb/ht213183"
      },
      {
        "trust": 1.7,
        "url": "https://www.debian.org/security/2022/dsa-5197"
      },
      {
        "trust": 1.7,
        "url": "http://seclists.org/fulldisclosure/2022/mar/29"
      },
      {
        "trust": 1.7,
        "url": "https://hackerone.com/reports/1269242"
      },
      {
        "trust": 1.7,
        "url": "https://www.oracle.com/security-alerts/cpuoct2021.html"
      },
      {
        "trust": 1.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021041363"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/libcurl-reuse-after-free-via-mqtt-sending-36417"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-22945"
      },
      {
        "trust": 0.6,
        "url": "https://www.ibm.com/support/pages/node/6495403"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170303/gentoo-linux-security-advisory-202212-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022042569"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164220/ubuntu-security-notice-usn-5079-3.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021092301"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.3146"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021091601"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022062007"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169318/debian-security-advisory-5197-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021091514"
      },
      {
        "trust": 0.6,
        "url": "https://support.apple.com/en-us/ht213183"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2021091715"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166319/apple-security-advisory-2022-03-14-4.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3022"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/164171/ubuntu-security-notice-usn-5079-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031433"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031104"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946"
      },
      {
        "trust": 0.2,
        "url": "https://ubuntu.com/security/notices/usn-5079-1"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27776"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576"
      },
      {
        "trust": 0.1,
        "url": "http://seclists.org/oss-sec/2021/q3/166"
      },
      {
        "trust": 0.1,
        "url": "https://security.archlinux.org/cve-2021-22945"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22609"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4173"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22612"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22610"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4136"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22616"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4192"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/en-us/ht201222."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46059"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0156"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/downloads/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0158"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22613"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4193"
      },
      {
        "trust": 0.1,
        "url": "https://www.apple.com/support/security/pgp/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22600"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36976"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22599"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4166"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0128"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22597"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22611"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22615"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4187"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22582"
      },
      {
        "trust": 0.1,
        "url": "https://support.apple.com/ht213183."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22614"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.15"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.7"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/curl/7.74.0-1ubuntu2.3"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-5079-3"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.16"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/bugs/1944120"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/curl"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-22945"
      },
      {
        "db": "PACKETSTORM",
        "id": "166319"
      },
      {
        "db": "PACKETSTORM",
        "id": "164171"
      },
      {
        "db": "PACKETSTORM",
        "id": "164220"
      },
      {
        "db": "PACKETSTORM",
        "id": "169318"
      },
      {
        "db": "PACKETSTORM",
        "id": "170303"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-22945"
      },
      {
        "db": "PACKETSTORM",
        "id": "166319"
      },
      {
        "db": "PACKETSTORM",
        "id": "164171"
      },
      {
        "db": "PACKETSTORM",
        "id": "164220"
      },
      {
        "db": "PACKETSTORM",
        "id": "169318"
      },
      {
        "db": "PACKETSTORM",
        "id": "170303"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-09-23T00:00:00",
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "date": "2022-03-15T15:49:02",
        "db": "PACKETSTORM",
        "id": "166319"
      },
      {
        "date": "2021-09-15T15:27:42",
        "db": "PACKETSTORM",
        "id": "164171"
      },
      {
        "date": "2021-09-21T15:39:10",
        "db": "PACKETSTORM",
        "id": "164220"
      },
      {
        "date": "2022-08-28T19:12:00",
        "db": "PACKETSTORM",
        "id": "169318"
      },
      {
        "date": "2022-12-19T13:48:31",
        "db": "PACKETSTORM",
        "id": "170303"
      },
      {
        "date": "2021-04-13T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "date": "2021-09-15T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      },
      {
        "date": "2021-09-23T13:15:08.690000",
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-12-22T00:00:00",
        "db": "VULHUB",
        "id": "VHN-381419"
      },
      {
        "date": "2021-04-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      },
      {
        "date": "2023-06-05T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      },
      {
        "date": "2024-03-27T15:04:30.460000",
        "db": "NVD",
        "id": "CVE-2021-22945"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "164171"
      },
      {
        "db": "PACKETSTORM",
        "id": "164220"
      },
      {
        "db": "PACKETSTORM",
        "id": "169318"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202109-998"
      }
    ],
    "trust": 0.9
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Pillow Buffer error vulnerability",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      }
    ],
    "trust": 0.6
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "other",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202104-975"
      }
    ],
    "trust": 0.6
  }
}

var-202301-0546
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management interface (443/tcp) of the affected product, as well as to its SFTP server (22/tcp), could potentially read and write arbitrary files on the device's file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS contains a path traversal vulnerability: information may be obtained, information may be tampered with, and service operation may be interrupted (DoS).

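The traversal described above is the classic CWE-22 pattern: a client-supplied path (here, via SFTP or the web interface) escapes its intended root directory. As an illustration only (a hypothetical helper, not Siemens' implementation), the standard mitigation is to canonicalise the path and verify it still lies under the configured root:

```python
import os

def resolve_under_root(root: str, user_path: str) -> str:
    """Resolve a client-supplied path, refusing anything that
    escapes the configured root directory (CWE-22 mitigation)."""
    real_root = os.path.realpath(root)
    # Canonicalise first so sequences like "a/../../etc/passwd" collapse.
    candidate = os.path.realpath(os.path.join(real_root, user_path))
    # The resolved path must still be inside the root.
    if os.path.commonpath([candidate, real_root]) != real_root:
        raise ValueError(f"path traversal rejected: {user_path!r}")
    return candidate
```

Validating after canonicalisation matters: a naive check for a literal `../` substring misses encoded or symlink-based variants, whereas comparing the resolved path against the resolved root rejects any escape regardless of how it was spelled.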


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202301-0546",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0 sp2 update 1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "cve": "CVE-2022-45093",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-45093",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "HIGH",
            "baseScore": 8.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 1.8,
            "id": "CVE-2022-45093",
            "impactScore": 6.0,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.8,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-45093",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "Low",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-45093",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2022-45093",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-45093",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202301-799",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product as well as with access to the SFTP server of the affected product (22/tcp), could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS Exists in a past traversal vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-45093",
        "trust": 3.2
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 1.6
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-23-017-03",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU90782730",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807",
        "trust": 0.8
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "id": "VAR-202301-0546",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-08-14T12:22:08.537000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "SSA-332410",
        "trust": 0.8,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "title": "Siemens SINEC NMS Repair measures for path traversal vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=221681"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-22",
        "trust": 1.0
      },
      {
        "problemtype": "Path traversal (CWE-22) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90782730/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-45093"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-45093/"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-05-16T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "date": "2023-01-10T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      },
      {
        "date": "2023-01-10T12:15:23.523000",
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-05-16T03:25:00",
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      },
      {
        "date": "2023-01-16T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      },
      {
        "date": "2023-01-14T00:43:41.810000",
        "db": "NVD",
        "id": "CVE-2022-45093"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "SINEC\u00a0INS\u00a0 Past traversal vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2023-001807"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "path traversal",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202301-799"
      }
    ],
    "trust": 0.6
  }
}
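
The record above classifies CVE-2022-45093 as CWE-22 (path traversal). As an illustration only, not Siemens code, here is a minimal sketch of the root-containment check whose absence typically enables the kind of arbitrary file read/write described above (the function name and root path are hypothetical):

```python
from pathlib import Path

def resolve_under_root(root: str, user_path: str) -> Path:
    """Resolve a client-supplied path and refuse anything that
    escapes the configured root directory (a CWE-22 guard)."""
    base = Path(root).resolve()
    candidate = (base / user_path).resolve()
    # resolve() collapses ".." segments, so a traversal attempt
    # lands outside `base` and is rejected here.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes root: {user_path}")
    return candidate
```

Services that skip this normalize-then-compare step and concatenate paths directly will accept inputs such as `../../etc/passwd`, which is the failure mode CWE-22 describes.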

var-202201-0349
Vulnerability from variot

node-fetch is vulnerable to exposure of sensitive information to an unauthorized actor. node-fetch contains an open redirect vulnerability; information may be obtained and information may be tampered with. The purpose of this text-only errata is to inform you about the security issues fixed in this release.

Description:

Red Hat Process Automation Manager is an open source business process management suite that combines process management and decision service management and enables business and IT users to create, manage, validate, and deploy process applications and decision services.

Security Fix(es):

  • chart.js: prototype pollution (CVE-2020-7746)

  • moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)

  • immer: type confusion vulnerability can lead to a bypass of CVE-2020-28477, in package immer before 9.0.6 (CVE-2021-23436)

Solution:

For on-premise installations, before applying the update, back up your existing installation, including all applications, configuration files, databases and database settings, and so on.

Red Hat recommends that you halt the server by stopping the JBoss Application Server process before installing this update. After installing the update, restart the server by starting the JBoss Application Server process.

The References section of this erratum contains a download link. You must log in to download the update.

Bugs fixed (https://bugzilla.redhat.com/):

2041833 - CVE-2021-23436 immer: type confusion vulnerability can lead to a bypass of CVE-2020-28477
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2047200 - CVE-2022-23437 xerces-j2: infinite loop when handling specially crafted XML document payloads
2047343 - CVE-2022-21363 mysql-connector-java: Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors
2050863 - CVE-2022-21724 jdbc-postgresql: Unchecked Class Instantiation when providing Plugin Classes
2063601 - CVE-2022-23913 artemis-commons: Apache ActiveMQ Artemis DoS
2064007 - CVE-2022-26520 postgresql-jdbc: Arbitrary File Write Vulnerability
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects
2066009 - CVE-2021-44906 minimist: prototype pollution
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2076133 - CVE-2022-1365 cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2096966 - CVE-2020-7746 chart.js: prototype pollution
2103584 - CVE-2022-0722 parse-url: Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2107994 - CVE-2022-2458 Business-central: Possible XML External Entity Injection attack

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis: Important: OpenShift Container Platform 4.11.0 bug fix and security update
Advisory ID: RHSA-2022:5069-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2022:5069
Issue date: 2022-08-10
CVE Names: CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2018-25032 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-13435 CVE-2020-14155 CVE-2020-17541 CVE-2020-19131 CVE-2020-24370 CVE-2020-28493 CVE-2020-35492 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3481 CVE-2021-3580 CVE-2021-3634 CVE-2021-3672 CVE-2021-3695 CVE-2021-3696 CVE-2021-3697 CVE-2021-3737 CVE-2021-4115 CVE-2021-4156 CVE-2021-4189 CVE-2021-20095 CVE-2021-20231 CVE-2021-20232 CVE-2021-23177 CVE-2021-23566 CVE-2021-23648 CVE-2021-25219 CVE-2021-31535 CVE-2021-31566 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-38185 CVE-2021-38593 CVE-2021-40528 CVE-2021-41190 CVE-2021-41617 CVE-2021-42771 CVE-2021-43527 CVE-2021-43818 CVE-2021-44225 CVE-2021-44906 CVE-2022-0235 CVE-2022-0778 CVE-2022-1012 CVE-2022-1215 CVE-2022-1271 CVE-2022-1292 CVE-2022-1586 CVE-2022-1621 CVE-2022-1629 CVE-2022-1706 CVE-2022-1729 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24407 CVE-2022-24675 CVE-2022-24903 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-26945 CVE-2022-27191 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-28733 CVE-2022-28734 CVE-2022-28735 CVE-2022-28736 CVE-2022-28737 CVE-2022-29162 CVE-2022-29810 CVE-2022-29824 CVE-2022-30321 CVE-2022-30322 CVE-2022-30323 CVE-2022-32250
====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.11.0 is now available with updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container Platform 4.11.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.11.0. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHSA-2022:5068

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Security Fix(es):

  • go-getter: command injection vulnerability (CVE-2022-26945)
  • go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
  • go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
  • go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)
  • nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
  • sanitize-url: XSS (CVE-2021-23648)
  • minimist: prototype pollution (CVE-2021-44906)
  • node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
  • prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
  • golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
  • go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
  • opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
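
One item in the list above, CVE-2022-0235, is a credential leak: node-fetch kept sending the original request's Authorization header when following a redirect to a different host. As a hedged sketch of the missing check, not node-fetch's actual code, the logic looks like this (function and constant names are hypothetical):

```python
from urllib.parse import urlparse

# Headers that carry credentials and must not cross origins.
SENSITIVE_HEADERS = {"authorization", "cookie", "www-authenticate"}

def headers_for_redirect(headers: dict, original_url: str, redirect_url: str) -> dict:
    """Drop credential-bearing headers when a redirect crosses origins.

    Following a redirect to another host while still sending the
    original Authorization header leaks credentials to that host,
    which is the failure mode behind CVE-2022-0235.
    """
    a, b = urlparse(original_url), urlparse(redirect_url)
    same_origin = (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)
    if same_origin:
        return dict(headers)
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}
```

The fixed node-fetch releases apply an equivalent same-origin test before re-sending sensitive headers on a redirected request.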

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

You may download the oc tool and use it to inspect release image metadata as follows:

(For x86_64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64

The image digest is sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4

(For aarch64 architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-aarch64

The image digest is sha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe

(For s390x architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-s390x

The image digest is sha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46

(For ppc64le architecture)

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le

The image digest is sha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca

All OpenShift Container Platform 4.11 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

  1. Solution:

For OpenShift Container Platform 4.11 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html

Details on how to access this content are available at https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

1817075 - MCC & MCO don't free leader leases during shut down -> 10 minutes of leader election timeouts 1822752 - cluster-version operator stops applying manifests when blocked by a precondition check 1823143 - oc adm release extract --command, --tools doesn't pull from localregistry when given a localregistry/image 1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV 1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name 1896181 - [ovirt] install fails: due to terraform error "Cannot run VM. VM is being updated" on vm resource 1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group 1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready 1905850 - oc adm policy who-can failed to check the operatorcondition/status resource 1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug) 1917898 - [ovirt] install fails: due to terraform error "Tag not matched: expect but got " on vm resource 1918005 - [vsphere] If there are multiple port groups with the same name installation fails 1918417 - IPv6 errors after exiting crictl 1918690 - Should update the KCM resource-graph timely with the latest configure 1919980 - oVirt installer fails due to terraform error "Failed to wait for Templte(...) 
to become ok" 1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded 1923536 - Image pullthrough does not pass 429 errors back to capable clients 1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config 1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API 1932812 - Installer uses the terraform-provider in the Installer's directory if it exists 1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value 1943937 - CatalogSource incorrect parsing validation 1944264 - [ovn] CNO should gracefully terminate OVN databases 1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2 1945329 - In k8s 1.21 bump conntrack 'should drop INVALID conntrack entries' tests are disabled 1948556 - Cannot read property 'apiGroup' of undefined error viewing operator CSV 1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x 1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap 1957668 - oc login does not show link to console 1958198 - authentication operator takes too long to pick up a configuration change 1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true 1961233 - Add CI test coverage for DNS availability during upgrades 1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects 1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata 1965934 - can not get new result with "Refresh off" if click "Run queries" again 1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone. 
1968253 - GCP CSI driver can provision volume with access mode ROX 1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones 1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases 1976111 - [tracker] multipathd.socket is missing start conditions 1976782 - Openshift registry starts to segfault after S3 storage configuration 1977100 - Pod failed to start with message "set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory" 1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\"Approved\"] 1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7->4.8 1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning 1982737 - OLM does not warn on invalid CSV 1983056 - IP conflict while recreating Pod with fixed name 1984785 - LSO CSV does not contain disconnected annotation 1989610 - Unsupported data types should not be rendered on operand details page 1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit 1990384 - 502 error on "Observe -> Alerting" UI after disabled local alertmanager 1992553 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines 1994117 - Some hardcodes are detected at the code level in orphaned code 1994820 - machine controller doesn't send vCPU quota failed messages to cluster install logs 1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods 1996544 - AWS region ap-northeast-3 is missing in installer prompt 1996638 - Helm operator manager container restart when CR is creating&deleting 1997120 - 
test_recreate_pod_in_namespace fails - Timed out waiting for namespace 1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow 1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc 1999325 - FailedMount MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered 1999529 - Must gather fails to gather logs for all the namespace if server doesn't have volumesnapshotclasses resource 1999891 - must-gather collects backup data even when Pods fails to be created 2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap 2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks 2002602 - Storageclass creation page goes blank when "Enable encryption" is clicked if there is a syntax error in the configmap 2002868 - Node exporter not able to scrape OVS metrics 2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet 2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO 2006067 - Objects are not valid as a React child 2006201 - ovirt-csi-driver-node pods are crashing intermittently 2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment 2007340 - Accessibility issues on topology - list view 2007611 - TLS issues with the internal registry and AWS S3 bucket 2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge 2008486 - Double scroll bar shows up on dragging the task quick search to the bottom 2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19 2009352 - Add image-registry usage metrics to telemeter 2009845 - Respect overrides changes during installation 2010361 - OpenShift Alerting Rules Style-Guide Compliance 2010364 - OpenShift Alerting 
Rules Style-Guide Compliance 2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel] 2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS 2011895 - Details about cloud errors are missing from PV/PVC errors 2012111 - LSO still try to find localvolumeset which is already deleted 2012969 - need to figure out why osupdatedstart to reboot is zero seconds 2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine) 2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user 2013734 - unable to label downloads route in openshift-console namespace 2013822 - ensure that the container-tools content comes from the RHAOS plashets 2014161 - PipelineRun logs are delayed and stuck on a high log volume 2014240 - Image registry uses ICSPs only when source exactly matches image 2014420 - Topology page is crashed 2014640 - Cannot change storage class of boot disk when cloning from template 2015023 - Operator objects are re-created even after deleting it 2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance 2015356 - Different status shows on VM list page and details page 2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types 2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff 2015800 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value 2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource 2016534 - externalIP does not work when egressIP is also present 2017001 - Topology context menu for Serverless components always open downwards 2018188 - VRRP ID conflict between keepalived-ipfailover and cluster 
VIPs 2018517 - [sig-arch] events should not repeat pathologically expand_less failures - s390x CI 2019532 - Logger object in LSO does not log source location accurately 2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted 2020483 - Parameter $auto_interval_period is in Period drop-down list 2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working 2021041 - [vsphere] Not found TagCategory when destroying ipi cluster 2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible 2022253 - Web terminal view is broken 2022507 - Pods stuck in OutOfpods state after running cluster-density 2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment 2022745 - Cluster reader is not able to list NodeNetwork objects 2023295 - Must-gather tool gathering data from custom namespaces. 
2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes 2024427 - oc completion zsh doesn't auto complete 2024708 - The form for creating operational CRs is badly rendering filed names ("obsoleteCPUs" -> "Obsolete CP Us" ) 2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block 2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion 2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation 2026356 - [IPI on Azure] The bootstrap machine type should be same as master 2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted 2027603 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page 2027613 - Users can't silence alerts from the dev console 2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition 2028532 - noobaa-pg-db-0 pod stuck in Init:0/2 2028821 - Misspelled label in ODF management UI - MCG performance view 2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf 2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node's not achieving new revision 2029797 - Uncaught exception: ResizeObserver loop limit exceeded 2029835 - CSI migration for vSphere: Inline-volume tests failing 2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host 2030530 - VM created via customize wizard has single quotation marks surrounding its password 2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled 2030776 - e2e-operator always uses quay master images during presubmit tests 2032559 - CNO allows migration to dual-stack in unsupported configurations 2032717 - Unable to download ignition after 
coreos-installer install --copy-network 2032924 - PVs are not being cleaned up after PVC deletion 2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation 2033575 - monitoring targets are down after the cluster run for more than 1 day 2033711 - IBM VPC operator needs e2e csi tests for ibmcloud 2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address 2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4 2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37 2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save 2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn't authenticated 2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready 2035005 - MCD is not always removing in progress taint after a successful update 2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks 2035899 - Operator-sdk run bundle doesn't support arm64 env 2036202 - Bump podman to >= 3.3.0 so that setup of multiple credentials for a single registry which can be distinguished by their path will work 2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd 2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF 2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default 2037447 - Ingress Operator is not closing TCP connections. 
2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found 2037542 - Pipeline Builder footer is not sticky and yaml tab doesn't use full height 2037610 - typo for the Terminated message from thanos-querier pod description info 2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10 2037625 - AppliedClusterResourceQuotas can not be shown on project overview 2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption 2037628 - Add test id to kms flows for automation 2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster 2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied 2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack 2038115 - Namespace and application bar is not sticky anymore 2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations 2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken 2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and ESP protocol not in security group 2039135 - the error message is not clear when using "opm index prune" to prune a file-based index image 2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected 2039253 - ovnkube-node crashes on duplicate endpoints 2039256 - Domain validation fails when TLD contains a digit. 2039277 - Topology list view items are not highlighted on keyboard navigation 2039462 - Application tab in User Preferences dropdown menus are too wide. 
2039477 - validation icon is missing from Import from git 2039589 - The toolbox command always ignores [command] the first time 2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project 2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column 2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names 2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong 2040488 - OpenShift-Ansible BYOH Unit Tests are Broken 2040635 - CPU Utilisation is negative number for "Kubernetes / Compute Resources / Cluster" dashboard 2040654 - 'oc adm must-gather -- some_script' should exit with same non-zero code as the failed 'some_script' exits 2040779 - Nodeport svc not accessible when the backend pod is on a window node 2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes 2041133 - 'oc explain route.status.ingress.conditions' shows type 'Currently only Ready' but actually is 'Admitted' 2041454 - Garbage values accepted for --reference-policy in oc import-image without any error 2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can't work 2041769 - Pipeline Metrics page not showing data for normal user 2041774 - Failing git detection should not recommend Devfiles as import strategy 2041814 - The KubeletConfigController wrongly process multiple confs for a pool 2041940 - Namespace pre-population not happening till a Pod is created 2042027 - Incorrect feedback for "oc label pods --all" 2042348 - Volume ID is missing in output message when expanding volume which is not mounted. 
2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15
2042501 - use lease for leader election
2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps
2042652 - Unable to deploy hw-event-proxy operator
2042838 - The status of container is not consistent on Container details and pod details page
2042852 - Topology toolbars are unaligned to other toolbars
2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP
2043035 - Wrong error code provided when request contains invalid argument
2043068 - available of text disappears in Utilization item if x is 0
2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID 'vpc-123456789' does not exist
2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away
2043118 - Host should transition through Preparing when HostFirmwareSettings changed
2043132 - Add a metric when vsphere csi storageclass creation fails
2043314 - oc debug node does not meet compliance requirement
2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining
2043428 - Address Alibaba CSI driver operator review comments
2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release
2043672 - [MAPO] root volumes not working
2044140 - When 'oc adm upgrade --to-image ...' rejects an update as not recommended, it should mention --allow-explicit-upgrade
2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method
2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails
2044412 - Topology list misses separator lines and hover effect let the list jump 1px
2044421 - Topology list does not allow selecting an application group anymore
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2044803 - Unify button text style on VM tabs
2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2045065 - Scheduled pod has nodeName changed
2045073 - Bump golang and build images for local-storage-operator
2045087 - Failed to apply sriov policy on intel nics
2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade
2045559 - API_VIP moved when kube-api container on another master node was stopped
2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {"details":"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)","error":"referential integrity violation
2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2046133 - [MAPO]IPI proxy installation failed
2046156 - Network policy: preview of affected pods for non-admin shows empty popup
2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config
2046191 - Opeartor pod is missing correct qosClass and priorityClass
2046277 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.vpc.aws_subnet.private_subnet[0] resource
2046319 - oc debug cronjob command failed with error "unable to extract pod template from type v1.CronJob".
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from vmx-13 to vmx-15
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in oc get
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy .app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the oc version --client
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s- pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_ metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if authorize property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - oc adm upgrade should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the oc-mirror version is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the
Ingress operator 2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab 2068490 - OLM descriptors integration test failing 2068538 - Crashloop back-off popover visual spacing defects 2068601 - Potential etcd inconsistent revision and data occurs 2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs 2068908 - Manual blog link change needed 2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35 2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state 2069181 - Disabling community tasks is not working 2069198 - Flaky CI test in e2e/pipeline-ci 2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog 2069312 - extend rest mappings with 'job' definition 2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services 2069577 - ConsolePlugin example proxy authorize is wrong 2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes 2069632 - Not able to download previous container logs from console 2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap 2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels flavor, os and workload 2069685 - UI crashes on load if a pinned resource model does not exist 2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway" 2069740 - On-prem loadbalancer ports conflict with kube node port range 2069760 - In developer perspective divider does not show up in navigation 2069904 - Sync upstream 1.18.1 downstream 2069914 - Application Launcher groupings are not case-sensitive 2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces 2070000 - Add warning alerts for installing standalone k8s-nmstate 2070020 - 
InContext doesn't work for Event Sources 2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured 2070160 - Copy-to-clipboard and

 elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with "            error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated for ClusterRole in OpenShift Web Console
2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType
2071617 - remove Kubevirt extensions in favour of dynamic plugin
2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO
2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs
2071700 - v1 events show "Generated from" message without the source/reporting component
2071715 - Shows 404 on Environment nav in Developer console
2071719 - OCP Console global PatternFly overrides link button whitespace
2071747 - Link to documentation from the overview page goes to a missing link
2071761 - Translation Keys Are Not Namespaced
2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable
2071859 - ovn-kube pods spec.dnsPolicy should be Default
2071914 - cloud-network-config-controller 4.10.5:  Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name ""
2071998 - Cluster-version operator should share details of signature verification when it fails in 'Force: true' updates
2072106 - cluster-ingress-operator tests do not build on go 1.18
2072134 - Routes are not accessible within cluster from hostnet pods
2072139 - vsphere driver has permissions to create/update PV objects
2072154 - Secondary Scheduler operator panics
2072171 - Test "[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]" fails
2072195 - machine api doesn't issue client cert when AWS DNS suffix missing
2072215 - Whereabouts ip-reconciler should be opt-in and not required
2072389 - CVO exits upgrade immediately rather than waiting for etcd backup
2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes
2072455 - make bundle overwrites supported-nic-ids_v1_configmap.yaml
2072570 - The namespace titles for operator-install-single-namespace test keep changing
2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)
2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master
2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node
2072793 - Drop "Used Filesystem" from "Virtualization -> Overview"
2072805 - Observe > Dashboards: $__range variables cause PromQL query errors
2072807 - Observe > Dashboards: Missing panel.styles attribute for table panels causes JS error
2072842 - (release-4.11) Gather namespace names with overlapping UID ranges
2072883 - sometimes monitoring dashboards charts can not be loaded successfully
2072891 - Update gcp-pd-csi-driver to 1.5.1;
2072911 - panic observed in kubedescheduler operator
2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial
2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system
2072998 - update aws-efs-csi-driver to the latest version
2072999 - Navigate from logs of selected Tekton task instead of last one
2073021 - [vsphere] Failed to update OS on master nodes
2073112 - Prometheus (uwm) externalLabels not showing always in alerts. 
2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to "${HOME}/.docker/config.json" is deprecated. 
2073176 - removing data in form does not remove data from yaml editor
2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists
2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it's "PipelineRuns" and on Repository Details page it's "Pipeline Runs". 
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction. 
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesnt work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - oc debug node/nodename -- chroot /host somecommand should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlane section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema:  ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes. 
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused "apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int" when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node. 
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar. 
2079965 - [rebase v1.24]  [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24]  [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24]  [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24]  [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - oc explain output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.postStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time. 
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... interface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment. 
2084438 - Change Ping source spec.jsonData (deprecated) field  to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for  externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by oc image mirror command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - --keep-manifest-list=true does not work for oc adm release new , only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings would violate PodSecurity "restricted:v1.24"
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13  crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus logging should be disabled prior to release
2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. 
2090358 - Initiating drain log message is displayed before the drain actually starts
2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials
2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]
2090430 - gofmt code
2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool
2090437 - Bump CNO to k8s 1.24
2090465 - golang version mismatch
2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type
2090537 - failure in ovndb migration when db is not ready in HA mode
2090549 - dpu-network-operator shall be able to run on amd64 arch platform
2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD
2090627 - Git commit and branch are empty in MetalLB log
2090692 - Bump to latest 1.24 k8s release
2090730 - must-gather should include multus logs. 
2090731 - nmstate deploys two instances of webhook on a single-node cluster
2090751 - oc image mirror skip-missing flag does not skip images
2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers
2090774 - Add Readme to plugin directory
2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert
2090809 - gm.ClockClass  invalid syntax parse error in linux ptp daemon logs
2090816 - OCP 4.8 Baremetal IPI installation failure: "Bootstrap failed to complete: timed out waiting for the condition"
2090819 - oc-mirror does not catch invalid registry input when a namespace is specified
2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24
2090829 - Bump OpenShift router to k8s 1.24
2090838 - Flaky test: ignore flapping host interface 'tunbr'
2090843 - addLogicalPort() performance/scale optimizations
2090895 - Dynamic plugin nav extension "startsWith" property does not work
2090929 - [etcd] cluster-backup.sh script has a conflict to use the '/etc/kubernetes/static-pod-certs' folder if a custom API certificate is defined
2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError
2091029 - Cancel rollout action only appears when rollout is completed
2091030 - Some BM may fail booting with default bootMode strategy
2091033 - [Descheduler]: provide ability to override included/excluded namespaces
2091087 - ODC Helm backend Owners file needs updates
2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3
2091167 - IPsec runtime enabling not work in hypershift
2091218 - Update Dev Console Helm backend to use helm 3.9.0
2091433 - Update AWS instance types
2091542 - Error Loading/404 not found page shown after clicking "Current namespace only"
2091547 - Internet connection test with proxy permanently fails
2091567 - oVirt CSI driver should use latest go-ovirt-client
2091595 - Alertmanager configuration can't use OpsGenie's entity field when AlertmanagerConfig is enabled
2091599 - PTP Dual Nic | Extend Events 4.11 - Up/Down master interface affects all the other interfaces in the same NIC according to the events and metric
2091603 - WebSocket connection restarts when switching tabs in WebTerminal
2091613 - simple-kmod fails to build due to missing KVC
2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it
2091730 - MCO e2e tests are failing with "No token found in openshift-monitoring secrets"
2091746 - "Oh no! Something went wrong" shown after user creates MCP without 'spec'
2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options
2091854 - clusteroperator status filter doesn't match all values in Status column
2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10
2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later
2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb
2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller
2092041 - Bump cluster-dns-operator to k8s 1.24
2092042 - Bump cluster-ingress-operator to k8s 1.24
2092047 - Kube 1.24 rebase for cloud-network-config-controller
2092137 - Search doesn't show all entries when name filter is cleared
2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16
2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2092395 - etcdHighNumberOfFailedGRPCRequests alerts with wrong results
2092408 - Wrong icon is used in the virtualization overview permissions card
2092414 - In virtualization overview "running vm per templates" template list can be improved
2092442 - Minimum time between drain retries is not the expected one
2092464 - marketplace catalog defaults to v4.10
2092473 - libovsdb performance backports
2092495 - ovn: use up to 4 northd threads in non-SNO clusters
2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass
2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins
2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster
2092579 - Don't retry pod deletion if objects are not existing
2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks
2092703 - Incorrect mount propagation information in container status
2092815 - can't delete the unwanted image from registry by oc-mirror
2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds
2092867 - make repository name unique in acm-ice/acm-simple-kmod examples
2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes
2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os
2092889 - Incorrect updating of EgressACLs using direction "from-lport"
2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)
2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)
2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)
2092928 - CVE-2022-26945 go-getter: command injection vulnerability
2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing
2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs
2093044 - Azure machine-api-provider-azure Availability Set Name Length Limit
2093047 - Dynamic Plugins: Generated API markdown duplicates checkAccess and useAccessReview doc
2093126 - [4.11] Bootimage bump tracker
2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade
2093288 - Default catalogs fails liveness/readiness probes
2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable
2093368 - Installer orphans FIPs created for LoadBalancer Services on cluster destroy
2093396 - Remove node-tainting for too-small MTU
2093445 - ManagementState reconciliation breaks SR
2093454 - Router proxy protocol doesn't work with dual-stack (IPv4 and IPv6) clusters
2093462 - Ingress Operator isn't reconciling the ingress cluster operator object
2093586 - Topology: Ctrl+space opens the quick search modal, but doesn't close it again
2093593 - Import from Devfile shows configuration options that shouldn't be there
2093597 - Import: Advanced option sentence is split into two parts and headlines have no padding
2093600 - Project access tab should apply new permissions before it delete old ones
2093601 - Project access page doesn't allow the user to update the settings twice (without manually reload the content)
2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24
2093797 - 'oc registry login' with serviceaccount function need update
2093819 - An etcd member for a new machine was never added to the cluster
2093930 - Gather console helm install  totals metric
2093957 - Oc-mirror write dup metadata to registry backend
2093986 - Podsecurity violation error getting logged for pod-identity-webhook
2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6
2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig
2094024 - bump oauth-apiserver deps to include 1.23.1 k8s that fixes etcd blips
2094039 - egressIP panics with nil pointer dereference
2094055 - Bump coreos-installer for s390x Secure Execution
2094071 - No runbook created for SouthboundStale alert
2094088 - Columns in NBDB may never be updated by OVNK
2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator
2094152 - Alerts in the virtualization overview status card aren't filtered
2094196 - Add default and validating webhooks for Power VS MAPI
2094227 - Topology: Create Service Binding should not be the last option (even under delete)
2094239 - custom pool Nodes with 0 nodes are always populated in progress bar
2094303 - If og is configured with sa, operator installation will be failed. 
2094335 - [Nutanix] - debug logs are enabled by default in machine-controller
2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform
2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration
2094525 - Allow automatic upgrades for efs operator
2094532 - ovn-windows CI jobs are broken
2094675 - PTP Dual Nic  | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run
2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (".") character
2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s
2094801 - Kuryr controller keep restarting when handling IPs with leading zeros
2094806 - Machine API oVrit component should use K8s 1.24 dependencies
2094816 - Kuryr controller restarts when over quota
2094833 - Repository overview page does not show default PipelineRun template for developer user
2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state
2094864 - Rebase CAPG to latest changes
2094866 - oc-mirror does not always delete all manifests associated with an image during pruning
2094896 - Run 'openshift-install agent create image' has segfault exception if cluster-manifests directory missing
2094902 - Fix installer cross-compiling
2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters
2095049 - managed-csi StorageClass does not create PVs
2095071 - Backend tests fails after devfile registry update
2095083 - Observe > Dashboards: Graphs may change a lot on automatic refresh
2095110 - [ovn] northd container termination script must use bash
2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp
2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance
2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic
2095231 - Kafka Sink sidebar in topology is empty
2095247 - Event sink form doesn't show channel as sink until app is refreshed
2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node
2095256 - Samples Owner needs to be Updated
2095264 - ovs-configuration.service fails with Error: Failed to modify connection 'ovs-if-br-ex': failed to update connection: error writing to file '/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection'
2095362 - oVirt CSI driver operator should use latest go-ovirt-client
2095574 - e2e-agnostic CI job fails
2095687 - Debug Container shown for build logs and on click ui breaks
2095703 - machinedeletionhooks doesn't work in vsphere cluster and BM cluster
2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns
2095756 - CNO panics with concurrent map read/write
2095772 - Memory requests for ovnkube-master containers are over-sized
2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB
2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized
2096053 - Builder Image icons in Git Import flow are hard to see in Dark mode
2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6
2096315 - NodeClockNotSynchronising alert's severity should be critical
2096350 - Web console doesn't display webhook errors for upgrades
2096352 - Collect whole journal in gather
2096380 - acm-simple-kmod references deprecated KVC example
2096392 - Topology node icons are not properly visible in Dark mode
2096394 - Add page Card items background color does not match with column background color in Dark mode
2096413 - br-ex not created due to default bond interface having a different mac address than expected
2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile
2096605 - [vsphere] no validation checking for diskType
2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups
2096855 - oc adm release new failed with error when use  an existing  multi-arch release image as input
2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider
2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import
2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology
2097043 - No clean way to specify operand issues to KEDA OLM operator
2097047 - MetalLB:  matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries
2097067 - ClusterVersion history pruner does not always retain initial completed update entry
2097153 - poor performance on API call to vCenter ListTags with thousands of tags
2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects
2097239 - Change Lower CPU limits for Power VS cloud
2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support
2097260 - openshift-install create manifests failed for Power VS platform
2097276 - MetalLB CI deploys the operator via manifests and not using the csv
2097282 - chore: update external-provisioner to the latest upstream release
2097283 - chore: update external-snapshotter to the latest upstream release
2097284 - chore: update external-attacher to the latest upstream release
2097286 - chore: update node-driver-registrar to the latest upstream release
2097334 - oc plugin help shows 'kubectl'
2097346 - Monitoring must-gather doesn't seem to be working anymore in 4.11
2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook
2097454 - Placeholder bug for OCP 4.11.0 metadata release
2097503 - chore: rebase against latest external-resizer
2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading
2097607 - Add Power VS support to Webhooks tests in actuator e2e test
2097685 - Ironic-agent can't restart because of existing container
2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1
2097810 - Required Network tools missing for Testing e2e PTP
2097832 - clean up unused IPv6DualStackNoUpgrade feature gate
2097940 - openshift-install destroy cluster traps if vpcRegion not specified
2097954 - 4.11 installation failed at monitoring and network clusteroperators with error "conmon: option parsing failed: Unknown option --log-global-size-max" making all jobs failing
2098172 - oc-mirror does not validate the registry in the storage config
2098175 - invalid license in python-dataclasses-0.8-2.el8 spec
2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file
2098242 - typo in SRO specialresourcemodule
2098243 - Add error check to Platform create for Power VS
2098392 - [OCP 4.11] Ironic cannot match "wwn" rootDeviceHint for a multipath device
2098508 - Control-plane-machine-set-operator report panic
2098610 - No need to check the push permission with --manifests-only option
2099293 - oVirt cluster API provider should use latest go-ovirt-client
2099330 - Edit application grouping is shown to user with view only access in a cluster
2099340 - CAPI e2e tests for AWS are missing
2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump
2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups
2099528 - Layout issue: No spacing in delete modals
2099561 - Prometheus returns HTTP 500 error on /favicon.ico
2099582 - Format and update Repository overview content
2099611 - Failures on etcd-operator watch channels
2099637 - Should print error when use --keep-manifest-list=false for manifestlist image
2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)
2099668 - KubeControllerManager should degrade when GC stops working
2099695 - Update CAPG after rebase
2099751 - specialresourcemodule stacktrace while looping over build status
2099755 - EgressIP node's mgmtIP reachability configuration option
2099763 - Update icons for event sources and sinks in topology, Add page, and context menu
2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]
2099821 - exporting a pointer for the loop variable
2099875 - The speaker won't start if there's another component on the host listening on 8080
2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing
2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file
2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster
2100001 - Sync upstream v1.22.0 downstream
2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator
2100033 - OCP 4.11 IPI - Some csr remain "Pending" post deployment
2100038 - failure to update special-resource-lifecycle table during update Event
2100079 - SDN needs explicit RBAC coordination leases for 1.24 bump
2100138 - release info --bugs has no differentiator between Jira and Bugzilla
2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation
2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar
2100323 - Sqlit-based catsrc cannot be ready due to "Error: open ./db-xxxx: permission denied"
2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile
2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8
2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running
2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field
2100507 - Remove redundant log lines from obj_retry.go
2100536 - Update API to allow EgressIP node reachability check
2100601 - Update CNO to allow EgressIP node reachability check
2100643 - [Migration] [GCP]OVN can not rollback to SDN
2100644 - openshift-ansible FTBFS on RHEL8
2100669 - Telemetry should not log the full path if it contains a username
2100749 - [OCP 4.11] multipath support needs multipath modules
2100825 - Update machine-api-powervs go modules to latest version
2100841 - tiny openshift-install usability fix for setting KUBECONFIG
2101460 - An etcd member for a new machine was never added to the cluster
2101498 - Revert Bug 2082599: add upper bound to number of failed attempts
2102086 - The base image is still 4.10 for operator-sdk 1.22
2102302 - Dummy bug for 4.10 backports
2102362 - Valid regions should be allowed in GCP install config
2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster
2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption
2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install
2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as root
2102947 - [VPA] recommender is logging errors for pods with init containers
2103053 - [4.11] Backport Prow CI improvements from master
2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly
2103080 - br-ex not created due to default bond interface having a different mac address than expected
2103177 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces
2103728 - Carry HAProxy patch 'BUG/MEDIUM: h2: match absolute-path not path-absolute for :path'
2103749 - MachineConfigPool is not getting updated
2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec
2104432 - [dpu-network-operator] Updating images to be consistent with ART
2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack
2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: "/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit"; expected: -rw-r--r--/420/0644; received: ----------/0/0
2104589 - must-gather namespace should have "privileged" warn and audit pod security labels besides enforce
2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes
2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference"
2104727 - Bootstrap node should honor http proxy
2104906 - Uninstall fails with Observed a panic: runtime.boundsError
2104951 - Web console doesn't display webhook errors for upgrades
2104991 - Completed pods may not be correctly cleaned up
2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds
2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied
2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history
2105167 - BuildConfig throws error when using a label with a / in it
2105334 - vmware-vsphere-csi-driver-controller can't use host port error on e2e-vsphere-serial
2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator
2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. 
2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18
2106051 - Unable to deploy acm-ice using latest SRO 4.11 build
2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]
2106062 - [4.11] Bootimage bump tracker
2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as "0abc"
2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls
2106313 - bond-cni: backport bond-cni GA items to 4.11
2106543 - Typo in must-gather release-4.10
2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI
2106723 - [4.11] Upgrade from 4.11.0-rc0 -> 4.11.0-rc.1 failed. rpm-ostree status shows No space left on device
2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted
2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing
2107501 - metallb greenwave tests failure
2107690 - Driver Container builds fail with "error determining starting point for build: no FROM statement found"
2108175 - etcd backup seems to not be triggered in 4.10.18-->4.10.20 upgrade
2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference
2108686 - rpm-ostreed: start limit hit easily
2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate
2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations
2111055 - dummy bug for 4.10.z bz2110938

  1. References:

https://access.redhat.com/security/cve/CVE-2018-25009
https://access.redhat.com/security/cve/CVE-2018-25010
https://access.redhat.com/security/cve/CVE-2018-25012
https://access.redhat.com/security/cve/CVE-2018-25013
https://access.redhat.com/security/cve/CVE-2018-25014
https://access.redhat.com/security/cve/CVE-2018-25032
https://access.redhat.com/security/cve/CVE-2019-5827
https://access.redhat.com/security/cve/CVE-2019-13750
https://access.redhat.com/security/cve/CVE-2019-13751
https://access.redhat.com/security/cve/CVE-2019-17594
https://access.redhat.com/security/cve/CVE-2019-17595
https://access.redhat.com/security/cve/CVE-2019-18218
https://access.redhat.com/security/cve/CVE-2019-19603
https://access.redhat.com/security/cve/CVE-2019-20838
https://access.redhat.com/security/cve/CVE-2020-13435
https://access.redhat.com/security/cve/CVE-2020-14155
https://access.redhat.com/security/cve/CVE-2020-17541
https://access.redhat.com/security/cve/CVE-2020-19131
https://access.redhat.com/security/cve/CVE-2020-24370
https://access.redhat.com/security/cve/CVE-2020-28493
https://access.redhat.com/security/cve/CVE-2020-35492
https://access.redhat.com/security/cve/CVE-2020-36330
https://access.redhat.com/security/cve/CVE-2020-36331
https://access.redhat.com/security/cve/CVE-2020-36332
https://access.redhat.com/security/cve/CVE-2021-3481
https://access.redhat.com/security/cve/CVE-2021-3580
https://access.redhat.com/security/cve/CVE-2021-3634
https://access.redhat.com/security/cve/CVE-2021-3672
https://access.redhat.com/security/cve/CVE-2021-3695
https://access.redhat.com/security/cve/CVE-2021-3696
https://access.redhat.com/security/cve/CVE-2021-3697
https://access.redhat.com/security/cve/CVE-2021-3737
https://access.redhat.com/security/cve/CVE-2021-4115
https://access.redhat.com/security/cve/CVE-2021-4156
https://access.redhat.com/security/cve/CVE-2021-4189
https://access.redhat.com/security/cve/CVE-2021-20095
https://access.redhat.com/security/cve/CVE-2021-20231
https://access.redhat.com/security/cve/CVE-2021-20232
https://access.redhat.com/security/cve/CVE-2021-23177
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-23648
https://access.redhat.com/security/cve/CVE-2021-25219
https://access.redhat.com/security/cve/CVE-2021-31535
https://access.redhat.com/security/cve/CVE-2021-31566
https://access.redhat.com/security/cve/CVE-2021-36084
https://access.redhat.com/security/cve/CVE-2021-36085
https://access.redhat.com/security/cve/CVE-2021-36086
https://access.redhat.com/security/cve/CVE-2021-36087
https://access.redhat.com/security/cve/CVE-2021-38185
https://access.redhat.com/security/cve/CVE-2021-38593
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2021-41190
https://access.redhat.com/security/cve/CVE-2021-41617
https://access.redhat.com/security/cve/CVE-2021-42771
https://access.redhat.com/security/cve/CVE-2021-43527
https://access.redhat.com/security/cve/CVE-2021-43818
https://access.redhat.com/security/cve/CVE-2021-44225
https://access.redhat.com/security/cve/CVE-2021-44906
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0778
https://access.redhat.com/security/cve/CVE-2022-1012
https://access.redhat.com/security/cve/CVE-2022-1215
https://access.redhat.com/security/cve/CVE-2022-1271
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1621
https://access.redhat.com/security/cve/CVE-2022-1629
https://access.redhat.com/security/cve/CVE-2022-1706
https://access.redhat.com/security/cve/CVE-2022-1729
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24903
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-26691
https://access.redhat.com/security/cve/CVE-2022-26945
https://access.redhat.com/security/cve/CVE-2022-27191
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-28733
https://access.redhat.com/security/cve/CVE-2022-28734
https://access.redhat.com/security/cve/CVE-2022-28735
https://access.redhat.com/security/cve/CVE-2022-28736
https://access.redhat.com/security/cve/CVE-2022-28737
https://access.redhat.com/security/cve/CVE-2022-29162
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-30321
https://access.redhat.com/security/cve/CVE-2022-30322
https://access.redhat.com/security/cve/CVE-2022-30323
https://access.redhat.com/security/cve/CVE-2022-32250
https://access.redhat.com/security/updates/classification/#important

  1. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.

Relevant releases/architectures:

Red Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64

  1. Description:

Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.

The following packages have been upgraded to a later upstream version: nodejs (14.21.1), nodejs-nodemon (2.0.20).

Bugs fixed (https://bugzilla.redhat.com/):

2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2066009 - CVE-2021-44906 minimist: prototype pollution 2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function 2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address 2142821 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.7.0.z] 2150323 - CVE-2022-24999 express: "qs" prototype poisoning causes the hang of the node process

  1. Package List:

Red Hat Enterprise Linux AppStream (v. 8)

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  1. (BZ# 2033339)

  2. Restore/backup shows up as Validation failed but the restore backup status in ACM shows success (BZ# 2034279)

  3. Observability - OCP 311 node role are not displayed completely (BZ# 2038650)

  4. Documented uninstall procedure leaves many leftovers (BZ# 2041921)

  5. infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5 (BZ# 2046554)

  6. Acm failed to install due to some missing CRDs in operator (BZ# 2047463)

  7. Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)

  8. ACM home page now includes /home/ in url (BZ# 2051299)

  9. proxy heading in Add Credential should be capitalized (BZ# 2051349)

  10. ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0 (BZ# 2051983)

  11. Create Policy button does not work and user cannot use console to create policy (BZ# 2053264)

  12. No cluster information was displayed after a policyset was created (BZ# 2053366)

  13. Dynamic plugin update does not take effect in Firefox (BZ# 2053516)

  14. Replicated policy should not be available when creating a Policy Set (BZ# 2054431)

  15. Placement section in Policy Set wizard does not reset when users click "Back" to re-configured placement (BZ# 2054433)

  16. Bugs fixed (https://bugzilla.redhat.com/):

2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target 2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability 2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion 2028224 - RHACM 2.5.0 images 2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?) 2028647 - Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2033339 - create cluster pool -> choose infra type; as a result, infra providers disappear from UI.

Description:

Red Hat Advanced Cluster Management for Kubernetes 2.4.2 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/

Security updates:

  • nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)

  • containerd: Unprivileged pod may bind mount any privileged regular file on disk (CVE-2021-43816)

  • minio-go: user privilege escalation in AddUser() admin API (CVE-2021-43858)

  • nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes (CVE-2021-3807)

  • fastify-static: open redirect via an URL with double slash followed by a domain (CVE-2021-22963)

  • moby: docker cp allows unexpected chmod of host file (CVE-2021-41089)

  • moby: data directory contains subdirectories with insufficiently restricted permissions, which could lead to directory traversal (CVE-2021-41091)

  • golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)

  • node-fetch: Exposure of Sensitive Information to an Unauthorized Actor (CVE-2022-0235)

  • nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)

Bug fixes:

  • Trying to create a new cluster on vSphere and no feedback, stuck in "creating" (Bugzilla #1937078)

  • The hyperlink of *ks cluster node cannot be opened when I want to check the node (Bugzilla #2028100)

  • Unable to make SSH connection to a Bitbucket server (Bugzilla #2028196)

  • RHACM cannot deploy Helm Charts with version numbers starting with letters (e.g. v1.6.1) (Bugzilla #2028931)

  • RHACM 2.4.2 images (Bugzilla #2029506)

  • Git Application still appears in Application Table and Resources are Still Seen in Advanced Configuration Upon Deletion after Upgrade from 2.4.0 (Bugzilla #2030005)

  • Namespace left orphaned after destroying the cluster (Bugzilla #2030379)

  • The results filtered through the filter contain some data that should not be present in cluster page (Bugzilla #2034198)

  • Git over ssh doesn't use custom port set in url (Bugzilla #2036057)

  • The value of name label changed from clusterclaim name to cluster name (Bugzilla #2042223)

  • ACM configuration policies do not handle Limitrange or Quotas values (Bugzilla #2042545)

  • Cluster addons do not appear after upgrade from ACM 2.3.5 to ACM 2.3.6 (Bugzilla #2050847)

  • The azure government regions were not list in the region drop down list when creating the cluster (Bugzilla #2051797)

Solution:

Before applying this update, make sure all previously released errata relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing

  1. Bugs fixed (https://bugzilla.redhat.com/):

2001668 - [DDF] normally, in the OCP web console, one sees a yaml of the secret, where at the bottom, the following is shown: 2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2008592 - CVE-2021-41089 moby: docker cp allows unexpected chmod of host file 2012909 - [DDF] We feel it would be beneficial to add a sub-section here referencing the reconcile options available to users when 2015152 - CVE-2021-22963 fastify-static: open redirect via an URL with double slash followed by a domain 2023448 - CVE-2021-41091 moby: data directory contains subdirectories with insufficiently restricted permissions, which could lead to directory traversal 2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability 2028100 - The hyperlink of *ks cluster node can not be opened when I want to check the node 2028196 - Unable to make SSH connection to a Bitbucket server 2028931 - RHACM can not deploy Helm Charts with version numbers starting with letters (e.g. 
v1.6.1) 2029506 - RHACM 2.4.2 images 2030005 - Git Application still appears in Application Table and Resources are Still Seen in Advanced Configuration Upon Deletion after Upgrade from 2.4.0 2030379 - Namespace left orphaned after destroying the cluster 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2032957 - Missing AWX templates in ACM 2034198 - The results filtered through the filter contain some data that should not be present in cluster page 2036057 - git over ssh doesn't use custom port set in url 2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API 2039378 - Deploying CRD via Application does not update status in ACM console 2041015 - The base domain did not updated when switch the provider credentials during create the cluster/cluster pool 2042545 - ACM configuration policies do not handle Limitrange or Quotas values 2043519 - "apps.open-cluster-management.io/git-branch" annotation should be mandatory 2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2050847 - Cluster addons do not appear after upgrade from ACM 2.3.5 to ACM 2.3.6 2051797 - the azure government regions were not list in the region drop down list when create the cluster 2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account

  1. Summary:

The Migration Toolkit for Containers (MTC) 1.7.2 is now available.

Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):

2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2038898 - [UI] "Update Repository" option not getting disabled after adding the Replication Repository details to the MTC web console 2040693 - "Replication repository" wizard has no validation for name length 2040695 - [MTC UI] "Add Cluster" wizard stucks when the cluster name length is more than 63 characters 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2048537 - Exposed route host to image registry? connecting successfully to invalid registry "xyz.com" 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2055658 - [MTC UI] Cancel button on "Migrations" page does not disappear when migration gets Failed/Succeeded with warnings 2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace 2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the "Last State" field. 2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade 2061335 - [MTC UI] "Update cluster" button is not getting disabled 2062266 - MTC UI does not display logs properly [OADP-BL] 2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend 2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x 2076593 - Velero pod log missing from UI drop down 2076599 - Velero pod log missing from downloaded logs folder [OADP-BL] 2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan 2079252 - [MTC] Rsync options logs not visible in log-reader pod 2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI] 2082225 - non-numeric user when launching stage pods [OADP-BL] 2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments 2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods 2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels 2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL] 2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts 2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL] 2096939 - Fix legacy operator.yml inconsistencies and errors 2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0349",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node-fetch",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "node fetch",
        "version": "2.6.7"
      },
      {
        "model": "node-fetch",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "node fetch",
        "version": "3.0.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node-fetch",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "node fetch",
        "version": "3.1.1"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "node-fetch",
        "scope": null,
        "trust": 0.8,
        "vendor": "node fetch \u30d7\u30ed\u30b8\u30a7\u30af\u30c8",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168657"
      },
      {
        "db": "PACKETSTORM",
        "id": "168638"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "170429"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "166199"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      }
    ],
    "trust": 0.8
  },
  "cve": "CVE-2022-0235",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "MEDIUM",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.8,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 8.6,
            "id": "CVE-2022-0235",
            "impactScore": 4.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.9,
            "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.1,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-0235",
            "impactScore": 2.7,
            "integrityImpact": "LOW",
            "privilegesRequired": "NONE",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "REQUIRED",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "security@huntr.dev",
            "availabilityImpact": "HIGH",
            "baseScore": 8.8,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.8,
            "id": "CVE-2022-0235",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "LOW",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 6.1,
            "baseSeverity": "Medium",
            "confidentialityImpact": "Low",
            "exploitabilityScore": null,
            "id": "CVE-2022-0235",
            "impactScore": null,
            "integrityImpact": "Low",
            "privilegesRequired": "None",
            "scope": "Changed",
            "trust": 0.8,
            "userInteraction": "Required",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-0235",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "security@huntr.dev",
            "id": "CVE-2022-0235",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-0235",
            "trust": 0.8,
            "value": "Medium"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202201-1383",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2022-0235",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor. An open redirect vulnerability exists in node-fetch. Information may be obtained and information may be tampered with. The purpose of this text-only\nerrata is to inform you about the security issues fixed in this release. Description:\n\nRed Hat Process Automation Manager is an open source business process\nmanagement suite that combines process management and decision service\nmanagement and enables business and IT users to create, manage, validate,\nand deploy process applications and decision services. \n\nSecurity Fix(es):\n\n* chart.js: prototype pollution (CVE-2020-7746)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* package immer before 9.0.6. Solution:\n\nFor on-premise installations, before applying the update, back up your\nexisting installation, including all applications, configuration files,\ndatabases and database settings, and so on. \n\nRed Hat recommends that you halt the server by stopping the JBoss\nApplication Server process before installing this update. After installing\nthe update, restart the server by starting the JBoss Application Server\nprocess. \n\nThe References section of this erratum contains a download link. You must\nlog in to download the update. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2041833 - CVE-2021-23436 immer: type confusion vulnerability can lead to a bypass of CVE-2020-28477\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2047200 - CVE-2022-23437 xerces-j2: infinite loop when handling specially crafted XML document payloads\n2047343 - CVE-2022-21363 mysql-connector-java: Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors\n2050863 - CVE-2022-21724 jdbc-postgresql: Unchecked Class Instantiation when providing Plugin Classes\n2063601 - CVE-2022-23913 artemis-commons: Apache ActiveMQ Artemis DoS\n2064007 - CVE-2022-26520 postgresql-jdbc: Arbitrary File Write Vulnerability\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2072009 - CVE-2022-24785 Moment.js: Path traversal  in moment.locale\n2076133 - CVE-2022-1365 cross-fetch: Exposure of Private Personal Information to an Unauthorized Actor\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2096966 - CVE-2020-7746 chart.js: prototype pollution\n2103584 - CVE-2022-0722 parse-url: Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2107994 - CVE-2022-2458 Business-central: Possible XML External Entity Injection attack\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Important: OpenShift Container Platform 4.11.0 bug fix and security update\nAdvisory ID:       RHSA-2022:5069-01\nProduct:           Red Hat OpenShift Enterprise\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:5069\nIssue date:        2022-08-10\nCVE Names:         CVE-2018-25009 CVE-2018-25010 CVE-2018-25012\n                   CVE-2018-25013 CVE-2018-25014 CVE-2018-25032\n                   CVE-2019-5827 CVE-2019-13750 CVE-2019-13751\n                   CVE-2019-17594 CVE-2019-17595 CVE-2019-18218\n                   CVE-2019-19603 CVE-2019-20838 CVE-2020-13435\n                   CVE-2020-14155 CVE-2020-17541 CVE-2020-19131\n                   CVE-2020-24370 CVE-2020-28493 CVE-2020-35492\n                   CVE-2020-36330 CVE-2020-36331 CVE-2020-36332\n                   CVE-2021-3481 CVE-2021-3580 CVE-2021-3634\n                   CVE-2021-3672 CVE-2021-3695 CVE-2021-3696\n                   CVE-2021-3697 CVE-2021-3737 CVE-2021-4115\n                   CVE-2021-4156 CVE-2021-4189 CVE-2021-20095\n                   CVE-2021-20231 CVE-2021-20232 CVE-2021-23177\n                   CVE-2021-23566 CVE-2021-23648 CVE-2021-25219\n                   CVE-2021-31535 CVE-2021-31566 CVE-2021-36084\n                   CVE-2021-36085 CVE-2021-36086 CVE-2021-36087\n                   CVE-2021-38185 CVE-2021-38593 CVE-2021-40528\n                   CVE-2021-41190 CVE-2021-41617 CVE-2021-42771\n                   CVE-2021-43527 CVE-2021-43818 CVE-2021-44225\n                   CVE-2021-44906 CVE-2022-0235 CVE-2022-0778\n                   CVE-2022-1012 CVE-2022-1215 CVE-2022-1271\n                   CVE-2022-1292 CVE-2022-1586 CVE-2022-1621\n                   CVE-2022-1629 CVE-2022-1706 CVE-2022-1729\n                   CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n              
     CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n                   CVE-2022-23806 CVE-2022-24407 CVE-2022-24675\n                   CVE-2022-24903 CVE-2022-24921 CVE-2022-25313\n                   CVE-2022-25314 CVE-2022-26691 CVE-2022-26945\n                   CVE-2022-27191 CVE-2022-27774 CVE-2022-27776\n                   CVE-2022-27782 CVE-2022-28327 CVE-2022-28733\n                   CVE-2022-28734 CVE-2022-28735 CVE-2022-28736\n                   CVE-2022-28737 CVE-2022-29162 CVE-2022-29810\n                   CVE-2022-29824 CVE-2022-30321 CVE-2022-30322\n                   CVE-2022-30323 CVE-2022-32250\n====================================================================\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.11.0 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.11. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.11.0. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5068\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n* sanitize-url: XSS (CVE-2021-23648)\n* minimist: prototype pollution (CVE-2021-44906)\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local users (CVE-2022-29810)\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
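CVE-2021-23648 in the list above is an XSS bypass in sanitize-url. An illustrative TypeScript check (an assumption-laden sketch, not the library's actual code) of why a sanitizer must normalize a URL before comparing its scheme — bypasses pad the scheme with control characters or use mixed case:

```typescript
// Reject script-executing schemes even when disguised.
function sanitizeUrl(raw: string): string {
  // Strip ASCII control characters and spaces (0x00-0x20), which browsers
  // ignore inside a scheme, then lowercase before comparing.
  const normalized = raw.replace(/[\u0000-\u0020]/g, "").toLowerCase();
  const dangerous = ["javascript:", "data:", "vbscript:"];
  if (dangerous.some((scheme) => normalized.startsWith(scheme))) {
    return "about:blank";
  }
  return raw;
}

console.log(sanitizeUrl(" JaVaScRiPt:alert(1)")); // → about:blank
console.log(sanitizeUrl("https://example.com/")); // → https://example.com/
```

Checking the raw string without this normalization is exactly the kind of gap such CVEs exploit.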
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-x86_64\n\nThe image digest is\nsha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4\n\n(For aarch64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-aarch64\n\nThe image digest is\nsha256:29fa8419da2afdb64b5475d2b43dad8cc9205e566db3968c5738e7a91cf96dfe\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-s390x\n\nThe image digest is\nsha256:015d6180238b4024d11dfef6751143619a0458eccfb589f2058ceb1a6359dd46\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.11.0-ppc64le\n\nThe image digest is\nsha256:5052f8d5597c6656ca9b6bfd3de521504c79917aa80feb915d3c8546241f86ca\n\nAll OpenShift Container Platform 4.11 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.11 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1817075 - MCC \u0026 MCO don\u0027t free leader leases during shut down -\u003e 10 minutes of leader election timeouts\n1822752 - cluster-version operator stops applying manifests when blocked by a precondition check\n1823143 - oc adm release extract --command, --tools doesn\u0027t pull from localregistry when given a localregistry/image\n1858418 - [OCPonRHV] OpenShift installer fails when Blank template is missing in oVirt/RHV\n1859153 - [AWS] An IAM error occurred occasionally during the installation phase: Invalid IAM Instance Profile name\n1896181 - [ovirt] install fails: due to terraform error \"Cannot run VM. VM is being updated\" on vm resource\n1898265 - [OCP 4.5][AWS] Installation failed: error updating LB Target Group\n1902307 - [vSphere] cloud labels management via cloud provider makes nodes not ready\n1905850 - `oc adm policy who-can` failed to check the `operatorcondition/status` resource\n1916279 - [OCPonRHV] Sometimes terraform installation fails on -failed to fetch Cluster(another terraform bug)\n1917898 - [ovirt] install fails: due to terraform error \"Tag not matched: expect \u003cfault\u003e but got \u003chtml\u003e\" on vm resource\n1918005 - [vsphere] If there are multiple port groups with the same name installation fails\n1918417 - IPv6 errors after exiting crictl\n1918690 - Should update the KCM resource-graph timely with the latest configure\n1919980 - oVirt installer fails due to terraform error \"Failed to wait for Templte(...) 
to become ok\"\n1921182 - InspectFailed: kubelet Failed to inspect image: rpc error: code = DeadlineExceeded desc = context deadline exceeded\n1923536 - Image pullthrough does not pass 429 errors back to capable clients\n1926975 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n1928932 - deploy/route_crd.yaml in openshift/router uses deprecated v1beta1 CRD API\n1932812 - Installer uses the terraform-provider in the Installer\u0027s directory if it exists\n1934304 - MemoryPressure Top Pod Consumers seems to be 2x expected value\n1943937 - CatalogSource incorrect parsing validation\n1944264 - [ovn] CNO should gracefully terminate OVN databases\n1944851 - List of ingress routes not cleaned up when routers no longer exist - take 2\n1945329 - In k8s 1.21 bump conntrack \u0027should drop INVALID conntrack entries\u0027 tests are disabled\n1948556 - Cannot read property \u0027apiGroup\u0027 of undefined error viewing operator CSV\n1949827 - Kubelet bound to incorrect IPs, referring to incorrect NICs in 4.5.x\n1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap\n1957668 - oc login does not show link to console\n1958198 - authentication operator takes too long to pick up a configuration change\n1958512 - No 1.25 shown in REMOVEDINRELEASE for apis audited with k8s.io/removed-release 1.25 and k8s.io/deprecated true\n1961233 - Add CI test coverage for DNS availability during upgrades\n1961844 - baremetal ClusterOperator installed by CVO does not have relatedObjects\n1965468 - [OSP] Delete volume snapshots based on cluster ID in their metadata\n1965934 - can not get new result with \"Refresh off\" if click \"Run queries\" again\n1965969 - [aws] the public hosted zone id is not correct in the destroy log, while destroying a cluster which is using BYO private hosted zone. 
\n1968253 - GCP CSI driver can provision volume with access mode ROX\n1969794 - [OSP] Document how to use image registry PVC backend with custom availability zones\n1975543 - [OLM] Remove stale cruft installed by CVO in earlier releases\n1976111 - [tracker] multipathd.socket is missing start conditions\n1976782 - Openshift registry starts to segfault after S3 storage configuration\n1977100 - Pod failed to start with message \"set CPU load balancing: readdirent /proc/sys/kernel/sched_domain/cpu66/domain0: no such file or directory\"\n1978303 - KAS pod logs show: [SHOULD NOT HAPPEN] ...failed to convert new object...CertificateSigningRequest) to smd typed: .status.conditions: duplicate entries for key [type=\\\"Approved\\\"]\n1978798 - [Network Operator] Upgrade: The configuration to enable network policy ACL logging is missing on the cluster upgraded from 4.7-\u003e4.8\n1979671 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n1982737 - OLM does not warn on invalid CSV\n1983056 - IP conflict while recreating Pod with fixed name\n1984785 - LSO CSV does not contain disconnected annotation\n1989610 - Unsupported data types should not be rendered on operand details page\n1990125 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n1990384 - 502 error on \"Observe -\u003e Alerting\" UI after disabled local alertmanager\n1992553 - all the alert rules\u0027 annotations \"summary\" and \"description\" should comply with the OpenShift alerting guidelines\n1994117 - Some hardcodes are detected at the code level in orphaned code\n1994820 - machine controller doesn\u0027t send vCPU quota failed messages to cluster install logs\n1995953 - Ingresscontroller change the replicas to scaleup first time will be rolling update for all the ingress pods\n1996544 - AWS region ap-northeast-3 is missing in installer prompt\n1996638 - Helm operator manager container 
restart when CR is creating\u0026deleting\n1997120 - test_recreate_pod_in_namespace fails - Timed out waiting for namespace\n1997142 - OperatorHub: Filtering the OperatorHub catalog is extremely slow\n1997704 - [osp][octavia lb] given loadBalancerIP is ignored when creating a LoadBalancer type svc\n1999325 - FailedMount MountVolume.SetUp failed for volume \"kube-api-access\" : object \"openshift-kube-scheduler\"/\"kube-root-ca.crt\" not registered\n1999529 - Must gather fails to gather logs for all the namespace if server doesn\u0027t have volumesnapshotclasses resource\n1999891 - must-gather collects backup data even when Pods fails to be created\n2000653 - Add hypershift namespace to exclude namespaces list in descheduler configmap\n2002009 - IPI Baremetal, qemu-convert takes to long to save image into drive on slow/large disks\n2002602 - Storageclass creation page goes blank when \"Enable encryption\" is clicked if there is a syntax error in the configmap\n2002868 - Node exporter not able to scrape OVS metrics\n2005321 - Web Terminal is not opened on Stage of DevSandbox when terminal instance is not created yet\n2005694 - Removing proxy object takes up to 10 minutes for the changes to propagate to the MCO\n2006067 - Objects are not valid as a React child\n2006201 - ovirt-csi-driver-node pods are crashing intermittently\n2007246 - Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment\n2007340 - Accessibility issues on topology - list view\n2007611 - TLS issues with the internal registry and AWS S3 bucket\n2007647 - oc adm release info --changes-from does not show changes in repos that squash-merge\n2008486 - Double scroll bar shows up on dragging the task quick search to the bottom\n2009345 - Overview page does not load from openshift console for some set of users after upgrading to 4.7.19\n2009352 - Add image-registry usage metrics to telemeter\n2009845 - Respect overrides changes during 
installation\n2010361 - OpenShift Alerting Rules Style-Guide Compliance\n2010364 - OpenShift Alerting Rules Style-Guide Compliance\n2010393 - [sig-arch][Late] clients should not use APIs that are removed in upcoming releases [Suite:openshift/conformance/parallel]\n2011525 - Rate-limit incoming BFD to prevent ovn-controller DoS\n2011895 - Details about cloud errors are missing from PV/PVC errors\n2012111 - LSO still try to find localvolumeset which is already deleted\n2012969 - need to figure out why osupdatedstart to reboot is zero seconds\n2013144 - Developer catalog category links could not be open in a new tab (sharing and open a deep link works fine)\n2013461 - Import deployment from Git with s2i expose always port 8080 (Service and Pod template, not Route) if another Route port is selected by the user\n2013734 - unable to label downloads route in openshift-console namespace\n2013822 - ensure that the `container-tools` content comes from the RHAOS plashets\n2014161 - PipelineRun logs are delayed and stuck on a high log volume\n2014240 - Image registry uses ICSPs only when source exactly matches image\n2014420 - Topology page is crashed\n2014640 - Cannot change storage class of boot disk when cloning from template\n2015023 - Operator objects are re-created even after deleting it\n2015042 - Adding a template from the catalog creates a secret that is not owned by the TemplateInstance\n2015356 - Different status shows on VM list page and details page\n2015375 - PVC creation for ODF/IBM Flashsystem shows incorrect types\n2015459 - [azure][openstack]When image registry configure an invalid proxy, registry pods are CrashLoopBackOff\n2015800 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2016425 - Adoption controller generating invalid metadata.Labels for an already adopted Subscription resource\n2016534 - externalIP does not work when egressIP is also present\n2017001 - Topology 
context menu for Serverless components always open downwards\n2018188 - VRRP ID conflict between keepalived-ipfailover and cluster VIPs\n2018517 - [sig-arch] events should not repeat pathologically expand_less failures -  s390x CI\n2019532 - Logger object in LSO does not log source location accurately\n2019564 - User settings resources (ConfigMap, Role, RB) should be deleted when a user is deleted\n2020483 - Parameter $__auto_interval_period is in Period drop-down list\n2020622 - e2e-aws-upi and e2e-azure-upi jobs are not working\n2021041 - [vsphere] Not found TagCategory when destroying ipi cluster\n2021446 - openshift-ingress-canary is not reporting DEGRADED state, even though the canary route is not available and accessible\n2022253 - Web terminal view is broken\n2022507 - Pods stuck in OutOfpods state after running cluster-density\n2022611 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2022745 - Cluster reader is not able to list NodeNetwork* objects\n2023295 - Must-gather tool gathering data from custom namespaces. 
\n2023691 - ClusterIP internalTrafficPolicy does not work for ovn-kubernetes\n2024427 - oc completion zsh doesn\u0027t auto complete\n2024708 - The form for creating operational CRs is badly rendering filed names (\"obsoleteCPUs\" -\u003e \"Obsolete CP Us\" )\n2024821 - [Azure-File-CSI] need more clear info when requesting pvc with volumeMode Block\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2025624 - Ingress router metrics endpoint serving old certificates after certificate rotation\n2026356 - [IPI on Azure] The bootstrap machine type should be same as master\n2026461 - Completed pods in Openshift cluster not releasing IP addresses and results in err: range is full unless manually deleted\n2027603 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2027613 - Users can\u0027t silence alerts from the dev console\n2028493 - OVN-migration failed - ovnkube-node: error waiting for node readiness: timed out waiting for the condition\n2028532 - noobaa-pg-db-0 pod stuck in Init:0/2\n2028821 - Misspelled label in ODF management UI - MCG performance view\n2029438 - Bootstrap node cannot resolve api-int because NetworkManager replaces resolv.conf\n2029470 - Recover from suddenly appearing old operand revision WAS: kube-scheduler-operator test failure: Node\u0027s not achieving new revision\n2029797 - Uncaught exception: ResizeObserver loop limit exceeded\n2029835 - CSI migration for vSphere: Inline-volume tests failing\n2030034 - prometheusrules.openshift.io: dial tcp: lookup prometheus-operator.openshift-monitoring.svc on 172.30.0.10:53: no such host\n2030530 - VM created via customize wizard has single quotation marks surrounding its password\n2030733 - wrong IP selected to connect to the nodes when ExternalCloudProvider enabled\n2030776 - e2e-operator always uses quay master images during presubmit tests\n2032559 - CNO allows migration to dual-stack in unsupported 
configurations\n2032717 - Unable to download ignition after coreos-installer install --copy-network\n2032924 - PVs are not being cleaned up after PVC deletion\n2033482 - [vsphere] two variables in tf are undeclared and get warning message during installation\n2033575 - monitoring targets are down after the cluster run for more than 1 day\n2033711 - IBM VPC operator needs e2e csi tests for ibmcloud\n2033862 - MachineSet is not scaling up due to an OpenStack error trying to create multiple ports with the same MAC address\n2034147 - OpenShift VMware IPI Installation fails with Resource customization when corespersocket is unset and vCPU count is not a multiple of 4\n2034296 - Kubelet and Crio fails to start during upgrde to 4.7.37\n2034411 - [Egress Router] No NAT rules for ipv6 source and destination created in ip6tables-save\n2034688 - Allow Prometheus/Thanos to return 401 or 403 when the request isn\u0027t authenticated\n2034958 - [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready\n2035005 - MCD is not always removing in progress taint after a successful update\n2035334 - [RFE] [OCPonRHV] Provision machines with preallocated disks\n2035899 - Operator-sdk run bundle doesn\u0027t support arm64 env\n2036202 - Bump podman to \u003e= 3.3.0 so that  setup of multiple credentials for a single registry which can be distinguished by their path  will work\n2036594 - [MAPO] Machine goes to failed state due to a momentary error of the cluster etcd\n2036948 - SR-IOV Network Device Plugin should handle offloaded VF instead of supporting only PF\n2037190 - dns operator status flaps between True/False/False and True/True/(False|True) after updating dnses.operator.openshift.io/default\n2037447 - Ingress Operator is not closing TCP connections. 
\n2037513 - I/O metrics from the Kubernetes/Compute Resources/Cluster Dashboard show as no datapoints found\n2037542 - Pipeline Builder footer is not sticky and yaml tab doesn\u0027t use full height\n2037610 - typo for the Terminated message from thanos-querier pod description info\n2037620 - Upgrade playbook should quit directly when trying to upgrade RHEL-7 workers to 4.10\n2037625 - AppliedClusterResourceQuotas can not be shown on project overview\n2037626 - unable to fetch ignition file when scaleup rhel worker nodes on cluster enabled Tang disk encryption\n2037628 - Add test id to kms flows for automation\n2037721 - PodDisruptionBudgetAtLimit alert fired in SNO cluster\n2037762 - Wrong ServiceMonitor definition is causing failure during Prometheus configuration reload and preventing changes from being applied\n2037841 - [RFE] use /dev/ptp_hyperv on Azure/AzureStack\n2038115 - Namespace and application bar is not sticky anymore\n2038244 - Import from git ignore the given servername and could not validate On-Premises GitHub and BitBucket installations\n2038405 - openshift-e2e-aws-workers-rhel-workflow in CI step registry broken\n2038774 - IBM-Cloud OVN IPsec fails, IKE UDP ports and  ESP protocol not in security group\n2039135 - the error message is not clear when using \"opm index prune\" to prune a file-based index image\n2039161 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2039253 - ovnkube-node crashes on duplicate endpoints\n2039256 - Domain validation fails when TLD contains a digit. \n2039277 - Topology list view items are not highlighted on keyboard navigation\n2039462 - Application tab in User Preferences dropdown menus are too wide. 
\n2039477 - validation icon is missing from Import from git\n2039589 - The toolbox command always ignores [command] the first time\n2039647 - Some developer perspective links are not deep-linked causes developer to sometimes delete/modify resources in the wrong project\n2040180 - Bug when adding a new table panel to a dashboard for OCP UI with only one value column\n2040195 - Ignition fails to enable systemd units with backslash-escaped characters in their names\n2040277 - ThanosRuleNoEvaluationFor10Intervals alert description is wrong\n2040488 - OpenShift-Ansible BYOH Unit Tests are Broken\n2040635 - CPU Utilisation is negative number for \"Kubernetes / Compute Resources / Cluster\" dashboard\n2040654 - \u0027oc adm must-gather -- some_script\u0027 should exit with same non-zero code as the failed \u0027some_script\u0027 exits\n2040779 - Nodeport svc not accessible when the backend pod is on a window node\n2040933 - OCP 4.10 nightly build will fail to install if multiple NICs are defined on KVM nodes\n2041133 - \u0027oc explain route.status.ingress.conditions\u0027 shows type \u0027Currently only Ready\u0027 but actually is \u0027Admitted\u0027\n2041454 - Garbage values accepted for `--reference-policy` in `oc import-image` without any error\n2041616 - Ingress operator tries to manage DNS of additional ingresscontrollers that are not under clusters basedomain, which can\u0027t work\n2041769 - Pipeline Metrics page not showing data for normal user\n2041774 - Failing git detection should not recommend Devfiles as import strategy\n2041814 - The KubeletConfigController wrongly process multiple confs for a pool\n2041940 - Namespace pre-population not happening till a Pod is created\n2042027 - Incorrect feedback for \"oc label pods --all\"\n2042348 - Volume ID is missing in output message when expanding volume which is not mounted. 
\n2042446 - CSIWithOldVSphereHWVersion alert recurring despite upgrade to vmx-15\n2042501 - use lease for leader election\n2042587 - ocm-operator: Improve reconciliation of CA ConfigMaps\n2042652 - Unable to deploy hw-event-proxy operator\n2042838 - The status of container is not consistent on Container details and pod details page\n2042852 - Topology toolbars are unaligned to other toolbars\n2042999 - A pod cannot reach kubernetes.default.svc.cluster.local cluster IP\n2043035 - Wrong error code provided when request contains invalid argument\n2043068 - \u003cx\u003e available of \u003cy\u003e text disappears in Utilization item if x is 0\n2043080 - openshift-installer intermittent failure on AWS with Error: InvalidVpcID.NotFound: The vpc ID \u0027vpc-123456789\u0027 does not exist\n2043094 - ovnkube-node not deleting stale conntrack entries when endpoints go away\n2043118 - Host should transition through Preparing when HostFirmwareSettings changed\n2043132 - Add a metric when vsphere csi storageclass creation fails\n2043314 - `oc debug node` does not meet compliance requirement\n2043336 - Creating multi SriovNetworkNodePolicy cause the worker always be draining\n2043428 - Address Alibaba CSI driver operator review comments\n2043533 - Update ironic, inspector, and ironic-python-agent to latest bugfix release\n2043672 - [MAPO] root volumes not working\n2044140 - When \u0027oc adm upgrade --to-image ...\u0027 rejects an update as not recommended, it should mention --allow-explicit-upgrade\n2044207 - [KMS] The data in the text box does not get cleared on switching the authentication method\n2044227 - Test Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent fails\n2044412 - Topology list misses separator lines and hover effect let the list jump 1px\n2044421 - Topology list does not allow selecting an application group anymore\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized 
actor\n2044803 - Unify button text style on VM tabs\n2044824 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2045065 - Scheduled pod has nodeName changed\n2045073 - Bump golang and build images for local-storage-operator\n2045087 - Failed to apply sriov policy on intel nics\n2045551 - Remove enabled FeatureGates from TechPreviewNoUpgrade\n2045559 - API_VIP moved when kube-api container on another master node was stopped\n2045577 - [ocp 4.9 | ovn-kubernetes] ovsdb_idl|WARN|transaction error: {\"details\":\"cannot delete Datapath_Binding row 29e48972-xxxx because of 2 remaining reference(s)\",\"error\":\"referential integrity violation\n2045872 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2046133 - [MAPO]IPI proxy installation failed\n2046156 - Network policy: preview of affected pods for non-admin shows empty popup\n2046157 - Still uses pod-security.admission.config.k8s.io/v1alpha1 in admission plugin config\n2046191 - Opeartor pod is missing correct qosClass and priorityClass\n2046277 - openshift-installer intermittent failure on AWS with \"Error: Provider produced inconsistent result after apply\" when creating the module.vpc.aws_subnet.private_subnet[0] resource\n2046319 - oc debug cronjob command failed with error \"unable to extract pod template from type *v1.CronJob\". 
2046435 - Better Devfile Import Strategy support in the 'Import from Git' flow
2046496 - Awkward wrapping of project toolbar on mobile
2046497 - Re-enable TestMetricsEndpoint test case in console operator e2e tests
2046498 - "All Projects" and "all applications" use different casing on topology page
2046591 - Auto-update boot source is not available while create new template from it
2046594 - "Requested template could not be found" while creating VM from user-created template
2046598 - Auto-update boot source size unit is byte on customize wizard
2046601 - Cannot create VM from template
2046618 - Start last run action should contain current user name in the started-by annotation of the PLR
2046662 - Should upgrade the go version to be 1.17 for example go operator memcached-operator
2047197 - Sould upgrade the operator_sdk.util version to "0.4.0" for the "osdk_metric" module
2047257 - [CP MIGRATION] Node drain failure during control plane node migration
2047277 - Storage status is missing from status card of virtualization overview
2047308 - Remove metrics and events for master port offsets
2047310 - Running VMs per template card needs empty state when no VMs exist
2047320 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2047335 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047362 - Removing prometheus UI access breaks origin test
2047445 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2047670 - Installer should pre-check that the hosted zone is not associated with the VPC and throw the error message.
2047702 - Issue described on bug #2013528 reproduced: mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2047710 - [OVN] ovn-dbchecker CrashLoopBackOff and sbdb jsonrpc unix socket receive error
2047732 - [IBM]Volume is not deleted after destroy cluster
2047741 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the module.masters.aws_network_interface.master[1] resource
2047790 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047799 - release-openshift-ocp-installer-e2e-aws-upi-4.9
2047870 - Prevent redundant queries of BIOS settings in HostFirmwareController
2047895 - Fix architecture naming in oc adm release mirror for aarch64
2047911 - e2e: Mock CSI tests fail on IBM ROKS clusters
2047913 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2047925 - [FJ OCP4.10 Bug]: IRONIC_KERNEL_PARAMS does not contain coreos_kernel_params during iPXE boot
2047935 - [4.11] Bootimage bump tracker
2047998 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048059 - Service Level Agreement (SLA) always show 'Unknown'
2048067 - [IPI on Alibabacloud] "Platform Provisioning Check" tells '"ap-southeast-6": enhanced NAT gateway is not supported', which seems false
2048186 - Image registry operator panics when finalizes config deletion
2048214 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2048219 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2048221 - Capitalization of titles in the VM details page is inconsistent.
2048222 - [AWS GovCloud] Cluster can not be installed on AWS GovCloud regions via terminal interactive UI.
2048276 - Cypress E2E tests fail due to a typo in test-cypress.sh
2048333 - prometheus-adapter becomes inaccessible during rollout
2048352 - [OVN] node does not recover after NetworkManager restart, NotReady and unreachable
2048442 - [KMS] UI does not have option to specify kube auth path and namespace for cluster wide encryption
2048451 - Custom serviceEndpoints in install-config are reported to be unreachable when environment uses a proxy
2048538 - Network policies are not implemented or updated by OVN-Kubernetes
2048541 - incorrect rbac check for install operator quick starts
2048563 - Leader election conventions for cluster topology
2048575 - IP reconciler cron job failing on single node
2048686 - Check MAC address provided on the install-config.yaml file
2048687 - All bare metal jobs are failing now due to End of Life of centos 8
2048793 - Many Conformance tests are failing in OCP 4.10 with Kuryr
2048803 - CRI-O seccomp profile out of date
2048824 - [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2048841 - [ovn] Missing lr-policy-list and snat rules for egressip when new pods are added
2048955 - Alibaba Disk CSI Driver does not have CI
2049073 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2049078 - Bond CNI: Failed to attach Bond NAD to pod
2049108 - openshift-installer intermittent failure on AWS with 'Error: Error waiting for NAT Gateway (nat-xxxxx) to become available'
2049117 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2049133 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2049142 - Missing "app" label
2049169 - oVirt CSI driver should use the trusted CA bundle when cluster proxy is configured
2049234 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2049410 - external-dns-operator creates provider section, even when not requested
2049483 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2049613 - MTU migration on SDN IPv4 causes API alerts
2049671 - system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator trying to GET and DELETE /api/v1/namespaces/openshift-cluster-csi-drivers/configmaps/kube-cloud-config which does not exist
2049687 - superfluous apirequestcount entries in audit log
2049775 - cloud-provider-config change not applied when ExternalCloudProvider enabled
2049787 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2049832 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2049872 - cluster storage operator AWS credentialsrequest lacks KMS privileges
2049889 - oc new-app --search nodejs warns about access to sample content on quay.io
2050005 - Plugin module IDs can clash with console module IDs causing runtime errors
2050011 - Observe > Metrics page: Timespan text input and dropdown do not align
2050120 - Missing metrics in kube-state-metrics
2050146 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050173 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050180 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050300 - panic in cluster-storage-operator while updating status
2050332 - Malformed ClusterClaim lifetimes cause the clusterclaims-controller to silently fail to reconcile all clusterclaims
2050335 - azure-disk failed to mount with error special device does not exist
2050345 - alert data for burn budget needs to be updated to prevent regression
2050407 - revert "force cert rotation every couple days for development" in 4.11
2050409 - ip-reconcile job is failing consistently
2050452 - Update osType and hardware version used by RHCOS OVA to indicate it is a RHEL 8 guest
2050466 - machine config update with invalid container runtime config should be more robust
2050637 - Blog Link not re-directing to the intented website in the last modal in the Dev Console Onboarding Tour
2050698 - After upgrading the cluster the console still show 0 of N, 0% progress for worker nodes
2050707 - up test for prometheus pod look to far in the past
2050767 - Vsphere upi tries to access vsphere during manifests generation phase
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050882 - Crio appears to be coredumping in some scenarios
2050902 - not all resources created during import have common labels
2050946 - Cluster-version operator fails to notice TechPreviewNoUpgrade featureSet change after initialization-lookup error
2051320 - Need to build ose-aws-efs-csi-driver-operator-bundle-container image for 4.11
2051333 - [aws] records in public hosted zone and BYO private hosted zone were not deleted.
2051377 - Unable to switch vfio-pci to netdevice in policy
2051378 - Template wizard is crashed when there are no templates existing
2051423 - migrate loadbalancers from amphora to ovn not working
2051457 - [RFE] PDB for cloud-controller-manager to avoid going too many replicas down
2051470 - prometheus: Add validations for relabel configs
2051558 - RoleBinding in project without subject is causing "Project access" page to fail
2051578 - Sort is broken for the Status and Version columns on the Cluster Settings > ClusterOperators page
2051583 - sriov must-gather image doesn't work
2051593 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2051611 - Remove Check which enforces summary_interval must match logSyncInterval
2051642 - Remove "Tech-Preview" Label for the Web Terminal GA release
2051657 - Remove 'Tech preview' from minnimal deployment Storage System creation
2051718 - MetaLLB: Validation Webhook: BGPPeer hold time is allowed to be set to less than 3s
2051722 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2051881 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid
2051954 - Allow changing of policyAuditConfig ratelimit post-deployment
2051969 - Need to build local-storage-operator-metadata-container image for 4.11
2051985 - An APIRequestCount without dots in the name can cause a panic
2052016 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052034 - Can't start correct debug pod using pod definition yaml in OCP 4.8
2052055 - Whereabouts should implement client-go 1.22+
2052056 - Static pod installer should throttle creating new revisions
2052071 - local storage operator metrics target down after upgrade
2052095 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052270 - FSyncControllerDegraded has "treshold" -> "threshold" typos
2052309 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052332 - Probe failures and pod restarts during 4.7 to 4.8 upgrade
2052393 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052398 - 4.9 to 4.10 upgrade fails for ovnkube-masters
2052415 - Pod density test causing problems when using kube-burner
2052513 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052578 - Create new app from a private git repository using 'oc new app' with basic auth does not work.
2052595 - Remove dev preview badge from IBM FlashSystem deployment windows
2052618 - Node reboot causes duplicate persistent volumes
2052671 - Add Sprint 214 translations
2052674 - Remove extra spaces
2052700 - kube-controller-manger should use configmap lease
2052701 - kube-scheduler should use configmap lease
2052814 - go fmt fails in OSM after migration to go 1.17
2052840 - IMAGE_BUILDER=docker make test-e2e-operator-ocp runs with podman instead of docker
2052953 - Observe dashboard always opens for last viewed workload instead of the selected one
2052956 - Installing virtualization operator duplicates the first action on workloads in topology
2052975 - High cpu load on Juniper Qfx5120 Network switches after upgrade to Openshift 4.8.26
2052986 - Console crashes when Mid cycle hook in Recreate strategy(edit deployment/deploymentConfig) selects Lifecycle strategy as "Tags the current image as an image stream tag if the deployment succeeds"
2053006 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2053104 - [vSphere CSI driver Operator] hw_version_total metric update wrong value after upgrade nodes hardware version from `vmx-13` to `vmx-15`
2053112 - nncp status is unknown when nnce is Progressing
2053118 - nncp Available condition reason should be exposed in `oc get`
2053168 - Ensure the core dynamic plugin SDK package has correct types and code
2053205 - ci-openshift-cluster-network-operator-master-e2e-agnostic-upgrade is failing most of the time
2053304 - Debug terminal no longer works in admin console
2053312 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053334 - rhel worker scaleup playbook failed because missing some dependency of podman
2053343 - Cluster Autoscaler not scaling down nodes which seem to qualify for scale-down
2053491 - nmstate interprets interface names as float64 and subsequently crashes on state update
2053501 - Git import detection does not happen for private repositories
2053582 - inability to detect static lifecycle failure
2053596 - [IBM Cloud] Storage IOPS limitations and lack of IPI ETCD deployment options trigger leader election during cluster initialization
2053609 - LoadBalancer SCTP service leaves stale conntrack entry that causes issues if service is recreated
2053622 - PDB warning alert when CR replica count is set to zero
2053685 - Topology performance: Immutable .toJSON consumes a lot of CPU time when rendering a large topology graph (~100 nodes)
2053721 - When using RootDeviceHint rotational setting the host can fail to provision
2053922 - [OCP 4.8][OVN] pod interface: error while waiting on OVS.Interface.external-ids
2054095 - [release-4.11] Gather images.conifg.openshift.io cluster resource definiition
2054197 - The ProjectHelmChartRepositrory schema has merged but has not been initialized in the cluster yet
2054200 - Custom created services in openshift-ingress removed even though the services are not of type LoadBalancer
2054238 - console-master-e2e-gcp-console is broken
2054254 - vSphere test failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2054285 - Services other than knative service also shows as KSVC in add subscription/trigger modal
2054319 - must-gather | gather_metallb_logs can't detect metallb pod
2054351 - Rrestart of ptp4l/phc2sys on change of PTPConfig generates more than one times, socket error in event frame work
2054385 - redhat-operatori ndex image build failed with AMQ brew build - amq-interconnect-operator-metadata-container-1.10.13
2054564 - DPU network operator 4.10 branch need to sync with master
2054630 - cancel create silence from kebab menu of alerts page will navigated to the previous page
2054693 - Error deploying HorizontalPodAutoscaler with oc new-app command in OpenShift 4
2054701 - [MAPO] Events are not created for MAPO machines
2054705 - [tracker] nf_reinject calls nf_queue_entry_free on an already freed entry->state
2054735 - Bad link in CNV console
2054770 - IPI baremetal deployment metal3 pod crashes when using capital letters in hosts bootMACAddress
2054787 - SRO controller goes to CrashLoopBackOff status when the pull-secret does not have the correct permissions
2054950 - A large number is showing on disk size field
2055305 - Thanos Querier high CPU and memory usage till OOM
2055386 - MetalLB changes the shared external IP of a service upon updating the externalTrafficPolicy definition
2055433 - Unable to create br-ex as gateway is not found
2055470 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2055492 - The default YAML on vm wizard is not latest
2055601 - installer did not destroy *.app dns recored in a IPI on ASH install
2055702 - Enable Serverless tests in CI
2055723 - CCM operator doesn't deploy resources after enabling TechPreviewNoUpgrade feature set.
2055729 - NodePerfCheck fires and stays active on momentary high latency
2055814 - Custom dynamic exntension point causes runtime and compile time error
2055861 - cronjob collect-profiles failed leads node reach to OutOfpods status
2055980 - [dynamic SDK][internal] console plugin SDK does not support table actions
2056454 - Implement preallocated disks for oVirt in the cluster API provider
2056460 - Implement preallocated disks for oVirt in the OCP installer
2056496 - If image does not exists for builder image then upload jar form crashes
2056519 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies
2056607 - Running kubernetes-nmstate handler e2e tests stuck on OVN clusters
2056752 - Better to named the oc-mirror version info with more information like the `oc version --client`
2056802 - "enforcedLabelLimit|enforcedLabelNameLengthLimit|enforcedLabelValueLengthLimit" do not take effect
2056841 - [UI] [DR] Web console update is available pop-up is seen multiple times on Hub cluster where ODF operator is not installed and unnecessarily it pop-up on the Managed cluster as well where ODF operator is installed
2056893 - incorrect warning for --to-image in oc adm upgrade help
2056967 - MetalLB: speaker metrics is not updated when deleting a service
2057025 - Resource requests for the init-config-reloader container of prometheus-k8s-* pods are too high
2057054 - SDK: k8s methods resolves into Response instead of the Resource
2057079 - [cluster-csi-snapshot-controller-operator] CI failure: events should not repeat pathologically
2057101 - oc commands working with images print an incorrect and inappropriate warning
2057160 - configure-ovs selects wrong interface on reboot
2057183 - OperatorHub: Missing "valid subscriptions" filter
2057251 - response code for Pod count graph changed from 422 to 200 periodically for about 30 minutes if pod is rescheduled
2057358 - [Secondary Scheduler] - cannot build bundle index image using the secondary scheduler operator bundle
2057387 - [Secondary Scheduler] - olm.skiprange, com.redhat.openshift.versions is incorrect and no minkubeversion
2057403 - CMO logs show forbidden: User "system:serviceaccount:openshift-monitoring:cluster-monitoring-operator" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-monitoring"
2057495 - Alibaba Disk CSI driver does not provision small PVCs
2057558 - Marketplace operator polls too frequently for cluster operator status changes
2057633 - oc rsync reports misleading error when container is not found
2057642 - ClusterOperator status.conditions[].reason "etcd disk metrics exceeded..." should be a CamelCase slug
2057644 - FSyncControllerDegraded latches True, even after fsync latency recovers on all members
2057696 - Removing console still blocks OCP install from completing
2057762 - ingress operator should report Upgradeable False to remind user before upgrade to 4.10 when Non-SAN certs are used
2057832 - expr for record rule: "cluster:telemetry_selected_series:count" is improper
2057967 - KubeJobCompletion does not account for possible job states
2057990 - Add extra debug information to image signature workflow test
2057994 - SRIOV-CNI failed to load netconf: LoadConf(): failed to get VF information
2058030 - On OCP 4.10+ using OVNK8s on BM IPI, nodes register as localhost.localdomain
2058217 - [vsphere-problem-detector-operator] 'vsphere_rwx_volumes_total' metric name make confused
2058225 - openshift_csi_share_* metrics are not found from telemeter server
2058282 - Websockets stop updating during cluster upgrades
2058291 - CI builds should have correct version of Kube without needing to push tags everytime
2058368 - Openshift OVN-K got restarted mutilple times with the error " ovsdb-server/memory-trim-on-compaction on'' failed: exit status 1 and " ovndbchecker.go:118] unable to turn on memory trimming for SB DB, stderr " , cluster unavailable
2058370 - e2e-aws-driver-toolkit CI job is failing
2058421 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2058424 - ConsolePlugin proxy always passes Authorization header even if `authorize` property is omitted or false
2058623 - Bootstrap server dropdown menu in Create Event Source- KafkaSource form is empty even if it's created
2058626 - Multiple Azure upstream kube fsgroupchangepolicy tests are permafailing expecting gid "1000" but geting "root"
2058671 - whereabouts IPAM CNI ip-reconciler cronjob specification requires hostnetwork, api-int lb usage & proper backoff
2058692 - [Secondary Scheduler] Creating secondaryscheduler instance fails with error "key failed with : secondaryschedulers.operator.openshift.io "secondary-scheduler" not found"
2059187 - [Secondary Scheduler] - key failed with : serviceaccounts "secondary-scheduler" is forbidden
2059212 - [tracker] Backport https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa
2059213 - ART cannot build installer images due to missing terraform binaries for some architectures
2059338 - A fully upgraded 4.10 cluster defaults to HW-13 hardware version even if HW-15 is default (and supported)
2059490 - The operator image in CSV file of the ART DPU network operator bundle is incorrect
2059567 - vMedia based IPI installation of OpenShift fails on Nokia servers due to issues with virtual media attachment and boot source override
2059586 - (release-4.11) Insights operator doesn't reconcile clusteroperator status condition messages
2059654 - Dynamic demo plugin proxy example out of date
2059674 - Demo plugin fails to build
2059716 - cloud-controller-manager flaps operator version during 4.9 -> 4.10 update
2059791 - [vSphere CSI driver Operator] didn't update 'vsphere_csi_driver_error' metric value when fixed the error manually
2059840 - [LSO]Could not gather logs for pod diskmaker-discovery and diskmaker-manager
2059943 - MetalLB: Move CI config files to metallb repo from dev-scripts repo
2060037 - Configure logging level of FRR containers
2060083 - CMO doesn't react to changes in clusteroperator console
2060091 - CMO produces invalid alertmanager statefulset if console cluster .status.consoleURL is unset
2060133 - [OVN RHEL upgrade] could not find IP addresses: failed to lookup link br-ex: Link not found
2060147 - RHEL8 Workers Need to Ensure libseccomp is up to date at install time
2060159 - LGW: External->Service of type ETP=Cluster doesn't go to the node
2060329 - Detect unsupported amount of workloads before rendering a lazy or crashing topology
2060334 - Azure VNET lookup fails when the NIC subnet is in a different resource group
2060361 - Unable to enumerate NICs due to missing the 'primary' field due to security restrictions
2060406 - Test 'operators should not create watch channels very often' fails
2060492 - Update PtpConfigSlave source-crs to use network_transport L2 instead of UDPv4
2060509 - Incorrect installation of ibmcloud vpc csi driver in IBM Cloud ROKS 4.10
2060532 - LSO e2e tests are run against default image and namespace
2060534 - openshift-apiserver pod in crashloop due to unable to reach kubernetes svc ip
2060549 - ErrorAddingLogicalPort: duplicate IP found in ECMP Pod route cache!
2060553 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
2060583 - Remove Console internal-kubevirt plugin SDK package
2060605 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060617 - IBMCloud destroy DNS regex not strict enough
2060687 - Azure Ci: SubscriptionDoesNotSupportZone - does not support availability zones at location 'westus'
2060697 - [AWS] partitionNumber cannot work for specifying Partition number
2060714 - [DOCS] Change source_labels to sourceLabels in "Configuring remote write storage" section
2060837 - [oc-mirror] Catalog merging error when two or more bundles does not have a set Replace field
2060894 - Preceding/Trailing Whitespaces In Form Elements on the add page
2060924 - Console white-screens while using debug terminal
2060968 - Installation failing due to ironic-agent.service not starting properly
2060970 - Bump recommended FCOS to 35.20220213.3.0
2061002 - Conntrack entry is not removed for LoadBalancer IP
2061301 - Traffic Splitting Dialog is Confusing With Only One Revision
2061303 - Cachito request failure with vendor directory is out of sync with go.mod/go.sum
2061304 - workload info gatherer - don't serialize empty images map
2061333 - White screen for Pipeline builder page
2061447 - [GSS] local pv's are in terminating state
2061496 - etcd RecentBackup=Unknown ControllerStarted contains no message string
2061527 - [IBMCloud] infrastructure asset missing CloudProviderType
2061544 - AzureStack is hard-coded to use Standard_LRS for the disk type
2061549 - AzureStack install with internal publishing does not create api DNS record
2061611 - [upstream] The marker of KubeBuilder doesn't work if it is close to the code
2061732 - Cinder CSI crashes when API is not available
2061755 - Missing breadcrumb on the resource creation page
2061833 - A single worker can be assigned to multiple baremetal hosts
2061891 - [IPI on IBMCLOUD] missing 'br-sao' region in openshift installer
2061916 - mixed ingress and egress policies can result in half-isolated pods
2061918 - Topology Sidepanel style is broken
2061919 - Egress Ip entry stays on node's primary NIC post deletion from hostsubnet
2062007 - MCC bootstrap command lacks template flag
2062126 - IPfailover pod is crashing during creation showing keepalived_script doesn't exist
2062151 - Add RBAC for 'infrastructures' to operator bundle
2062355 - kubernetes-nmstate resources and logs not included in must-gathers
2062459 - Ingress pods scheduled on the same node
2062524 - [Kamelet Sink] Topology crashes on click of Event sink node if the resource is created source to Uri over ref
2062558 - Egress IP with openshift sdn in not functional on worker node.
2062568 - CVO does not trigger new upgrade again after fail to update to unavailable payload
2062645 - configure-ovs: don't restart networking if not necessary
2062713 - Special Resource Operator(SRO) - No sro_used_nodes metric
2062849 - hw event proxy is not binding on ipv6 local address
2062920 - Project selector is too tall with only a few projects
2062998 - AWS GovCloud regions are recognized as the unknown regions
2063047 - Configuring a full-path query log file in CMO breaks Prometheus with the latest version of the operator
2063115 - ose-aws-efs-csi-driver has invalid dependency in go.mod
2063164 - metal-ipi-ovn-ipv6 Job Permafailing and Blocking OpenShift 4.11 Payloads: insights operator is not available
2063183 - DefragDialTimeout is set to low for large scale OpenShift Container Platform - Cluster
2063194 - cluster-autoscaler-default will fail when automated etcd defrag is running on large scale OpenShift Container Platform 4 - Cluster
2063321 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
2063324 - MCO template output directories created with wrong mode causing render failure in unprivileged container environments
2063375 - ptp operator upgrade from 4.9 to 4.10 stuck at pending due to service account requirements not met
2063414 - on OKD 4.10, when image-registry is enabled, the /etc/hosts entry is missing on some nodes
2063699 - Builds - Builds - Logs: i18n misses.
2063708 - Builds - Builds - Logs: translation correction needed.
2063720 - Metallb EBGP neighbor stuck in active until adding ebgp-multihop (directly connected neighbors)
2063732 - Workloads - StatefulSets : I18n misses
2063747 - When building a bundle, the push command fails because is passes a redundant "IMG=" on the the CLI
2063753 - User Preferences - Language - Language selection : Page refresh rquired to change the UI into selected Language.
2063756 - User Preferences - Applications - Insecure traffic : i18n misses
2063795 - Remove go-ovirt-client go.mod replace directive
2063829 - During an IPI install with the 4.10.4 installer on vSphere, getting "Check": platform.vsphere.network: Invalid value: "VLAN_3912": unable to find network provided"
2063831 - etcd quorum pods landing on same node
2063897 - Community tasks not shown in pipeline builder page
2063905 - PrometheusOperatorWatchErrors alert may fire shortly in case of transient errors from the API server
2063938 - sing the hard coded rest-mapper in library-go
2063955 - cannot download operator catalogs due to missing images
2063957 - User Management - Users : While Impersonating user, UI is not switching into user's set language
2064024 - SNO OCP upgrade with DU workload stuck at waiting for kube-apiserver static pod
2064170 - [Azure] Missing punctuation in the installconfig.controlPlane.platform.azure.osDisk explain
2064239 - Virtualization Overview page turns into blank page
2064256 - The Knative traffic distribution doesn't update percentage in sidebar
2064553 - UI should prefer to use the virtio-win configmap than v2v-vmware configmap for windows creation
2064596 - Fix the hubUrl docs link in pipeline quicksearch modal
2064607 - Pipeline builder makes too many (100+) API calls upfront
2064613 - [OCPonRHV]- after few days that cluster is alive we got error in storage operator
2064693 - [IPI][OSP] Openshift-install fails to find the shiftstack cloud defined in clouds.yaml in the current directory
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064705 - the alertmanagerconfig validation catches the wrong value for invalid field
2064744 - Errors trying to use the Debug Container feature
2064984 - Update error message for label limits
2065076 - Access monitoring Routes based on monitoring-shared-config creates wrong URL
2065160 - Possible leak of load balancer targets on AWS Machine API Provider
2065224 - Configuration for cloudFront in image-registry operator configuration is ignored & duration is corrupted
2065290 - CVE-2021-23648 sanitize-url: XSS
2065338 - VolumeSnapshot creation date sorting is broken
2065507 - `oc adm upgrade` should return ReleaseAccepted condition to show upgrade status.
2065510 - [AWS] failed to create cluster on ap-southeast-3
2065513 - Dev Perspective -> Project Dashboard shows Resource Quotas which are a bit misleading, and too many decimal places
2065547 - (release-4.11) Gather kube-controller-manager pod logs with garbage collector errors
2065552 - [AWS] Failed to install cluster on AWS ap-southeast-3 region due to image-registry panic error
2065577 - user with user-workload-monitoring-config-edit role can not create user-workload-monitoring-config configmap
2065597 - Cinder CSI is not configurable
2065682 - Remote write relabel config adds label __tmp_openshift_cluster_id__ to all metrics
2065689 - Internal Image registry with GCS backend does not redirect client
2065749 - Kubelet slowly leaking memory and pods eventually unable to start
2065785 - ip-reconciler job does not complete, halts node drain
2065804 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204
2065806 - stop considering Mint mode as supported on Azure
2065840 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2065893 - [4.11] Bootimage bump tracker
2066009 - CVE-2021-44906 minimist: prototype pollution
2066232 - e2e-aws-workers-rhel8 is failing on ansible check
2066418 - [4.11] Update channels information link is taking to a 404 error page
2066444 - The "ingress" clusteroperator's relatedObjects field has kind names instead of resource names
2066457 - Prometheus CI failure: 503 Service Unavailable
2066463 - [IBMCloud] failed to list DNS zones: Exactly one of ApiKey or RefreshToken must be specified
2066605 - coredns template block matches cluster API to loose
2066615 - Downstream OSDK still use upstream image for Hybird type operator
2066619 - The GitCommit of the `oc-mirror version` is not correct
2066665 - [ibm-vpc-block] Unable to change default storage class
2066700 - [node-tuning-operator] - Minimize wildcard/privilege Usage in Cluster and Local Roles
2066754 - Cypress reports for core tests are not captured
2066782 - Attached disk keeps in loading status when add disk to a power off VM by non-privileged user
2066865 - Flaky test: In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
2066886 - openshift-apiserver pods never going NotReady
2066887 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066889 - Dependabot alert: Path traversal in github.com/valyala/fasthttp
2066923 - No rule to make target 'docker-push' when building the SRO bundle
2066945 - SRO appends "arm64" instead of "aarch64" to the kernel name and it doesn't match the DTK
2067004 - CMO contains grafana image though grafana is removed
2067005 - Prometheus rule contains grafana though grafana is removed
2067062 - should update prometheus-operator resources version
2067064 - RoleBinding in Developer Console is dropping all subjects when editing
2067155 - Incorrect operator display name shown in pipelines quickstart in devconsole
2067180 - Missing i18n translations
2067298 - Console 4.10 operand form refresh
2067312 - PPT event source is lost when received by the consumer
2067384 - OCP 4.10 should be firing APIRemovedInNextEUSReleaseInUse for APIs removed in 1.25
2067456 - OCP 4.11 should be firing APIRemovedInNextEUSReleaseInUse and APIRemovedInNextReleaseInUse for APIs removed in 1.25
2067995 - Internal registries with a big number of images delay pod creation due to recursive SELinux file context relabeling
2068115 - resource tab extension fails to show up
2068148 - [4.11] /etc/redhat-release symlink is broken
2068180 - OCP UPI on AWS with STS enabled is breaking the Ingress operator
2068181 - Event source powered with kamelet type source doesn't show associated deployment in resources tab
2068490 - OLM descriptors integration test failing
2068538 - Crashloop back-off popover visual spacing defects
2068601 - Potential etcd inconsistent revision and data occurs
2068613 - ClusterRoleUpdated/ClusterRoleBindingUpdated Spamming Event Logs
2068908 - Manual blog link change needed
2069068 - reconciling Prometheus Operator Deployment failed while upgrading from 4.7.46 to 4.8.35
2069075 - [Alibaba 4.11.0-0.nightly] cluster storage component in Progressing state
2069181 - Disabling community tasks is not working
2069198 - Flaky CI test in e2e/pipeline-ci
2069307 - oc mirror hangs when processing the Red Hat 4.10 catalog
2069312 - extend rest mappings with 'job' definition
2069457 - Ingress operator has superfluous finalizer deletion logic for LoadBalancer-type services
2069577 - ConsolePlugin example proxy authorize is wrong
2069612 - Special Resource Operator (SRO) - Crash when nodeSelector does not match any nodes
2069632 - Not able to download previous container logs from console
2069643 - ConfigMaps leftovers while uninstalling SpecialResource with configmap
2069654 - Creating VMs with YAML on Openshift Virtualization UI is missing labels `flavor`, `os` and `workload`
2069685 - UI crashes on load if a pinned resource model does not exist
2069705 - prometheus target "serviceMonitor/openshift-metallb-system/monitor-metallb-controller/0" has a failure with "server returned HTTP status 502 Bad Gateway"
2069740 - On-prem loadbalancer ports conflict with kube node port range
2069760 - In developer perspective divider does not show up in navigation
2069904 - Sync upstream 1.18.1 downstream
2069914 - Application Launcher groupings are not case-sensitive
2069997 - [4.11] should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2070000 - Add warning alerts for installing standalone k8s-nmstate
2070020 - InContext doesn't work for Event Sources
2070047 - Kuryr: Prometheus when installed on the cluster shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured
2070160 - Copy-to-clipboard and <pre> elements cause display issues for ACM dynamic plugins
2070172 - SRO uses the chart's name as Helm release, not the SpecialResource's
2070181 - [MAPO] serverGroupName ignored
2070457 - Image vulnerability Popover overflows from the visible area
2070674 - [GCP] Routes get timed out and nonresponsive after creating 2K service routes
2070703 - some ipv6 network policy tests consistently failing
2070720 - [UI] Filter reset doesn't work on Pods/Secrets/etc pages and complete list disappears
2070731 - details switch label is not clickable on add page
2070791 - [GCP]Image registry are crash on cluster with GCP workload identity enabled
2070792 - service "openshift-marketplace/marketplace-operator-metrics" is not annotated with capability
2070805 - ClusterVersion: could not download the update
2070854 - cv.status.capabilities.enabledCapabilities doesn't show the day-2 enabled caps when there are errors on resources update
2070887 - Cv condition ImplicitlyEnabledCapabilities doesn't complain about the disabled capabilities which is previously enabled
2070888 - Cannot bind driver vfio-pci when apply sriovnodenetworkpolicy with type vfio-pci
2070929 - OVN-Kubernetes: EgressIP breaks access from a pod with EgressIP to other host networked pods on different nodes
2071019 - rebase vsphere csi driver 2.5
2071021 - vsphere driver has snapshot support missing
2071033 - conditionally relabel volumes given annotation not working - SELinux context match is wrong
2071139 - Ingress pods scheduled on the same node
2071364 - All image building tests are broken with " error: build error: attempting to convert BUILD_LOGLEVEL env var value "" to integer: strconv.Atoi: parsing "": invalid syntax
2071578 - Monitoring navigation should not be shown if monitoring is not available (CRC)
2071599 - RoleBidings are not getting updated
for ClusterRole in OpenShift Web Console\n2071614 - Updating EgressNetworkPolicy rejecting with error UnsupportedMediaType\n2071617 - remove Kubevirt extensions in favour of dynamic plugin\n2071650 - ovn-k ovn_db_cluster metrics are not exposed for SNO\n2071691 - OCP Console global PatternFly overrides adds padding to breadcrumbs\n2071700 - v1 events show \"Generated from\" message without the source/reporting component\n2071715 - Shows 404 on Environment nav in Developer console\n2071719 - OCP Console global PatternFly overrides link button whitespace\n2071747 - Link to documentation from the overview page goes to a missing link\n2071761 - Translation Keys Are Not Namespaced\n2071799 - Multus CNI should exit cleanly on CNI DEL when the API server is unavailable\n2071859 - ovn-kube pods spec.dnsPolicy should be Default\n2071914 - cloud-network-config-controller 4.10.5:  Error building cloud provider client, err: %vfailed to initialize Azure environment: autorest/azure: There is no cloud environment matching the name \"\"\n2071998 - Cluster-version operator should share details of signature verification when it fails in \u0027Force: true\u0027 updates\n2072106 - cluster-ingress-operator tests do not build on go 1.18\n2072134 - Routes are not accessible within cluster from hostnet pods\n2072139 - vsphere driver has permissions to create/update PV objects\n2072154 - Secondary Scheduler operator panics\n2072171 - Test \"[sig-network][Feature:EgressFirewall] EgressFirewall should have no impact outside its namespace [Suite:openshift/conformance/parallel]\" fails\n2072195 - machine api doesn\u0027t issue client cert when AWS DNS suffix missing\n2072215 - Whereabouts ip-reconciler should be opt-in and not required\n2072389 - CVO exits upgrade immediately rather than waiting for etcd backup\n2072439 - openshift-cloud-network-config-controller reports wrong range of IP addresses for Azure worker nodes\n2072455 - make bundle overwrites 
supported-nic-ids_v1_configmap.yaml\n2072570 - The namespace titles for operator-install-single-namespace test keep changing\n2072710 - Perfscale - pods time out waiting for OVS port binding (ovn-installed)\n2072766 - Cluster Network Operator stuck in CrashLoopBackOff when scheduled to same master\n2072780 - OVN kube-master does not clear NetworkUnavailableCondition on GCP BYOH Windows node\n2072793 - Drop \"Used Filesystem\" from \"Virtualization -\u003e Overview\"\n2072805 - Observe \u003e Dashboards: $__range variables cause PromQL query errors\n2072807 - Observe \u003e Dashboards: Missing `panel.styles` attribute for table panels causes JS error\n2072842 - (release-4.11) Gather namespace names with overlapping UID ranges\n2072883 - sometimes monitoring dashboards charts can not be loaded successfully\n2072891 - Update gcp-pd-csi-driver to 1.5.1;\n2072911 - panic observed in kubedescheduler operator\n2072924 - periodic-ci-openshift-release-master-ci-4.11-e2e-azure-techpreview-serial\n2072957 - ContainerCreateError loop leads to several thousand empty logfiles in the file system\n2072998 - update aws-efs-csi-driver to the latest version\n2072999 - Navigate from logs of selected Tekton task instead of last one\n2073021 - [vsphere] Failed to update OS on master nodes\n2073112 - Prometheus (uwm) externalLabels not showing always in alerts. \n2073113 - Warning is logged to the console: W0407 Defaulting of registry auth file to \"${HOME}/.docker/config.json\" is deprecated. \n2073176 - removing data in form does not remove data from yaml editor\n2073197 - Error in Spoke/SNO agent: Source image rejected: A signature was required, but no signature exists\n2073329 - Pipelines-plugin- Having different title for Pipeline Runs tab, on Pipeline Details page it\u0027s \"PipelineRuns\" and on Repository Details page it\u0027s \"Pipeline Runs\". 
2073373 - Update azure-disk-csi-driver to 1.16.0
2073378 - failed egressIP assignment - cloud-network-config-controller does not delete failed cloudprivateipconfig
2073398 - machine-api-provider-openstack does not clean up OSP ports after failed server provisioning
2073436 - Update azure-file-csi-driver to v1.14.0
2073437 - Topology performance: Firehose/useK8sWatchResources cache can return unexpected data format if isList differs on multiple calls
2073452 - [sig-network] pods should successfully create sandboxes by other - failed (add)
2073473 - [OVN SCALE][ovn-northd] Unnecessary SB record no-op changes added to SB transaction.
2073522 - Update ibm-vpc-block-csi-driver to v4.2.0
2073525 - Update vpc-node-label-updater to v4.1.2
2073901 - Installation failed due to etcd operator Err:DefragControllerDegraded: failed to dial endpoint https://10.0.0.7:2379 with maintenance client: context canceled
2073937 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for UMW
2073938 - APIRemovedInNextEUSReleaseInUse alert for runtimeclasses
2073945 - APIRemovedInNextEUSReleaseInUse alert for podsecuritypolicies
2073972 - Invalid retention time and invalid retention size should be validated at one place and have error log in one place for platform monitoring
2074009 - [OVN] ovn-northd doesn't clean Chassis_Private record after scale down to 0 a machineSet
2074031 - Admins should be able to tune garbage collector aggressiveness (GOGC) for kube-apiserver if necessary
2074062 - Node Tuning Operator(NTO) - Cloud provider profile rollback doesn't work well
2074084 - CMO metrics not visible in the OCP webconsole UI
2074100 - CRD filtering according to name broken
2074210 - asia-south2, australia-southeast2, and southamerica-west1 Missing from GCP regions
2074237 - oc new-app --image-stream flag behavior is unclear
2074243 - DefaultPlacement API allow empty enum value and remove default
2074447 - cluster-dashboard: CPU Utilisation iowait and steal
2074465 - PipelineRun fails in import from Git flow if "main" branch is default
2074471 - Cannot delete namespace with a LB type svc and Kuryr when ExternalCloudProvider is enabled
2074475 - [e2e][automation] kubevirt plugin cypress tests fail
2074483 - coreos-installer doesn't work on Dell machines
2074544 - e2e-metal-ipi-ovn-ipv6 failing due to recent CEO changes
2074585 - MCG standalone deployment page goes blank when the KMS option is enabled
2074606 - occm does not have permissions to annotate SVC objects
2074612 - Operator fails to install due to service name lookup failure
2074613 - nodeip-configuration container incorrectly attempts to relabel /etc/systemd/system
2074635 - Unable to start Web Terminal after deleting existing instance
2074659 - AWS installconfig ValidateForProvisioning always provides blank values to validate zone records
2074706 - Custom EC2 endpoint is not considered by AWS EBS CSI driver
2074710 - Transition to go-ovirt-client
2074756 - Namespace column provide wrong data in ClusterRole Details -> Rolebindings tab
2074767 - Metrics page show incorrect values due to metrics level config
2074807 - NodeFilesystemSpaceFillingUp alert fires even before kubelet GC kicks in
2074902 - `oc debug node/nodename -- chroot /host somecommand` should exit with non-zero when the sub-command failed
2075015 - etcd-guard connection refused event repeating pathologically (payload blocking)
2075024 - Metal upgrades permafailing on metal3 containers crash looping
2075050 - oc-mirror fails to calculate between two channels with different prefixes for the same version of OCP
2075091 - Symptom Detection.Undiagnosed panic detected in pod
2075117 - Developer catalog: Order dropdown (A-Z, Z-A) is miss-aligned (in a separate row)
2075149 - Trigger Translations When Extensions Are Updated
2075189 - Imports from dynamic-plugin-sdk lead to failed module resolution errors
2075459 - Set up cluster on aws with rootvolumn io2 failed due to no iops despite it being configured
2075475 - OVN-Kubernetes: egress router pod (redirect mode), access from pod on different worker-node (redirect) doesn't work
2075478 - Bump documentationBaseURL to 4.11
2075491 - nmstate operator cannot be upgraded on SNO
2075575 - Local Dev Env - Prometheus 404 Call errors spam the console
2075584 - improve clarity of build failure messages when using csi shared resources but tech preview is not enabled
2075592 - Regression - Top of the web terminal drawer is missing a stroke/dropshadow
2075621 - Cluster upgrade.[sig-mco] Machine config pools complete upgrade
2075647 - 'oc adm upgrade ...' POSTs ClusterVersion, clobbering any unrecognized spec properties
2075671 - Cluster Ingress Operator K8S API cache contains duplicate objects
2075778 - Fix failing TestGetRegistrySamples test
2075873 - Bump recommended FCOS to 35.20220327.3.0
2076193 - oc patch command for the liveness probe and readiness probe parameters of an OpenShift router deployment doesn't take effect
2076270 - [OCPonRHV] MachineSet scale down operation fails to delete the worker VMs
2076277 - [RFE] [OCPonRHV] Add storage domain ID value to Compute/ControlPlain section in the machine object
2076290 - PTP operator readme missing documentation on BC setup via PTP config
2076297 - Router process ignores shutdown signal while starting up
2076323 - OLM blocks all operator installs if an openshift-marketplace catalogsource is unavailable
2076355 - The KubeletConfigController wrongly process multiple confs for a pool after having kubeletconfig in bootstrap
2076393 - [VSphere] survey fails to list datacenters
2076521 - Nodes in the same zone are not updated in the right order
2076527 - Pipeline Builder: Make unnecessary tekton hub API calls when the user types 'too fast'
2076544 - Whitespace (padding) is missing after an PatternFly update, already in 4.10
2076553 - Project access view replace group ref with user ref when updating their Role
2076614 - Missing Events component from the SDK API
2076637 - Configure metrics for vsphere driver to be reported
2076646 - openshift-install destroy unable to delete PVC disks in GCP if cluster identifier is longer than 22 characters
2076793 - CVO exits upgrade immediately rather than waiting for etcd backup
2076831 - [ocp4.11]Mem/cpu high utilization by apiserver/etcd for cluster stayed 10 hours
2076877 - network operator tracker to switch to use flowcontrol.apiserver.k8s.io/v1beta2 instead v1beta1 to be deprecated in k8s 1.26
2076880 - OKD: add cluster domain to the uploaded vm configs so that 30-local-dns-prepender can use it
2076975 - Metric unset during static route conversion in configure-ovs.sh
2076984 - TestConfigurableRouteNoConsumingUserNoRBAC fails in CI
2077050 - OCP should default to pd-ssd disk type on GCP
2077150 - Breadcrumbs on a few screens don't have correct top margin spacing
2077160 - Update owners for openshift/cluster-etcd-operator
2077357 - [release-4.11] 200ms packet delay with OVN controller turn on
2077373 - Accessibility warning on developer perspective
2077386 - Import page shows untranslated values for the route advanced routing>security options (devconsole~Edge)
2077457 - failure in test case "[sig-network][Feature:Router] The HAProxy router should serve the correct routes when running with the haproxy config manager"
2077497 - Rebase etcd to 3.5.3 or later
2077597 - machine-api-controller is not taking the proxy configuration when it needs to reach the RHV API
2077599 - OCP should alert users if they are on vsphere version <7.0.2
2077662 - AWS Platform Provisioning Check incorrectly identifies record as part of domain of cluster
2077797 - LSO pods don't have any resource requests
2077851 - "make vendor" target is not working
2077943 - If there is a service with multiple ports, and the route uses 8080, when editing the 8080 port isn't replaced, but a random port gets replaced and 8080 still stays
2077994 - Publish RHEL CoreOS AMIs in AWS ap-southeast-3 region
2078013 - drop multipathd.socket workaround
2078375 - When using the wizard with template using data source the resulting vm use pvc source
2078396 - [OVN AWS] EgressIP was not balanced to another egress node after original node was removed egress label
2078431 - [OCPonRHV] - ERROR failed to instantiate provider "openshift/local/ovirt" to obtain schema: ERROR fork/exec
2078526 - Multicast breaks after master node reboot/sync
2078573 - SDN CNI -Fail to create nncp when vxlan is up
2078634 - CRI-O not killing Calico CNI stalled (zombie) processes.
2078698 - search box may not completely remove content
2078769 - Different not translated filter group names (incl. Secret, Pipeline, PIpelineRun)
2078778 - [4.11] oc get ValidatingWebhookConfiguration,MutatingWebhookConfiguration fails and caused 'apiserver panic'd...http2: panic serving xxx.xx.xxx.21:49748: cannot deep copy int' when AllRequestBodies audit-profile is used.
2078781 - PreflightValidation does not handle multiarch images
2078866 - [BM][IPI] Installation with bonds fail - DaemonSet "openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress
2078875 - OpenShift Installer fail to remove Neutron ports
2078895 - [OCPonRHV]-"cow" unsupported value in format field in install-config.yaml
2078910 - CNO spitting out ".spec.groups[0].rules[4].runbook_url: field not declared in schema"
2078945 - Ensure only one apiserver-watcher process is active on a node.
2078954 - network-metrics-daemon makes costly global pod list calls scaling per node
2078969 - Avoid update races between old and new NTO operands during cluster upgrades
2079012 - egressIP not migrated to correct workers after deleting machineset it was assigned
2079062 - Test for console demo plugin toast notification needs to be increased for ci testing
2079197 - [RFE] alert when more than one default storage class is detected
2079216 - Partial cluster update reference doc link returns 404
2079292 - containers prometheus-operator/kube-rbac-proxy violate PodSecurity
2079315 - (release-4.11) Gather ODF config data with Insights
2079422 - Deprecated 1.25 API call
2079439 - OVN Pods Assigned Same IP Simultaneously
2079468 - Enhance the waitForIngressControllerCondition for better CI results
2079500 - okd-baremetal-install uses fcos for bootstrap but rhcos for cluster
2079610 - Opeatorhub status shows errors
2079663 - change default image features in RBD storageclass
2079673 - Add flags to disable migrated code
2079685 - Storageclass creation page with "Enable encryption" is not displaying saved KMS connection details when vaulttenantsa details are available in csi-kms-details config
2079724 - cluster-etcd-operator - disable defrag-controller as there is unpredictable impact on large OpenShift Container Platform 4 - Cluster
2079788 - Operator restarts while applying the acm-ice example
2079789 - cluster drops ImplicitlyEnabledCapabilities during upgrade
2079803 - Upgrade-triggered etcd backup will be skip during serial upgrade
2079805 - Secondary scheduler operator should comply to restricted pod security level
2079818 - Developer catalog installation overlay (modal?) shows a duplicated padding
2079837 - [RFE] Hub/Spoke example with daemonset
2079844 - EFS cluster csi driver status stuck in AWSEFSDriverCredentialsRequestControllerProgressing with sts installation
2079845 - The Event Sinks catalog page now has a blank space on the left
2079869 - Builds for multiple kernel versions should be ran in parallel when possible
2079913 - [4.10] APIRemovedInNextEUSReleaseInUse alert for OVN endpointslices
2079961 - The search results accordion has no spacing between it and the side navigation bar.
2079965 - [rebase v1.24] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS [Suite:openshift/conformance/parallel] [Suite:k8s]
2080054 - TAGS arg for installer-artifacts images is not propagated to build images
2080153 - aws-load-balancer-operator-controller-manager pod stuck in ContainerCreating status
2080197 - etcd leader changes produce test churn during early stage of test
2080255 - EgressIP broken on AWS with OpenShiftSDN / latest nightly build
2080267 - [Fresh Installation] Openshift-machine-config-operator namespace is flooded with events related to clusterrole, clusterrolebinding
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080379 - Group all e2e tests as parallel or serial
2080387 - Visual connector not appear between the node if a node get created using "move connector" to a different application
2080416 - oc bash-completion problem
2080429 - CVO must ensure non-upgrade related changes are saved when desired payload fails to load
2080446 - Sync ironic images with latest bug fixes packages
2080679 - [rebase v1.24] [sig-cli] test failure
2080681 - [rebase v1.24] [sig-cluster-lifecycle] CSRs from machines that are not recognized by the cloud provider are not approved [Suite:openshift/conformance/parallel]
2080687 - [rebase v1.24] [sig-network][Feature:Router] tests are failing
2080873 - Topology graph crashes after update to 4.11 when Layout 2 (ColaForce) was selected previously
2080964 - Cluster operator special-resource-operator is always in Failing state with reason: "Reconciling simple-kmod"
2080976 - Avoid hooks config maps when hooks are empty
2081012 - [rebase v1.24] [sig-devex][Feature:OpenShiftControllerManager] TestAutomaticCreationOfPullSecrets [Suite:openshift/conformance/parallel]
2081018 - [rebase v1.24] [sig-imageregistry][Feature:Image] oc tag should work when only imagestreams api is available
2081021 - [rebase v1.24] [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources
2081062 - Unrevert RHCOS back to 8.6
2081067 - admin dev-console /settings/cluster should point out history may be excerpted
2081069 - [sig-network] pods should successfully create sandboxes by adding pod to network
2081081 - PreflightValidation "odd number of arguments passed as key-value pairs for logging" error
2081084 - [rebase v1.24] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed
2081087 - [rebase v1.24] [sig-auth] ServiceAccounts should allow opting out of API token automount
2081119 - `oc explain` output of default overlaySize is outdated
2081172 - MetallLB: YAML view in webconsole does not show all the available key value pairs of all the objects
2081201 - cloud-init User check for Windows VM refuses to accept capitalized usernames
2081447 - Ingress operator performs spurious updates in response to API's defaulting of router deployment's router container's ports' protocol field
2081562 - lifecycle.posStart hook does not have network connectivity.
2081685 - Typo in NNCE Conditions
2081743 - [e2e] tests failing
2081788 - MetalLB: the crds are not validated until metallb is deployed
2081821 - SpecialResourceModule CRD is not installed after deploying SRO operator using brew bundle image via OLM
2081895 - Use the managed resource (and not the manifest) for resource health checks
2081997 - disconnected insights operator remains degraded after editing pull secret
2082075 - Removing huge amount of ports takes a lot of time.
2082235 - CNO exposes a generic apiserver that apparently does nothing
2082283 - Transition to new oVirt Terraform provider
2082360 - OCP 4.10.4, CNI: SDN; Whereabouts IPAM: Duplicate IP address with bond-cni
2082380 - [4.10.z] customize wizard is crashed
2082403 - [LSO] No new build local-storage-operator-metadata-container created
2082428 - oc patch healthCheckInterval with invalid "5 s" to the ingress-controller successfully
2082441 - [UPI] aws-load-balancer-operator-controller-manager failed to get VPC ID in UPI on AWS
2082492 - [IPI IBM]Can't create image-registry-private-configuration secret with error "specified resource key credentials does not contain HMAC keys"
2082535 - [OCPonRHV]-workers are cloned when "clone: false" is specified in install-config.yaml
2082538 - apirequests limits of Cluster CAPI Operator are too low for GCP platform
2082566 - OCP dashboard fails to load when the query to Prometheus takes more than 30s to return
2082604 - [IBMCloud][x86_64] IBM VPC does not properly support RHCOS Custom Image tagging
2082667 - No new machines provisioned while machineset controller drained old nodes for change to machineset
2082687 - [IBM Cloud][x86_64][CCCMO] IBM x86_64 CCM using unsupported --port argument
2082763 - Cluster install stuck on the applying for operatorhub "cluster"
2083149 - "Update blocked" label incorrectly displays on new minor versions in the "Other available paths" modal
2083153 - Unable to use application credentials for Manila PVC creation on OpenStack
2083154 - Dynamic plugin sdk tsdoc generation does not render docs for parameters
2083219 - DPU network operator doesn't deal with c1... inteface names
2083237 - [vsphere-ipi] Machineset scale up process delay
2083299 - SRO does not fetch mirrored DTK images in disconnected clusters
2083445 - [FJ OCP4.11 Bug]: RAID setting during IPI cluster deployment fails if iRMC port number is specified
2083451 - Update external serivces URLs to console.redhat.com
2083459 - Make numvfs > totalvfs error message more verbose
2083466 - Failed to create clusters on AWS C2S/SC2S due to image-registry MissingEndpoint error
2083514 - Operator ignores managementState Removed
2083641 - OpenShift Console Knative Eventing ContainerSource generates wrong api version when pointed to k8s Service
2083756 - Linkify not upgradeable message on ClusterSettings page
2083770 - Release image signature manifest filename extension is yaml
2083919 - openshift4/ose-operator-registry:4.10.0 having security vulnerabilities
2083942 - Learner promotion can temporarily fail with rpc not supported for learner errors
2083964 - Sink resources dropdown is not persisted in form yaml switcher in event source creation form
2083999 - "--prune-over-size-limit" is not working as expected
2084079 - prometheus route is not updated to "path: /api" after upgrade from 4.10 to 4.11
2084081 - nmstate-operator installed cluster on POWER shows issues while adding new dhcp interface
2084124 - The Update cluster modal includes a broken link
2084215 - Resource configmap "openshift-machine-api/kube-rbac-proxy" is defined by 2 manifests
2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
2084280 - GCP API Checks Fail if non-required APIs are not enabled
2084288 - "alert/Watchdog must have no gaps or changes" failing after bump
2084292 - Access to dashboard resources is needed in dynamic plugin SDK
2084331 - Resource with multiple capabilities included unless all capabilities are disabled
2084433 - Podsecurity violation error getting logged for ingresscontroller during deployment.
2084438 - Change Ping source spec.jsonData (deprecated) field to spec.data
2084441 - [IPI-Azure]fail to check the vm capabilities in install cluster
2084459 - Topology list view crashes when switching from chart view after moving sink from knative service to uri
2084463 - 5 control plane replica tests fail on ephemeral volumes
2084539 - update azure arm templates to support customer provided vnet
2084545 - [rebase v1.24] cluster-api-operator causes all techpreview tests to fail
2084580 - [4.10] No cluster name sanity validation - cluster name with a dot (".") character
2084615 - Add to navigation option on search page is not properly aligned
2084635 - PipelineRun creation from the GUI for a Pipeline with 2 workspaces hardcode the PVC storageclass
2084732 - A special resource that was created in OCP 4.9 can't be deleted after an upgrade to 4.10
2085187 - installer-artifacts fails to build with go 1.18
2085326 - kube-state-metrics is tripping APIRemovedInNextEUSReleaseInUse
2085336 - [IPI-Azure] Fail to create the worker node which HyperVGenerations is V2 or V1 and vmNetworkingType is Accelerated
2085380 - [IPI-Azure] Incorrect error prompt validate VM image and instance HyperV gen match when install cluster
2085407 - There is no Edit link/icon for labels on Node details page
2085721 - customization controller image name is wrong
2086056 - Missing doc for OVS HW offload
2086086 - Update Cluster Sample Operator dependencies and libraries for OCP 4.11
2086092 - update kube to v.24
2086143 - CNO uses too much memory
2086198 - Cluster CAPI Operator creates unnecessary defaulting webhooks
2086301 - kubernetes nmstate pods are not running after creating instance
2086408 - Podsecurity violation error getting logged for externalDNS operand pods during deployment
2086417 - Pipeline created from add flow has GIT Revision as required field
2086437 - EgressQoS CRD not available
2086450 - aws-load-balancer-controller-cluster pod logged Podsecurity violation error during deployment
2086459 - oc adm inspect fails when one of resources not exist
2086461 - CNO probes MTU unnecessarily in Hypershift, making cluster startup take too long
2086465 - External identity providers should log login attempts in the audit trail
2086469 - No data about title 'API Request Duration by Verb - 99th Percentile' display on the dashboard 'API Performance'
2086483 - baremetal-runtimecfg k8s dependencies should be on a par with 1.24 rebase
2086505 - Update oauth-server images to be consistent with ART
2086519 - workloads must comply to restricted security policy
2086521 - Icons of Knative actions are not clearly visible on the context menu in the dark mode
2086542 - Cannot create service binding through drag and drop
2086544 - ovn-k master daemonset on hypershift shouldn't log token
2086546 - Service binding connector is not visible in the dark mode
2086718 - PowerVS destroy code does not work
2086728 - [hypershift] Move drain to controller
2086731 - Vertical pod autoscaler operator needs a 4.11 bump
2086734 - Update csi driver images to be consistent with ART
2086737 - cloud-provider-openstack rebase to kubernetes v1.24
2086754 - Cluster resource override operator needs a 4.11 bump
2086759 - [IPI] OCP-4.11 baremetal - boot partition is not mounted on temporary directory
2086791 - Azure: Validate UltraSSD instances in multi-zone regions
2086851 - pods with multiple external gateways may only be have ECMP routes for one gateway
2086936 - vsphere ipi should use cores by default instead of sockets
2086958 - flaky e2e in kube-controller-manager-operator TestPodDisruptionBudgetAtLimitAlert
2086959 - flaky e2e in kube-controller-manager-operator TestLogLevel
2086962 - oc-mirror publishes metadata with --dry-run when publishing to mirror
2086964 - oc-mirror fails on differential run when mirroring a package with multiple channels specified
2086972 - oc-mirror does not error invalid metadata is passed to the describe command
2086974 - oc-mirror does not work with headsonly for operator 4.8
2087024 - The oc-mirror result mapping.txt is not correct, can't be used by `oc image mirror` command
2087026 - DTK's imagestream is missing from OCP 4.11 payload
2087037 - Cluster Autoscaler should use K8s 1.24 dependencies
2087039 - Machine API components should use K8s 1.24 dependencies
2087042 - Cloud providers components should use K8s 1.24 dependencies
2087084 - remove unintentional nic support
2087103 - "Updating to release image" from 'oc' should point out that the cluster-version operator hasn't accepted the update
2087114 - Add simple-procfs-kmod in modprobe example in README.md
2087213 - Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2087271 - oc-mirror does not check for existing workspace when performing mirror2mirror synchronization
2087556 - Failed to render DPU ovnk manifests
2087579 - `--keep-manifest-list=true` does not work for `oc adm release new`, only pick up the linux/amd64 manifest from the manifest list
2087680 - [Descheduler] Sync with sigs.k8s.io/descheduler
2087684 - KCMO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087685 - KASO should not be able to apply LowUpdateSlowReaction from Default WorkerLatencyProfile
2087687 - MCO does not generate event when user applies Default -> LowUpdateSlowReaction WorkerLatencyProfile
2087764 - Rewrite the registry backend will hit error
2087771 - [tracker] NetworkManager 1.36.0 loses DHCP lease and doesn't try again
2087772 - Bindable badge causes some layout issues with the side panel of bindable operator backed services
2087942 - CNO references images that are divergent from ART
2087944 - KafkaSink Node visualized incorrectly
2087983 - remove etcd_perf before restore
2087993 - PreflightValidation many "msg":"TODO: preflight checks" in the operator log
2088130 - oc-mirror init does not allow for automated testing
2088161 - Match dockerfile image name with the name used in the release repo
2088248 - Create HANA VM does not use values from customized HANA templates
2088304 - ose-console: enable source containers for open source requirements
2088428 - clusteroperator/baremetal stays in progressing: Applying metal3 resources state on a fresh install
2088431 - AvoidBuggyIPs field of addresspool should be removed
2088483 - oc adm catalog mirror returns 0 even if there are errors
2088489 - Topology list does not allow selecting an application group anymore (again)
2088533 - CRDs for openshift.io should have subresource.status failes on sharedconfigmaps.sharedresource and sharedsecrets.sharedresource
2088535 - MetalLB: Enable debug log level for downstream CI
2088541 - Default CatalogSources in openshift-marketplace namespace keeps throwing pod security admission warnings `would violate PodSecurity "restricted:v1.24"`
2088561 - BMH unable to start inspection: File name too long
2088634 - oc-mirror does not fail when catalog is invalid
2088660 - Nutanix IPI installation inside container failed
2088663 - Better to change the default value of --max-per-registry to 6
2089163 - NMState CRD out of sync with code
2089191 - should remove grafana from cluster-monitoring-config configmap in hypershift cluster
2089224 - openshift-monitoring/cluster-monitoring-config configmap always revert to default setting
2089254 - CAPI operator: Rotate token secret if its older than 30 minutes
2089276 - origin tests for egressIP and azure fail
2089295 - [Nutanix]machine stuck in Deleting phase when delete a machineset whose replicas>=2 and machine is Provisioning phase on Nutanix
2089309 - [OCP 4.11] Ironic inspector image fails to clean disks that are part of a multipath setup if they are passive paths
2089334 - All cloud providers should use service account credentials
2089344 - Failed to deploy simple-kmod
2089350 - Rebase sdn to 1.24
2089387 - LSO not taking mpath. ignoring device
2089392 - 120 node baremetal upgrade from 4.9.29 --> 4.10.13 crashloops on machine-approver
2089396 - oc-mirror does not show pruned image plan
2089405 - New topology package shows gray build icons instead of green/red icons for builds and pipelines
2089419 - do not block 4.10 to 4.11 upgrades if an existing CSI driver is found. Instead, warn about presence of third party CSI driver
2089488 - Special resources are missing the managementState field
2089563 - Update Power VS MAPI to use api's from openshift/api repo
2089574 - UWM prometheus-operator pod can't start up due to no master node in hypershift cluster
2089675 - Could not move Serverless Service without Revision (or while starting?)
2089681 - [Hypershift] EgressIP doesn't work in hypershift guest cluster
2089682 - Installer expects all nutanix subnets to have a cluster reference which is not the case for e.g. overlay networks
2089687 - alert message of MCDDrainError needs to be updated for new drain controller
2089696 - CR reconciliation is stuck in daemonset lifecycle
2089716 - [4.11][reliability]one worker node became NotReady on which ovnkube-node pod's memory increased sharply
2089719 - acm-simple-kmod fails to build
2089720 - [Hypershift] ICSP doesn't work for the guest cluster
2089743 - acm-ice fails to deploy: helm chart does not appear to be a gzipped archive
2089773 - Pipeline status filter and status colors doesn't work correctly with non-english languages
2089775 - keepalived can keep ingress VIP on wrong node under certain circumstances
2089805 - Config duration metrics aren't exposed
2089827 - MetalLB CI - backward compatible tests are failing due to the order of delete
2089909 - PTP e2e testing not working on SNO cluster
2089918 - oc-mirror skip-missing still returns 404 errors when images do not exist
2089930 - Bump OVN to 22.06
2089933 - Pods do not post readiness status on termination
2089968 - Multus CNI daemonset should use hostPath mounts with type: directory
2089973 - bump libs to k8s 1.24 for OCP 4.11
2089996 - Unnecessary yarn install runs in e2e tests
2090017 - Enable source containers to meet open source requirements
2090049 - destroying GCP cluster which has a compute node without infra id in name would fail to delete 2 k8s firewall-rules and VPC network
2090092 - Will hit error if specify the channel not the latest
2090151 - [RHEL scale up] increase the wait time so that the node has enough time to get ready
2090178 - VM SSH command generated by UI points at api VIP
2090182 - [Nutanix]Create a machineset with invalid image, machine stuck in "Provisioning" phase
2090236 - Only reconcile annotations and status for clusters
2090266 - oc adm release extract is failing on mutli arch image
2090268 - [AWS EFS] Operator not getting installed successfully on Hypershift Guest cluster
2090336 - Multus
logging should be disabled prior to release\n2090343 - Multus debug logging should be enabled temporarily for debugging podsandbox creation failures. \n2090358 - Initiating drain log message is displayed before the drain actually starts\n2090359 - Nutanix mapi-controller: misleading error message when the failure is caused by wrong credentials\n2090405 - [tracker] weird port mapping with asymmetric traffic [rhel-8.6.0.z]\n2090430 - gofmt code\n2090436 - It takes 30min-60min to update the machine count in custom MachineConfigPools (MCPs) when a node is removed from the pool\n2090437 - Bump CNO to k8s 1.24\n2090465 - golang version mismatch\n2090487 - Change default SNO Networking Type and disallow OpenShiftSDN a supported networking Type\n2090537 - failure in ovndb migration when db is not ready in HA mode\n2090549 - dpu-network-operator shall be able to run on amd64 arch platform\n2090621 - Metal3 plugin does not work properly with updated NodeMaintenance CRD\n2090627 - Git commit and branch are empty in MetalLB log\n2090692 - Bump to latest 1.24 k8s release\n2090730 - must-gather should include multus logs. 
\n2090731 - nmstate deploys two instances of webhook on a single-node cluster\n2090751 - oc image mirror skip-missing flag does not skip images\n2090755 - MetalLB: BGPAdvertisement validation allows duplicate entries for ip pool selector, ip address pools, node selector and bgp peers\n2090774 - Add Readme to plugin directory\n2090794 - MachineConfigPool cannot apply a configuration after fixing the pods that caused a drain alert\n2090809 - gm.ClockClass  invalid syntax parse error in linux ptp daemon logs\n2090816 - OCP 4.8 Baremetal IPI installation failure: \"Bootstrap failed to complete: timed out waiting for the condition\"\n2090819 - oc-mirror does not catch invalid registry input when a namespace is specified\n2090827 - Rebase CoreDNS to 1.9.2 and k8s 1.24\n2090829 - Bump OpenShift router to k8s 1.24\n2090838 - Flaky test: ignore flapping host interface \u0027tunbr\u0027\n2090843 - addLogicalPort() performance/scale optimizations\n2090895 - Dynamic plugin nav extension \"startsWith\" property does not work\n2090929 - [etcd] cluster-backup.sh script has a conflict to use the \u0027/etc/kubernetes/static-pod-certs\u0027 folder if a custom API certificate is defined\n2090993 - [AI Day2] Worker node overview page crashes in Openshift console with TypeError\n2091029 - Cancel rollout action only appears when rollout is completed\n2091030 - Some BM may fail booting with default bootMode strategy\n2091033 - [Descheduler]: provide ability to override included/excluded namespaces\n2091087 - ODC Helm backend Owners file needs updates\n2091106 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2091142 - Dependabot alert: Unhandled exception in gopkg.in/yaml.v3\n2091167 - IPsec runtime enabling not work in hypershift\n2091218 - Update Dev Console Helm backend to use helm 3.9.0\n2091433 - Update AWS instance types\n2091542 - Error Loading/404 not found page shown after clicking \"Current namespace only\"\n2091547 - Internet connection test with proxy permanently 
fails\n2091567 - oVirt CSI driver should use latest go-ovirt-client\n2091595 - Alertmanager configuration can\u0027t use OpsGenie\u0027s entity field when AlertmanagerConfig is enabled\n2091599 - PTP Dual Nic  | Extend Events 4.11 - Up/Down master interface affects all the other interface in the same NIC accoording the events and metric\n2091603 - WebSocket connection restarts when switching tabs in WebTerminal\n2091613 - simple-kmod fails to build due to missing KVC\n2091634 - OVS 2.15 stops handling traffic once ovs-dpctl(2.17.2) is used against it\n2091730 - MCO e2e tests are failing with \"No token found in openshift-monitoring secrets\"\n2091746 - \"Oh no! Something went wrong\" shown after user creates MCP without \u0027spec\u0027\n2091770 - CVO gets stuck downloading an upgrade, with the version pod complaining about invalid options\n2091854 - clusteroperator status filter doesn\u0027t match all values in Status column\n2091901 - Log stream paused right after updating log lines in Web Console in OCP4.10\n2091902 - unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server has received too many requests and has asked us to try again later\n2091990 - wrong external-ids for ovn-controller lflow-cache-limit-kb\n2092003 - PR 3162 | BZ 2084450 - invalid URL schema for AWS causes tests to perma fail and break the cloud-network-config-controller\n2092041 - Bump cluster-dns-operator to k8s 1.24\n2092042 - Bump cluster-ingress-operator to k8s 1.24\n2092047 - Kube 1.24 rebase for cloud-network-config-controller\n2092137 - Search doesn\u0027t show all entries when name filter is cleared\n2092296 - Change Default MachineCIDR of Power VS Platform from 10.x to 192.168.0.0/16\n2092390 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2092395 - 
etcdHighNumberOfFailedGRPCRequests alerts with wrong results\n2092408 - Wrong icon is used in the virtualization overview permissions card\n2092414 - In virtualization overview \"running vm per templates\" template list can be improved\n2092442 - Minimum time between drain retries is not the expected one\n2092464 - marketplace catalog defaults to v4.10\n2092473 - libovsdb performance backports\n2092495 - ovn: use up to 4 northd threads in non-SNO clusters\n2092502 - [azure-file-csi-driver] Stop shipping a NFS StorageClass\n2092509 - Invalid memory address error if non existing caBundle is configured in DNS-over-TLS using ForwardPlugins\n2092572 - acm-simple-kmod chart should create the namespace on the spoke cluster\n2092579 - Don\u0027t retry pod deletion if objects are not existing\n2092650 - [BM IPI with Provisioning Network] Worker nodes are not provisioned: ironic-agent is stuck before writing into disks\n2092703 - Incorrect mount propagation information in container status\n2092815 - can\u0027t delete the unwanted image from registry by oc-mirror\n2092851 - [Descheduler]: allow to customize the LowNodeUtilization strategy thresholds\n2092867 - make repository name unique in acm-ice/acm-simple-kmod examples\n2092880 - etcdHighNumberOfLeaderChanges returns incorrect number of leadership changes\n2092887 - oc-mirror list releases command uses filter-options flag instead of filter-by-os\n2092889 - Incorrect updating of EgressACLs using direction \"from-lport\"\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n2092937 - WebScale: OVN-k8s forwarding to external-gw over the secondary interfaces failing\n2092966 - [OCP 4.11] [azure] /etc/udev/rules.d/66-azure-storage.rules missing from initramfs\n2093044 - Azure machine-api-provider-azure 
Availability Set Name Length Limit\n2093047 - Dynamic Plugins: Generated API markdown duplicates `checkAccess` and `useAccessReview` doc\n2093126 - [4.11] Bootimage bump tracker\n2093236 - DNS operator stopped reconciling after 4.10 to 4.11 upgrade | 4.11 nightly to 4.11 nightly upgrade\n2093288 - Default catalogs fails liveness/readiness probes\n2093357 - Upgrading sno spoke with acm-ice, causes the sno to get unreachable\n2093368 - Installer orphans FIPs created for LoadBalancer Services on `cluster destroy`\n2093396 - Remove node-tainting for too-small MTU\n2093445 - ManagementState reconciliation breaks SR\n2093454 - Router proxy protocol doesn\u0027t work with dual-stack (IPv4 and IPv6) clusters\n2093462 - Ingress Operator isn\u0027t reconciling the ingress cluster operator object\n2093586 - Topology: Ctrl+space opens the quick search modal, but doesn\u0027t close it again\n2093593 - Import from Devfile shows configuration options that shoudn\u0027t be there\n2093597 - Import: Advanced option sentence is splited into two parts and headlines has no padding\n2093600 - Project access tab should apply new permissions before it delete old ones\n2093601 - Project access page doesn\u0027t allow the user to update the settings twice (without manually reload the content)\n2093783 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.24\n2093797 - \u0027oc registry login\u0027 with serviceaccount function need update\n2093819 - An etcd member for a new machine was never added to the cluster\n2093930 - Gather console helm install  totals metric\n2093957 - Oc-mirror write dup metadata to registry backend\n2093986 - Podsecurity violation error getting logged for pod-identity-webhook\n2093992 - Cluster version operator acknowledges upgrade failing on periodic-ci-openshift-release-master-nightly-4.11-e2e-metal-ipi-upgrade-ovn-ipv6\n2094023 - Add Git Flow - Template Labels for Deployment show as DeploymentConfig\n2094024 - bump oauth-apiserver deps to 
include 1.23.1 k8s that fixes etcd blips\n2094039 - egressIP panics with nil pointer dereference\n2094055 - Bump coreos-installer for s390x Secure Execution\n2094071 - No runbook created for SouthboundStale alert\n2094088 - Columns in NBDB may never be updated by OVNK\n2094104 - Demo dynamic plugin image tests should be skipped when testing console-operator\n2094152 - Alerts in the virtualization overview status card aren\u0027t filtered\n2094196 - Add default and validating webhooks for Power VS MAPI\n2094227 - Topology: Create Service Binding should not be the last option (even under delete)\n2094239 - custom pool Nodes with 0 nodes are always populated in progress bar\n2094303 - If og is configured with sa, operator installation will be failed. \n2094335 - [Nutanix] - debug logs are enabled by default in machine-controller\n2094342 - apirequests limits of Cluster CAPI Operator are too low for Azure platform\n2094438 - Make AWS URL parsing more lenient for GetNodeEgressIPConfiguration\n2094525 - Allow automatic upgrades for efs operator\n2094532 - ovn-windows CI jobs are broken\n2094675 - PTP Dual Nic  | Extend Events 4.11 - when kill the phc2sys We have notification for the ptp4l physical master moved to free run\n2094694 - [Nutanix] No cluster name sanity validation - cluster name with a dot (\".\") character\n2094704 - Verbose log activated on kube-rbac-proxy in deployment prometheus-k8s\n2094801 - Kuryr controller keep restarting when handling IPs with leading zeros\n2094806 - Machine API oVrit component should use K8s 1.24 dependencies\n2094816 - Kuryr controller restarts when over quota\n2094833 - Repository overview page does not show default PipelineRun template for developer user\n2094857 - CloudShellTerminal loops indefinitely if DevWorkspace CR goes into failed state\n2094864 - Rebase CAPG to latest changes\n2094866 - oc-mirror does not always delete all manifests associated with an image during pruning\n2094896 - Run \u0027openshift-install agent 
create image\u0027 has segfault exception if cluster-manifests directory missing\n2094902 - Fix installer cross-compiling\n2094932 - MGMT-10403 Ingress should enable single-node cluster expansion on upgraded clusters\n2095049 - managed-csi StorageClass does not create PVs\n2095071 - Backend tests fails after devfile registry update\n2095083 - Observe \u003e Dashboards: Graphs may change a lot on automatic refresh\n2095110 - [ovn] northd container termination script must use bash\n2095113 - [ovnkube] bump to openvswitch2.17-2.17.0-22.el8fdp\n2095226 - Added changes to verify cloud connection and dhcpservices quota of a powervs instance\n2095229 - ingress-operator pod in CrashLoopBackOff in 4.11 after upgrade starting in 4.6 due to go panic\n2095231 - Kafka Sink sidebar in topology is empty\n2095247 - Event sink form doesn\u0027t show channel as sink until app is refreshed\n2095248 - [vSphere-CSI-Driver] does not report volume count limits correctly caused pod with multi volumes maybe schedule to not satisfied volume count node\n2095256 - Samples Owner needs to be Updated\n2095264 - ovs-configuration.service fails with Error: Failed to modify connection \u0027ovs-if-br-ex\u0027: failed to update connection: error writing to file \u0027/etc/NetworkManager/systemConnectionsMerged/ovs-if-br-ex.nmconnection\u0027\n2095362 - oVirt CSI driver operator should use latest go-ovirt-client\n2095574 - e2e-agnostic CI job fails\n2095687 - Debug Container shown for build logs and on click ui breaks\n2095703 - machinedeletionhooks doesn\u0027t work in vsphere cluster and BM cluster\n2095716 - New PSA component for Pod Security Standards enforcement is refusing openshift-operators ns\n2095756 - CNO panics with concurrent map read/write\n2095772 - Memory requests for ovnkube-master containers are over-sized\n2095917 - Nutanix set osDisk with diskSizeGB rather than diskSizeMiB\n2095941 - DNS Traffic not kept local to zone or node when Calico SDN utilized\n2096053 - Builder Image icons 
in Git Import flow are hard to see in Dark mode\n2096226 - crio fails to bind to tentative IP, causing service failure since RHOCS was rebased on RHEL 8.6\n2096315 - NodeClockNotSynchronising alert\u0027s severity should be critical\n2096350 - Web console doesn\u0027t display webhook errors for upgrades\n2096352 - Collect whole journal in gather\n2096380 - acm-simple-kmod references deprecated KVC example\n2096392 - Topology node icons are not properly visible in Dark mode\n2096394 - Add page Card items background color does not match with column background color in Dark mode\n2096413 - br-ex not created due to default bond interface having a different mac address than expected\n2096496 - FIPS issue on OCP SNO with RT Kernel via performance profile\n2096605 - [vsphere] no validation checking for diskType\n2096691 - [Alibaba 4.11] Specifying ResourceGroup id in install-config.yaml, New pv are still getting created to default ResourceGroups\n2096855 - `oc adm release new` failed with error when use  an existing  multi-arch release image as input\n2096905 - Openshift installer should not use the prism client embedded in nutanix terraform provider\n2096908 - Dark theme issue in pipeline builder, Helm rollback form, and Git import\n2097000 - KafkaConnections disappear from Topology after creating KafkaSink in Topology\n2097043 - No clean way to specify operand issues to KEDA OLM operator\n2097047 - MetalLB:  matchExpressions used in CR like L2Advertisement, BGPAdvertisement, BGPPeers allow duplicate entries\n2097067 - ClusterVersion history pruner does not always retain initial completed update entry\n2097153 - poor performance on API call to vCenter ListTags with thousands of tags\n2097186 - PSa autolabeling in 4.11 env upgraded from 4.10 does not work due to missing RBAC objects\n2097239 - Change Lower CPU limits for Power VS cloud\n2097246 - Kuryr: verify and unit jobs failing due to upstream OpenStack dropping py36 support\n2097260 - openshift-install create 
manifests failed for Power VS platform\n2097276 - MetalLB CI deploys the operator via manifests and not using the csv\n2097282 - chore: update external-provisioner to the latest upstream release\n2097283 - chore: update external-snapshotter to the latest upstream release\n2097284 - chore: update external-attacher to the latest upstream release\n2097286 - chore: update node-driver-registrar to the latest upstream release\n2097334 - oc plugin help shows \u0027kubectl\u0027\n2097346 - Monitoring must-gather doesn\u0027t seem to be working anymore in 4.11\n2097400 - Shared Resource CSI Driver needs additional permissions for validation webhook\n2097454 - Placeholder bug for OCP 4.11.0 metadata release\n2097503 - chore: rebase against latest external-resizer\n2097555 - IngressControllersNotUpgradeable: load balancer service has been modified; changes must be reverted before upgrading\n2097607 - Add Power VS support to Webhooks tests in actuator e2e test\n2097685 - Ironic-agent can\u0027t restart because of existing container\n2097716 - settings under httpConfig is dropped with AlertmanagerConfig v1beta1\n2097810 - Required Network tools missing for Testing e2e PTP\n2097832 - clean up unused IPv6DualStackNoUpgrade feature gate\n2097940 - openshift-install destroy cluster traps if vpcRegion not specified\n2097954 - 4.11 installation failed at monitoring and network clusteroperators with error \"conmon: option parsing failed: Unknown option --log-global-size-max\" making all jobs failing\n2098172 - oc-mirror does not validatethe registry in the storage config\n2098175 - invalid license in python-dataclasses-0.8-2.el8 spec\n2098177 - python-pint-0.10.1-2.el8 has unused Patch0 in spec file\n2098242 - typo in SRO specialresourcemodule\n2098243 - Add error check to Platform create for Power VS\n2098392 - [OCP 4.11] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2098508 - Control-plane-machine-set-operator report panic\n2098610 - No need to check the push 
permission with ?manifests-only option\n2099293 - oVirt cluster API provider should use latest go-ovirt-client\n2099330 - Edit application grouping is shown to user with view only access in a cluster\n2099340 - CAPI e2e tests for AWS are missing\n2099357 - ovn-kubernetes needs explicit RBAC coordination leases for 1.24 bump\n2099358 - Dark mode+Topology update: Unexpected selected+hover border and background colors for app groups\n2099528 - Layout issue: No spacing in delete modals\n2099561 - Prometheus returns HTTP 500 error on /favicon.ico\n2099582 - Format and update Repository overview content\n2099611 - Failures on etcd-operator watch channels\n2099637 - Should print error when use --keep-manifest-list\\xfalse for manifestlist image\n2099654 - Topology performance: Endless rerender loop when showing a Http EventSink (KameletBinding)\n2099668 - KubeControllerManager should degrade when GC stops working\n2099695 - Update CAPG after rebase\n2099751 - specialresourcemodule stacktrace while looping over build status\n2099755 - EgressIP node\u0027s mgmtIP reachability configuration option\n2099763 - Update icons for event sources and sinks in topology, Add page, and context menu\n2099811 - UDP Packet loss in OpenShift using IPv6 [upcall]\n2099821 - exporting a pointer for the loop variable\n2099875 - The speaker won\u0027t start if there\u0027s another component on the host listening on 8080\n2099899 - oc-mirror looks for layers in the wrong repository when searching for release images during publishing\n2099928 - [FJ OCP4.11 Bug]: Add unit tests to image_customization_test file\n2099968 - [Azure-File-CSI] failed to provisioning volume in ARO cluster\n2100001 - Sync upstream v1.22.0 downstream\n2100007 - Run bundle-upgrade failed from the traditional File-Based Catalog installed operator\n2100033 - OCP 4.11 IPI - Some csr remain \"Pending\" post deployment\n2100038 - failure to update special-resource-lifecycle table during update Event\n2100079 - SDN needs explicit 
RBAC coordination leases for 1.24 bump\n2100138 - release info --bugs has no differentiator between Jira and Bugzilla\n2100155 - kube-apiserver-operator should raise an alert when there is a Pod Security admission violation\n2100159 - Dark theme: Build icon for pending status is not inverted in topology sidebar\n2100323 - Sqlit-based catsrc cannot be ready due to \"Error: open ./db-xxxx: permission denied\"\n2100347 - KASO retains old config values when switching from Medium/Default to empty worker latency profile\n2100356 - Remove Condition tab and create option from console as it is deprecated in OSP-1.8\n2100439 - [gce-pd] GCE PD in-tree storage plugin tests not running\n2100496 - [OCPonRHV]-oVirt API returns affinity groups without a description field\n2100507 - Remove redundant log lines from obj_retry.go\n2100536 - Update API to allow EgressIP node reachability check\n2100601 - Update CNO to allow EgressIP node reachability check\n2100643 - [Migration] [GCP]OVN can not rollback to SDN\n2100644 - openshift-ansible FTBFS on RHEL8\n2100669 - Telemetry should not log the full path if it contains a username\n2100749 - [OCP 4.11] multipath support needs multipath modules\n2100825 - Update machine-api-powervs go modules to latest version\n2100841 - tiny openshift-install usability fix for setting KUBECONFIG\n2101460 - An etcd member for a new machine was never added to the cluster\n2101498 - Revert Bug 2082599: add upper bound to number of failed attempts\n2102086 - The base image is still 4.10 for operator-sdk 1.22\n2102302 - Dummy bug for 4.10 backports\n2102362 - Valid regions should be allowed in GCP install config\n2102500 - Kubernetes NMState pods can not evict due to PDB on an SNO cluster\n2102639 - Drain happens before other image-registry pod is ready to service requests, causing disruption\n2102782 - topolvm-controller get into CrashLoopBackOff few minutes after install\n2102834 - [cloud-credential-operator]container has runAsNonRoot and image will run as 
root\n2102947 - [VPA] recommender is logging errors for pods with init containers\n2103053 - [4.11] Backport Prow CI improvements from master\n2103075 - Listing secrets in all namespaces with a specific labelSelector does not work properly\n2103080 - br-ex not created due to default bond interface having a different mac address than expected\n2103177 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2103728 - Carry HAProxy patch \u0027BUG/MEDIUM: h2: match absolute-path not path-absolute for :path\u0027\n2103749 - MachineConfigPool is not getting updated\n2104282 - heterogeneous arch: oc adm extract encodes arch specific release payload pullspec rather than the manifestlisted pullspec\n2104432 - [dpu-network-operator] Updating images to be consistent with ART\n2104552 - kube-controller-manager operator 4.11.0-rc.0 degraded on disabled monitoring stack\n2104561 - 4.10 to 4.11 update: Degraded node: unexpected on-disk state: mode mismatch for file: \"/etc/crio/crio.conf.d/01-ctrcfg-pidsLimit\"; expected: -rw-r--r--/420/0644; received: ----------/0/0\n2104589 - must-gather namespace should have \"privileged\" 
warn and audit pod security labels besides enforce\n2104701 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes\n2104717 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2104727 - Bootstrap node should honor http proxy\n2104906 - Uninstall fails with Observed a panic: runtime.boundsError\n2104951 - Web console doesn\u0027t display webhook errors for upgrades\n2104991 - Completed pods may not be correctly cleaned up\n2105101 - NodeIP is used instead of EgressIP if egressPod is recreated within 60 seconds\n2105106 - co/node-tuning: Waiting for 15/72 Profiles to be applied\n2105146 - Degraded=True noise with: UpgradeBackupControllerDegraded: unable to retrieve cluster version, no completed update was found in cluster version status history\n2105167 - BuildConfig throws error when using a label with a / in it\n2105334 - vmware-vsphere-csi-driver-controller can\u0027t use host port error on e2e-vsphere-serial\n2105382 - Add a validation webhook for Nutanix machine provider spec in Machine API Operator\n2105468 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. \n2105937 - telemeter golangci-lint outdated blocking ART PRs that update to Go1.18\n2106051 - Unable to deploy acm-ice using latest SRO 4.11 build\n2106058 - vSphere defaults to SecureBoot on; breaks installation of out-of-tree drivers [4.11.0]\n2106062 - [4.11] Bootimage bump tracker\n2106116 - IngressController spec.tuningOptions.healthCheckInterval validation allows invalid values such as \"0abc\"\n2106163 - Samples ImageStreams vs. registry.redhat.io: unsupported: V2 schema 1 manifest digests are no longer supported for image pulls\n2106313 - bond-cni: backport bond-cni GA items to 4.11\n2106543 - Typo in must-gather release-4.10\n2106594 - crud/other-routes.spec.ts Cypress test failing at a high rate in CI\n2106723 - [4.11] Upgrade from 4.11.0-rc0 -\u003e 4.11.0-rc.1 failed. 
rpm-ostree status shows No space left on device\n2106855 - [4.11.z] externalTrafficPolicy=Local is not working in local gateway mode if ovnkube-node is restarted\n2107493 - ReplicaSet prometheus-operator-admission-webhook has timed out progressing\n2107501 - metallb greenwave tests failure\n2107690 - Driver Container builds fail with \"error determining starting point for build: no FROM statement found\"\n2108175 - etcd backup seems to not be triggered in 4.10.18--\u003e4.10.20 upgrade\n2108617 - [oc adm release] extraction of the installer against a manifestlisted payload referenced by tag leads to a bad release image reference\n2108686 - rpm-ostreed: start limit hit easily\n2110505 - [Upgrade]deployment openshift-machine-api/machine-api-operator has a replica failure FailedCreate\n2110715 - openshift-controller-manager(-operator) namespace should clear run-level annotations\n2111055 - dummy bug for 4.10.z bz2110938\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-25009\nhttps://access.redhat.com/security/cve/CVE-2018-25010\nhttps://access.redhat.com/security/cve/CVE-2018-25012\nhttps://access.redhat.com/security/cve/CVE-2018-25013\nhttps://access.redhat.com/security/cve/CVE-2018-25014\nhttps://access.redhat.com/security/cve/CVE-2018-25032\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-17541\nhttps://access.redhat.com/security/cve/CVE-2020-19131\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttp
s://access.redhat.com/security/cve/CVE-2020-28493\nhttps://access.redhat.com/security/cve/CVE-2020-35492\nhttps://access.redhat.com/security/cve/CVE-2020-36330\nhttps://access.redhat.com/security/cve/CVE-2020-36331\nhttps://access.redhat.com/security/cve/CVE-2020-36332\nhttps://access.redhat.com/security/cve/CVE-2021-3481\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3634\nhttps://access.redhat.com/security/cve/CVE-2021-3672\nhttps://access.redhat.com/security/cve/CVE-2021-3695\nhttps://access.redhat.com/security/cve/CVE-2021-3696\nhttps://access.redhat.com/security/cve/CVE-2021-3697\nhttps://access.redhat.com/security/cve/CVE-2021-3737\nhttps://access.redhat.com/security/cve/CVE-2021-4115\nhttps://access.redhat.com/security/cve/CVE-2021-4156\nhttps://access.redhat.com/security/cve/CVE-2021-4189\nhttps://access.redhat.com/security/cve/CVE-2021-20095\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-23177\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-23648\nhttps://access.redhat.com/security/cve/CVE-2021-25219\nhttps://access.redhat.com/security/cve/CVE-2021-31535\nhttps://access.redhat.com/security/cve/CVE-2021-31566\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-38185\nhttps://access.redhat.com/security/cve/CVE-2021-38593\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-41617\nhttps://access.redhat.com/security/cve/CVE-2021-42771\nhttps://access.redhat.com/security/cve/CVE-2021-43527\nhttps://access.redhat.com/security/
cve/CVE-2021-43818\nhttps://access.redhat.com/security/cve/CVE-2021-44225\nhttps://access.redhat.com/security/cve/CVE-2021-44906\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0778\nhttps://access.redhat.com/security/cve/CVE-2022-1012\nhttps://access.redhat.com/security/cve/CVE-2022-1215\nhttps://access.redhat.com/security/cve/CVE-2022-1271\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1621\nhttps://access.redhat.com/security/cve/CVE-2022-1629\nhttps://access.redhat.com/security/cve/CVE-2022-1706\nhttps://access.redhat.com/security/cve/CVE-2022-1729\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24407\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24903\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-26691\nhttps://access.redhat.com/security/cve/CVE-2022-26945\nhttps://access.redhat.com/security/cve/CVE-2022-27191\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-28733\nhttps://access.redhat.com/security/cve/CVE-2022-28734\nhttps://access.redhat.com/security/cve/CVE-2022-28735\nhttps://acces
s.redhat.com/security/cve/CVE-2022-28736\nhttps://access.redhat.com/security/cve/CVE-2022-28737\nhttps://access.redhat.com/security/cve/CVE-2022-29162\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-30321\nhttps://access.redhat.com/security/cve/CVE-2022-30322\nhttps://access.redhat.com/security/cve/CVE-2022-30323\nhttps://access.redhat.com/security/cve/CVE-2022-32250\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYvOfk9zjgjWX9erEAQhJ/w//UlbBGKBBFBAyfEmQf9Zu0yyv6MfZW0Zl\niO1qXVIl9UQUFjTY5ejerx7cP8EBWLhKaiiqRRjbjtj+w+ENGB4LLj6TEUrSM5oA\nYEmhnX3M+GUKF7Px61J7rZfltIOGhYBvJ+qNZL2jvqz1NciVgI4/71cZWnvDbGpa\n02w3Dn0JzhTSR9znNs9LKcV/anttJ3NtOYhqMXnN8EpKdtzQkKRazc7xkOTxfxyl\njRiER2Z0TzKDE6dMoVijS2Sv5j/JF0LRwetkZl6+oh8ehKh5GRV3lPg3eVkhzDEo\n/gp0P9GdLMHi6cS6uqcREbod//waSAa7cssgULoycFwjzbDK3L2c+wMuWQIgXJca\nRYuP6wvrdGwiI1mgUi/226EzcZYeTeoKxnHkp7AsN9l96pJYafj0fnK1p9NM/8g3\njBE/W4K8jdDNVd5l1Z5O0Nyxk6g4P8MKMe10/w/HDXFPSgufiCYIGX4TKqb+ESIR\nSuYlSMjoGsB4mv1KMDEUJX6d8T05lpEwJT0RYNdZOouuObYMtcHLpRQHH9mkj86W\npHdma5aGG/mTMvSMW6l6L05uT41Azm6fVimTv+E5WvViBni2480CVH+9RexKKSyL\nXcJX1gaLdo+72I/gZrtT+XE5tcJ3Sf5fmfsenQeY4KFum/cwzbM6y7RGn47xlEWB\nxBWKPzRxz0Q=9r0B\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. 
\n\nThe following packages have been upgraded to a later upstream version:\nnodejs (14.21.1), nodejs-nodemon (2.0.20). Bugs fixed (https://bugzilla.redhat.com/):\n\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address\n2142821 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.7.0.z]\n2150323 - CVE-2022-24999 express: \"qs\" prototype poisoning causes the hang of the node process\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. (BZ# 2033339)\n\n* Restore/backup shows up as Validation failed but the restore backup\nstatus in ACM shows success (BZ# 2034279)\n\n* Observability - OCP 311 node role are not displayed completely (BZ#\n2038650)\n\n* Documented uninstall procedure leaves many leftovers (BZ# 2041921)\n\n* infrastructure-operator pod crashes due to insufficient privileges in ACM\n2.5 (BZ# 2046554)\n\n* Acm failed to install due to some missing CRDs in operator (BZ# 2047463)\n\n* Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)\n\n* ACM home page now includes /home/ in url (BZ# 2051299)\n\n* proxy heading in Add Credential should be capitalized (BZ# 2051349)\n\n* ACM 2.5 tries to create new MCE instance when install on top of existing\nMCE 2.0 (BZ# 2051983)\n\n* Create Policy button does not work and user cannot use console to create\npolicy (BZ# 2053264)\n\n* No cluster information was displayed after a policyset was created (BZ#\n2053366)\n\n* Dynamic plugin update does not take effect in Firefox (BZ# 2053516)\n\n* Replicated policy should not be available when creating a Policy Set (BZ#\n2054431)\n\n* Placement section in Policy 
Set wizard does not reset when users click\n\"Back\" to re-configured placement (BZ# 2054433)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2028224 - RHACM 2.5.0 images\n2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)\n2028647 - Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller not working properly\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2033339 - create cluster pool -\u003e choose infra type , As a result infra providers disappear from UI. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.2 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity updates:\n\n* nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* containerd: Unprivileged pod may bind mount any privileged regular file\non disk (CVE-2021-43816)\n\n* minio-go: user privilege escalation in AddUser() admin API\n(CVE-2021-43858)\n\n* nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching\nANSI escape codes (CVE-2021-3807)\n\n* fastify-static: open redirect via an URL with double slash followed by a\ndomain (CVE-2021-22963)\n\n* moby: `docker cp` allows unexpected chmod of host file (CVE-2021-41089)\n\n* moby: data directory contains subdirectories with insufficiently\nrestricted permissions, which could lead to directory traversal\n(CVE-2021-41091)\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* node-fetch: Exposure of Sensitive Information to an Unauthorized Actor\n(CVE-2022-0235)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\nBug fixes:\n\n* Trying to create a new cluster on vSphere and no feedback, stuck in\n\"creating\" (Bugzilla #1937078)\n\n* The hyperlink of *ks cluster node cannot be opened when I want to check\nthe node (Bugzilla #2028100)\n\n* Unable to make SSH connection to a Bitbucket server (Bugzilla #2028196)\n\n* RHACM cannot deploy Helm Charts with version numbers starting with\nletters (e.g. 
v1.6.1) (Bugzilla #2028931)\n\n* RHACM 2.4.2 images (Bugzilla #2029506)\n\n* Git Application still appears in Application Table and Resources are\nStill Seen in Advanced Configuration Upon Deletion after Upgrade from 2.4.0\n(Bugzilla #2030005)\n\n* Namespace left orphaned after destroying the cluster (Bugzilla #2030379)\n\n* The results filtered through the filter contain some data that should not\nbe present in cluster page (Bugzilla #2034198)\n\n* Git over ssh doesn\u0027t use custom port set in url (Bugzilla #2036057)\n\n* The value of name label changed from clusterclaim name to cluster name\n(Bugzilla #2042223)\n\n* ACM configuration policies do not handle Limitrange or Quotas values\n(Bugzilla #2042545)\n\n* Cluster addons do not appear after upgrade from ACM 2.3.5 to ACM 2.3.6\n(Bugzilla #2050847)\n\n* The azure government regions were not list in the region drop down list\nwhen creating the cluster (Bugzilla #2051797)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2001668 - [DDF] normally, in the OCP web console, one sees a yaml of the secret, where at the bottom, the following is shown:\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2008592 - CVE-2021-41089 moby: `docker cp` allows unexpected chmod of host file\n2012909 - [DDF] We feel it would be beneficial to add a sub-section here referencing the reconcile options available to users when\n2015152 - CVE-2021-22963 fastify-static: open redirect via an URL with double slash followed by a domain\n2023448 - CVE-2021-41091 moby: data directory contains subdirectories with insufficiently restricted permissions, which could lead to directory traversal\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2028100 - The hyperlink of *ks cluster node can not be opened when I want to check the node\n2028196 - Unable to make SSH connection to a Bitbucket server\n2028931 - RHACM can not deploy Helm Charts with version numbers starting with letters (e.g. 
v1.6.1)\n2029506 - RHACM 2.4.2 images\n2030005 - Git Application still appears in Application Table and Resources are Still Seen in Advanced Configuration Upon Deletion  after Upgrade from 2.4.0\n2030379 - Namespace left orphaned after destroying the cluster\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2032957 - Missing AWX templates in ACM\n2034198 - The results filtered through the filter contain some data that should not be present in cluster page\n2036057 - git over ssh doesn\u0027t use custom port set in url\n2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API\n2039378 - Deploying CRD via Application does not update status in ACM console\n2041015 - The base domain did not updated when switch the provider credentials during create the cluster/cluster pool\n2042545 - ACM configuration policies do not handle Limitrange or Quotas values\n2043519 - \"apps.open-cluster-management.io/git-branch\" annotation should be mandatory\n2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2050847 - Cluster addons do not appear after upgrade from ACM 2.3.5 to ACM 2.3.6\n2051797 - the azure government regions were not list in the region drop down list when create the cluster\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature  authenticated user can obtain the privileges of the System account\n\n5. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] ?Update Repository? option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - ?Replication repository? wizard has no validation for name length\n2040695 - [MTC UI] ?Add Cluster? wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry ?xyz.com?\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the ?Last State? field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] ?Update cluster? 
button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster?s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI  drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC]  Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "db": "PACKETSTORM",
        "id": "168657"
      },
      {
        "db": "PACKETSTORM",
        "id": "168638"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "170429"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "166199"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-0235",
        "trust": 4.1
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319",
        "trust": 0.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168657",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166946",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170429",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166199",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2427",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3236",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5790",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2855",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6001",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3136",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5013",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.3344",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0115",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2010",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0903",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6316",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3977",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022032843",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022062931",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022032009",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022070643",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166812",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166983",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "169935",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "166516",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "168150",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-0235",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168638",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168042",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167459",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167679",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "PACKETSTORM",
        "id": "168657"
      },
      {
        "db": "PACKETSTORM",
        "id": "168638"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "170429"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "166199"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "id": "VAR-202201-0349",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-29T22:12:23.267000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "SSA-637483",
        "trust": 0.8,
        "url": "https://lists.debian.org/debian-lts-announce/2022/12/msg00007.html"
      },
      {
        "title": "node-fetch Repair measures for information disclosure vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqById.tag?id=177991"
      },
      {
        "title": "Red Hat: Moderate: nodejs:14 security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230050 - Security Advisory"
      },
      {
        "title": "Red Hat: CVE-2022-0235",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=CVE-2022-0235"
      },
      {
        "title": "Red Hat: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230612 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat Data Grid 8.4.0 security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228524 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.1.2.1 containers security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221739 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.10 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221715 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.4 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221681 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.4.2 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20220735 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20226156 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.8 security and container updates",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221083 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.3 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221476 - Security Advisory"
      },
      {
        "title": "IBM: Security Bulletin: IBM QRadar Assistant app for IBM QRadar SIEM includes components with multiple known vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=0c5e20c044e4005143b2303b28407553"
      },
      {
        "title": "IBM: Security Bulletin: Multiple security vulnerabilities are addressed with IBM Business Automation Manager Open Editions 8.0.1",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ac267c598ae2a2882a98ed5463cc028d"
      },
      {
        "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225483 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224956 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225392 - Security Advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225069 - Security Advisory"
      },
      {
        "title": "npcheck",
        "trust": 0.1,
        "url": "https://github.com/nodeshift/npcheck "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/Live-Hack-CVE/CVE-2022-0235 "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-601",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-200",
        "trust": 1.0
      },
      {
        "problemtype": "Open redirect (CWE-601) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.9,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235"
      },
      {
        "trust": 1.7,
        "url": "https://huntr.dev/bounties/d26ab655-38d6-48b3-be15-f9ad6b6ae6f7"
      },
      {
        "trust": 1.7,
        "url": "https://github.com/node-fetch/node-fetch/commit/36e47e8a6406185921e4985dcbeff140d73eaa10"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.7,
        "url": "https://lists.debian.org/debian-lts-announce/2022/12/msg00007.html"
      },
      {
        "trust": 0.8,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.8,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.8,
        "url": "https://bugzilla.redhat.com/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3977"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2427"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166983/red-hat-security-advisory-2022-1739-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170429/red-hat-security-advisory-2023-0050-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169935/red-hat-security-advisory-2022-8524-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0115"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.3344"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022062931"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166516/red-hat-security-advisory-2022-1083-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168657/red-hat-security-advisory-2022-6835-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022032843"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2010"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022032009"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022070643"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166946/red-hat-security-advisory-2022-1681-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168150/red-hat-security-advisory-2022-6156-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6316"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2855"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/node-fetch-information-disclosure-via-cookie-header-37787"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166812/red-hat-security-advisory-2022-1476-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6001"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5790"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0903"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5013"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3136"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166199/red-hat-security-advisory-2022-0735-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3236"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24771"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-24771"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-24772"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-0536"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-24785"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-44906"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-24450"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-43565"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-19131"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/errata/rhsa-2023:0050"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24773"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21724"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-26520"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24773"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21724"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-31129"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24772"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1365"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1365"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21803"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4115"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24450"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-41617"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24407"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35492"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23806"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-29810"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-26691"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3752"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4157"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3744"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-13974"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-45485"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3773"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4002"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43976"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-0941"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43389"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-44733"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4037"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-29154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-37159"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3772"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43858"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-0404"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3669"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3764"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-20322"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43056"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-41864"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4197"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3612"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-26401"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-27820"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3743"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1011"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4083"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-45486"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0322"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-4788"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0286"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0001"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-43816"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3759"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-21781"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0002"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-4203"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-42739"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3918"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3807"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/200.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.1,
        "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-assistant-app-for-ibm-qradar-siem-includes-components-with-multiple-known-vulnerabilities/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6835"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25647"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41269"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37136"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25647"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22569"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23647"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-37734"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0981"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23647"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41269"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25857"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37137"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-37137"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25857"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0981"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22569"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23913"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23437"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7746"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0722"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23436"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1650"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23437"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23913"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2458"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21363"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6813"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36518"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2458"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7746"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0722"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1650"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1681"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24723"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25636"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25636"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23555"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4028"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24723"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4115"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21803"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23555"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0613"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0613"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36084"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28327"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-44225"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36085"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32250"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27776"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1586"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43818"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27774"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36331"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26945"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38593"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20095"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1629"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2097"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3481"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3580"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24921"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38185"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23648"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/util-linux/util-linux/commit/eab90ef8d4f66394285e0cff1dfc0a27242c05aa"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-24370"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4156"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5069"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25313"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28733"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29162"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36330"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23772"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23177"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1621"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27782"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-14155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28736"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-19603"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30321"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42771"
      },
      {
        "trust": 0.1,
        "url": "https://10.0.0.7:2379"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21698"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1292"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-22576"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-17541"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3697"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13750"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36087"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1706"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20231"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28734"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.11/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-40528"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-13751"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30322"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-20232"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3695"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-25314"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-17595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1215"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-36086"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1729"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-36332"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2019-18218"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43527"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24903"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1012"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-23566"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2018-25013"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-31535"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28493"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23773"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-13435"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24675"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30323"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-43548"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24999"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3517"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-3517"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24999"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3743"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4157"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4037"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:4956"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3872"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4034"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4034"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4019"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4155"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4122"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3872"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4192"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22963"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-22963"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3984"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4193"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24407"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0185"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42574"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0185"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4155"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41091"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4193"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4122"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-42574"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41089"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41089"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41091"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43858"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43816"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-4192"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:0735"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4019"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3521"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5483"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "PACKETSTORM",
        "id": "168657"
      },
      {
        "db": "PACKETSTORM",
        "id": "168638"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "170429"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "166199"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "db": "PACKETSTORM",
        "id": "168657"
      },
      {
        "db": "PACKETSTORM",
        "id": "168638"
      },
      {
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "db": "PACKETSTORM",
        "id": "170429"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "166199"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-01-16T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "date": "2023-02-14T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "date": "2022-10-07T15:02:16",
        "db": "PACKETSTORM",
        "id": "168657"
      },
      {
        "date": "2022-10-06T12:37:43",
        "db": "PACKETSTORM",
        "id": "168638"
      },
      {
        "date": "2022-05-04T05:42:06",
        "db": "PACKETSTORM",
        "id": "166946"
      },
      {
        "date": "2022-08-10T15:56:22",
        "db": "PACKETSTORM",
        "id": "168042"
      },
      {
        "date": "2023-01-10T14:09:04",
        "db": "PACKETSTORM",
        "id": "170429"
      },
      {
        "date": "2022-06-09T16:11:52",
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "date": "2022-03-04T16:03:16",
        "db": "PACKETSTORM",
        "id": "166199"
      },
      {
        "date": "2022-07-01T15:04:32",
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "date": "2022-01-16T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      },
      {
        "date": "2022-01-16T17:15:07.870000",
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-02-03T00:00:00",
        "db": "VULMON",
        "id": "CVE-2022-0235"
      },
      {
        "date": "2023-02-14T04:12:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      },
      {
        "date": "2023-06-14T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      },
      {
        "date": "2024-11-21T06:38:12.150000",
        "db": "NVD",
        "id": "CVE-2022-0235"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "node-fetch\u00a0 Open redirect vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-003319"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "information disclosure",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202201-1383"
      }
    ],
    "trust": 0.6
  }
}

var-202207-0381
Vulnerability from variot

An OS Command Injection vulnerability exists in Node.js versions <14.20.0, <16.20.0, <18.5.0 due to an insufficient IsAllowedHost check that can easily be bypassed because IsIPAddress does not properly check whether an IP address is invalid before making DNS requests, allowing rebinding attacks. An OS command injection vulnerability exists in Node.js from the Node.js Foundation and in products from other vendors. Information may be obtained, information may be tampered with, and service operation may be interrupted (DoS). Node.js July 7th 2022 Security Releases: DNS rebinding in --inspect via invalid IP addresses. When an invalid IPv4 address is provided (for instance 10.0.2.555), browsers (such as Firefox) will make DNS requests to the DNS server, providing a vector for an attacker-controlled DNS server or a MITM who can spoof DNS responses to perform a rebinding attack and hence connect to the WebSocket debugger, allowing for arbitrary code execution. This is a bypass of CVE-2021-22884. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
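The invalid-address bypass described above can be sketched in a few lines. This is an illustrative Python sketch, not the Node.js source: `naive_is_ip` is a hypothetical stand-in for the flawed IsIPAddress-style logic, while the strict variant uses the standard `ipaddress` module.

```python
# Illustrative sketch (assumption: not actual Node.js code) of why an
# address like "10.0.2.555" slips past a loose IPv4 check but fails
# strict validation, opening the DNS-rebinding vector.
import ipaddress

def naive_is_ip(host: str) -> bool:
    # Hypothetical loose check: accepts any dotted-quad of digits
    # without range-checking each octet (555 > 255).
    parts = host.split(".")
    return len(parts) == 4 and all(p.isdigit() for p in parts)

def strict_is_ip(host: str) -> bool:
    # ipaddress.ip_address() raises ValueError on out-of-range octets.
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

print(naive_is_ip("10.0.2.555"))   # True: treated as an IP, host check bypassed
print(strict_is_ip("10.0.2.555"))  # False: rejected as a literal, so a browser
                                   # would instead resolve it via DNS
```

Because 10.0.2.555 is not a valid address literal, a browser handed that host name falls back to DNS resolution, which is exactly what lets an attacker-controlled DNS server answer and rebind the connection to the WebSocket debugger.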

====================================================================
Red Hat Security Advisory

Synopsis: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update Advisory ID: RHSA-2022:6389-01 Product: Red Hat Software Collections Advisory URL: https://access.redhat.com/errata/RHSA-2022:6389 Issue date: 2022-09-08 CVE Names: CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-33987 ==================================================================== 1. Summary:

An update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now available for Red Hat Software Collections.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64

  3. Description:

Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.

The following packages have been upgraded to a later upstream version: rh-nodejs14-nodejs (14.20.0).

Security Fix(es):

  • nodejs: DNS rebinding in --inspect via invalid IP addresses (CVE-2022-32212)

  • nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding (CVE-2022-32213)

  • nodejs: HTTP request smuggling due to improper delimiting of header fields (CVE-2022-32214)

  • nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding (CVE-2022-32215)

  • got: missing verification of requested URLs allows redirects to UNIX sockets (CVE-2022-33987)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
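The three Transfer-Encoding issues above are variants of the same ambiguity: a front end and back end disagreeing on where a request body ends. As a hedged sketch (not Node's actual http parser), a strict check of the kind these fixes tighten might flag any request whose framing is ambiguous:

```python
# Hedged sketch: flag HTTP/1.1 requests whose body framing is ambiguous,
# the precondition for request smuggling. Header names are assumed to be
# pre-lowercased; this is illustrative, not Node's llhttp logic.
def is_smuggling_risk(headers: dict) -> bool:
    te = headers.get("transfer-encoding", "")
    cl = headers.get("content-length")
    if te and cl is not None:
        # Both present: the two ends may pick different framings.
        return True
    if te and te.strip() != "chunked":
        # Anything but a plain "chunked" value (e.g. "chunked, identity")
        # is an obfuscation attempt and is rejected outright.
        return True
    return False

print(is_smuggling_risk({"transfer-encoding": "chunked", "content-length": "4"}))  # True
print(is_smuggling_risk({"transfer-encoding": "chunked"}))                         # False
```

Real parsers must additionally handle repeated headers and line folding, but the rule of thumb is the same: when Transfer-Encoding and Content-Length conflict, or Transfer-Encoding is anything but plain chunked, refuse the request rather than guess.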

Bug Fix(es):

  • rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

  5. Bugs fixed (https://bugzilla.redhat.com/):

2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets 2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses 2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding 2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields 2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding 2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]

  6. Package List:

Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):

Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

ppc64le: rh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm

s390x: rh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm

x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm

Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):

Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  7. References:

https://access.redhat.com/security/cve/CVE-2022-32212 https://access.redhat.com/security/cve/CVE-2022-32213 https://access.redhat.com/security/cve/CVE-2022-32214 https://access.redhat.com/security/cve/CVE-2022-32215 https://access.redhat.com/security/cve/CVE-2022-33987 https://access.redhat.com/security/updates/classification/#moderate

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

iQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/ ODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm VScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ bAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF IPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq +62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM 4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M 3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91 BYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI nBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX bcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz hGdWoRKL34w\xcePC -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 9) - aarch64, noarch, ppc64le, s390x, x86_64

-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512

Debian Security Advisory DSA-5326-1 security@debian.org https://www.debian.org/security/ Aron Xu January 24, 2023 https://www.debian.org/security/faq


Package : nodejs CVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548

Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.

For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.

We recommend that you upgrade your nodejs packages.

For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . ========================================================================== Ubuntu Security Notice USN-6491-1 November 21, 2023

nodejs vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 22.04 LTS
  • Ubuntu 20.04 LTS
  • Ubuntu 18.04 LTS (Available with Ubuntu Pro)

Summary:

Several security issues were fixed in Node.js.

Software Description: - nodejs: An open-source, cross-platform JavaScript runtime environment.

Details:

Axel Chong discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. (CVE-2022-32212)

Zeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213, CVE-2022-32214, CVE-2022-32215)

It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)

It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-43548)

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 22.04 LTS: libnode-dev 12.22.9~dfsg-1ubuntu3.2 libnode72 12.22.9~dfsg-1ubuntu3.2 nodejs 12.22.9~dfsg-1ubuntu3.2 nodejs-doc 12.22.9~dfsg-1ubuntu3.2

Ubuntu 20.04 LTS: libnode-dev 10.19.0~dfsg-3ubuntu1.3 libnode64 10.19.0~dfsg-3ubuntu1.3 nodejs 10.19.0~dfsg-3ubuntu1.3 nodejs-doc 10.19.0~dfsg-3ubuntu1.3

Ubuntu 18.04 LTS (Available with Ubuntu Pro): nodejs 8.10.0~dfsg-2ubuntu0.4+esm4 nodejs-dev 8.10.0~dfsg-2ubuntu0.4+esm4 nodejs-doc 8.10.0~dfsg-2ubuntu0.4+esm4

In general, a standard system update will make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29


                                       https://security.gentoo.org/

Severity: Low Title: Node.js: Multiple Vulnerabilities Date: May 08, 2024 Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614 ID: 202405-29


Synopsis

Multiple vulnerabilities have been discovered in Node.js.

Background

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.

Affected packages

Package Vulnerable Unaffected


net-libs/nodejs < 16.20.2 >= 16.20.2

Description

Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All Node.js 20 users should upgrade to the latest version:

# emerge --sync # emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"

All Node.js 18 users should upgrade to the latest version:

# emerge --sync # emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"

All Node.js 16 users should upgrade to the latest version:

# emerge --sync # emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"

References

[ 1 ] CVE-2020-7774 https://nvd.nist.gov/vuln/detail/CVE-2020-7774 [ 2 ] CVE-2021-3672 https://nvd.nist.gov/vuln/detail/CVE-2021-3672 [ 3 ] CVE-2021-22883 https://nvd.nist.gov/vuln/detail/CVE-2021-22883 [ 4 ] CVE-2021-22884 https://nvd.nist.gov/vuln/detail/CVE-2021-22884 [ 5 ] CVE-2021-22918 https://nvd.nist.gov/vuln/detail/CVE-2021-22918 [ 6 ] CVE-2021-22930 https://nvd.nist.gov/vuln/detail/CVE-2021-22930 [ 7 ] CVE-2021-22931 https://nvd.nist.gov/vuln/detail/CVE-2021-22931 [ 8 ] CVE-2021-22939 https://nvd.nist.gov/vuln/detail/CVE-2021-22939 [ 9 ] CVE-2021-22940 https://nvd.nist.gov/vuln/detail/CVE-2021-22940 [ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959 [ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960 [ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701 [ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712 [ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134 [ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135 [ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531 [ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532 [ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533 [ 19 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778 [ 20 ] CVE-2022-3602 https://nvd.nist.gov/vuln/detail/CVE-2022-3602 [ 21 ] CVE-2022-3786 https://nvd.nist.gov/vuln/detail/CVE-2022-3786 [ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824 [ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212 [ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213 [ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214 [ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215 [ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222 [ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255 [ 29 ] CVE-2022-35256 
https://nvd.nist.gov/vuln/detail/CVE-2022-35256 [ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948 [ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949 [ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548 [ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581 [ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582 [ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583 [ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584 [ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586 [ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587 [ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588 [ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589 [ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590 [ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002 [ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003 [ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004 [ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005 [ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006 [ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558 [ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202405-29

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0381",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.17.1"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.0.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.20.1"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.0.0"
      },
      {
        "model": "node.js",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "18.5.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.15.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.12.0"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "10.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "linux",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "debian",
        "version": "11.0"
      },
      {
        "model": "node.js",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.14.0"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "14.0.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "node.js",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "nodejs",
        "version": "16.13.0"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "37"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "gnu/linux",
        "scope": null,
        "trust": 0.8,
        "vendor": "debian",
        "version": null
      },
      {
        "model": "node.js",
        "scope": null,
        "trust": 0.8,
        "vendor": "node js",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      }
    ],
    "trust": 0.5
  },
  "cve": "CVE-2022-32212",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "HIGH",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "HIGH",
            "baseScore": 8.1,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.2,
            "id": "CVE-2022-32212",
            "impactScore": 5.9,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "High",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "High",
            "baseScore": 8.1,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2022-32212",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2022-32212",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2022-32212",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202207-684",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A OS Command Injection vulnerability exists in Node.js versions \u003c14.20.0, \u003c16.20.0, \u003c18.5.0 due to an insufficient IsAllowedHost check that can easily be bypassed because IsIPAddress does not properly check if an IP address is invalid before making DBS requests allowing rebinding attacks. Node.js Foundation of Node.js For products from other vendors, OS A command injection vulnerability exists.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. Node.js July 7th 2022 Security Releases: DNS rebinding in --inspect via invalid IP addresses. When an invalid IPv4 address is provided (for instance 10.0.2.555 is provided), browsers (such as Firefox) will make DNS requests to the DNS server, providing a vector for an attacker-controlled DNS server or a MITM who can spoof DNS responses to perform a rebinding attack and hence connect to the WebSocket debugger, allowing for arbitrary code execution. This is a bypass of CVE-2021-22884. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update\nAdvisory ID:       RHSA-2022:6389-01\nProduct:           Red Hat Software Collections\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:6389\nIssue date:        2022-09-08\nCVE Names:         CVE-2022-32212 CVE-2022-32213 CVE-2022-32214\n                   CVE-2022-32215 CVE-2022-33987\n====================================================================\n1. Summary:\n\nAn update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now\navailable for Red Hat Software Collections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nrh-nodejs14-nodejs (14.20.0). \n\nSecurity Fix(es):\n\n* nodejs: DNS rebinding in --inspect via invalid IP addresses\n(CVE-2022-32212)\n\n* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n(CVE-2022-32213)\n\n* nodejs: HTTP request smuggling due to improper delimiting of header\nfields (CVE-2022-32214)\n\n* nodejs: HTTP request smuggling due to incorrect parsing of multi-line\nTransfer-Encoding (CVE-2022-32215)\n\n* got: missing verification of requested URLs allows redirects to UNIX\nsockets (CVE-2022-33987)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets\n2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses\n2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding\n2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields\n2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nppc64le:\nrh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm\n\ns390x:\nrh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 
7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32212\nhttps://access.redhat.com/security/cve/CVE-2022-32213\nhttps://access.redhat.com/security/cve/CVE-2022-32214\nhttps://access.redhat.com/security/cve/CVE-2022-32215\nhttps://access.redhat.com/security/cve/CVE-2022-33987\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/\nODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm\nVScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ\nbAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF\nIPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq\n+62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM\n4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M\n3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91\nBYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI\nnBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX\nbcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz\nhGdWoRKL34w\\xcePC\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1                   security@debian.org\nhttps://www.debian.org/security/                                  Aron Xu\nJanuary 24, 2023                      https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage        : nodejs\nCVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. 
\n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. ==========================================================================\nUbuntu Security Notice USN-6491-1\nNovember 21, 2023\n\nnodejs vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS (Available with Ubuntu Pro)\n\nSummary:\n\nSeveral security issues were fixed in Node.js. \n\nSoftware Description:\n- nodejs: An open-source, cross-platform JavaScript runtime environment. \n\nDetails:\n\nAxel Chong discovered that Node.js incorrectly handled certain inputs. 
If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. (CVE-2022-32212)\n\nZeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213,\nCVE-2022-32214, CVE-2022-32215)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. 
(CVE-2022-43548)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n   libnode-dev                     12.22.9~dfsg-1ubuntu3.2\n   libnode72                       12.22.9~dfsg-1ubuntu3.2\n   nodejs                          12.22.9~dfsg-1ubuntu3.2\n   nodejs-doc                      12.22.9~dfsg-1ubuntu3.2\n\nUbuntu 20.04 LTS:\n   libnode-dev                     10.19.0~dfsg-3ubuntu1.3\n   libnode64                       10.19.0~dfsg-3ubuntu1.3\n   nodejs                          10.19.0~dfsg-3ubuntu1.3\n   nodejs-doc                      10.19.0~dfsg-3ubuntu1.3\n\nUbuntu 18.04 LTS (Available with Ubuntu Pro):\n   nodejs                          8.10.0~dfsg-2ubuntu0.4+esm4\n   nodejs-dev                      8.10.0~dfsg-2ubuntu0.4+esm4\n   nodejs-doc                      8.10.0~dfsg-2ubuntu0.4+esm4\n\nIn general, a standard system update will make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n    Title: Node.js: Multiple Vulnerabilities\n     Date: May 08, 2024\n     Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n       ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. 
\n\nAffected packages\n================\nPackage          Vulnerable    Unaffected\n---------------  ------------  ------------\nnet-libs/nodejs  \u003c 16.20.2     \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n      https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n      https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n      https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n      https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 
14 ] CVE-2021-39134\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n      https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n      https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n      https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n      https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n      https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n      https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n      
https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n      https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n      https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2022-32212"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32212"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2022-32212",
        "trust": 4.1
      },
      {
        "db": "HACKERONE",
        "id": "1632921",
        "trust": 2.4
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "168305",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169410",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168442",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "168358",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "170727",
        "trust": 0.7
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072639",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071338",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022072522",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071612",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022071827",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3586",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3488",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3487",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0997",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3505",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4101",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4681",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3673",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4136",
        "trust": 0.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-332410",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-684",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2022-32212",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "175817",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "178512",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "168359",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32212"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "id": "VAR-202207-0381",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-29T22:27:49.386000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-284",
        "trust": 1.0
      },
      {
        "problemtype": "CWE-78",
        "trust": 1.0
      },
      {
        "problemtype": "OS Command injection (CWE-78) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://hackerone.com/reports/1632921"
      },
      {
        "trust": 1.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212"
      },
      {
        "trust": 1.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-32212"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214"
      },
      {
        "trust": 0.7,
        "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/"
      },
      {
        "trust": 0.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213"
      },
      {
        "trust": 0.6,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2018-7160"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/"
      },
      {
        "trust": 0.6,
        "url": "https://www.debian.org/security/2023/dsa-5326"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/"
      },
      {
        "trust": 0.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
      },
      {
        "trust": 0.6,
        "url": "https://security.netapp.com/advisory/ntap-20220915-0001/"
      },
      {
        "trust": 0.6,
        "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2021-22884"
      },
      {
        "trust": 0.6,
        "url": "https://lists.debian.org/debian-lts-announce/2022/10/msg00006.html"
      },
      {
        "trust": 0.6,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3505"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168305/red-hat-security-advisory-2022-6389-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072522"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168442/red-hat-security-advisory-2022-6595-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/168358/red-hat-security-advisory-2022-6449-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0997"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4681"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022072639"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4101"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3673"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4136"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3487"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071827"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3586"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3488"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2022-32212/"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071612"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169410/red-hat-security-advisory-2022-6985-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338"
      },
      {
        "trust": 0.5,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32214"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32213"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-33987"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-32215"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2022-33987"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6389"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6985"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29244"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6595"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33502"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-7788"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29244"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2020-28469"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7788"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6449"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/nodejs"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/nodejs/12.22.9~dfsg-1ubuntu3.2"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-6491-1"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/nodejs/10.19.0~dfsg-3ubuntu1.3"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/glsa/202405-29"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6448"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32212"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2022-32212"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      },
      {
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-07T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "date": "2022-09-08T14:41:32",
        "db": "PACKETSTORM",
        "id": "168305"
      },
      {
        "date": "2022-10-18T22:30:49",
        "db": "PACKETSTORM",
        "id": "169410"
      },
      {
        "date": "2022-09-21T13:47:04",
        "db": "PACKETSTORM",
        "id": "168442"
      },
      {
        "date": "2022-09-13T15:43:41",
        "db": "PACKETSTORM",
        "id": "168358"
      },
      {
        "date": "2023-01-25T16:09:12",
        "db": "PACKETSTORM",
        "id": "170727"
      },
      {
        "date": "2023-11-21T16:00:44",
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "date": "2024-05-09T15:46:44",
        "db": "PACKETSTORM",
        "id": "178512"
      },
      {
        "date": "2022-09-13T15:43:55",
        "db": "PACKETSTORM",
        "id": "168359"
      },
      {
        "date": "2022-07-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      },
      {
        "date": "2022-07-14T15:15:08.237000",
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-09-07T08:25:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      },
      {
        "date": "2023-02-24T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      },
      {
        "date": "2023-02-23T20:15:12.057000",
        "db": "NVD",
        "id": "CVE-2022-32212"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "175817"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      }
    ],
    "trust": 0.7
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Node.js\u00a0Foundation\u00a0 of \u00a0Node.js\u00a0 in products from other multiple vendors \u00a0OS\u00a0 Command injection vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-013369"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "operating system commend injection",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202207-684"
      }
    ],
    "trust": 0.6
  }
}

var-202203-0664
Vulnerability from variot

BIND 9.11.0 -> 9.11.36, 9.12.0 -> 9.16.26, 9.17.0 -> 9.18.0. BIND Supported Preview Editions: 9.11.4-S1 -> 9.11.36-S1, 9.16.8-S1 -> 9.16.26-S1. Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records, leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by named if it needs to recurse for any reason, causing it to obtain and pass on potentially incorrect answers. (CVE-2021-25220) By flooding the target resolver with queries exploiting this flaw, an attacker can significantly impair the resolver's performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795) By spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38177) By spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38178).
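To make the affected ranges above easier to act on, here is a small illustrative Python sketch (not part of any advisory tooling; the function names are hypothetical) that checks whether a plain BIND version string falls inside one of the vulnerable ranges. It handles only standard-edition versions such as "9.16.26", not the "-S1" Supported Preview Edition suffixes:

```python
# Affected ranges taken from the advisory text above (inclusive bounds):
# 9.11.0 -> 9.11.36, 9.12.0 -> 9.16.26, 9.17.0 -> 9.18.0.
AFFECTED_RANGES = [
    ((9, 11, 0), (9, 11, 36)),
    ((9, 12, 0), (9, 16, 26)),
    ((9, 17, 0), (9, 18, 0)),
]

def parse_version(version: str) -> tuple:
    """Turn a dotted version like '9.16.26' into (9, 16, 26) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    """Return True if the version falls inside any affected range (inclusive)."""
    v = parse_version(version)
    return any(lo <= v <= hi for lo, hi in AFFECTED_RANGES)
```

Under these ranges, for example, 9.16.26 is affected while 9.16.27 (the release containing the fix) is not.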

For the oldstable distribution (buster), this problem has been fixed in version 1:9.11.5.P4+dfsg-5.1+deb10u7.

For the stable distribution (bullseye), this problem has been fixed in version 1:9.16.27-1~deb11u1.

We recommend that you upgrade your bind9 packages.

For the detailed security status of bind9 please refer to its security tracker page at: https://security-tracker.debian.org/tracker/bind9

Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/

Mailing list: debian-security-announce@lists.debian.org

-----BEGIN PGP SIGNATURE-----

-----END PGP SIGNATURE----- . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256

====================================================================
Red Hat Security Advisory

Synopsis:          Moderate: bind security update
Advisory ID:       RHSA-2023:0402-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:0402
Issue date:        2023-01-24
CVE Names:         CVE-2021-25220 CVE-2022-2795
====================================================================
1. Summary:

An update for bind is now available for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64

  3. Description:

The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly.

Security Fix(es):

  • bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)

  • bind: processing large delegations may severely degrade resolver performance (CVE-2022-2795)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

After installing the update, the BIND daemon (named) will be restarted automatically.

  5. Bugs fixed (https://bugzilla.redhat.com/):

2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability
2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance

  6. Package List:

Red Hat Enterprise Linux Client (v. 7):

Source: bind-9.11.4-26.P2.el7_9.13.src.rpm

noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm

x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm

Red Hat Enterprise Linux Client Optional (v. 7):

x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm

Red Hat Enterprise Linux ComputeNode (v. 7):

Source: bind-9.11.4-26.P2.el7_9.13.src.rpm

noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm

x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm

Red Hat Enterprise Linux ComputeNode Optional (v. 7):

x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm

Red Hat Enterprise Linux Server (v. 7):

Source: bind-9.11.4-26.P2.el7_9.13.src.rpm

noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm

ppc64: bind-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm

ppc64le: bind-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm

s390x: bind-9.11.4-26.P2.el7_9.13.s390x.rpm bind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm bind-utils-9.11.4-26.P2.el7_9.13.s390x.rpm

x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm

Red Hat Enterprise Linux Server Optional (v. 7):

ppc64: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm

ppc64le: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm

s390x: bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm

x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm

Red Hat Enterprise Linux Workstation (v. 7):

Source: bind-9.11.4-26.P2.el7_9.13.src.rpm

noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm

x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm

Red Hat Enterprise Linux Workstation Optional (v. 7):

x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  7. References:

https://access.redhat.com/security/cve/CVE-2021-25220
https://access.redhat.com/security/cve/CVE-2022-2795
https://access.redhat.com/security/updates/classification/#moderate

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2023 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1

-----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 9) - aarch64, noarch, ppc64le, s390x, x86_64

  3. Description:

The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. The dhcp packages provide a relay agent and ISC DHCP service required to enable and administer DHCP on a network. 8) - aarch64, ppc64le, s390x, x86_64


Gentoo Linux Security Advisory GLSA 202210-25

                                       https://security.gentoo.org/

Severity: Low
Title: ISC BIND: Multiple Vulnerabilities
Date: October 31, 2022
Bugs: #820563, #835439, #872206
ID: 202210-25


Synopsis

Multiple vulnerabilities have been discovered in ISC BIND, the worst of which could result in denial of service.

Affected packages

-------------------------------------------------------------------
 Package              /     Vulnerable     /            Unaffected
-------------------------------------------------------------------

1  net-dns/bind            < 9.16.33                 >= 9.16.33
2  net-dns/bind-tools      < 9.16.33                 >= 9.16.33

Description

Multiple vulnerabilities have been discovered in ISC BIND. Please review the CVE identifiers referenced below for details.

Impact

Please review the referenced CVE identifiers for details.

Workaround

There is no known workaround at this time.

Resolution

All ISC BIND users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-9.16.33"

All ISC BIND-tools users should upgrade to the latest version:

# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-tools-9.16.33"

References

[ 1 ] CVE-2021-25219 https://nvd.nist.gov/vuln/detail/CVE-2021-25219
[ 2 ] CVE-2021-25220 https://nvd.nist.gov/vuln/detail/CVE-2021-25220
[ 3 ] CVE-2022-0396 https://nvd.nist.gov/vuln/detail/CVE-2022-0396
[ 4 ] CVE-2022-2795 https://nvd.nist.gov/vuln/detail/CVE-2022-2795
[ 5 ] CVE-2022-2881 https://nvd.nist.gov/vuln/detail/CVE-2022-2881
[ 6 ] CVE-2022-2906 https://nvd.nist.gov/vuln/detail/CVE-2022-2906
[ 7 ] CVE-2022-3080 https://nvd.nist.gov/vuln/detail/CVE-2022-3080
[ 8 ] CVE-2022-38177 https://nvd.nist.gov/vuln/detail/CVE-2022-38177
[ 9 ] CVE-2022-38178 https://nvd.nist.gov/vuln/detail/CVE-2022-38178

Availability

This GLSA and any updates to it are available for viewing at the Gentoo Security Website:

https://security.gentoo.org/glsa/202210-25

Concerns?

Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.

License

Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).

The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.

https://creativecommons.org/licenses/by-sa/2.5

==========================================================================
Ubuntu Security Notice USN-5332-1
March 17, 2022

bind9 vulnerabilities

A security issue affects these releases of Ubuntu and its derivatives:

  • Ubuntu 21.10
  • Ubuntu 20.04 LTS
  • Ubuntu 18.04 LTS

Summary:

Several security issues were fixed in Bind.

Software Description:
- bind9: Internet Domain Name Server

Details:

Xiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind incorrectly handled certain bogus NS records when using forwarders. A remote attacker could possibly use this issue to manipulate cache results. (CVE-2021-25220)

It was discovered that Bind incorrectly handled certain crafted TCP streams. A remote attacker could possibly use this issue to cause Bind to consume resources, leading to a denial of service. This issue only affected Ubuntu 21.10. (CVE-2022-0396)
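Both issues above involve resolvers configured to use forwarders, the feature at the center of CVE-2021-25220. As a sketch of what such a configuration looks like, a minimal BIND named.conf options block is shown below; the forwarder addresses are illustrative examples, not values from this notice:

```
options {
    // Queries are relayed to these upstream resolvers. With "forward only"
    // the server never falls back to recursing itself, so any bogus NS
    // records cached from a malicious forwarder (the CVE-2021-25220
    // scenario) would otherwise steer later recursion to wrong servers.
    forwarders { 192.0.2.53; 198.51.100.53; };
    forward only;
};
```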

Update instructions:

The problem can be corrected by updating your system to the following package versions:

Ubuntu 21.10: bind9 1:9.16.15-1ubuntu1.2

Ubuntu 20.04 LTS: bind9 1:9.16.1-0ubuntu2.10

Ubuntu 18.04 LTS: bind9 1:9.11.3+dfsg-1ubuntu1.17

In general, a standard system update will make all the necessary changes.
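After updating, it can be useful to confirm that the installed package version is at least the fixed version listed above. A hedged sketch using GNU sort's version ordering as an approximation (on Debian/Ubuntu systems, `dpkg --compare-versions` is the authoritative comparison; the helper name here is our own):

```shell
# Hedged sketch: check whether an installed version string is at least the
# fixed version from the advisory. Uses sort -V (GNU coreutils) as a
# rough stand-in for dpkg version semantics; epochs like "1:" may differ.
is_patched() {
  # succeeds when $1 >= $2 under version ordering
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

if is_patched "9.16.27" "9.16.1"; then
  echo "bind9 is patched"
else
  echo "bind9 needs updating"
fi
# prints "bind9 is patched"
```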



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202203-0664",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "h700e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "34"
      },
      {
        "model": "bind",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.11.0"
      },
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h500e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "bind",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.12.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "bind",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.16.8"
      },
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "19.4"
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "20.4"
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "21.2"
      },
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "bind",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.17.0"
      },
      {
        "model": "junos",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "19.3"
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "22.1"
      },
      {
        "model": "bind",
        "scope": "lte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.18.0"
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "19.3"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "h300e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "22.2"
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "21.3"
      },
      {
        "model": "bind",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.11.37"
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "20.3"
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "20.2"
      },
      {
        "model": "bind",
        "scope": "gte",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.11.4"
      },
      {
        "model": "bind",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "isc",
        "version": "9.16.27"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "35"
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "21.4"
      },
      {
        "model": "fedora",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "fedoraproject",
        "version": "36"
      },
      {
        "model": "junos",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "juniper",
        "version": "21.1"
      },
      {
        "model": "bind",
        "scope": null,
        "trust": 0.8,
        "vendor": "isc",
        "version": null
      },
      {
        "model": "fedora",
        "scope": null,
        "trust": 0.8,
        "vendor": "fedora",
        "version": null
      },
      {
        "model": "esmpro/serveragent",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u65e5\u672c\u96fb\u6c17",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens reported these vulnerabilities to CISA.",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      }
    ],
    "trust": 0.6
  },
  "cve": "CVE-2021-25220",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "SINGLE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 4.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 8.0,
            "id": "CVE-2021-25220",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.1,
            "vectorString": "AV:N/AC:L/Au:S/C:N/I:P/A:N",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 5.0,
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2021-25220",
            "impactScore": null,
            "integrityImpact": "Partial",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.8,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 6.8,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 2.3,
            "id": "CVE-2021-25220",
            "impactScore": 4.0,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "CHANGED",
            "trust": 2.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:N/I:H/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 8.6,
            "baseSeverity": "High",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2021-25220",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Changed",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:H/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2021-25220",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "security-officer@isc.org",
            "id": "CVE-2021-25220",
            "trust": 1.0,
            "value": "MEDIUM"
          },
          {
            "author": "NVD",
            "id": "CVE-2021-25220",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202203-1514",
            "trust": 0.6,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-25220",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "BIND 9.11.0 -\u003e 9.11.36 9.12.0 -\u003e 9.16.26 9.17.0 -\u003e 9.18.0 BIND Supported Preview Editions: 9.11.4-S1 -\u003e 9.11.36-S1 9.16.8-S1 -\u003e 9.16.26-S1 Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by name if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. (CVE-2021-25220)\nBy flooding the target resolver with queries exploiting this flaw an attacker can significantly impair the resolver\u0027s performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795)\nBy spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38177)\nBy spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38178). \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1:9.11.5.P4+dfsg-5.1+deb10u7. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1:9.16.27-1~deb11u1. \n\nWe recommend that you upgrade your bind9 packages. 
\n\nFor the detailed security status of bind9 please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/bind9\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmI010UACgkQEMKTtsN8\nTjbp3xAAil38qfAIdNkaIxY2bauvTyZDWzr6KUjph0vzmLEoAFQ3bysVSGlCnZk9\nIgdyfPRWQ+Bjau1/dlhNYaTlnQajbeyvCXfJcjRRgtUDCp7abZcOcb1WDu8jWLGW\niRtKsvKKrTKkIou5LgDlyqZyf6OzjgRdwtm86GDPQiCaSEpmbRt+APj5tkIA9R1G\nELWuZsjbIraBU0TsNfOalgNpAWtSBayxKtWB69J8rxUV69JI194A4AJ0wm9SPpFV\nG/TzlyHp1dUZJRLNmZOZU/dq4pPsXzh9I4QCg1kJWsVHe2ycAJKho6hr5iy43fNl\nMuokfI9YnU6/9SjHrQAWp1X/6MYCR8NieJ933W89/Zb8eTjTZC8EQGo6fkA287G8\nglQOrJHMQyV+b97lT67+ioTHNzTEBXTih7ZDeC1TlLqypCNYhRF/ll0Hx/oeiJFU\nrbjh2Og9huhD5JH8z8YAvY2g81e7KdPxazuKJnQpxGutqddCuwBvyI9fovYrah9W\nbYD6rskLZM2x90RI2LszHisl6FV5k37PaczamlRqGgbbMb9YlnDFjJUbM8rZZgD4\n+8u/AkHq2+11pTtZ40NYt1gpdidmIC/gzzha2TfZCHMs44KPMMdH+Fid1Kc6/Cq8\nQygtL4M387J9HXUrlN7NDUOrDVuVqfBG+ve3i9GCZzYjwtajTAQ=\n=6st2\n-----END PGP SIGNATURE-----\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n====================================================================                   \nRed Hat Security Advisory\n\nSynopsis:          Moderate: bind security update\nAdvisory ID:       RHSA-2023:0402-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2023:0402\nIssue date:        2023-01-24\nCVE Names:         CVE-2021-25220 CVE-2022-2795\n====================================================================\n1. Summary:\n\nAn update for bind is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe Berkeley Internet Name Domain (BIND) is an implementation of the Domain\nName System (DNS) protocols. BIND includes a DNS server (named); a resolver\nlibrary (routines for applications to use when interfacing with DNS); and\ntools for verifying that the DNS server is operating correctly. \n\nSecurity Fix(es):\n\n* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)\n\n* bind: processing large delegations may severely degrade resolver\nperformance (CVE-2022-2795)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nAfter installing the update, the BIND daemon (named) will be restarted\nautomatically. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability\n2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance\n\n6. 
Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 
7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nppc64:\nbind-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.
s390x.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 7):\n\nppc64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s
390x.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 
7):\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25220\nhttps://access.redhat.com/security/cve/CVE-2022-2795\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY9AIs9zjgjWX9erEAQiz9BAAiQvmAQ5DWdOQbHHizPAHBnKnBtNBfCT3\niaAzKQ0Yrpk26N9cdrvcBJwdrHpI28VJ3eemFUxQFseUqtAErsgfL4QqnjPjQgsp\nU2qLPjqbzfOrbi1CuruMMIIbtxfwvsdic8OB9Zi7XzfZjWm2X4c6Ima+QXol6x9a\n8J2qdzCqhoYUXJgdpVK9nAAGsPtidcnqLYYIcTclJArp6uRSlEEk7EbNJvs2SAbj\nMUo5aq5BoVy2TkiMyqhT5voy6K8f4c7WbQYerNieps18541ZSr29fAzWBznr3Yns\ngE10Aaoa8uCxlaexFR8EahPVYe6wJAm6R62LBabEWChbzW0oxr7X2DdzX9eiOwl0\nwJT0n4GHoFsCGMa+v1yybkjHIUfiW25WC7bC4QDj4fjTpbicVlnttXhQJwCJK5bb\nPC27GE6qi7EqwHYJa/jPenbIG38mXj/r2bwIr1qYQMLjQ8BQIneShky3ZWE4l/jd\nzTMwGVal8ACBYdCALx/O9QNyzaO92xHLnKl3DIoqaQdjasIfGp/G6Xc1YggKyZAP\nVVtXPiOIbReBVNWiBXMH1ZEQeNon4su0/MbMWrmJpwvEzYeXkuWO98LZ4dlLVuim\nNG/dJ6RqzT6/aqRNVyOt5s4SLIQ5DrPXoPnZRUBsbpWhP6lxPhESKA0TUg5FYz33\neDGIrZR4jEY=azJw\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe Dynamic Host Configuration Protocol (DHCP) is a protocol that allows\nindividual devices on an IP network to get their own network configuration\ninformation, including an IP address, a subnet mask, and a broadcast\naddress. The dhcp packages provide a relay agent and ISC DHCP service\nrequired to enable and administer DHCP on a network. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory                           GLSA 202210-25\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n                                           https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n    Title: ISC BIND: Multiple Vulnerabilities\n     Date: October 31, 2022\n     Bugs: #820563, #835439, #872206\n       ID: 202210-25\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC BIND, the worst of\nwhich could result in denial of service. \n\nAffected packages\n=================\n\n    -------------------------------------------------------------------\n     Package              /     Vulnerable     /            Unaffected\n    -------------------------------------------------------------------\n  1  net-dns/bind               \u003c 9.16.33                  \u003e= 9.16.33\n  2  net-dns/bind-tools         \u003c 9.16.33                  \u003e= 9.16.33\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC BIND. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll ISC BIND users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-9.16.33\"\n\nAll ISC BIND-tools users should upgrade to the latest version:\n\n  # emerge --sync\n  # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-tools-9.16.33\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25219\n      https://nvd.nist.gov/vuln/detail/CVE-2021-25219\n[ 2 ] CVE-2021-25220\n      https://nvd.nist.gov/vuln/detail/CVE-2021-25220\n[ 3 ] CVE-2022-0396\n      https://nvd.nist.gov/vuln/detail/CVE-2022-0396\n[ 4 ] CVE-2022-2795\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2795\n[ 5 ] CVE-2022-2881\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2881\n[ 6 ] CVE-2022-2906\n      https://nvd.nist.gov/vuln/detail/CVE-2022-2906\n[ 7 ] CVE-2022-3080\n      https://nvd.nist.gov/vuln/detail/CVE-2022-3080\n[ 8 ] CVE-2022-38177\n      https://nvd.nist.gov/vuln/detail/CVE-2022-38177\n[ 9 ] CVE-2022-38178\n      https://nvd.nist.gov/vuln/detail/CVE-2022-38178\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-25\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
==========================================================================\nUbuntu Security Notice USN-5332-1\nMarch 17, 2022\n\nbind9 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in Bind. \n\nSoftware Description:\n- bind9: Internet Domain Name Server\n\nDetails:\n\nXiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind\nincorrectly handled certain bogus NS records when using forwarders. A\nremote attacker could possibly use this issue to manipulate cache results. \n(CVE-2021-25220)\n\nIt was discovered that Bind incorrectly handled certain crafted TCP\nstreams. A remote attacker could possibly use this issue to cause Bind to\nconsume resources, leading to a denial of service. This issue only affected\nUbuntu 21.10. (CVE-2022-0396)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n  bind9                           1:9.16.15-1ubuntu1.2\n\nUbuntu 20.04 LTS:\n  bind9                           1:9.16.1-0ubuntu2.10\n\nUbuntu 18.04 LTS:\n  bind9                           1:9.11.3+dfsg-1ubuntu1.17\n\nIn general, a standard system update will make all the necessary changes",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "170724"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "db": "PACKETSTORM",
        "id": "169846"
      },
      {
        "db": "PACKETSTORM",
        "id": "169745"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "166356"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      }
    ],
    "trust": 2.52
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-25220",
        "trust": 4.2
      },
      {
        "db": "SIEMENS",
        "id": "SSA-637483",
        "trust": 1.7
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-22-258-05",
        "trust": 1.5
      },
      {
        "db": "JVN",
        "id": "JVNVU99475301",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU98927070",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "170724",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169894",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169846",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169773",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169587",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166356",
        "trust": 0.7
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1150",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5750",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.4616",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1223",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1289",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2694",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1183",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1160",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022032124",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031701",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022031728",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514",
        "trust": 0.6
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-25220",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169261",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169745",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "166354",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "170724"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "db": "PACKETSTORM",
        "id": "169846"
      },
      {
        "db": "PACKETSTORM",
        "id": "169745"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "166356"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "id": "VAR-202203-0664",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-29T21:30:09.981000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "NV22-009",
        "trust": 0.8,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/API7U5E7SX7BAAVFNW366FFJGD6NZZKV/"
      },
      {
        "title": "Ubuntu Security Notice: USN-5332-2: Bind vulnerability",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5332-2"
      },
      {
        "title": "Red Hat: Moderate: dhcp security and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228385 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: bind security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20227790 - Security Advisory"
      },
      {
        "title": "Ubuntu Security Notice: USN-5332-1: Bind vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5332-1"
      },
      {
        "title": "Red Hat: Moderate: bind security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20228068 - Security Advisory"
      },
      {
        "title": "Red Hat: Moderate: bind security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20230402 - Security Advisory"
      },
      {
        "title": "Debian Security Advisories: DSA-5105-1 bind9 -- security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=16d84b908a424f50b3236db9219500e3"
      },
      {
        "title": "Arch Linux Issues: ",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-25220"
      },
      {
        "title": "Amazon Linux 2: ALAS2-2023-2001",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2023-2001"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-166",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-166"
      },
      {
        "title": "Amazon Linux 2022: ALAS2022-2022-138",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=ALAS2022-2022-138"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/Live-Hack-CVE/CVE-2021-25220 "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-444",
        "trust": 1.0
      },
      {
        "problemtype": "Lack of information (CWE-noinfo) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://kb.isc.org/v1/docs/cve-2021-25220"
      },
      {
        "trust": 1.8,
        "url": "https://security.gentoo.org/glsa/202210-25"
      },
      {
        "trust": 1.7,
        "url": "https://security.netapp.com/advisory/ntap-20220408-0001/"
      },
      {
        "trust": 1.7,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf"
      },
      {
        "trust": 1.7,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25220"
      },
      {
        "trust": 1.6,
        "url": "https://supportportal.juniper.net/s/article/2022-10-security-bulletin-junos-os-srx-series-cache-poisoning-vulnerability-in-bind-used-by-dns-proxy-cve-2021-25220?language=en_us"
      },
      {
        "trust": 1.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25220"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/"
      },
      {
        "trust": 1.0,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
      },
      {
        "trust": 0.9,
        "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.8,
        "url": "http://jvn.jp/vu/jvnvu98927070/index.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu99475301/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/"
      },
      {
        "trust": 0.7,
        "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169846/red-hat-security-advisory-2022-8385-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1223"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1289"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/isc-bind-spoofing-via-dns-forwarders-cache-poisoning-37754"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.4616"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169894/red-hat-security-advisory-2022-8068-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031728"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166356/ubuntu-security-notice-usn-5332-2.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1150"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1183"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1160"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169773/red-hat-security-advisory-2022-7643-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/170724/red-hat-security-advisory-2023-0402-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169587/gentoo-linux-security-advisory-202210-25.html"
      },
      {
        "trust": 0.6,
        "url": "https://cxsecurity.com/cveshow/cve-2021-25220/"
      },
      {
        "trust": 0.6,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5750"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022031701"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2694"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022032124"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0396"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.5,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.5,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.2,
        "url": "https://ubuntu.com/security/notices/usn-5332-2"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2795"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0396"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index"
      },
      {
        "trust": 0.2,
        "url": "https://ubuntu.com/security/notices/usn-5332-1"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/444.html"
      },
      {
        "trust": 0.1,
        "url": "https://github.com/live-hack-cve/cve-2021-25220"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://alas.aws.amazon.com/al2/alas-2023-2001.html"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/faq"
      },
      {
        "trust": 0.1,
        "url": "https://security-tracker.debian.org/tracker/bind9"
      },
      {
        "trust": 0.1,
        "url": "https://www.debian.org/security/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2023:0402"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2795"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:8068"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:8385"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7790"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7643"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38178"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2906"
      },
      {
        "trust": 0.1,
        "url": "https://bugs.gentoo.org."
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2881"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://security.gentoo.org/"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3080"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38177"
      },
      {
        "trust": 0.1,
        "url": "https://creativecommons.org/licenses/by-sa/2.5"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.1-0ubuntu2.10"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.15-1ubuntu1.2"
      },
      {
        "trust": 0.1,
        "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.11.3+dfsg-1ubuntu1.17"
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "170724"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "db": "PACKETSTORM",
        "id": "169846"
      },
      {
        "db": "PACKETSTORM",
        "id": "169745"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "166356"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "db": "PACKETSTORM",
        "id": "170724"
      },
      {
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "db": "PACKETSTORM",
        "id": "169846"
      },
      {
        "db": "PACKETSTORM",
        "id": "169745"
      },
      {
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "db": "PACKETSTORM",
        "id": "166356"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-03-23T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "date": "2022-05-12T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "date": "2022-03-28T19:12:00",
        "db": "PACKETSTORM",
        "id": "169261"
      },
      {
        "date": "2023-01-25T16:07:50",
        "db": "PACKETSTORM",
        "id": "170724"
      },
      {
        "date": "2022-11-16T16:09:16",
        "db": "PACKETSTORM",
        "id": "169894"
      },
      {
        "date": "2022-11-15T16:40:52",
        "db": "PACKETSTORM",
        "id": "169846"
      },
      {
        "date": "2022-11-08T13:44:36",
        "db": "PACKETSTORM",
        "id": "169745"
      },
      {
        "date": "2022-11-08T13:49:24",
        "db": "PACKETSTORM",
        "id": "169773"
      },
      {
        "date": "2022-10-31T14:50:53",
        "db": "PACKETSTORM",
        "id": "169587"
      },
      {
        "date": "2022-03-17T15:54:34",
        "db": "PACKETSTORM",
        "id": "166356"
      },
      {
        "date": "2022-03-17T15:54:20",
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "date": "2022-03-09T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      },
      {
        "date": "2022-03-23T13:15:07.680000",
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2022-11-28T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-25220"
      },
      {
        "date": "2022-09-20T06:12:00",
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      },
      {
        "date": "2023-07-24T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      },
      {
        "date": "2023-11-09T14:44:33.733000",
        "db": "NVD",
        "id": "CVE-2021-25220"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "166356"
      },
      {
        "db": "PACKETSTORM",
        "id": "166354"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      }
    ],
    "trust": 0.8
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "BIND\u00a0 Cache Pollution with Incorrect Records Vulnerability in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2022-001797"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "environmental issue",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202203-1514"
      }
    ],
    "trust": 0.6
  }
}

var-202411-0477
Vulnerability from variot

A vulnerability has been identified in SINEC INS (all versions < V1.0 SP2 Update 3). The affected application does not properly validate input sent to specific endpoints of its web API. This could allow an authenticated remote attacker with high privileges on the application to execute arbitrary code on the underlying OS. An OS command injection vulnerability exists in Siemens' SINEC INS. Exploitation may result in information disclosure, information tampering, and disruption of service operation (DoS).
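The flaw class described above — caller-supplied API input reaching an OS shell without validation — can be sketched generically. This is a hypothetical illustration of the vulnerability pattern and a common mitigation (allow-list validation plus argv-style invocation); it is not SINEC INS code, and the function names are invented:

```python
def build_ping_unsafe(host: str) -> str:
    # VULNERABLE pattern: the caller-supplied value is spliced into a
    # shell command line, so a value like "example.com; cat /etc/passwd"
    # injects a second command when the string is run through a shell.
    return f"ping -c 1 {host}"


def build_ping_safe(host: str) -> list[str]:
    # Mitigation sketch: allow-list the expected characters, then pass
    # the value as a single argv element (e.g. to subprocess.run with
    # shell=False) so no shell ever interprets metacharacters in it.
    if not host or not all(c.isalnum() or c in ".-" for c in host):
        raise ValueError(f"invalid hostname: {host!r}")
    return ["ping", "-c", "1", host]
```

A web API handler following the safe pattern would validate the parameter and execute the command without a shell, which is the general remediation for CWE-78-style issues like the one described here.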

Show details on source website


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202411-0477",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "cve": "CVE-2024-46890",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "HIGH",
            "baseScore": 9.1,
            "baseSeverity": "CRITICAL",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 2.3,
            "id": "CVE-2024-46890",
            "impactScore": 6.0,
            "integrityImpact": "HIGH",
            "privilegesRequired": "HIGH",
            "scope": "CHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "OTHER",
            "availabilityImpact": "High",
            "baseScore": 9.1,
            "baseSeverity": "Critical",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "JVNDB-2024-012756",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "High",
            "scope": "Changed",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2024-46890",
            "trust": 1.0,
            "value": "Critical"
          },
          {
            "author": "OTHER",
            "id": "JVNDB-2024-012756",
            "trust": 0.8,
            "value": "Critical"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly validate input sent to specific endpoints of its web API. This could allow an authenticated remote attacker with high privileges on the application to execute arbitrary code on the underlying OS. Siemens\u0027 SINEC INS for, OS A command injection vulnerability exists.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2024-46890"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2024-46890",
        "trust": 2.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-915275",
        "trust": 1.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-319-08",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96191615",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "id": "VAR-202411-0477",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-16T21:48:39.542000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-78",
        "trust": 1.0
      },
      {
        "problemtype": "OS Command injection (CWE-78) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96191615/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2024-46890"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-319-08"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "date": "2024-11-12T13:15:09.463000",
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T06:46:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      },
      {
        "date": "2024-11-13T23:12:39.993000",
        "db": "NVD",
        "id": "CVE-2024-46890"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens\u0027 \u00a0SINEC\u00a0INS\u00a0 In \u00a0OS\u00a0 Command injection vulnerability",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012756"
      }
    ],
    "trust": 0.8
  }
}

var-202411-0478
Vulnerability from variot

A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application uses hard-coded cryptographic key material to obfuscate configuration files. This could allow an attacker to recover that cryptographic key material through reverse engineering of the application binary and decrypt arbitrary backup files. Siemens' SINEC INS contains a vulnerability related to the use of hard-coded encryption keys. Information may be disclosed.
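The weakness (CWE-321) is that a key embedded in the application binary protects nothing: anyone who extracts it can decrypt every backup exactly as the application would. The sketch below is a hypothetical stand-in — the key, format, and cipher are invented for illustration and are not the actual SINEC INS scheme:

```python
from itertools import cycle

# Hypothetical hard-coded key: recoverable by anyone who can reverse
# engineer the shipped binary, which is exactly the problem.
HARDCODED_KEY = b"example-static-key"

def obfuscate(data: bytes, key: bytes = HARDCODED_KEY) -> bytes:
    # Repeating-key XOR stands in for whatever fixed-key cipher an
    # application might use; the flaw is the fixed key, not the cipher.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def deobfuscate(blob: bytes, key: bytes = HARDCODED_KEY) -> bytes:
    return obfuscate(blob, key)  # XOR is its own inverse

# An attacker holding the binary recovers the same key and thus the data:
backup = obfuscate(b"admin_password=secret")
assert deobfuscate(backup) == b"admin_password=secret"
```

The usual remediation is to derive the key from per-installation material (e.g. a user-supplied passphrase via a KDF) rather than shipping one key to every customer.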



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202411-0478",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "siemens",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": "1.0"
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "cve": "CVE-2024-46889",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "productcert@siemens.com",
            "availabilityImpact": "NONE",
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "confidentialityImpact": "LOW",
            "exploitabilityScore": 3.9,
            "id": "CVE-2024-46889",
            "impactScore": 1.4,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "OTHER",
            "availabilityImpact": "None",
            "baseScore": 5.3,
            "baseSeverity": "Medium",
            "confidentialityImpact": "Low",
            "exploitabilityScore": null,
            "id": "JVNDB-2024-012786",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "productcert@siemens.com",
            "id": "CVE-2024-46889",
            "trust": 1.0,
            "value": "Medium"
          },
          {
            "author": "OTHER",
            "id": "JVNDB-2024-012786",
            "trust": 0.8,
            "value": "Medium"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application uses hard-coded cryptographic key material to obfuscate configuration files. This could allow an attacker to learn that cryptographic key material through reverse engineering of the application binary and decrypt arbitrary backup files. Siemens\u0027 SINEC INS contains a vulnerability related to the use of hardcoded encryption keys.Information may be obtained",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2024-46889"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      }
    ],
    "trust": 1.62
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2024-46889",
        "trust": 2.6
      },
      {
        "db": "SIEMENS",
        "id": "SSA-915275",
        "trust": 1.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-24-319-08",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU96191615",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "id": "VAR-202411-0478",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VARIoT devices database",
        "id": null
      }
    ],
    "trust": 0.20766129
  },
  "last_update_date": "2024-11-16T22:16:53.347000Z",
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-321",
        "trust": 1.0
      },
      {
        "problemtype": "Using hardcoded encryption keys (CWE-321) [ others ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 1.8,
        "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu96191615/"
      },
      {
        "trust": 0.8,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2024-46889"
      },
      {
        "trust": 0.8,
        "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-319-08"
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "date": "2024-11-12T13:15:09.200000",
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2024-11-15T07:58:00",
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      },
      {
        "date": "2024-11-13T23:11:58.763000",
        "db": "NVD",
        "id": "CVE-2024-46889"
      }
    ]
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Siemens\u0027 \u00a0SINEC\u00a0INS\u00a0 Vulnerability related to the use of hard-coded encryption keys in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2024-012786"
      }
    ],
    "trust": 0.8
  }
}

var-202009-0304
Vulnerability from variot

This vulnerability allows an attacker to use the internal WebSockets API for CodeMeter (all versions prior to 7.00 are affected, as well as Version 7.00 or newer with the affected WebSockets API still enabled; this is especially relevant for systems or devices where a web browser is used to access a web server) via a specifically crafted JavaScript payload, which may allow alteration or creation of license files when combined with CVE-2020-14515. CodeMeter contains a vulnerability related to same-origin policy violations. Information may be tampered with. Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens, Germany, and is also an integral part of the HMI series. The system is mainly used in industries such as rail transit, building automation, and public power supply. Information Server is used to report on and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool.

Multiple Siemens products are affected. Attackers can exploit this vulnerability to alter or create license files.
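The underlying issue is that browsers attach an Origin header to every WebSocket handshake but do not enforce the same-origin policy for WebSocket connections themselves, so a page served from an attacker's site can script a connection to a locally exposed service. A local API therefore has to check the header itself. The sketch below shows that missing control in general terms — the origin values and function name are hypothetical, not CodeMeter's actual API:

```python
# Hypothetical allow-list of pages permitted to talk to the local service.
TRUSTED_ORIGINS = {"https://localhost", "https://license-ui.example"}

def origin_allowed(origin_header):
    """Decide whether to accept a WebSocket upgrade.

    Fail closed: a missing or unexpected Origin header rejects the
    handshake, so arbitrary web pages cannot drive the internal API.
    """
    return origin_header in TRUSTED_ORIGINS
```

Without a check like this, any page the victim visits can open a connection to the internal endpoint and replay its protocol from JavaScript.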



{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202009-0304",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "codemeter",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "wibu",
        "version": "7.00"
      },
      {
        "model": "codemeter",
        "scope": null,
        "trust": 0.8,
        "vendor": "wibu",
        "version": null
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": "7.00"
      },
      {
        "model": "codemeter",
        "scope": "eq",
        "trust": 0.8,
        "vendor": "wibu",
        "version": null
      },
      {
        "model": "sinec ins",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      },
      {
        "model": "sinema remote connect",
        "scope": null,
        "trust": 0.6,
        "vendor": "siemens",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "cve": "CVE-2020-14519",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 5.0,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "CVE-2020-14519",
            "impactScore": 2.9,
            "integrityImpact": "PARTIAL",
            "severity": "MEDIUM",
            "trust": 1.8,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "CNVD",
            "availabilityImpact": "COMPLETE",
            "baseScore": 9.4,
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 10.0,
            "id": "CNVD-2020-51241",
            "impactScore": 9.2,
            "integrityImpact": "COMPLETE",
            "severity": "HIGH",
            "trust": 0.6,
            "vectorString": "AV:N/AC:L/Au:N/C:N/I:C/A:C",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "nvd@nist.gov",
            "availabilityImpact": "NONE",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "NONE",
            "exploitabilityScore": 3.9,
            "id": "CVE-2020-14519",
            "impactScore": 3.6,
            "integrityImpact": "HIGH",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 7.5,
            "baseSeverity": "High",
            "confidentialityImpact": "None",
            "exploitabilityScore": null,
            "id": "CVE-2020-14519",
            "impactScore": null,
            "integrityImpact": "High",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "nvd@nist.gov",
            "id": "CVE-2020-14519",
            "trust": 1.0,
            "value": "HIGH"
          },
          {
            "author": "NVD",
            "id": "CVE-2020-14519",
            "trust": 0.8,
            "value": "High"
          },
          {
            "author": "CNVD",
            "id": "CNVD-2020-51241",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202009-486",
            "trust": 0.6,
            "value": "HIGH"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "This vulnerability allows an attacker to use the internal WebSockets API for CodeMeter (All versions prior to 7.00 are affected, including Version 7.0 or newer with the affected WebSockets API still enabled. This is especially relevant for systems or devices where a web browser is used to access a web server) via a specifically crafted Java Script payload, which may allow alteration or creation of license files for when combined with CVE-2020-14515. CodeMeter Exists in a vulnerability related to same-origin policy violations.Information may be tampered with. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. \n\r\n\r\nMany Siemens products have security vulnerabilities. Attackers can use vulnerabilities to change or create license files",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2020-14519"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      }
    ],
    "trust": 2.16
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2020-14519",
        "trust": 3.8
      },
      {
        "db": "ICS CERT",
        "id": "ICSA-20-203-01",
        "trust": 2.4
      },
      {
        "db": "JVN",
        "id": "JVNVU90770748",
        "trust": 0.8
      },
      {
        "db": "JVN",
        "id": "JVNVU94568336",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223",
        "trust": 0.8
      },
      {
        "db": "SIEMENS",
        "id": "SSA-455843",
        "trust": 0.6
      },
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.2",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076.3",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2020.3076",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022021806",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486",
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "id": "VAR-202009-0304",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      }
    ],
    "trust": 1.0737775833333334
  },
  "iot_taxonomy": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "category": [
          "ICS"
        ],
        "sub_category": null,
        "trust": 0.6
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      }
    ]
  },
  "last_update_date": "2024-11-23T20:54:05.760000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "CodeMeter",
        "trust": 0.8,
        "url": "https://www.wibu.com/products/codemeter.html"
      },
      {
        "title": "Patch for Multiple Siemens products verification error vulnerabilities",
        "trust": 0.6,
        "url": "https://www.cnvd.org.cn/patchInfo/show/233347"
      },
      {
        "title": "Wibu-Systems AG CodeMeter Security vulnerabilities",
        "trust": 0.6,
        "url": "http://www.cnnvd.org.cn/web/xxk/bdxqById.tag?id=127907"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-346",
        "trust": 1.0
      },
      {
        "problemtype": "Same-origin policy violation (CWE-346) [ Other ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.4,
        "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01"
      },
      {
        "trust": 1.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14519"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu94568336/"
      },
      {
        "trust": 0.8,
        "url": "https://jvn.jp/vu/jvnvu90770748/"
      },
      {
        "trust": 0.6,
        "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/"
      }
    ],
    "sources": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      },
      {
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-10T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "date": "2021-03-24T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "date": "2020-09-08T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      },
      {
        "date": "2020-09-16T20:15:13.723000",
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2020-09-10T00:00:00",
        "db": "CNVD",
        "id": "CNVD-2020-51241"
      },
      {
        "date": "2022-03-15T05:12:00",
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      },
      {
        "date": "2022-02-21T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      },
      {
        "date": "2024-11-21T05:03:26.710000",
        "db": "NVD",
        "id": "CVE-2020-14519"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "CodeMeter\u00a0 Vulnerability regarding same-origin policy violation in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2020-011223"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "access control error",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202009-486"
      }
    ],
    "trust": 0.6
  }
}

cve-2024-46888
Vulnerability from cvelistv5
Published
2024-11-12 12:49
Modified
2024-11-12 14:32
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application does not properly sanitize user provided paths for SFTP-based file up- and downloads. This could allow an authenticated remote attacker to manipulate arbitrary files on the filesystem and achieve arbitrary code execution on the device.
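The class of fix for this kind of path traversal (CWE-22) is to resolve the user-supplied path against a fixed base directory and reject anything that escapes it. The sketch below is illustrative only, assuming a hypothetical `resolve_sftp_path` helper; SINEC INS internals are not public.

```python
from pathlib import Path

def resolve_sftp_path(base_dir: str, user_path: str) -> Path:
    """Resolve a user-supplied SFTP path against a fixed base directory,
    rejecting any result that escapes it (mitigates CWE-22).

    Hypothetical helper for illustration; not part of SINEC INS.
    """
    base = Path(base_dir).resolve()
    # Resolve symlinks and ".." components before comparing prefixes;
    # joining with an absolute user_path also replaces base entirely,
    # which the containment check below then rejects.
    candidate = (base / user_path).resolve()
    if base != candidate and base not in candidate.parents:
        raise ValueError(f"path escapes base directory: {user_path!r}")
    return candidate
```

Note that a naive string-prefix comparison is not enough: `..` segments and symlinks must be resolved first, which is why the check runs on the fully resolved path.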
Impacted products
Vendor Product Version
Show details on NVD website


{
  "containers": {
    "adp": [
      {
        "affected": [
          {
            "cpes": [
              "cpe:2.3:a:seimens:sinec_ins:*:*:*:*:*:*:*:*"
            ],
            "defaultStatus": "unknown",
            "product": "sinec_ins",
            "vendor": "seimens",
            "versions": [
              {
                "lessThan": "V1.0 SP2 Update 3",
                "status": "affected",
                "version": "0",
                "versionType": "custom"
              }
            ]
          }
        ],
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-46888",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "no"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-11-12T14:31:00.141310Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-11-12T14:32:11.296Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "lessThan": "V1.0 SP2 Update 3",
              "status": "affected",
              "version": "0",
              "versionType": "custom"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly sanitize user provided paths for SFTP-based file up- and downloads. This could allow an authenticated remote attacker to manipulate arbitrary files on the filesystem and achieve arbitrary code execution on the device."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 9.9,
            "baseSeverity": "CRITICAL",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        },
        {
          "cvssV4_0": {
            "baseScore": 9.4,
            "baseSeverity": "CRITICAL",
            "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H",
            "version": "4.0"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-22",
              "description": "CWE-22: Improper Limitation of a Pathname to a Restricted Directory (\u0027Path Traversal\u0027)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-11-12T12:49:39.127Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2024-46888",
    "datePublished": "2024-11-12T12:49:39.127Z",
    "dateReserved": "2024-09-12T11:24:19.243Z",
    "dateUpdated": "2024-11-12T14:32:11.296Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2024-46890
Vulnerability from cvelistv5
Published
2024-11-12 12:49
Modified
2024-11-12 14:28
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application does not properly validate input sent to specific endpoints of its web API. This could allow an authenticated remote attacker with high privileges on the application to execute arbitrary code on the underlying OS.
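The standard mitigation for this kind of OS command injection (CWE-78) is to validate untrusted input against a strict pattern and to pass it to the OS as an argument vector, never through a shell. The snippet below is a minimal sketch under those assumptions; the `build_ping_argv` helper and the hostname pattern are hypothetical, not taken from the product.

```python
import re

# Conservative hostname pattern: alphanumerics, dots and hyphens only.
_HOSTNAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]{0,252}[A-Za-z0-9])?$")

def build_ping_argv(target: str) -> list:
    """Build an argument vector for a ping diagnostic without ever
    passing user input through a shell (mitigates CWE-78).

    Hypothetical example; SINEC INS internals are not public.
    """
    if not _HOSTNAME_RE.match(target):
        raise ValueError(f"invalid hostname: {target!r}")
    # A list argv handed to subprocess.run(..., shell=False) is not
    # re-parsed by a shell, so metacharacters carry no meaning; "--"
    # additionally stops option parsing by the invoked tool.
    return ["/bin/ping", "-c", "1", "--", target]
```

Shell metacharacters such as `;` or backticks fail the pattern check before any process is spawned.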


{
  "containers": {
    "adp": [
      {
        "affected": [
          {
            "cpes": [
              "cpe:2.3:a:seimens:sinec_ins:*:*:*:*:*:*:*:*"
            ],
            "defaultStatus": "unknown",
            "product": "sinec_ins",
            "vendor": "seimens",
            "versions": [
              {
                "lessThan": "V1.0 SP2 Update 3",
                "status": "affected",
                "version": "0",
                "versionType": "custom"
              }
            ]
          }
        ],
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-46890",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "no"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-11-12T14:26:52.518770Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-11-12T14:28:21.227Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "lessThan": "V1.0 SP2 Update 3",
              "status": "affected",
              "version": "0",
              "versionType": "custom"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly validate input sent to specific endpoints of its web API. This could allow an authenticated remote attacker with high privileges on the application to execute arbitrary code on the underlying OS."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 9.1,
            "baseSeverity": "CRITICAL",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        },
        {
          "cvssV4_0": {
            "baseScore": 9.4,
            "baseSeverity": "CRITICAL",
            "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:H/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H",
            "version": "4.0"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-78",
              "description": "CWE-78: Improper Neutralization of Special Elements used in an OS Command (\u0027OS Command Injection\u0027)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-11-12T12:49:41.829Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2024-46890",
    "datePublished": "2024-11-12T12:49:41.829Z",
    "dateReserved": "2024-09-12T11:24:19.243Z",
    "dateUpdated": "2024-11-12T14:28:21.227Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2024-46889
Vulnerability from cvelistv5
Published
2024-11-12 12:49
Modified
2024-11-12 14:30
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application uses hard-coded cryptographic key material to obfuscate configuration files. This could allow an attacker to learn that cryptographic key material through reverse engineering of the application binary and decrypt arbitrary backup files.
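The usual remedy for a hard-coded key (CWE-321) is to derive a per-installation key from a secret and a random salt, so reverse engineering the binary yields nothing reusable across installations. The function below is a generic stdlib sketch, not the product's actual scheme; names and parameters are illustrative.

```python
import hashlib
import os

def derive_backup_key(passphrase: bytes, salt=None):
    """Derive a per-installation encryption key instead of shipping one
    hard-coded key inside the application binary (mitigates CWE-321).

    Illustrative only; parameters are hypothetical. The salt is generated
    once per installation and stored alongside the backup, since it need
    not be secret, only unique.
    """
    if salt is None:
        salt = os.urandom(16)
    # PBKDF2-HMAC-SHA256 with a high iteration count; returns a 32-byte key.
    key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)
    return key, salt
```

Because the salt differs per installation, a key recovered from one system cannot decrypt backups from another.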


{
  "containers": {
    "adp": [
      {
        "affected": [
          {
            "cpes": [
              "cpe:2.3:a:seimens:sinec_ins:*:*:*:*:*:*:*:*"
            ],
            "defaultStatus": "unknown",
            "product": "sinec_ins",
            "vendor": "seimens",
            "versions": [
              {
                "lessThan": "V1.0 SP2 Update 3",
                "status": "affected",
                "version": "0",
                "versionType": "custom"
              }
            ]
          }
        ],
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-46889",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "yes"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-11-12T14:29:00.705847Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-11-12T14:30:25.375Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "lessThan": "V1.0 SP2 Update 3",
              "status": "affected",
              "version": "0",
              "versionType": "custom"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application uses hard-coded cryptographic key material to obfuscate configuration files. This could allow an attacker to learn that cryptographic key material through reverse engineering of the application binary and decrypt arbitrary backup files."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        },
        {
          "cvssV4_0": {
            "baseScore": 6.9,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:L/VI:N/VA:N/SC:N/SI:N/SA:N",
            "version": "4.0"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-321",
              "description": "CWE-321: Use of Hard-coded Cryptographic Key",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-11-12T12:49:40.474Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2024-46889",
    "datePublished": "2024-11-12T12:49:40.474Z",
    "dateReserved": "2024-09-12T11:24:19.243Z",
    "dateUpdated": "2024-11-12T14:30:25.375Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2023-48427
Vulnerability from cvelistv5
Published
2023-12-12 11:27
Modified
2024-11-25 21:16
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). Affected products do not properly validate the certificate of the configured UMC server. This could allow an attacker to intercept credentials that are sent to the UMC server as well as to manipulate responses, potentially allowing an attacker to escalate privileges.
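Improper certificate validation (CWE-295) of this kind is closed by configuring the TLS client to require chain verification and hostname matching. A minimal sketch using Python's standard `ssl` module follows; the `make_verified_context` helper is hypothetical and stands in for whatever TLS layer the product uses.

```python
import ssl

def make_verified_context(ca_file=None) -> ssl.SSLContext:
    """TLS client context that actually validates the server certificate
    chain and hostname (mitigates CWE-295).

    'ca_file' would point at the CA that issued the UMC server
    certificate; with None, the system trust store is used.
    Hypothetical helper; not part of SINEC INS.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = True                # reject mismatched hostnames
    ctx.verify_mode = ssl.CERT_REQUIRED      # reject unverifiable chains
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

With `CERT_REQUIRED` and hostname checking in place, a man-in-the-middle presenting a self-signed or mismatched certificate causes the handshake to fail instead of leaking credentials.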


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-02T21:30:35.359Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
          }
        ],
        "title": "CVE Program Container"
      },
      {
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2023-48427",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "no"
                  },
                  {
                    "Technical Impact": "total"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-11-25T21:16:08.935587Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-11-25T21:16:41.045Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 2"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected products do not properly validate the certificate of the configured UMC server. This could allow an attacker to intercept credentials that are sent to the UMC server as well as to manipulate responses, potentially allowing an attacker to escalate privileges."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 8.1,
            "baseSeverity": "HIGH",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-295",
              "description": "CWE-295: Improper Certificate Validation",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-12-12T11:27:18.362Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2023-48427",
    "datePublished": "2023-12-12T11:27:18.362Z",
    "dateReserved": "2023-11-16T16:30:40.849Z",
    "dateUpdated": "2024-11-25T21:16:41.045Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2023-48429
Vulnerability from cvelistv5
Published
2023-12-12 11:27
Modified
2024-08-02 21:30
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The Web UI of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the server. The server will automatically restart.
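A missing length check like this is typically fixed by bounding every parameter before any further processing. The sketch below assumes a hypothetical limit and helper name; the real bound and handler structure are not published.

```python
MAX_PARAM_LEN = 256  # hypothetical limit; the real bound is not published

def validate_params(params: dict) -> dict:
    """Reject requests whose parameter values exceed a fixed length,
    so a crafted over-long value cannot crash the handler.

    Illustrative sketch; not part of SINEC INS.
    """
    for name, value in params.items():
        if len(value) > MAX_PARAM_LEN:
            raise ValueError(
                f"parameter {name!r} exceeds {MAX_PARAM_LEN} characters"
            )
    return params
```

Rejecting over-long input at the edge turns a potential server crash into an ordinary validation error returned to the client.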


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-02T21:30:35.075Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 2"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The Web UI of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the server. The server will automatically restart."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 2.7,
            "baseSeverity": "LOW",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-394",
              "description": "CWE-394: Unexpected Status Code or Return Value",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-12-12T11:27:20.840Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2023-48429",
    "datePublished": "2023-12-12T11:27:20.840Z",
    "dateReserved": "2023-11-16T16:30:40.849Z",
    "dateUpdated": "2024-08-02T21:30:35.075Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2023-48431
Vulnerability from cvelistv5
Published
2023-12-12 11:27
Modified
2024-08-02 21:30
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). Affected software does not correctly validate the response received by an UMC server. An attacker can use this to crash the affected software by providing and configuring a malicious UMC server or by manipulating the traffic from a legitimate UMC server (i.e. leveraging CVE-2023-48427).
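The defensive pattern for CWE-754 is to treat the server's reply as hostile input: bound its size, require it to parse, and require the expected shape before use. The sketch below is generic; the field names and size limit are assumptions, not the product's actual protocol.

```python
import json

def parse_umc_response(raw: bytes, max_size: int = 64 * 1024) -> dict:
    """Defensively parse a server response instead of assuming a
    well-formed reply (mitigates CWE-754).

    Illustrative only; 'status' and the 64 KiB bound are hypothetical.
    """
    if len(raw) > max_size:
        raise ValueError("response too large")
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed response: {exc}") from exc
    if not isinstance(doc, dict) or "status" not in doc:
        raise ValueError("unexpected response shape")
    return doc
```

A malicious or manipulated server then produces a clean error path rather than crashing the consuming software.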


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-02T21:30:35.087Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 2"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected software does not correctly validate the response received by an UMC server. An attacker can use this to crash the affected software by providing and configuring a malicious UMC server or by manipulating the traffic from a legitimate UMC server (i.e. leveraging CVE-2023-48427)."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 6.8,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:N/I:N/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-754",
              "description": "CWE-754: Improper Check for Unusual or Exceptional Conditions",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-12-12T11:27:23.326Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2023-48431",
    "datePublished": "2023-12-12T11:27:23.326Z",
    "dateReserved": "2023-11-16T16:30:40.850Z",
    "dateUpdated": "2024-08-02T21:30:35.087Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
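The record above describes a crash caused by trusting an unvalidated response from a UMC server (CWE-754: Improper Check for Unusual or Exceptional Conditions). The advisory contains no code; the following is a minimal, hypothetical Python sketch of the defensive pattern — every assumption about an external peer's reply is checked explicitly, so a malicious or manipulated server produces a controlled error instead of a crash. The JSON payload shape and the `status` field are invented for illustration and are not SINEC INS internals.

```python
import json

MAX_RESPONSE_BYTES = 64 * 1024  # reject oversized payloads before parsing

def parse_server_response(raw: bytes) -> dict:
    """Defensively parse a reply from an external server (CWE-754 mitigation).

    Nothing about the payload is assumed: size, syntax, type and the
    presence of expected fields are all verified before the data is used.
    """
    if len(raw) > MAX_RESPONSE_BYTES:
        raise ValueError("response too large")
    try:
        data = json.loads(raw)
    except ValueError:
        raise ValueError("response is not valid JSON")
    if not isinstance(data, dict):
        raise ValueError("response is not a JSON object")
    if data.get("status") not in ("ok", "error"):
        raise ValueError("missing or unexpected status field")
    return data
```

A caller can then reject a misbehaving peer with a single `except ValueError` branch rather than crashing on an unexpected shape.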

cve-2024-46892
Vulnerability from cvelistv5
Published
2024-11-12 12:49
Modified
2024-11-12 14:21
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application does not properly invalidate sessions when the associated user is deleted or disabled or their permissions are modified. This could allow an authenticated attacker to continue performing malicious actions even after their user account has been disabled.


{
  "containers": {
    "adp": [
      {
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-46892",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "no"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-11-12T14:21:05.449383Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-11-12T14:21:32.457Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "lessThan": "V1.0 SP2 Update 3",
              "status": "affected",
              "version": "0",
              "versionType": "custom"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly invalidate sessions when the associated user is deleted or disabled or their permissions are modified. This could allow an authenticated attacker to continue performing malicious actions even after their user account has been disabled."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 4.9,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:H/A:N/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        },
        {
          "cvssV4_0": {
            "baseScore": 6.9,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:H/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N",
            "version": "4.0"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-613",
              "description": "CWE-613: Insufficient Session Expiration",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-11-12T12:49:44.470Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2024-46892",
    "datePublished": "2024-11-12T12:49:44.470Z",
    "dateReserved": "2024-09-12T11:24:19.243Z",
    "dateUpdated": "2024-11-12T14:21:32.457Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
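The record above concerns CWE-613 (Insufficient Session Expiration): disabling a user did not revoke that user's live sessions. A minimal illustrative sketch of the fix, using an invented in-memory session store (not SINEC INS code), is to revoke every session of the account at the moment it is disabled, not merely to flag the account:

```python
class SessionStore:
    """Minimal sketch: sessions die the moment the owning account is
    disabled or deleted, mitigating CWE-613."""

    def __init__(self):
        self._sessions = {}    # token -> username
        self._disabled = set()

    def create(self, token: str, username: str) -> None:
        self._sessions[token] = username

    def disable_user(self, username: str) -> None:
        # Revoke all live sessions of the user, not just mark the account.
        self._disabled.add(username)
        self._sessions = {t: u for t, u in self._sessions.items()
                          if u != username}

    def is_valid(self, token: str) -> bool:
        user = self._sessions.get(token)
        return user is not None and user not in self._disabled
```

The same rule applies to permission changes: re-check the account's current state on every request instead of caching authorization inside the session.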

cve-2023-48430
Vulnerability from cvelistv5
Published
2023-12-12 11:27
Modified
2024-08-02 21:30
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The REST API of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the API. The server will automatically restart.


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-02T21:30:35.228Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 2"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The REST API of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the API. The server will automatically restart."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 2.7,
            "baseSeverity": "LOW",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-392",
              "description": "CWE-392: Missing Report of Error Condition",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-12-12T11:27:22.091Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2023-48430",
    "datePublished": "2023-12-12T11:27:22.091Z",
    "dateReserved": "2023-11-16T16:30:40.849Z",
    "dateUpdated": "2024-08-02T21:30:35.228Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
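The record above describes a REST API that fails to check parameter lengths, letting a crafted request crash the server. A hedged sketch of the usual defense — validate input size at the API boundary and answer with HTTP 400 before the value reaches the backend. The limit and the handler shape are illustrative assumptions, not the product's actual API:

```python
MAX_PARAM_LEN = 256  # illustrative limit; real limits depend on the API

def oversized_params(params: dict) -> list:
    """Names of parameters whose values exceed the length limit."""
    return [name for name, value in params.items()
            if len(str(value)) > MAX_PARAM_LEN]

def handle_request(params: dict) -> tuple:
    """Reject oversized input with a 400 status before processing,
    instead of letting it take down the server."""
    bad = oversized_params(params)
    if bad:
        return 400, "parameter(s) too long: " + ", ".join(bad)
    return 200, "ok"
```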

cve-2023-48428
Vulnerability from cvelistv5
Published
2023-12-12 11:27
Modified
2024-08-02 21:30
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The radius configuration mechanism of affected products does not correctly check uploaded certificates. A malicious admin could upload a crafted certificate resulting in a denial-of-service condition or potentially issue commands on system level.


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-02T21:30:34.959Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 2"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The radius configuration mechanism of affected products does not correctly check uploaded certificates. A malicious admin could upload a crafted certificate resulting in a denial-of-service condition or potentially issue commands on system level."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 7.2,
            "baseSeverity": "HIGH",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-78",
              "description": "CWE-78: Improper Neutralization of Special Elements used in an OS Command (\u0027OS Command Injection\u0027)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-12-12T11:27:19.590Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2023-48428",
    "datePublished": "2023-12-12T11:27:19.590Z",
    "dateReserved": "2023-11-16T16:30:40.849Z",
    "dateUpdated": "2024-08-02T21:30:34.959Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
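The record above is an OS command injection (CWE-78) reachable through a crafted uploaded certificate. A minimal, hypothetical sketch of the two standard defenses when an external tool must touch an uploaded file: allow-list the file name, and execute the tool as an argument vector with `shell=False`, so shell metacharacters embedded in attacker-controlled data stay inert. The `openssl x509` invocation is illustrative only and is not how SINEC INS processes RADIUS certificates:

```python
import re
import subprocess

SAFE_NAME = re.compile(r"[A-Za-z0-9._-]+")

def inspect_certificate(filename: str) -> str:
    """Inspect an uploaded certificate without exposing a shell.

    The name is allow-listed and the command is a plain argument vector
    (no shell), mitigating CWE-78.
    """
    if not SAFE_NAME.fullmatch(filename):
        raise ValueError("unacceptable file name")
    result = subprocess.run(
        ["openssl", "x509", "-in", filename, "-noout", "-subject"],
        capture_output=True, text=True, check=False)
    return result.stdout
```

Note the use of `re.fullmatch` rather than `$`-anchored `match`, which would accept a trailing newline.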

cve-2022-45093
Vulnerability from cvelistv5
Published
2023-01-10 11:39
Modified
2024-08-03 14:01
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product as well as with access to the SFTP server of the affected product (22/tcp), could potentially read and write arbitrary files from and to the device's file system. An attacker might leverage this to trigger remote code execution on the affected component.


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-03T14:01:31.489Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 1"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product as well as with access to the SFTP server of the affected product (22/tcp), could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 8.5,
            "baseSeverity": "HIGH",
            "vectorString": "CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-22",
              "description": "CWE-22: Improper Limitation of a Pathname to a Restricted Directory (\u0027Path Traversal\u0027)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-01-10T11:39:43.047Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2022-45093",
    "datePublished": "2023-01-10T11:39:43.047Z",
    "dateReserved": "2022-11-09T14:32:46.476Z",
    "dateUpdated": "2024-08-03T14:01:31.489Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
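The record above is a path traversal (CWE-22): client-supplied paths reached the file system unchecked. A small illustrative sketch of the canonical mitigation — resolve the candidate path and refuse anything that escapes the permitted base directory. The directory layout here is invented for the example:

```python
from pathlib import Path

def resolve_under(base: str, user_path: str) -> Path:
    """Resolve a client-supplied path and refuse anything that escapes
    `base` (CWE-22 mitigation): '..' segments and absolute paths are
    neutralized by comparing fully resolved paths."""
    base_dir = Path(base).resolve()
    candidate = (base_dir / user_path).resolve()
    if candidate != base_dir and base_dir not in candidate.parents:
        raise ValueError("path escapes the allowed directory")
    return candidate
```

Every read and write on behalf of a remote user should go through such a gate; string prefix checks on the raw input are not sufficient because `..` and symlinks are only visible after resolution.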

cve-2024-46894
Vulnerability from cvelistv5
Published
2024-11-12 12:49
Modified
2024-11-12 14:19
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application does not properly validate authorization of a user to query the "/api/sftp/users" endpoint. This could allow an authenticated remote attacker to gain knowledge about the list of configured users of the SFTP service and also modify that configuration.


{
  "containers": {
    "adp": [
      {
        "affected": [
          {
            "cpes": [
              "cpe:2.3:a:siemens:sinec_ins:-:*:*:*:*:*:*:*"
            ],
            "defaultStatus": "unknown",
            "product": "sinec_ins",
            "vendor": "siemens",
            "versions": [
              {
                "lessThan": "v1.0_sp2_update_3",
                "status": "affected",
                "version": "0",
                "versionType": "custom"
              }
            ]
          }
        ],
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-46894",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "yes"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-11-12T14:16:33.854628Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "problemTypes": [
          {
            "descriptions": [
              {
                "cweId": "CWE-276",
                "description": "CWE-276 Incorrect Default Permissions",
                "lang": "en",
                "type": "CWE"
              }
            ]
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-11-12T14:19:46.429Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "lessThan": "V1.0 SP2 Update 3",
              "status": "affected",
              "version": "0",
              "versionType": "custom"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly validate authorization of a user to query the \"/api/sftp/users\" endpoint. This could allow an authenticated remote attacker to gain knowledge about the list of configured users of the SFTP service and also modify that configuration."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 6.3,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        },
        {
          "cvssV4_0": {
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:L/VI:L/VA:L/SC:N/SI:N/SA:N",
            "version": "4.0"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-200",
              "description": "CWE-200: Exposure of Sensitive Information to an Unauthorized Actor",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-11-12T12:49:45.831Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2024-46894",
    "datePublished": "2024-11-12T12:49:45.831Z",
    "dateReserved": "2024-09-12T11:26:58.816Z",
    "dateUpdated": "2024-11-12T14:19:46.429Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
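The record above is a missing authorization check on the `/api/sftp/users` endpoint. One common way to make such gaps impossible to overlook, sketched here with an invented route table (not the product's actual routing), is a central deny-by-default mapping from endpoint to required role, so a route that was never registered cannot silently serve data:

```python
# Hypothetical endpoint-to-role table; unknown routes are denied by default.
REQUIRED_ROLE = {
    "/api/sftp/users": "admin",
    "/api/status": "viewer",
}

def authorize(user_roles: set, endpoint: str) -> bool:
    """Allow access only if the endpoint is listed and the caller holds
    the role it requires; anything unlisted is rejected."""
    needed = REQUIRED_ROLE.get(endpoint)
    return needed is not None and needed in user_roles
```

The deny-by-default shape matters: forgetting to list a new endpoint fails closed (requests are rejected) rather than open.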

cve-2022-45094
Vulnerability from cvelistv5
Published
2023-01-10 11:39
Modified
2024-08-03 14:01
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially inject commands into the dhcpd configuration of the affected product. An attacker might leverage this to trigger remote code execution on the affected component.


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-03T14:01:31.530Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 1"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially inject commands into the dhcpd configuration of the affected product. An attacker might leverage this to trigger remote code execution on the affected component."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 8.4,
            "baseSeverity": "HIGH",
            "vectorString": "CVSS:3.1/AV:A/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-77",
              "description": "CWE-77: Improper Neutralization of Special Elements used in a Command (\u0027Command Injection\u0027)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-01-10T11:39:44.116Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2022-45094",
    "datePublished": "2023-01-10T11:39:44.116Z",
    "dateReserved": "2022-11-09T14:32:46.476Z",
    "dateUpdated": "2024-08-03T14:01:31.530Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
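The record above describes command injection (CWE-77) into a generated dhcpd configuration. The usual defense is to validate every user-supplied field against its expected format before rendering it into the config file, so newlines, braces or semicolons cannot smuggle in extra directives. A minimal sketch with invented field formats (not SINEC INS internals):

```python
import ipaddress
import re

HOSTNAME_RE = re.compile(r"[A-Za-z0-9-]{1,63}")

def dhcp_host_entry(hostname: str, ip: str) -> str:
    """Render one dhcpd.conf host block from user input, but only after
    strict validation of each field (CWE-77 mitigation)."""
    if not HOSTNAME_RE.fullmatch(hostname):
        raise ValueError("invalid hostname")
    addr = ipaddress.ip_address(ip)  # raises ValueError on garbage
    return f"host {hostname} {{ fixed-address {addr}; }}"
```

Parsing the address with `ipaddress` instead of a regex both validates and normalizes it in one step.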

cve-2022-45092
Vulnerability from cvelistv5
Published
2023-01-10 11:39
Modified
2024-08-03 14:01
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially read and write arbitrary files from and to the device's file system. An attacker might leverage this to trigger remote code execution on the affected component.


{
  "containers": {
    "adp": [
      {
        "providerMetadata": {
          "dateUpdated": "2024-08-03T14:01:31.534Z",
          "orgId": "af854a3a-2127-422b-91ae-364da2661108",
          "shortName": "CVE"
        },
        "references": [
          {
            "tags": [
              "x_transferred"
            ],
            "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
          }
        ],
        "title": "CVE Program Container"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "status": "affected",
              "version": "All versions \u003c V1.0 SP2 Update 1"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 9.9,
            "baseSeverity": "CRITICAL",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-22",
              "description": "CWE-22: Improper Limitation of a Pathname to a Restricted Directory (\u0027Path Traversal\u0027)",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2023-01-10T11:39:41.994Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2022-45092",
    "datePublished": "2023-01-10T11:39:41.994Z",
    "dateReserved": "2022-11-09T14:32:46.476Z",
    "dateUpdated": "2024-08-03T14:01:31.534Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}

cve-2024-46891
Vulnerability from cvelistv5
Published
2024-11-12 12:49
Modified
2024-11-12 14:25
Summary
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 3). The affected application does not properly restrict the size of generated log files. This could allow an unauthenticated remote attacker to trigger a large amount of logged events to exhaust the system's resources and create a denial of service condition.


{
  "containers": {
    "adp": [
      {
        "affected": [
          {
            "cpes": [
              "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*"
            ],
            "defaultStatus": "unknown",
            "product": "sinec_ins",
            "vendor": "siemens",
            "versions": [
              {
                "lessThan": "V1.0_SP2_Update 3",
                "status": "affected",
                "version": "0",
                "versionType": "custom"
              }
            ]
          }
        ],
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-46891",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "yes"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-11-12T14:22:45.870908Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "problemTypes": [
          {
            "descriptions": [
              {
                "cweId": "CWE-125",
                "description": "CWE-125 Out-of-bounds Read",
                "lang": "en",
                "type": "CWE"
              }
            ]
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-11-12T14:25:48.481Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unknown",
          "product": "SINEC INS",
          "vendor": "Siemens",
          "versions": [
            {
              "lessThan": "V1.0 SP2 Update 3",
              "status": "affected",
              "version": "0",
              "versionType": "custom"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly restrict the size of generated log files. This could allow an unauthenticated remote attacker to trigger a large amount of logged events to exhaust the system\u0027s resources and create a denial of service condition."
        }
      ],
      "metrics": [
        {
          "cvssV3_1": {
            "baseScore": 5.3,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L/E:P/RL:O/RC:C",
            "version": "3.1"
          }
        },
        {
          "cvssV4_0": {
            "baseScore": 6.9,
            "baseSeverity": "MEDIUM",
            "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N",
            "version": "4.0"
          }
        }
      ],
      "problemTypes": [
        {
          "descriptions": [
            {
              "cweId": "CWE-400",
              "description": "CWE-400: Uncontrolled Resource Consumption",
              "lang": "en",
              "type": "CWE"
            }
          ]
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-11-12T12:49:43.155Z",
        "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
        "shortName": "siemens"
      },
      "references": [
        {
          "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html"
        }
      ]
    }
  },
  "cveMetadata": {
    "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77",
    "assignerShortName": "siemens",
    "cveId": "CVE-2024-46891",
    "datePublished": "2024-11-12T12:49:43.155Z",
    "dateReserved": "2024-09-12T11:24:19.243Z",
    "dateUpdated": "2024-11-12T14:25:48.481Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1"
}
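The record above is uncontrolled resource consumption (CWE-400) through unbounded log growth. The standard mitigation is to cap log output by size and file count so that an event flood rotates old logs away instead of filling the disk. A sketch using Python's standard library (the size limits are illustrative):

```python
import logging
import logging.handlers
import os
import tempfile  # used in the usage example below

def make_bounded_logger(path: str) -> logging.Logger:
    """Create a logger whose on-disk footprint is bounded: at most
    `backupCount + 1` files of `maxBytes` each, mitigating CWE-400."""
    logger = logging.getLogger("bounded-example")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.RotatingFileHandler(
        path, maxBytes=1_000_000, backupCount=5)  # ~6 MB ceiling in total
    logger.addHandler(handler)
    return logger
```

For example, `make_bounded_logger(os.path.join(tempfile.mkdtemp(), "app.log"))` yields a logger that can never consume more than roughly 6 MB of disk, however many events an attacker triggers.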